Microsoft makes available many business and technical case studies based on their tools and services. I like to read through these to see how other companies have applied IoT and cloud development to their industry-specific problems.
My goal is usually to find some reusable process, approach, or thinking that I can add to my own toolbox. My latest reads were these two articles about municipalities in Norway that were losing water through leakage in their water delivery systems.
The first is the basic case study that you can read here:
Basically, this is your standard IoT monitoring pattern with some ML anomaly detection baked in to help alert operators to possible leaks. The team did face a few unique challenges in developing this solution.
Most of the telemetry device data came from SCADA (supervisory control and data acquisition) systems, which I did a little of my own research on to get a better understanding of how these types of systems actually work in municipal water supply.
The following gives a pretty good overview:
There’s an interesting section in the above article that explains how telemetry data is viewed and how alerts are set up.
“The HMI is configured to alarm on a number of I/O point readings. By defining “normal” or “safe” operational states for specific I/O points, if conditions ever deviate from those parameters, the control center receives a visual alarm. The alarms are also integrated into a pager system and an on-call operator receives a cell phone message that they acknowledge and handle appropriately.”
To me, this is where the value of connecting a more versatile tool like Azure IoT Hub comes in. How do you go about defining the normal or safe operating parameters?
As is discussed in the DevOps article associated with the Powel use case (the second article I mentioned earlier), you can use machine learning to analyze the data and determine anomalies in the water flow, such as fire hydrant usage, leaks, or other unusual water flow conditions.
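The case study doesn’t publish the team’s actual model, but the core idea — learn a “normal” baseline from recent readings and flag deviations instead of hand-picking fixed alarm thresholds — can be sketched with a simple rolling z-score. The window size and threshold below are illustrative assumptions, not values from the article:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=24, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    window and threshold are illustrative; a real system would tune them
    (or replace this entirely with a trained model)."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # z-score: how many standard deviations from the recent mean?
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)  # index of the suspicious reading
    return anomalies

# Steady flow with one sudden spike (e.g., a burst pipe or hydrant use)
flow = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 10.1, 10.0, 10.2,
        9.9, 10.1, 10.0, 10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 10.0,
        10.1, 9.9, 10.2, 10.0, 45.0]
print(flag_anomalies(flow))  # → [24], the spike at the end
```

The appeal over the fixed HMI alarm bands described above is that the baseline adapts to each I/O point’s own history rather than requiring an operator to define “normal” up front.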
There is a lot to learn from this case study, and here are some of the thoughts I had while reading through it.
Overall, I like that Azure IoT is an integration piece to the existing SCADA system. Instead of attempting to replace that investment, the team was able to piggyback on it by creating gateways from the SCADA system to Azure IoT Hub and the rest of their Azure platform. Azure here supplements the existing system, allowing the SCADA system’s telemetry data to be captured, run through data analysis, and used to train machine learning models to find the kinds of anomalies that indicate water leaks.
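The article doesn’t show what the gateway code looks like, but the essential job is translating raw SCADA I/O point readings into cloud-friendly telemetry messages. Here’s a minimal sketch of that envelope step; the field names, station ID, and function name are all my own hypothetical examples, not taken from the case study:

```python
import json
from datetime import datetime, timezone

def to_telemetry_message(station_id, io_point, value, unit):
    """Wrap a raw SCADA I/O point reading in a JSON envelope suitable
    for forwarding to a cloud ingestion endpoint like IoT Hub.

    All field names here are illustrative assumptions."""
    return json.dumps({
        "deviceId": station_id,
        "ioPoint": io_point,
        "value": value,
        "unit": unit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

msg = to_telemetry_message("pump-station-7", "flow_out", 10.4, "L/s")
print(msg)
```

In practice the gateway would hand a message like this to the IoT device SDK for transport; the point is that the SCADA side stays untouched and only gains an outbound translation layer.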
I like the choice of microservices to provide access to the various slices of the system. They are using an API stack to integrate multiple data views into a Single Page Application for their tenants.
My suggested next steps
As I read through this, I kept thinking that the next step needs to be moving some of that anomaly detection closer to the source. Unless there’s a pressing need to capture all of the SCADA system telemetry, it makes sense to me to convert the SCADA gateways into IoT Edge devices that host the anomaly detection algorithms upstream of IoT Hub, serving as a filter that blocks telemetry data considered normal.
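To make the filtering idea concrete, here’s a minimal sketch of the decision an edge module would make before forwarding telemetry. The fixed band below is a stand-in assumption for the trained anomaly model that would actually score readings on the device:

```python
def make_edge_filter(low, high):
    """Return a filter that suppresses readings inside the 'normal' band.

    The (low, high) band is a placeholder for a real anomaly model's
    decision; only out-of-band readings would be forwarded upstream."""
    def should_forward(value):
        return not (low <= value <= high)
    return should_forward

forward = make_edge_filter(low=9.0, high=11.0)
readings = [10.0, 10.2, 45.0, 9.8, 0.0]

# Only anomalous readings leave the gateway for IoT Hub
sent = [r for r in readings if forward(r)]
print(sent)  # → [45.0, 0.0]
```

The payoff is bandwidth and cost: the three normal readings never cross the network, while the spike and the suspicious zero-flow reading still reach the cloud for alerting.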
I think there are great tools available to help in the initial planning stage of a big project. Value stream mapping and a hackathon were a great place to start. From these two events the team learned a lot of lessons quickly, which should be the goal at the outset of a large project: learn fast while building something that informs your stakeholders and solidifies your goals.
Discipline in the DevOps approach was another great thing to see the team take on. Security was top of mind, and while production-first delivery was something the team struggled with, adopting that mindset allowed them to almost double their productivity.
Their Approach to Machine Learning
They used Power BI as their initial data analysis tool. Typically, they would have used Python and visualization tools in a Jupyter notebook to do their data analysis, but this team took advantage of Power BI’s out-of-the-box visualizations and integration with Azure Stream Analytics to make some of their initial decisions.
Machine Learning Studio was their machine learning environment of choice. Using it in combination with Power BI to discover anomalies, experiment with features and algorithms, and produce the solution was a great use of Azure’s IoT suite.
The most amazing thing about this case study is that Powel was able to achieve a tremendous amount of value in as little as 5 days.
My biggest takeaways
1 – Azure IoT plays exceptionally well with others. Gutting a legacy IoT system to replace it with Azure IoT isn’t necessary; Azure IoT can expand your legacy capabilities without losing that initial investment.
2 – Just because you’re developing a fast POC or MVP doesn’t mean you shouldn’t think security and production first. In fact, I want to see what I can use from the initial DevOps list and apply it to my own projects.
This was their DevOps discipline:
- Underwent a value stream mapping session to find areas of improvement.
- Created a multi-team project with work flowing from one team to the next.
- Used a microservice approach to deliver software updates in a “fast ring” style without downtime.
- Defined acceptance criteria according to the ISO standards for software quality.
- Used the SDL and the Microsoft Threat Modeling tool to assess security risks early.
- Architected the code using Domain-Driven Design (DDD) principles.
- Used a long-lived feature branching strategy, using Git as source control.
- Adhered to the SOLID principles of object-oriented design.
- Developed code using a Test-Driven Development (TDD) approach.
- Set up an automated build and test pipeline for each component of the solution (CI).
- Set up release management (CD+RM).
- Used A/B testing based on an opt-in policy.
- Expected to monitor running applications using Application Insights.
- Used Infrastructure as Code.
- Used Slack as a collaboration tool between departments.
“We worked long and hard during each day of the hackfest. Everyone was high-spirited and excited about the work, and when Friday afternoon came, we had a working proof of concept (POC) tying in all of the API components with the Web App and output from the Machine Learning algorithm.”
Overall, this was a great case study to learn how Azure can work with existing telemetry gathering networks to bring modern tools and practices to legacy IoT systems.