
AutoML and Domain Driven Design

Can Domain-Driven Design (DDD) improve AutoML implementations? I believe it can, as many Machine Learning experiments involve the same problem-solving approaches used in software development.

This article provides a brief overview of AutoML and no-code development. It then discusses the most common approach to DDD for software development. With a specific use case in mind, I’ll walk through a scenario with AutoML as the tactical architecture. I’ll explain how DDD should be used to make strategic and tactical decisions regarding AutoML development.

By the end, you’ll have a basic understanding of AutoML and DDD. You’ll also understand how to apply DDD as a framework to build the right ML solution for the domain problem with organizational stakeholders.

Introduction to AutoML

AutoML is the process of automating the tasks of applying machine learning to real-world problems, according to Wikipedia. So, what is the use case for no-code AutoML?

Many organizations struggle to move beyond the proof-of-concept stage. This can be due to a lack of staff or data estate to support the efforts, the technical complexity of building out the infrastructure to support machine learning in production, or an unclear definition of the business objectives they wish to meet in the problem space.

AutoML helps reduce the risk of failure by providing cloud-native, low- or no-code tools to guide users through the process of curating a dataset and deploying a model. No-code development has enabled organizations to reach their goals without the need for experts. Popular platforms like Microsoft’s Power Platform, Zoho Creator, Airtable, BettyBlocks, and Salesforce have made no-code development a regular part of an organization’s IT toolset. This puts development tools closer to domain experts, allowing organizations to meet their objectives without the usual IT project overhead.

Critics of the no-code movement point to limited capabilities compared to traditional software development, dependency on vendor-specific systems, lack of control, poor scalability, and potential security risks. However, some organizations may find these risks worth the opportunities and solutions that no-code provides.

AutoML has both critics and champions. Organizations should be aware that AutoML comes with tradeoffs alongside its benefits. Champions of AutoML will point to the following advantages over traditional machine learning development processes:

  • Accessibility: AutoML requires minimal knowledge of machine learning concepts and techniques, so you don’t need a data scientist or data engineer to guide you through the process.
  • Collaboration: Platforms like Azure ML, Databricks, and Amazon SageMaker Studio enable data collaboration in one place, allowing teams to share data, models, and results with each other.
  • Consistency: Automating the optimization of models reduces the chances of human error, improving the consistency of machine learning model results.
  • Customization: Platforms like Azure ML and Amazon SageMaker Studio make it easy to customize machine learning environments and set specific requirements for models and parameters.
  • Efficiency: AutoML provides faster ways to preprocess data, select the right model, and tune hyperparameters, reducing tedious and time-consuming tasks (a short sketch of the manual tuning loop it replaces follows this list).
  • Scalability: AutoML platforms are typically built on cloud architecture, making it easier to handle large datasets and complex problems.
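
For context on the efficiency claim above, here is a short sketch of the kind of manual model selection and hyperparameter tuning loop that AutoML platforms automate. It uses scikit-learn and synthetic data; an AutoML service would explore the algorithm choice and the search space for you.

```python
# Illustrative only: the manual tuning loop that AutoML automates.
# Dataset and parameter ranges are made up for the example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Hand-picked search space -- AutoML explores this (and the model choice) for you.
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [5, 10, None],
    "min_samples_leaf": [1, 5],
}

search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print("Hold-out score:", search.score(X_test, y_test))
```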

Critics of AutoML warn that using it instead of traditional machine learning could lead to dependence on data quality, ethical concerns, lack of control, lack of interpretability, and lack of transparency.

Data quality is essential: many AutoML platforms assume clean, well-prepared data. Without data engineers or a data quality process in place, an organization is unlikely to have it, and poor or noisy data will produce inaccurate models.

Ethical considerations must also be taken into account. Algorithms may perpetuate existing biases and discrimination if the data used to train them is unbalanced or biased.
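
As a small, practical illustration of both points above, here is a hedged pandas sketch of the kind of data quality and balance checks worth running before handing a dataset to an AutoML platform. The file and column names ("patient_visits.csv", "diagnosis", "sex") are hypothetical.

```python
# A minimal sketch of pre-AutoML data checks. Input file and columns are invented.
import pandas as pd

df = pd.read_csv("patient_visits.csv")  # hypothetical input file

# Basic quality signals: missing values, duplicate rows, constant columns.
print(df.isna().mean().sort_values(ascending=False).head(10))  # share missing per column
print("Duplicate rows:", df.duplicated().sum())
print("Constant columns:", [c for c in df.columns if df[c].nunique() <= 1])

# Balance checks: a skewed label or group distribution is an early warning
# that the trained model may simply reproduce that imbalance.
print(df["diagnosis"].value_counts(normalize=True))
print(df.groupby("sex")["diagnosis"].value_counts(normalize=True))
```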

AutoML’s abstraction of the complexities of model creation is beneficial, but it also means users can’t control what happens during the pipeline process. If the algorithms developed from AutoML are difficult to understand, organizations may not have insight into how decisions are being made, and may unknowingly release models with flawed biases.

Without understanding how the model is making decisions, it’s hard to fully grasp the strengths and weaknesses of a model, leading to a lack of transparency.

Additionally, the models generated from AutoML may not be able to handle specialized problems or reach the performance expected of modern ML models.

AutoML is the process of automating the tasks of applying machine learning to real-world problems. It offers cloud-native, low- to no-code tools that guide users from a curated dataset to a deployed model. There are benefits to using AutoML, such as accessibility, collaboration, customization, scalability, and efficiency, but also tradeoffs: potential ethical concerns, lack of control, interpretability, and transparency, and limited capabilities. Is there a common, well-established framework that helps us exploit the positive elements of AutoML while reducing the side effects its critics warn about? I believe there is, and I think Eric Evans’ approach to creating a ubiquitous language shared by the software development team and the domain experts within an organization is the best place to start.

A Quick Overview of Domain-Driven Design for Software Development

DDD is a software development practice that focuses on understanding and modeling the complex domains that systems operate in. It emphasizes the importance of gaining a deep understanding of the problem domain and using this knowledge to guide system design. DDD is a flexible practice based on principles and concepts rather than rigid rules. I use DDD because, as a developer, it encourages me to think more about the domain problem and desired business outcomes than about the technical approach to creating software and infrastructure. It’s a lightweight way of building a shared language with domain experts. The best example of this I’ve found is in the book Architecture Patterns with Python, published by O’Reilly Media.

Imagine that you, our unfortunate reader, were suddenly transported light years away from Earth aboard an alien spaceship with your friends and family and had to figure out, from first principles, how to navigate home.

In your first few days, you might just push buttons randomly, but soon you’d learn which buttons did what, so that you could give one another instructions. “Press the red button near the flashing doohickey and then throw that big lever over by the radar gizmo,” you might say.

Within a couple of weeks, you’d become more precise as you adopted words to describe the ship’s functions: “Increase oxygen levels in cargo bay three” or “turn on the little thrusters.” After a few months, you’d have adopted language for entire complex processes: “Start landing sequence” or “prepare for warp.” This process would happen quite naturally, without any formal effort to build a shared glossary.

I love that this example shows the natural process of discovery and how it creates a shared understanding of the spaceship’s behavior. DDD is a big topic, so I won’t try to cover it all here. The important thing to understand is that DDD is meant to be practiced. It’s a process based on discussions and drives towards building deep, shared knowledge about a specific problem.

Why do I believe that someone wishing to learn machine learning, even as an AutoML user, should begin their own DDD practice? I would point to these key concepts of DDD:

  1. Bounded context: A well-defined, isolated part of the problem domain, used to manage complexity and prevent misunderstandings between different parts of the domain model. It helps you avoid “boiling the ocean” and taking on more work than can be managed. A bounded context can represent a team, line of business, department, set of related services, data elements, or parameters.
  2. Domain expert: Someone with a deep understanding of the problem domain who can provide valuable insights and guidance to the development team. Without access to a domain expert, it’s difficult to build a solution of real value.
  3. Domain model: Representation of the key concepts and relationships in the problem domain, expressed in a shared language. Not an exact replica of reality, but captures the essence of what makes the organization’s model unique.
  4. Event storming: Collaborative technique to identify and model key events and processes in the problem domain. Uncovers hidden complexity and ensures the domain model reflects the needs of the business.
  5. Ubiquitous language: Shared language used by all members of the development team to communicate about the problem domain. Ensures everyone is using the same terminology and concepts. (A small code sketch of how a domain model and its ubiquitous language can look follows this list.)
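
To make these last two concepts a little more concrete, here is a tiny, hypothetical Python sketch, borrowing the flu-diagnosis use case discussed later in this article, of how a domain model and its ubiquitous language can show up directly in code. All names are invented for illustration.

```python
# A toy sketch only: the terms below stand in for vocabulary that would come
# out of real sessions with domain experts, not from this article.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class Symptom(Enum):
    FEVER = "fever"
    COUGH = "cough"
    BODY_ACHES = "body aches"


@dataclass(frozen=True)
class PatientIntakeRecorded:
    """Domain event: a patient's symptoms were captured at intake."""
    patient_id: str
    symptoms: tuple[Symptom, ...]
    recorded_at: datetime


@dataclass(frozen=True)
class FluDiagnosisSuggested:
    """Domain event: the model suggested a flu diagnosis for clinician review."""
    patient_id: str
    probability: float
    suggested_at: datetime
```

The point isn’t the code itself; it’s that the names in the model match the words the domain experts actually use, so a clinician can read the event names and confirm they reflect how the clinic really works.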

Why would an AutoML developer want to know DDD?

DDD encourages the use of domain-specific language and concepts in modeling, which can make it easier for domain experts to understand and interpret the results of the models. It also emphasizes the importance of understanding the business context and domain-specific knowledge when solving problems. This can help AutoML developers to build more accurate and effective models.

DDD provides a common language and set of concepts that can help data scientists communicate more effectively with domain experts and other stakeholders. It also emphasizes the importance of designing solutions that are maintainable and adaptable over time, helping AutoML developers to build models that are more robust and resilient to change.

Finally, DDD encourages collaboration between domain experts and technical experts, which can help AutoML developers to better understand the problem they are trying to solve and the impact their solutions will have on the business.

A Use Case: A Machine Learning Model to Diagnose the Flu

I have explored how AutoML and Domain-Driven Design can be used together as a framework to help AutoML developers. Our aim is to take advantage of AutoML’s positive aspects while minimizing its negative tradeoffs. I have discussed why an AutoML developer might choose to use DDD as a framework, so in this section I will explain how to implement the process.

I have chosen a relatively simple use case, one that has been extensively studied in terms of building classifiers for the problem domain. Therefore, I will take a basic approach to a non-novel problem, focusing on the DDD process rather than the complexity of the problem domain.

Using domain-driven design (DDD) to build a machine learning model to help diagnose patients with the flu could involve the following steps:

  1. Identify the bounded context: The first step would be to identify the bounded context, or specific part of the problem domain, that the model will operate in. In this case, the bounded context might be the process of diagnosing patients with the flu. A conversation with the people making the request for a solution can help establish a scope and general goals. Additionally, tools like Simon Wardley’s Wardley Mapping and Teresa Torres’ Opportunity Solution Trees can uncover what type of business outcomes the organization is attempting to address and the service or supply chain associated with meeting customer needs. Ask questions such as: What outcomes do the requesters expect from a solution that can diagnose patients with the flu? Will it reduce time in the waiting room? Will it make patient intake faster?
  2. Identify the domain experts: The development team should then identify domain experts who have a deep understanding of the problem domain and can provide valuable insights and guidance. These domain experts might include medical professionals who are experienced in diagnosing and treating patients with the flu. The goal is to establish clarity around vocabulary and expected behaviors, and to begin building a vision of the existing strategic business systems.
  3. Define the ubiquitous language: The development team should work with the domain experts to define a shared language, or ubiquitous language, that everyone can use to communicate about the problem domain. This might include defining key terms and concepts related to the flu and its symptoms.
  4. Conduct event storming: The development team should use a collaborative technique called event storming to identify and model the key events and processes involved in diagnosing patients with the flu. This might include identifying the symptoms that are most indicative of the flu and the tests that are typically used to diagnose it. A large whiteboard or a shared online collaboration tool can be used to define domain events, commands, policies, and other important elements that make up a working system.
  5. Build the domain model: Using the insights and knowledge gained through event storming, the development team should build a domain model that represents the key concepts and relationships in the problem domain. This might include building a model that predicts the likelihood of a patient having the flu based on their symptoms and test results.
  6. Use AutoML to build and tune the machine learning model: The development team should then use automated machine learning (AutoML) to build and tune the model. This might involve selecting an appropriate model type, preprocessing the data, and optimizing the hyperparameters. The AutoML user should build a clear understanding of the settings they want to establish for their model, and the model type should address the primary problem type uncovered during the DDD process. If it doesn’t, the AutoML developer should return for additional sessions with the domain experts to tune and solidify the design of the solution. A hedged sketch of what this step can look like follows this list.
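
As a rough illustration of step 6, here is a minimal sketch of submitting an AutoML classification run with the Azure Machine Learning Python SDK (v1). The workspace, the registered dataset name "flu_training", and the label column "has_flu" are hypothetical, and other AutoML platforms would look different.

```python
# Hypothetical sketch: kicking off an AutoML classification experiment with
# the Azure ML Python SDK (v1). Dataset and column names are invented.
from azureml.core import Workspace, Dataset, Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                        # reads your workspace config.json
train_ds = Dataset.get_by_name(ws, "flu_training")  # curated tabular dataset from the DDD sessions

automl_config = AutoMLConfig(
    task="classification",            # the primary problem type uncovered with domain experts
    training_data=train_ds,
    label_column_name="has_flu",      # target named in the ubiquitous language
    primary_metric="AUC_weighted",
    n_cross_validations=5,
    enable_early_stopping=True,
    experiment_timeout_hours=0.5,
)

experiment = Experiment(ws, "flu-diagnosis")
run = experiment.submit(automl_config, show_output=True)

best_run, fitted_model = run.get_output()  # review the winning pipeline with the domain experts
```

Reviewing the winning pipeline and its behavior with the domain experts is what closes the loop back to the DDD sessions described above.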

Overall, by applying DDD principles to the development of the machine learning model, the development team can create a model that is closely aligned with the business needs and can evolve and adapt over time. DDD is not just a process to follow when planning the project, but one to continue throughout the development and deployment of the solution. Involve domain experts in the ML model’s lifecycle as long as it creates value.

DDD and AutoML

AutoML is a process of automating machine learning tasks to solve real-world problems. It offers cloud-native, low- to no-code tools to guide users from a curated dataset to a deployed model. Benefits include accessibility, collaboration, customization, scalability, and efficiency. However, there are potential ethical concerns, lack of control, interpretability, transparency, and limited capabilities.

Domain-Driven Design (DDD) is a software development practice that focuses on understanding and modeling complex organization domains. It encourages developers to think more about the domain problem and desired business outcomes than the tactical approach. DDD is a flexible practice built on principles and concepts, not hard rules. It is a lightweight method of building a common language with domain experts.

Does this mean DDD is right for every AutoML endeavor? Not necessarily. When experimenting with data and working with light predictions, bringing the framework of DDD is likely overkill. But when working with complex domains, like healthcare or modern manufacturing, involving domain experts is common. DDD is useful for conducting valuable discussions, capturing important vocabulary, and uncovering unique domain behaviors. It is a tool to include in the professional’s toolbox, even when working with low- or no-code solutions. Understanding how the organization will use the solution to meet desired outcomes is essential for success. DDD helps bridge the gap between desired outcomes and machine learning models.


Personal Post on My Continuing Journey with IoT Edge Computing


I made the biggest career change of my life recently.

I left the consulting firm I had been employed with for the past four years. It wasn’t an easy decision. I gave up a technical leadership role, I left a lot of people who I loved working with, and I gave up the security of a regular paycheck.

What was behind my decision?

Focus.

Focus was my primary reason for leaving. Two years ago I began a journey to learn and apply everything I would need to know to be a competent IoT Edge Architect. I began that journey with the hopes that my career would be heavily focused on helping organizations solve interesting problems using modern data analytics, IoT systems, and containerized machine learning on the edge.

That never really happened. I had the occasional opportunity to work with machine learning, Kubernetes, and some advanced analytics, but the bulk of interesting data work was done by other people while I focused on platform development.

I didn’t let those IoT skills go stale, though; I did occasional side work with partners focused on IoT, but my day job always came first. Eventually the day job wouldn’t allow time for anything other than the day job, and I had to make a difficult decision: do I stay where I am and try to be happy, or do I pursue a career working with the technology I know actually moves the needle for organizations?

So here I am, completely independent. Ready to focus.

I got pretty good with infrastructure deployment and DevOps, so that’s the majority of the work the firm put me on. And they put me on a lot of it. Systems architecture and design became my everything for a while. Let me be clear that there’s absolutely nothing wrong with systems design work. It can be incredibly rewarding. It’s actually a big part of IoT Edge development. It just became clear to me that I was never going to have the opportunity to help build anything interesting on the infrastructure I was creating.

During my time with the firm, I went from a senior application developer to a cloud systems architect. It took me four years to make it happen, but it was definitely a necessary milestone for the next stage of my journey.

What is the next stage of my journey?

I’m returning my focus to IoT Edge computing.

What I want to do for my career is implement edge and IoT systems from multiple types of systems to multiple cloud and server solutions using modern communication systems, analytics, and security. I mean, for something that’s supposed to be a focus, that’s pretty broad. However, it all fits together nicely for certain, specialized use cases.

I have a lot of experience and a few certifications in Azure, so I have no plans to walk away from Azure any time soon. But I’ve also had the opportunity to work with a few other products, like InfluxDB, Milesight, ChirpStack, and Eclipse Mosquitto, and I don’t want to limit myself to one cloud or one set of solutions. Much of my focus moving forward will be on IoT Edge system design: the theory, the best practices, and the understanding of why certain technologies are used over others.

Basically, in order to improve my IoT Edge expertise, I need to say no to a lot of other types of work. Am I capable of helping an organization migrate all their SQL data systems from on-premises to the cloud? Yes, it’s completely within my wheelhouse. Could I build a distributed e-commerce solution using .Net applications in Docker containers for high availability using the best security Azure has to offer? Yes, also within my skillset. Will I take on any of that work? No. I won’t. And that’s the point I’ve reached in my career. I’m going to be very selective about the type of work I take on, and the type of clients who I work with.

That’s my goal. Focus. Be one of the best.

What can you expect from this blog?

It won’t change too much, but I will create more content. I do like doing technical deep dives, so you will see more of these. I also like to explore use cases. You’ll see more of that, especially in the Controlled Environment Agriculture industry. I believe this is an area that will need more help as our environment undergoes more changes in the coming years. If we want food in the future, we will need to know how to control our environments in a way that is economical and sustainable.

I will also write more about IoT architectures for multiple clouds and scenarios: sensors, endpoints, and power management. I want to look at how Claude Shannon’s Information Theory shapes the way we communicate between the cloud and devices. I will probably write far more about networking than you want to read, but it’s an area that I used to hate and have now grown unreasonably in love with. Obviously, there will be lots of discussion around Edge Computing, protocols, Fog computing, Augmented Reality, Machine Learning, MLOps, DevOps, EdgeOps, and of course security.

That’s the technical side, but I also want to start working more with the community. DFW, Texas has quite the community of IoT organizations and engineers. I hope to connect with these individuals and organizations, capture some of the ways I can help them (or they can help me), and record those outcomes here.

What about money?

Ah, yes. I do actually have bills to pay, so I will need to make money. Luckily, I’m in a financial position where I don’t necessarily need to earn a full-time income immediately. I’m also fortunate enough to have work from one of my business partners that fits directly within my goals: we’re building an InfluxDB solution for some time series data and using pandas to build forecasting. I’ve also had my own LLC for a few years now, so the business side of things won’t be a hurdle.

But I do have additional plans. Next month I’m presenting a few ways we can partner together if you are interested in working on an IoT Edge project. I’m opening my calendar to a small group of people for bookings through my company, Green Nanny LLC. That’s a name you’ll see me mention more in the future as I build it out to its full intention.

Here are just a few of the services I’ll offer:

  • Assessments – are you ready for IoT Edge? Do you know what it is and how it applies to modern, distributed solutions? What does it look like, operationally? This helps you make better decisions and pick a direction for next steps.
  • Architecture design sessions – let’s think through the art of the possible using machine learning, IoT, and modern data analytics. What does your future system need to look like to support edge workloads?
  • Briefings – if we were to implement a project, what would it take? Do you need a proof-of-value to start or are you ready for a well-architected IoT solution?
  • Implementations – how can I help your team implement an IoT Edge solution?
  • Proof of value (POV) – let’s see if we can create a proof of value in a very short period of time.
  • Team training – how can I help your team learn how to implement IoT Edge?
  • Well-architected review and recommendations – can we improve your existing solution? Do you need help with reliability, cost optimization, security, operational excellence, or performance?
  • Managed Service – what are some options for managing your IoT products using a managed services provider? I don’t provide this service myself, but I still have many connections who can help make this a reality.
  • Retainer – Do you just need someone with a lot of IoT Edge knowledge to bounce questions off of? Not sure where you want to start on your own IoT Edge journey, but would like a guide?

I’m excited for the future

I think and feel that I made the right decision for this move at the right time. My skills as an Azure Architect have put me in an excellent position to transition into something more specialized. The consulting work I did with the firm clarified the type of work I like to do, and the type of work that I’m good at. I see a lot of promise in my area of focus and a lot of financial opportunity for myself, my partners, and the clients who I will serve.


I Read and Summarized the 3rd Edition of IoT Signals So You Don’t Have to

This report is built on survey data gathered by the research company Hypothesis. It is a combined effort between Microsoft and Hypothesis to capture the current state of IoT from the view of business leaders in four sectors: manufacturing, energy, mobility, and smart places.

The survey was multi-national and the report includes data captured from in-depth interviews.

Things to know about IoT in 2021

The following are high-level conclusions drawn from the interviews and survey data:

  • IoT continues to drive organizations toward a more productive future
  • COVID-19 has accelerated IoT strategies and fueled business growth
  • AI, Edge Computing, and Digital Twins are essential to advance IoT strategies
  • Although IoT projects are maturing, technological complexity persists
  • Organizations are keeping a close eye on data security

Who they talked to

  • Business decision makers, developers, and IT decision makers who work at enterprise-size companies with greater than 1k employees
    • 71% were familiar with IoT
    • 95% of those familiar have influence and decision-making power over IoT strategies
      • 10% of those familiar were not in adoption of IoT
      • 90% of those familiar were in adoption of IoT

Overall Research Learnings

Big Picture

This year, IoT continues to be widely adopted. 90% of organizations surveyed are IoT adopters. IoT projects can be categorized into four stages:

  • Learn
  • Trial / Proof of Concept
  • Purchase
  • Use

Of the 90%, at least 82% have a project that reached the “use” stage.

The state of most projects overall:

  • 29% in Learn
  • 25% in Trial / POC
  • 22% in Purchase
  • 25% in Use

IoT adoption and value globally (Australia, Italy, and the US lead as primary adopters)

  • 90% of the surveyed leaders in countries fitting criteria are adopters
  • 25% have projects in use
  • Average time to reach “use” is 12 months
  • 66% plan to use IoT more in the next 2 years

IoT Adoption and Value by Industry

  • Manufacturing
    • 91% of the surveyed leaders in countries fitting criteria are adopters
    • 26% have projects in use
    • Average time to reach “use” is 13 months
    • 68% plan to use IoT more in the next 2 years
  • Energy
    • 85% of the surveyed leaders in countries fitting criteria are adopters
    • 22% have projects in use
    • Average time to reach “use” is 15 months
    • 61% plan to use IoT more in the next 2 years
  • Mobility
    • 91% of the surveyed leaders in countries fitting criteria are adopters
    • 23% have projects in use
    • Average time to reach “use” is 14 months
    • 61% plan to use IoT more in the next 2 years
  • Smart Places
    • 94% of the surveyed leaders in countries fitting criteria are adopters
    • 24% have projects in use
    • Average time to reach “use” is 13 months
    • 69% plan to use IoT more in the next 2 years

Why Adopt IoT

Overall Top 5 reasons:

  • Quality Assurance: 43%
  • Cloud Security: 42%
  • Device and Asset Security: 40%
  • Operations Optimization: 40%
  • Employee Productivity: 35%

The report includes evidence that companies who employ IoT to improve products and services see a higher increase in overall ROI.

Manufacturing Top 5:

  • Quality and compliance
  • Industrial automation
  • Production flow monitoring
  • Production planning and scheduling
  • Supply chain and logistics

Energy (Power and Utilities) Top 5:

  • Smart grid automation
  • Grid asset maintenance
  • Remote infrastructure maintenance
  • Smart metering
  • Workplace safety

Energy (Oil and Gas)

  • Workplace safety
  • Employee safety
  • Remote infrastructure maintenance
  • Emissions monitoring and reduction
  • Asset and predictive maintenance

Mobility

  • Inventory tracking and warehousing
  • Manufacturing operation efficiency
  • Surveillance and safety
  • Remote commands
  • Fleet management

Smart places

  • Productivity enablement and workplace analysis
  • Building safety
  • Predictive maintenance
  • Regulations and compliance management
  • Space management and optimization

Benefits of IoT

Top 5 benefits organizations are reaping from IoT

  • Increases in efficiency of operations
  • Improved safety conditions
  • Allows employees to be more productive
  • Allows for better optimization of tools and equipment
  • Reduces chance for human error

Common measures of success in IoT

  • Quality
  • Security
  • Production Efficiency
  • Reliability
  • Cost efficiency

Less common measures of success

  • More informed decision making
  • Direct impact on increased revenue
  • Sustainability
  • % of project deployed using IoT

Challenges of IoT

Top 5

  • Still implementing our current solution
  • Security risk isn’t worth it
  • Want to work out existing and future challenges before adding or using IoT more
  • Too complex to implement because of technology demands
  • Too complex to implement because of business transformation needed

Top 5 reasons POCs fail

  • High cost of scaling
  • Lack of necessary technology
  • Pilots demonstrate unclear business value or ROI
  • Too many platforms to test
  • Pilot takes too long to deploy

Top 5 security concerns

  • Ensuring data privacy
  • Ensuring network-level security
  • Security endpoints for each IoT device
  • Tracking and managing each IoT device
  • Making sure all existing software is updated

The report includes a section on best practices, and notes that despite security being a big concern, very few are implementing these best practices:

Top 5 best practices

  • Assume breaches at every level of IoT project
  • Analyze dataflows for anomalies and breaches
  • Define trust boundaries between components
  • Implement least privileged access
  • Monitoring 3rd party dependencies for common vulnerabilities

IoT Implementation Strategy

Most of the companies surveyed prefer to work with outsourced resources to implement their IoT strategy. They also prefer bespoke solutions.

Those who outsource see these positive benefits:

  • Increases efficiency of operations
  • Improves safety conditions
  • Reduces chance for human error

Those who do not outsource tend to hit these challenges:

  • Too complex to implement because of business transformation needed
  • Too long to implement
  • No buy-in from senior leadership

Sustainability

Companies with near-term zero-carbon-footprint goals are more motivated to implement IoT than those with longer-range targets.

Impact of COVID-19

When asked whether COVID-19 influenced their IoT investment:

  • 44% more investment
  • 41% stay the same
  • 7% less
  • 4% too early to tell

Emerging Technologies

Those who are adopting IoT are more likely to adopt other innovative technology associated with IoT:

  • Digital Twins
  • Edge Computing
  • AI at the Edge

This is collectively known as AI Edge.

AI Implementation

84% Have a strategy:

  • 31% are implementing
  • 26% developed, but not implemented
  • 26% developing

16% do not have a strategy

  • 11% want to develop
  • 5% have no plans

79% of respondents claim that AI is a core or secondary component of their overall IoT strategy

Top 5 reasons for AI in IoT Adoption:

  • Predictive maintenance
  • Prescriptive maintenance
  • User experience
  • Visual image recognition and interpretation
  • Natural language recognition and processing

Top 5 Barriers to using AI within IoT

  • Too complex to scale
  • Lack of infrastructure
  • Lack of technical knowledge
  • Implementing AI would be too complex
  • Lack of trained personnel

AI Adoption and Value by Industry

Total:

  • 84% – have an AI strategy
  • 31% – Implementing
  • 26% – developed
  • 26% – developing
  • 79% – use AI in IoT solution

Manufacturing:

  • 84% – have an AI strategy
  • 31% – Implementing
  • 23% – developed
  • 30% – developing
  • 83% – use AI in IoT solution

Energy

  • 90% – have an AI strategy
  • 26% – Implementing
  • 28% – developed
  • 36% – developing
  • 89% – use AI in IoT solution

Mobility

  • 81% – have an AI strategy
  • 36% – Implementing
  • 25% – developed
  • 20% – developing
  • 85% – use AI in IoT solution

Smart Places

  • 88% – have an AI strategy
  • 39% – Implementing
  • 28% – developed
  • 21% – developing
  • 75% – use AI in IoT solution

Edge Computing

Edge Computing Implementation Progress

79% have a strategy:

  • 29% implementing
  • 26% developed but not implemented
  • 24% developing

21% do not have a strategy:

  • 15% want to develop
  • 6% have no plans

81% use Edge Computing as a core or secondary component:

  • 42% Core
  • 39% Secondary
  • 18% Considering, not yet adopted
  • 1% not considering

Top 5 Reasons to adopt Edge Computing

  • Cloud security
  • Device and asset security
  • Quality assurance
  • Securing the physical environment
  • Operations Optimization

Top 5 barriers to adoption

  • Lack of architectural guidance
  • Lack of trained personnel
  • Lack of infrastructure
  • Difficulty managing security
  • Lack of clarity on edge hardware choices

Edge Computing Adoption and Value by Industry

Total:

  • 79% – Have Edge Computing strategy
  • 29% – implementing
  • 26% – developed
  • 24% – developing
  • 81% – Use Edge Computing in IoT Solutions

Manufacturing:

  • 83% – Have Edge Computing strategy
  • 37% – implementing
  • 28% – developed
  • 18% – developing
  • 77% – Use Edge Computing in IoT Solutions

Energy:

  • 85% – Have Edge Computing strategy
  • 38% – implementing
  • 25% – developed
  • 23% – developing
  • 85% – Use Edge Computing in IoT Solutions

Mobility:

  • 85% – Have Edge Computing strategy
  • 18% – implementing
  • 30% – developed
  • 37% – developing
  • 88% – Use Edge Computing in IoT Solutions

Smart Places:

  • 85% – Have Edge Computing strategy
  • 29% – implementing
  • 26% – developed
  • 30% – developing
  • 83% – Use Edge Computing in IoT Solutions

Digital Twins

77% have a strategy:

  • 24% implementing
  • 29% developed, but not implemented
  • 24% developing

23% do not have a strategy:

  • 14% want to develop
  • 9% have no plans

81% use Digital Twins as a core or secondary component of their IoT solutions:

  • 41% feature as core component
  • 40% feature as secondary component
  • 18% considering, but not yet adopted
  • 1% not considering

Top 5 benefits of using DT within IoT:

  • Improve overall quality
  • Increase revenue
  • Reduce operations costs
  • Enhance warranty cost and services
  • Reduce time to market for a new product

Top 5 barriers:

  • Challenges managing the value of data collected
  • Complexity of systems needed to handle digital twins
  • Integration challenges
  • Lack of trained personnel
  • Challenges modeling the environment

Digital Twins Adoption and Value by Industry

Total:

  • 77% – have a DT strategy
  • 24% – implementing
  • 29% – developed
  • 24% – developing
  • 81% – use DT in IoT solution

Manufacturing:

  • 79% – have a DT strategy
  • 31% – implementing
  • 25% – developed
  • 23% – developing
  • 86% – use DT in IoT solution

Energy:

  • 79% – have a DT strategy
  • 26% – implementing
  • 29% – developed
  • 24% – developing
  • 82% – use DT in IoT solution

Mobility:

  • 76% – have a DT strategy
  • 15% – implementing
  • 39% – developed
  • 23% – developing
  • 77% – use DT in IoT solution

Smart Places

  • 82% – have a DT strategy
  • 27% – implementing
  • 35% – developed
  • 22% – developing
  • 85% – use DT in IoT solution

By Industry

Smart Places

94% IoT Adopters

  • 27% – learn
  • 25% – POC
  • 25% – Purchase
  • 24% – Use

Top Benefits:

  • Increase the efficiency of operations
  • Improves safety conditions
  • Allows for better optimization of tools and equipment

Top 5 reasons for adoption:

  • Productivity enablement
  • Building safety
  • Predictive maintenance
  • Space management and optimization
  • Regulation and compliance management

Top 5 challenges

  • Still implementing current solution
  • Security risk isn’t worth it
  • Too complex to implement because of the need for business transformation
  • Want to work out existing and future challenges before adding or using IoT more
  • Too complex to implement because of technology demands

Manufacturing

91% IoT Adopters

  • 27% – Learn
  • 26% – POC
  • 21% – Purchase
  • 26% – Use

Top Benefits

  • Increases the efficiency of operations
  • Increases production capacity
  • Reduces chance for human error

Top 5 Reasons for Adoption:

  • Quality and compliance
  • Industrial automation
  • Production flow monitoring
  • Production planning and scheduling
  • Supply chain and logistics

Top 5 Challenges to using IoT more

  • Still implementing current solution
  • Too complex to implement because of technology demands
  • Security risk isn’t worth it
  • Want to work out challenges before adding or using IoT more
  • Don’t have human resources to implement or manage

Mobility

91% IoT Adopters:

  • 30% – Learn
  • 26% – POC
  • 21% – Purchase
  • 23% – Use

Top benefits of IoT:

  • Increase efficiency of operations
  • Allows employees to be more productive
  • Improves safety conditions and increases production capacity

Top 5 Reasons for Adoption

  • Inventory tracking and warehousing
  • Manufacturing operations efficiency
  • Surveillance and safety
  • Remote commands
  • Fleet management

Top 5 challenges to using IoT More:

  • Want to work out challenges before adding or using IoT more
  • Too complex to implement because of technology demands
  • Still implementing our current solutions
  • Security risk isn’t worth it
  • Too complex to implement because of business transformation needed

Energy

80% IoT Adopters (Power and Utilities)

  • 28% – Learn
  • 26% – POC
  • 23% – Purchase
  • 23% – Use

Top Benefits

  • Increase the efficiency of operations
  • Increases production capacity
  • Allows employees to be more productive

Top 5 reasons for adoption:

  • Smart grid automation
  • Grid asset maintenance
  • Remote infrastructure maintenance
  • Smart metering
  • Workplace safety

Top Challenges:

  • Too complex because of technology demands
  • Security risk isn’t worth it
  • Don’t have human resources to implement and manage

94% IoT Adopters (Oil & Gas)

  • 28% – Learn
  • 27% – POC
  • 24% – Purchase
  • 20% – Use

Top Benefits

  • Increase customer satisfaction
  • Improves business decision-making
  • Increases production capacity and the efficiency of operations

Top 5 reasons for adoption:

  • Workplace safety
  • Employee satisfaction
  • Remote infrastructure maintenance
  • Emissions monitoring and reduction
  • Asset and predictive maintenance

Top challenges:

  • Lack of technical knowledge
  • Don’t know enough
  • Too complex to implement because of business transformation needed

Final Thoughts

Things worth noting:

  • IoT is not going away, in fact more money, time, and investment goes into it each year
  • Most organizations are looking to add AI, Edge Computing, and Digital Twins to their solutions
  • Many organizations are outsourcing their IoT work and seeing more benefits because of it
  • Top challenges are around knowledge, skill, security, and implementation at scale

The original report


How do you use AI to improve reliability?

If you’re an IT service provider, systems reliability is one of the concerns high on your list. Back in my earlier days of development, I worked for a medium-sized electronics parts company. They had the unique business model of selling very small parts, like transistors and capacitors, as individual units. Back before the whole Maker movement took hold, it used to be nearly impossible to buy these small parts without buying a giant spool. The manufacturers of these parts wanted them sold by the hundreds and thousands, not ten or twenty.

So the website we maintained sold thousands of such electronic parts and tools all over the world. We would even buy bulk from China, and then turn around and sell the individual parts back to China with a significant markup. All that to say, site reliability was a huge concern. Enough so that one day when the site was down, the president of the company came to our little development corner and paced back and forth while we did all we could to get our beast of a monolithic .NET 2.0 web application to behave under a high traffic load.

I learned the true value of systems reliability that day. The president and CEO of the company raced Porsches. He loved Porsches. We had pictures of Porsches all over the executive area of the company. I think he had at least two or three that were his regular drives and likely the same number modified for racing. Porsches were his thing. So when he told us that each half hour the site was down cost the company as much as a brand new Porsche (his words were closer to, “you guys are literally crashing a brand new Porsche into the wall every thirty minutes this site is down”), we understood the severity of our failing application.


It was then that the value and importance of site reliability really solidified in my mind. It’s something that, as a solutions developer, I regularly take into account in my designs. But can AI help me do a better job? Can I make a system even more reliable by adding AI to it?

What is AIOps? Isn’t it the same as MLOps or DevOps or Triceratops?

AIOps is, in short, artificial intelligence for IT operations. Enterprise systems create a lot of data through their various logs and system events. These logs are sometimes centralized, if you have a unified logging strategy, but most of the time they live on different servers, in the cloud, on-premises, and even on IoT and Edge devices. The goal of AIOps is to use that data to produce a view of each asset: its dependencies, its processes, its failures, and an overall picture of how its performance could be improved.

AIOps can help by automating common tasks, recognizing serious issues, and streamlining communication between the different areas of responsibility within organizations. Sounds magical, right? Where do I get an AIOps? Can I just plug it in and start getting these benefits?

Well, not quite. Like many of the best solutions in IT, it’s not a switch you can just turn on or a box you can just add to your network. Just like DevOps, AIOps is a journey. It’s a discipline. It’s a process. I know, nothing is ever as easy as it seems it should be, but for some organizations the value of AIOps does outweigh the drawbacks.


Where does AIOps fit within an operations team?

AIOps can help out in the following areas of your organization:

  • Systems
  • DevOps
  • Customer Support

For systems, the most common use is for hardware systems failure predictions. For most of us in the cloud this is something we don’t generally consider. But if you’re using a hybrid model and still have some of those old rack mounted servers running important mission critical jobs, using AIOps for hardware failure prediction is likely something you’ll care about. AIOps can also help with device and systems provisioning. Managing VM pools and container clusters based on website traffic or workloads is easily within the grasp of a machine learning algorithm.
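
To make the provisioning point a bit more concrete, here is a toy sketch (synthetic traffic, made-up capacity numbers) of how a simple regression forecast could drive the size of a container pool. It is only a sketch of the idea, not a production autoscaler.

```python
# Toy capacity planning: forecast next-hour requests from recent traffic and
# size a container pool accordingly. Traffic series and per-replica capacity
# are invented for the example.
import math
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)  # two weeks of hourly data
traffic = 1000 + 400 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 50, hours.size)

# Use the previous 3 hours as features to predict the next hour.
lags = 3
X = np.column_stack([traffic[i:len(traffic) - lags + i] for i in range(lags)])
y = traffic[lags:]
model = LinearRegression().fit(X, y)

# Forecast the next hour from the three most recent observations.
next_hour = model.predict(traffic[-lags:].reshape(1, -1))[0]

REQUESTS_PER_REPLICA = 250  # hypothetical capacity of one container
replicas = max(2, math.ceil(next_hour / REQUESTS_PER_REPLICA))
print(f"forecast={next_hour:.0f} req/h -> scale pool to {replicas} replicas")
```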

DevOps is probably one of the first places to start experimenting with AIOps. Using AI to aid in deployments, especially if you have hundreds of rollouts of software a day, can help detect anomalies and catch latent issues. Anomaly detection comes into play for your monitoring strategy, and AI is the perfect partner to help with incident management. If any of these are your pain points, you might need to add an AI to your DevOps team.
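
As one hedged example of what AI in the DevOps loop can look like, the sketch below trains an unsupervised anomaly detector on normal operational metrics and flags a post-deployment window that looks wrong. The metrics and thresholds are synthetic stand-ins for whatever your monitoring pipeline collects.

```python
# A minimal AIOps building block: unsupervised anomaly detection over
# operational metrics (error rate, p95 latency, CPU). Data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Columns: error_rate, p95_latency_ms, cpu_percent -- the normal operating window.
normal = np.column_stack([
    rng.normal(0.01, 0.005, 500),
    rng.normal(220, 30, 500),
    rng.normal(55, 10, 500),
])

detector = IsolationForest(contamination=0.02, random_state=7).fit(normal)

# Score a fresh window of metrics after a rollout; -1 flags an anomaly
# worth surfacing to the on-call engineer or gating the deployment on.
new_window = np.array([
    [0.012, 230, 58],   # looks like business as usual
    [0.180, 900, 95],   # error spike plus latency blowup after the deploy
])
print(detector.predict(new_window))  # e.g. [ 1 -1 ]
```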

And of course for customer experience issues around site and system failures, there are bots, decision support systems, and automated communications options that provide greater detail than just a simple alert.

This is just a high-level overview of some of the possible solutions. There are hundreds of ways that AI can not only help your IT operations teams but also reach deeper into your business operations. AI can help monitor industrial equipment for failures and retail systems for security and compliance, and it can help with supply chain optimization.


AI Edge Engineering at University of Oxford

This week I’m wrapping up a continuing education course from Oxford focused on the approaches and tools of the AI Edge Engineering discipline. I’m overwhelmed with how much I’ve learned and how much I still have to learn. Without a doubt, this course will help set the direction of my career moving forward.

I wasn’t new to the concept of AI Edge Engineering. Before entering the course I had already passed my AZ-220 and AI-100, two certifications I saw as fundamental to practicing AEE on the Azure platform. The AZ-220 focuses on Azure’s IoT tools, and the AI-100 on Azure’s AI services and machine learning tools. It’s important to note that the AI-100 primarily focuses on applying existing AI in solutions. It only covers data science at a high level; you might be expected to understand what clustering, categorizing, and regression are, but you aren’t expected to build them from scratch or use any DS tools to build them for the certification test. That’s appropriate for a 100-level certification. Despite having these certifications, I wasn’t ready for the depth we would take on in the course in such a short period.

Luckily, the course used cohort learning as a mechanism to complete some of the more challenging projects. Our group efforts afforded us the opportunity to trade opinions, approaches, and skills to achieve project deliverables. This is also a skill in the AEE field. Few people will have all the skills needed to apply artificial intelligence at the edge, which means that organizations who wish to use AEE will need team members who have varied, specialized skills and knowledge of the bigger picture of AEE. Our projects made this very clear to me.

I won’t go into detail on what we learned and how we learned it, because much of that is the IP of the course and Oxford, but I will say that we dove deep into the following general areas:

  • The basic concepts of delivering AI to the edge using IoT and Edge platforms
  • Cloud – all the clouds (Azure, AWS, GCP)
  • Cloud concepts like PaaS, SaaS, and Cloud Native Services
  • All the big pieces of Machine Learning and Machine Learning concepts
  • 5G networks
  • Device design and development
  • DevOps, MLOps, and even AIOps

That doesn’t even cover all the guest lectures and the demonstrations of applied AI and AEE. And without the fantastic instruction of course director Ajit Jaokar and his amazing team of tutors and instructors, we wouldn’t have been able to learn so much in such a short period of time. Ajit’s passion for this specialty was clear in every class, which made it a joy to attend. It was definitely worth waking up at 3 AM to attend a class remotely just to spend the time with others who have such a strong appreciation for this burgeoning discipline. This course succeeds because of the people behind it. I have to include the choice of students in that success as well. My study group was full of passionate, knowledgeable, curious, and delightful professionals. We plan to stay in contact well after the course ends. I expect to see amazing accomplishments from their careers.

We wrap up the course on Tuesday and submit all our final homework projects over the next couple of months. I won’t be officially done with the course until May. However, I won’t ever consider myself officially done with AEE, from a learning perspective. I’m taking what I’ve learned from Oxford and building a continuing learning track to master most of the concepts covered in the course.

It’s even clearer to me now that the engineering skill of delivering greater levels of intelligence closer to the source of events and data, so that systems can act on those events and data, will be in higher demand in the coming years. I believe this course has helped me begin building a better roadmap toward mastery of AI Edge Engineering.


The AI Edge Talent Stack

Building a Talent Stack

Last week I started my studies with the Oxford continuing education course, Artificial Intelligence: Cloud and Edge Implementations (online). For me to attend this class, I have to wake up at 3 am on Saturday mornings, but so far that little sacrifice has been well worth it. What I’m hoping to get out of this course are a few key skills. First, I want a better understanding of data science, taught by people who know it, practice it, and apply it to real-world scenarios. Recognizing cats and hot dogs is a good start, but I want AI that helps people derive greater value from their existing information systems. Additionally, I want to take my IoT skills to the next level. And of course these things come together to build the AI Edge MLOps process.

I was recently introduced to the concept of a Talent Stack from Linda Zhang’s blog and it’s something I’ve been unknowingly building over the last year. I’m seeing that stack start to look like the following:

  • IoT device programming
  • IoT networking
  • IoT cloud architecture
  • Lambda data architectures
  • Data analysis
  • Machine Learning
  • Orchestrated container management (k8s)
  • Python, Scala, and Apache Spark
  • MLOps
  • Reactive Engineering
  • Domain Driven Design (Event Storming)
  • Systems Thinking
  • Wardley Mapping

The Oxford course touches many of those areas where I want to build skills.

What does this stack do?

A diagram depicting the relationship of cloud computing to fog computing to digital twins

It’s clear that technology is pushing us away from direct interaction with a single machine, like a mobile device or a laptop, and closer toward an environment of connected devices. I don’t believe that traditional user interfaces will necessarily go away, but their role in capturing data will be diminished when AI on devices allows us to better communicate with the smart objects around us. We will likely always use some type of mobile device, and we will likely always have some type of personal computer powerful enough to perform more demanding tasks, but when we can, we will interact with smart devices.

I believe these smart devices will be found in the places we work, where we shop, and where we spend our leisure time. Businesses, cities, and the entertainment industry will need people who are skilled in building safe, secure, unbiased (but perhaps opinionated) AI, integrating that AI with cloud and fog computing, and using digital twins of real objects to interact with reality.

To help add value to this new world I’m learning all the skills necessary to help teams and businesses achieve these goals. This is an area where I see a lot of potential for growth.

As I grow on this journey I want to help other developers who are interested in taking on these new challenges. I’m focusing some of my future blog posts on this subject so that we can start building a community around the concepts related to AI Edge Engineering. As that matures, I’ll share more of what that will look like. Until then, keep reading to follow along on my adventure.


AZ-220 and AI-100 = AI Edge Engineer

Microsoft doesn’t have an official certification for an AI Edge Engineer yet, but I have a feeling one is on the horizon. The term Edge continues to insert itself into a number of products. There’s even a version of Azure Stack, Azure Stack Edge, developed specifically for Edge compute devices. If this whole Edge concept is new to you, don’t worry, you’ll hear a lot more about it in the coming years.

IoT Alone isn’t Enough

IoT isn’t new. It’s been with us for 20 years; longer if you count the many systems developed for the manufacturing and agriculture industries. The idea is to gather telemetry data, things like machine vibrations, humidity levels, or light levels, and send that data to a central data store for processing. The processing tends to involve searching for actionable insights. Can I find a correlation between machine vibration levels and machine breakdowns? If I can, I can add predictive maintenance triggered by vibration. The same goes for watering plants in a field or adjusting light levels in an office building.

IoT is still valid for these operations, however the demand is increasing for devices and intelligent actions that happen closer to the source. Why send constant streams of telemetry data to the cloud when you know the parameters that trigger an event? Why not bring the event trigger closer to the device? And what if you need greater levels of intelligence, like vehicle recognition in a shipping yard or warning zones on drilling platforms that recognize when a human not wearing the appropriate safety attire has crossed into that zone? For these types of situations you need something more sophisticated than the basic telemetry gathering tools. You need devices with the sort of AI typically hosted in the cloud brought to the edge of the cloud. Edge devices. More powerful systems that can serve as telemetry gathering systems, decision systems, and routing gateways. This is what Edge Computing is all about.
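
To illustrate the "bring the trigger closer to the device" idea, here is a deliberately simple, hypothetical Python sketch of edge-side filtering: raw telemetry stays on the device, and only readings that cross an alert threshold become upstream events. A real Edge module would publish the event through your IoT platform’s SDK instead of printing it, and the threshold and field names are invented for the example.

```python
# Toy edge-side filter: only emit an event upstream when a reading crosses
# a threshold. Values and names are hypothetical.
from dataclasses import dataclass

VIBRATION_ALERT_MM_S = 7.1  # hypothetical alert threshold (mm/s RMS)


@dataclass
class Telemetry:
    machine_id: str
    vibration_mm_s: float


def edge_filter(readings):
    """Yield only the readings that should leave the device."""
    for r in readings:
        if r.vibration_mm_s >= VIBRATION_ALERT_MM_S:
            yield {"machine_id": r.machine_id,
                   "event": "vibration_alert",
                   "value": r.vibration_mm_s}


stream = [Telemetry("press-01", 2.3), Telemetry("press-01", 8.4)]
for event in edge_filter(stream):
    print(event)  # only the 8.4 mm/s reading becomes an upstream event
```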

Developing for The Edge

Microsoft has developed a suite of tools that make working with edge technologies fairly simple. It’s not outside the realm of possibility that in the next few years Edge developers and Edge Engineers will be in high demand. The technologies used today are IoT Hub, the IoT Edge development kits, Edge-enabled devices (which can be as simple as a PC), and containers like Docker. Cognitive services like Computer Vision can be loaded onto these devices in the form of modules that are managed by Edge Agents and routed to the cloud or other devices using Edge Hubs and routing. These developers will need an understanding of developing for IoT devices as well as developing for the cloud. Azure provides many services for making this possible. There are services like IoT Hub and the Device Provisioning Service that make managing devices and integrating with backend services possible. There’s also a SaaS platform called IoT Central that covers the majority of the use cases for specific industries that want to begin their journey into IoT Edge data processing.

These are the platforms that enable AI at the Edge using Microsoft Azure technologies, but these certainly aren’t the only tools that are taking advantage of this particular pattern. AWS, Google, NVIDIA, and many other manufacturers and services have staked a claim in this burgeoning field.

A New Type Of DevOps

DevOps is the word that has come to replace agile as the term that best describes the marriage of development and operations in the service of quickly delivering working systems to market. MLOps is a similar term used to describe the process of bringing machine learning and AI to market. I’m not aware of a term specific to Edge development (perhaps EdgeOps should be the label), but what’s important to understand is that this type of development has its own transformative workflow.

For a greenfield engagement, you probably won’t start with AI. You’ll probably start with telemetry in order to gain insights and develop actionable plans based on the data. This means you need a deployment pipeline that allows a device to go from being a simple hub or gateway for telemetry data to a more sophisticated application platform. Intelligence from the cloud moves from cloud services to Edge containers that are utilized by the Edge devices. This requires an evolutionary Edge network that can grow more responsive and cost effective as the business evolves its goals and strategies around gathered data.

This is an enormous opportunity for developers and engineers, as well as value-added providers, delivery partners, and definitely the organizations who embrace this new way to capture and manage data and events.

Why Did I Follow this Path?

For most of my career I’ve worked with different organizations that have struggled with their data. There are a large number of anti-patterns deployed by companies that make their data nothing more than a trailing-indicator asset. It’s rarely a form of strategic information and even more rarely an active asset helping to shorten the loop between value delivery and cash flow. I decided not too long ago that I wanted the focus of my career to be toward that goal: helping organizations use their systems and data to shorten that loop. This isn’t always a technology solution. Sometimes achieving this goal requires organizational change. I’m regularly looking for people, tools, and processes that I can utilize to help deliver on that goal.

Becoming an AI Edge Engineer as well as diving deeper into the world of Data Science seem the natural extension of that goal. If you are interested in exploring this path, you should take a look at Microsoft’s related training materials as well as the Oxford course mentioned on the training page: https://docs.microsoft.com/en-us/learn/paths/ai-edge-engineer/

Additionally, I’ll continue to post my lab work related to IoT Edge development and my thoughts on how this applies to helping businesses use their data to achieve their financial goals on this blog.


Passing the AI-100 Exam

I recently passed the AI-100 Certification Test from Microsoft and wanted to take the opportunity to discuss my experience studying for the test, passing the test, and finally how I plan to apply what I’ve learned to my larger career journey of helping organizations build better systems.

First, I actually had no intention of taking the AI-100 this soon. I had planned to take it, but I was thinking more along the lines of next year when I had more time to work with more AI projects than I had. My experience was pretty much limited to building out a proof of concept Form Recognizer solution for a client. The opportunity to take the test came up last month when it appeared we needed this particular certification to help unlock a benefit around Microsoft Teams.

Basically, one of our team leads reached out to a handful of us with a request to take it and to take it soon. I committed to a month, hoping that would be enough time to get experience with the material in order to pass.

For study materials, I can’t recommend Linux Academy enough. I took Dan Sasse’s AI-100 course on that platform, and it gave me a great overview of everything I needed to study. The practice test for the course was also excellent preparation for the certification.

In addition to Dan’s course, I also worked through a handful of Microsoft’s Learning paths related to Azure Machine Learning Services and Studio. I initially didn’t like ML Studio, but after working with it I began to see the real benefits. These specialized studios are something that have become a trend with Microsoft products. Working with AI you will get used to the various interfaces developed for LUIS, QnA Maker, and so on.

Taking the actual test is always the most nerve-wracking part for me. I’ve been developing for years – I’m an old guy in this field – but I still get sweaty palms when faced with these types of tests. So I had to precondition myself before diving in and taking the test. If you are taking the test, I would suggest you do what I did: take your time.

So here’s my process for taking the test:

  1. Take your time. Take a few deep breaths before each question.
  2. Read the questions slowly and carefully to make sure you understand exactly what the objective is. Microsoft no longer creates “trick” questions, but it does sometimes give more detail than needed to discover the answer.
  3. If it’s a multiple-choice question, begin with the process of elimination. Rule out the answers you’re certain are wrong; that will usually leave you with two.
  4. If you have any doubts about the answers you’ve selected for the question, mark it for review later. There were at least 2 questions I had marked for review that I was able to change because the answer seemed obvious to me when I reviewed them.

Lastly, go into the test with the attitude that everything is going to be fine no matter what. If you pass the test, that’s great! That was the goal. It means you studied enough to know the subject material and were lucky enough to get the right combination of questions to match your knowledge. Yes, I do believe luck has a hand in passing these tests; you can help luck by studying and knowing the materials, but sometimes the questions just don’t align well with your knowledge. Which leads me to the second point: don’t beat yourself up if you fail. Failure is an opportunity to learn. Be sure to look over your results and focus on the areas you did poorly in.

I did great on all my service knowledge, but even though I passed the test I know I have a weakness in deploying AI services. I need to brush up on certain technologies around that part of the discipline.

I count this as a fun certification. You get to work with a lot of cool technology and dive a little into the world of Data Science and Machine Learning. I would encourage anyone who is interested in working with AI to attempt this certification. Even if you don’t plan to be a data scientist, you will likely one day need to work with the output of the data science pipeline—an AI. And Microsoft has given the AI Engineer a lot of great tools to work with.

How do I plan to use this new certification? Well, by developing AI solutions of course. Right now I have a prototype QnA bot created that I’m building out. I only have a web connection created for it now, but the plan is to build it out to other channels like Facebook Messenger, Twilio SMS, and even email. I have also found many uses for the Form Recognizer and the possibilities for LUIS are endless.

Yesterday I even spent the afternoon building out a residential property assessment AI in TensorFlow. And I’m not even a data scientist, so it’s not that hard if you have some development experience and you aren’t too scared of math.

AI will be embedded in everything soon. Edge Computing is on the rise. As a developer, you can’t afford to ignore this sector of the development world. And I think the AI-100 is the perfect way to jump in and start learning more about Artificial Intelligence.