
Personal Post on My Continuing Journey with IoT Edge Computing

Shawn Deggans, personal blog post

I made the biggest career change of my life recently.

I left the consulting firm I had been employed with for the past four years. It wasn’t an easy decision. I gave up a technical leadership role, I left a lot of people who I loved working with, and I gave up the security of a regular paycheck.

What was behind my decision?

Focus.

Focus was my primary reason for leaving. Two years ago I began a journey to learn and apply everything I would need to know to be a competent IoT Edge Architect. I began that journey with the hopes that my career would be heavily focused on helping organizations solve interesting problems using modern data analytics, IoT systems, and containerized machine learning on the edge.

That never really happened. I had the occasional opportunity to work with machine learning, Kubernetes, and some advanced analytics, but the bulk of interesting data work was done by other people while I focused on platform development.

I didn’t let those IoT skills go static, though. I kept doing occasional side work with partners focused on IoT, but my day job always came first. Eventually the day job wouldn’t allow time for anything other than the day job, so I had to make a difficult decision: do I stay where I am and try to be happy, or do I pursue a career working with the technology I know actually moves the needle for organizations?

So here I am, completely independent. Ready to focus.

I got pretty good with infrastructure deployment and DevOps, so that’s the majority of the work the firm put me on. And they put me on a lot of it. Systems architecture and design became my everything for a while. Let me be clear that there’s absolutely nothing wrong with systems design work. It can be incredibly rewarding. It’s actually a big part of IoT Edge development. It just became clear to me that I was never going to have the opportunity to help build anything interesting on the infrastructure I was creating.

During my time with the firm, I went from a senior application developer to a cloud systems architect. It took me four years to make it happen, but it was definitely a necessary milestone for the next stage of my journey.

What is the next stage of my journey?

I’m returning my focus to IoT Edge computing.

What I want to do for my career is implement edge and IoT systems, connecting many types of devices to multiple cloud and server solutions using modern communication, analytics, and security. For something that’s supposed to be a focus, that’s admittedly broad. However, it all fits together nicely for certain specialized use cases.

I have a lot of experience and a few certifications in Azure, so I have no plans to walk away from Azure any time soon. But I’ve also had the opportunity to work with other products like InfluxDB, Milesight, ChirpStack, and Eclipse Mosquitto, and I don’t want to limit myself to one cloud or one set of solutions. Much of my focus moving forward will be on IoT Edge system design: the theory, the best practices, and the understanding of why certain technologies are chosen over others.

Basically, in order to improve my IoT Edge expertise, I need to say no to a lot of other types of work. Am I capable of helping an organization migrate all their SQL data systems from on-premises to the cloud? Yes, it’s completely within my wheelhouse. Could I build a distributed e-commerce solution using .Net applications in Docker containers for high availability using the best security Azure has to offer? Yes, also within my skillset. Will I take on any of that work? No. I won’t. And that’s the point I’ve reached in my career. I’m going to be very selective about the type of work I take on, and the type of clients who I work with.

That’s my goal. Focus. Be one of the best.

What can you expect from this blog?

It won’t change too much, but I will create more content. I do like doing technical deep dives, so you will see more of these. I also like to explore use cases. You’ll see more of that, especially in the Controlled Environment Agriculture industry. I believe this is an area that will need more help as our environment undergoes more changes in the coming years. If we want food in the future, we will need to know how to control our environments in a way that is economical and sustainable.

I will also write more about IoT architectures for multiple clouds and scenarios: sensors, endpoints, and power management. I want to look at how Claude Shannon’s information theory shapes the way we communicate between the cloud and devices. I will probably write far more about networking than you want to read, but it’s an area I used to hate and have since grown unreasonably fond of. Expect plenty of discussion of edge computing, protocols, fog computing, augmented reality, machine learning, MLOps, DevOps, EdgeOps, and of course security.

That’s the technical side, but I also want to start working more with the community. DFW, Texas has quite the community of IoT organizations and engineers. I hope to connect with these individuals and organizations, capture some of the ways I can help them (or they can help me), and record those outcomes here.

What about money?

Ah, yes. I do actually have bills to pay, so I will need to make money. Luckily, I’m in a financial position where I don’t need to earn a full-time income immediately. I’m also fortunate enough to have work from one of my business partners that fits directly within my goals: we’re building an InfluxDB solution for some time series data and using Pandas to build forecasting on top of it. I’ve also had my own LLC for a few years now, so the business side of things won’t be a hurdle.
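As a toy illustration of the kind of forecasting work mentioned above (the real project uses InfluxDB and Pandas; the function name and numbers here are invented for the example), a naive trailing-window forecast looks like this in plain Python:

```python
# A trailing moving average is the simplest baseline forecast for a
# time series: predict the next point as the mean of the last few.
# Any real model (e.g. one built with Pandas) should beat this baseline.

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    if len(series) < window:
        raise ValueError("need at least `window` observations")
    tail = series[-window:]
    return sum(tail) / window

readings = [21.0, 21.4, 21.9, 22.3, 22.8]  # e.g. hourly sensor values
print(moving_average_forecast(readings, window=3))  # mean of the last 3
```

In practice the series would come from an InfluxDB query rather than a literal list, but the baseline idea is the same.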

But I do have additional plans. Next month I’m presenting a few ways that we can partner together, if you are interested in working on an IoT Edge project. I’m opening my calendar to a small group of people for bookings through my company, Green Nanny LLC. That’s a name you’ll see me mention more in the future as I build it out to its full intention.

Here are just a few of the services I’ll offer:

  • Assessments – are you ready for IoT Edge? Do you know what it is and how it applies to modern, distributed solutions? What does it look like, operationally? This helps you make better decisions and pick a direction for next steps.
  • Architecture design sessions – let’s think through the art of the possible using machine learning, IoT, and modern data analytics. What does your future system need to look like to support edge workloads?
  • Briefings – if we were to implement a project, what would it take? Do you need a proof of value to start, or are you ready for a well-architected IoT solution?
  • Implementations – how can I help your team implement an IoT Edge solution?
  • Proof of value (POV) – let’s see if we can create a proof of value in a very short period of time.
  • Team training – how can I help your team learn how to implement IoT Edge?
  • Well-architected review and recommendations – can we improve your existing solution? Do you need help with reliability, cost optimization, security, operational excellence, or performance?
  • Managed service – what are some options for managing your IoT products using a managed services provider? I don’t provide this service myself, but I have many connections who can help make this a reality.
  • Retainer – do you just need someone with a lot of IoT Edge knowledge to bounce questions off of? Not sure where you want to start on your own IoT Edge journey, but would like a guide?

I’m excited for the future

I believe I made the right decision at the right time. My skills as an Azure architect have put me in an excellent position to transition into something more specialized. The consulting work I did with the firm clarified the type of work I like to do and the type of work I’m good at. I see a lot of promise in my area of focus and a lot of financial opportunity for myself, my partners, and the clients I will serve.


Operationalize Machine Learning for Manufacturing

The “smart” manufacturing revolution is here. It has proved its effectiveness in predictive maintenance, quality control, logistics and inventory, product development, and cybersecurity, and, paired with IoT, it expands into robotics, control systems, and digital twins.

Manufacturers who want to build a data science team and operationalize machine learning models must build an MLOps practice. And by must, I mean this is essential, not optional. I’ve seen what happens when a team of unskilled engineers attempts to take shortcuts on the journey to put machine learning into production without MLOps and Responsible AI.

Why do you need MLOps?

First, I think it’s important to understand the benefits of machine learning to manufacturers. I’ve already touched on some of the areas where machine learning is helpful. But let’s talk about the value that MLOps brings to your organization.

  1. It helps reduce the risk of data science by making the work of scientists more transparent, trackable, and understandable. When coupled with a Responsible AI governance policy, MLOps can help manufacturing leaders understand the risks of the models running in their environments.
  2. It generates greater, long-term value. Modern software development has taught us that development doesn’t end with production. Systems must be monitored to ensure they are meeting the needs of the business, even as that business shifts and changes under market pressures. Machine learning is no different. In fact, models must be reviewed and maintained on some regular cadence. Unlike software, which is relatively static in nature, an ML model is code plus data. As data changes, so must the model. It is an evolving system and must be observed and maintained even after production deployment.
  3. Scaling is one of the big problems of ML. It’s fairly easy to monitor one or two models in production manually, but once you scale to hundreds, even thousands of models, you will need systems in place to support the required maintenance of those models.
  4. Maintenance is another thing to consider. Models must be updated, sometimes many times a day. This requires a delivery pipeline from initial hypothesis to production that is fast, scalable, and includes all the tests and metrics-gathering systems needed to monitor and control the lifecycle of the model.
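To make the monitoring point concrete, here is a minimal sketch of a degradation check. The threshold and accuracy numbers are invented for the example; a real MLOps pipeline would compute live accuracy from labeled production data and feed a flag like this into an alerting or retraining workflow.

```python
# Compare a model's live accuracy against the accuracy it had when it
# was promoted to production, and flag it when the drop exceeds a
# tolerance. This is the simplest possible form of model monitoring.

def needs_retraining(baseline_accuracy, live_accuracy, tolerance=0.05):
    """Return True when live accuracy has dropped more than `tolerance`
    below the model's accuracy at deployment time."""
    return (baseline_accuracy - live_accuracy) > tolerance

print(needs_retraining(0.92, 0.90))  # small dip, within tolerance -> False
print(needs_retraining(0.92, 0.81))  # degraded -> True, trigger retraining
```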

The Machine Learning Lifecycle and Why It’s Challenging

I’m not going to dive into the machine learning lifecycle in this article, but I will point out a few areas that make it challenging for a manufacturer’s traditional IT department.

  • Data is a dependency. Very few ML models are of any value without a powerful data system to provide data for the learning portion of machine learning. Manufacturers often have data. Lots of data, in fact. Many manufacturers have been collecting telemetry data from operational technology for years, but often that data is locked away in proprietary systems or behind multiple firewalls. There are tools for working with systems on-premises, and many data scientists are happy building experiments on their laptops, but this doesn’t scale to larger datasets or streaming data. Ideally, some type of lambda architecture will be needed to gather, prepare, and process the data for machine learning. Often it’s preferred that this system live in the cloud, where it’s accessible to entire teams of data scientists and data engineers and can leverage the compute power of the cloud to build and operationalize ML models.
  • Domain language issues. Data science brings a whole new set of tools and ideas to the traditional IT department. Data scientists are rarely software developers. They work in notebooks and use programming languages like R, Python, and Julia. If your IT department isn’t used to these tools and processes, moving from a Jupyter Notebook to an inferencing endpoint hosted on Kubernetes might be a challenge.
  • Again, data scientists are not programmers. Some can program, but the code they write tends to support training and testing ML models. It’s not generally written to be robust enough to stand up to the kind of requirements demanded of a microservice. Your data scientists will likely need support from engineers who understand the deployment side of data science. You will need a team of people to support your ML journey.
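The last point above is where engineering support usually lands first: wrapping a data scientist’s bare prediction function with the input validation a production endpoint needs. A minimal sketch (the model and its two-feature input are stand-ins invented for the example):

```python
# A bare model function, as a data scientist might hand it over.
def predict(features):
    # stand-in for model.predict(); here just a fixed linear score
    return 0.5 * features[0] + 0.25 * features[1]

def serve_prediction(payload):
    """Validate a request payload before calling the model, returning a
    structured error instead of letting bad input crash the service."""
    features = payload.get("features")
    if (not isinstance(features, list) or len(features) != 2
            or not all(isinstance(x, (int, float)) for x in features)):
        return {"error": "expected {'features': [x, y]} with two numbers"}
    return {"prediction": predict(features)}

print(serve_prediction({"features": [2.0, 4.0]}))  # {'prediction': 2.0}
print(serve_prediction({"features": "oops"}))      # structured error, no crash
```

In a real deployment, `serve_prediction` would sit behind an HTTP framework on Kubernetes, but the validation-first shape is the same.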

MLOps and DevOps

MLOps and DevOps have a lot in common. They focus on some of the same things, so if you already have a robust DevOps process in place in your organization, taking on MLOps will be less of a challenge. However, if your IT department has never heard of DevOps, you might be in for a harder journey than expected.

  • DevOps is a proponent of automation. The idea is to reduce the friction between software development and production deployment by adding systems that automate the manual steps of the deployment process. MLOps has the same goals.
  • DevOps is a team practice built on trust within and between teams. This includes increased collaboration, communication, and a shared understanding of the service lifecycle. Developers understand the requirements and rigors of operations and do their best to bake in what’s needed to support the operations members of their team. Operations understand the business value of new features and systems and know how to monitor telemetry to ensure risks are minimized. MLOps shares these same ideas, with an even greater emphasis on monitoring the model in production.
  • Both MLOps and DevOps prioritize continuous deployment, experimentation, observability, and resilient systems.

The big difference between the two practices:

  • DevOps tends to deliver code
  • MLOps delivers code and data
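One practical consequence of “code and data” as the deliverable: a model’s version should change when either its training code or its training data changes. Hashing both together is one simple way to do that (a sketch, not a prescribed tool; the inputs are invented for the example):

```python
import hashlib

def model_version(training_code: str, training_data: bytes) -> str:
    """Derive a short version id from the training code text and the
    training data bytes, so a change to either yields a new version."""
    h = hashlib.sha256()
    h.update(training_code.encode("utf-8"))
    h.update(training_data)
    return h.hexdigest()[:12]

v1 = model_version("fit(x, y)", b"2024 sensor rows")
v2 = model_version("fit(x, y)", b"2025 sensor rows")  # same code, new data
print(v1 != v2)  # a data change alone produces a new model version
```

Real model registries track this lineage more richly, but the principle is the same: unlike plain software, retraining on new data is itself a release.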

Use MLOps to Reduce Risk

MLOps can help protect you from risks associated with machine learning. As a practice, MLOps encourages many of the same things DevOps does, like teamwork, open and honest visibility into work, standard operating procedures, and a break from traditional siloed IT operations.

  • Personnel dependency risks. What if your data scientist leaves the organization? Can you hire someone to take over that role?
  • Model accuracy can degrade over time. Without systems in place to monitor models and quickly deploy new ones, you might find that your systems are compromised because they are delivering bad predictions or responding incorrectly to certain events.
  • What if there is a high dependency on the model and it’s not available? When systems depend on the accuracy of a model and that model isn’t available, could it bring production to a halt? Or worse, put people’s lives at risk? These considerations must be weighed before putting an ML model into production.
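One common mitigation for the availability risk above is graceful degradation: when the model can’t be reached, fall back to a conservative rule so production doesn’t halt. A minimal sketch, where both the “model” and the fallback rule are hypothetical:

```python
def model_predict(temperature):
    # stand-in for a remote inference call that may fail
    raise ConnectionError("inference endpoint unreachable")

def safe_predict(temperature, fallback_threshold=80.0):
    """Try the model first; on failure, apply a simple conservative rule
    (flag any reading above the threshold) so operations can continue."""
    try:
        return model_predict(temperature)
    except ConnectionError:
        return temperature > fallback_threshold  # rule-based fallback

print(safe_predict(95.0))  # model down -> fallback still flags the reading
```

The fallback will be cruder than the model, which is exactly the trade-off: degraded answers beat no answers when safety and uptime are at stake.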

Taking on MLOps

MLOps is not free. There are operational costs associated with the practice, but this is not a cost you can avoid. Too much depends on the accuracy of your models, especially in production, to take on those risks without the assurances a good MLOps practice brings to manufacturers.

Here are the important things to remember:

  • Pushing a model to production is not a final step. It’s one step in the lifecycle of a model.
  • Monitor model performance and make sure it’s meeting the accuracy requirements expected.
  • Use Responsible AI practices:
    • Intentionality
    • Accountability
    • Fairness
    • Privacy
    • Security
  • Again, MLOps is not optional, a nice-to-have, or an afterthought. If you are releasing models to production, you must start an MLOps practice.