
Personal Post on My Continuing Journey with IoT Edge Computing


I made the biggest career change of my life recently.

I left the consulting firm I had been employed with for the past four years. It wasn’t an easy decision. I gave up a technical leadership role, I left a lot of people who I loved working with, and I gave up the security of a regular paycheck.

What was behind my decision?

Focus.

Focus was my primary reason for leaving. Two years ago I began a journey to learn and apply everything I would need to know to be a competent IoT Edge Architect. I began that journey with the hopes that my career would be heavily focused on helping organizations solve interesting problems using modern data analytics, IoT systems, and containerized machine learning on the edge.

That never really happened. I had the occasional opportunity to work with machine learning, Kubernetes, and some advanced analytics, but the bulk of interesting data work was done by other people while I focused on platform development.

I didn’t let those IoT skills go stale, though. I kept doing occasional side work with partners focused on IoT, but my day job always came first, and it reached the point where the day job wouldn’t allow time for anything else. So I had to make a difficult decision: do I stay where I am and try to be happy, or do I pursue a career working with the technology I know actually moves the needle for organizations?

So here I am, completely independent. Ready to focus.

I got pretty good with infrastructure deployment and DevOps, so that’s the majority of the work the firm put me on. And they put me on a lot of it. Systems architecture and design became my everything for a while. Let me be clear that there’s absolutely nothing wrong with systems design work. It can be incredibly rewarding. It’s actually a big part of IoT Edge development. It just became clear to me that I was never going to have the opportunity to help build anything interesting on the infrastructure I was creating.

During my time with the firm, I went from a senior application developer to a cloud systems architect. It took me four years to make it happen, but it was definitely a necessary milestone for the next stage of my journey.

What is the next stage of my journey?

I’m returning my focus to IoT Edge computing.

What I want to do for my career is implement edge and IoT systems, connecting many kinds of devices to multiple cloud and server platforms using modern communication, analytics, and security. I mean, for something that’s supposed to be a focus, that’s pretty broad. However, it all fits together nicely for certain specialized use cases.

I have a lot of experience and a few certifications in Azure, so I have no plans to walk away from Azure any time soon, but I’ve also had the opportunity to work with other products like InfluxDB, Milesight, Chirpstack, and Eclipse Mosquitto, and I don’t want to limit myself to one cloud or one set of solutions. Much of my focus moving forward will be on IoT Edge system design: the theory, the best practices, and the understanding of why certain technologies are chosen over others.

Basically, in order to improve my IoT Edge expertise, I need to say no to a lot of other types of work. Am I capable of helping an organization migrate all their SQL data systems from on-premises to the cloud? Yes, it’s completely within my wheelhouse. Could I build a distributed e-commerce solution using .Net applications in Docker containers for high availability using the best security Azure has to offer? Yes, also within my skillset. Will I take on any of that work? No. I won’t. And that’s the point I’ve reached in my career. I’m going to be very selective about the type of work I take on, and the type of clients who I work with.

That’s my goal. Focus. Be one of the best.

What can you expect from this blog?

It won’t change too much, but I will create more content. I do like doing technical deep dives, so you will see more of these. I also like to explore use cases. You’ll see more of that, especially in the Controlled Environment Agriculture industry. I believe this is an area that will need more help as our environment undergoes more changes in the coming years. If we want food in the future, we will need to know how to control our environments in a way that is economical and sustainable.

I will also write more about IoT architectures for multiple clouds and scenarios: sensors, endpoints, and power management. I want to look at how Claude Shannon’s Information Theory shapes the way we communicate between the cloud and devices. I will probably write far more about networking than you want to read, but it’s an area I used to hate and have now grown unreasonably in love with. And obviously, lots of discussion around Edge Computing, protocols, Fog computing, Augmented Reality, Machine Learning, MLOps, DevOps, EdgeOps, and of course security.

That’s the technical side, but I also want to start working more with the community. DFW, Texas has quite the community of IoT organizations and engineers. I hope to connect with these individuals and organizations, capture some of the ways I can help them (or they can help me), and record those outcomes here.

What about money?

Ah, yes. I do actually have bills to pay, so I will need to make money. Luckily, I’m in the financial position that I don’t necessarily need to earn a full-time income immediately. I’m also fortunate enough to have work from one of my business partners that fits directly within my goals: we’re standing up InfluxDB for some time series data and using Pandas to build some forecasting. I’ve also had my own LLC for a few years now, so the business side of things won’t be a hurdle.
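
To give a sense of the kind of forecasting work I mean, here is a minimal sketch in Pandas, assuming the readings have already been pulled out of InfluxDB into a CSV (the file name and column names are placeholders I made up for illustration):

    import pandas as pd

    # Hypothetical hourly greenhouse readings exported from InfluxDB.
    df = pd.read_csv("greenhouse_temps.csv", parse_dates=["time"], index_col="time")

    # Resample to a regular hourly series and fill small gaps.
    hourly = df["temperature"].resample("1h").mean().interpolate()

    # Naive seasonal forecast: assume tomorrow looks like the average of the
    # same hour of day over the past week.
    recent = hourly[hourly.index >= hourly.index[-1] - pd.Timedelta(days=7)]
    profile = recent.groupby(recent.index.hour).mean()

    next_day = pd.date_range(hourly.index[-1] + pd.Timedelta(hours=1), periods=24, freq="1h")
    forecast = pd.Series([profile[ts.hour] for ts in next_day], index=next_day)
    print(forecast.head())

Nothing fancy, and that is the point: a naive seasonal baseline is usually enough to tell you whether the data justifies a real model.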

But I do have additional plans. Next month I’m presenting a few ways that we can partner together, if you are interested in working on an IoT Edge project. I’m opening my calendar to a small group of people for bookings through my company, Green Nanny LLC. That’s a name you’ll see me mention more in the future as I build it out to its full intention.

Here are just a few of the services I’ll offer:

  • Assessments – are you ready for IoT Edge? Do you know what it is and how it applies to modern, distributed solutions? What does it look like, operationally? This helps you make better decisions and pick a direction for next steps.
  • Architecture design sessions – let’s think through the art of the possible using machine learning, IoT, and modern data analytics. What does your future system need to look like to support edge workloads?
  • Briefings – if we were to implement a project, what would it take? Do you need a proof-of-value to start or are you ready for a well-architected IoT solution?
  • Implementations – how can I help your team implement an IoT Edge solution?
  • POV – can we create a proof of value in a very short period of time?
  • Team training – how can I help your team learn how to implement IoT Edge?
  • Well-architected review and recommendations – can we improve your existing solution? Do you need help with reliability, cost optimization, security, operational excellence, or performance?
  • Managed Service – what are some options for managing your IoT products using a managed services provider? I don’t provide this service myself, but I still have many connections who can help make this a reality.
  • Retainer – do you just need someone with a lot of IoT Edge knowledge to bounce questions off of? Not sure where you want to start on your own IoT Edge journey, but would like a guide?

I’m excited for the future

I think and feel that I made the right decision for this move at the right time. My skills as an Azure Architect have put me in an excellent position to transition into something more specialized. The consulting work I did with the firm clarified the type of work I like to do, and the type of work that I’m good at. I see a lot of promise in my area of focus and a lot of financial opportunity for myself, my partners, and the clients who I will serve.


How do you use AI to improve reliability?

If you’re an IT service provider, systems reliability is one of the concerns high on your list. Back in my earlier days of development I worked for a medium-sized electronics parts company. They had the unique business model of selling very small components, like transistors and capacitors, as individual parts. Back before the whole Maker movement took hold, it used to be nearly impossible to buy these small parts without buying a giant spool. The manufacturers wanted them sold by the hundreds and thousands, not ten or twenty.

So the website we maintained sold thousands of such electronics and tools all over the world. We would even buy bulk from China, and then turn around and sell the individual parts back to China with a significant markup. All that to say, site reliability was a huge concern. Enough so that one day when the site was down, the president of the company came to our little development corner and paced back and forth while we did all we could to get our beast of a monolithic .NET version 2 web application to behave under a high traffic load.

I learned the true value of systems reliability that day. The president and CEO of the company raced Porsches. He loved Porsches. We had pictures of Porsches all over the executive area of the company. I think he had at least two or three that were his regular drive and likely the same number modified for racing. Porsches were his thing. So when he told us that every half hour the site was down cost as much as a brand new Porsche – I think it was more like, “you guys are literally crashing a brand new Porsche into the wall every thirty minutes this site is down” – we understood the severity of our failing application.


It was then that the value and importance of site reliability really solidified in my mind. It’s something that, as a solutions developer, I’m regularly taking into account with my designs. But can an AI help me do a better job? Can I make a system even better by adding AI to it?

What is AIOps? Isn’t it the same as MLOps or DevOps or Triceratops?

AIOps is Artificial Intelligence for IT Operations. Enterprise systems create a lot of data through all their various logs and system events. Those logs are sometimes centralized, if you have a unified logging strategy, but most of the time they sit on different servers, in the cloud, on-premises, and even on IoT and Edge devices. The goal of AIOps is to use that data to produce a view of your assets, with an eye toward their dependencies, their processes, and their failures, and to get an overall idea of how performance could be improved.

AIOps can help by automating common tasks, recognizing serious issues, and streamlining communication between the different areas of responsibility within organizations. Sounds magical, right? So where do I get an AIOps? Can I just plug it in and start getting these benefits?

Well, not quite. Like many of the best solutions in IT, it’s not a switch that you can just turn on or a box that you can just add to your network. Just like DevOps, AIOps is a journey. It’s a discipline. It’s a process. I know, nothing is ever as easy as it seems it should be, but for some organizations the value of AIOps does outweigh the drawbacks.


Where does AIOps fit within an operations team?

AIOps can help out in the following areas of your organization:

  • Systems
  • DevOps
  • Customer Support

For systems, the most common use is hardware failure prediction. For most of us in the cloud this is something we don’t generally consider. But if you’re using a hybrid model and still have some of those old rack-mounted servers running mission-critical jobs, using AIOps for hardware failure prediction is likely something you’ll care about. AIOps can also help with device and systems provisioning. Managing VM pools and container clusters based on website traffic or workloads is easily within the grasp of a machine learning algorithm.

DevOps is probably one of the first places to start experimenting with AIOps. Using AI to aid in deployments, especially if you have hundreds of rollouts of software a day, can help detect anomalies and catch latent issues. Anomaly detection comes into play for your monitoring strategy, and AI is the perfect partner to help with incident management. If any of these are your pain points, you might need to add an AI to your DevOps team.
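
To make the anomaly detection piece concrete, here is a minimal sketch using rolling statistics over a latency metric. The file and column names are hypothetical; the point is that a simple baseline-plus-threshold check is a sensible first step before reaching for anything heavier:

    import pandas as pd

    # Hypothetical request latency samples collected after a rollout.
    latency = pd.read_csv("request_latency.csv", parse_dates=["time"], index_col="time")["ms"]

    # Rolling baseline over the previous hour of samples.
    window = latency.rolling("60min")
    mean, std = window.mean(), window.std()

    # Flag samples more than three standard deviations above the rolling mean.
    anomalies = latency[latency > mean + 3 * std]

    if not anomalies.empty:
        print(f"{len(anomalies)} anomalous samples since {anomalies.index.min()} - consider rolling back")

A production AIOps platform does far more than this, of course, but the underlying idea is the same: learn a baseline from the noise and surface only the deviations.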

And of course for customer experience issues around site and system failures, there are bots, decision support systems, and automated communications options that provide greater detail than just a simple alert.

This is just a high-level overview of some of the possible solutions. There are hundreds of ways that AI can not only help your IT operations teams, but also reach deeper into your business operations. AI can help monitor industrial equipment for failures, watch retail systems for security and compliance, and help with supply chain optimization.


AZ-220 and AI-100 = AI Edge Engineer

Microsoft doesn’t have an official certification for an AI Edge Engineer yet, but I have a feeling one is on the horizon. The term Edge continues to insert itself into a number of products. There’s even a version of Azure Stack Edge developed specifically for Edge compute devices. If this whole Edge concept is new to you, don’t worry, you’ll hear a lot more about it in the coming years.

IoT Alone isn’t Enough

IoT isn’t new. It’s been with us for 20 years, longer if you count the many systems developed for the manufacturing and agriculture industries. The idea is to gather telemetry data, things like machine vibrations, humidity levels, or light levels, and send that data to a central data store for processing. The processing tends to involve searching for actionable insights. Can I find a correlation between machine vibration levels and machine breakdowns? If I can, I can add predictive maintenance triggered on vibration. The same goes for watering plants in a field or adjusting light levels in an office building.
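
A first pass at that vibration question does not need much more than Pandas. This is just a sketch with made-up column names, but it shows the shape of the analysis:

    import pandas as pd

    # Hypothetical daily history: average vibration (mm/s) and whether the
    # machine failed that day (0/1).
    df = pd.read_csv("machine_history.csv", parse_dates=["date"], index_col="date")

    # How strongly does vibration track failures?
    corr = df["vibration_mm_s"].corr(df["failure"])
    print(f"vibration/failure correlation: {corr:.2f}")

    # If the relationship holds up, pick a trigger level for predictive
    # maintenance, e.g. the 95th percentile of vibration on healthy days.
    threshold = df.loc[df["failure"] == 0, "vibration_mm_s"].quantile(0.95)
    print(f"schedule maintenance when vibration exceeds {threshold:.1f} mm/s")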

IoT is still valid for these operations, but the demand is increasing for devices and intelligent actions that happen closer to the source. Why send constant streams of telemetry data to the cloud when you know the parameters that trigger an event? Why not bring the event trigger closer to the device? And what if you need greater levels of intelligence, like vehicle recognition in a shipping yard, or warning zones on drilling platforms that recognize when a human not wearing the appropriate safety attire has crossed into the zone? For these types of situations you need something more sophisticated than basic telemetry-gathering tools. You need the sort of AI typically hosted in the cloud brought out to the edge: Edge devices, more powerful systems that can serve as telemetry gatherers, decision systems, and routing gateways. This is what Edge Computing is all about.
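
The edge-side version of that logic can be as simple as a loop on the device that only sends an event upstream when the trigger condition is met. This is a toy sketch with a simulated sensor read and a stand-in publish function, but it illustrates the shift from streaming raw telemetry to shipping decisions:

    import random
    import time

    VIBRATION_LIMIT = 7.0  # illustrative trigger level in mm/s


    def read_vibration():
        # Stand-in for a real sensor read over Modbus, I2C, BLE, etc.
        return random.uniform(2.0, 9.0)


    def send_to_cloud(event):
        # Stand-in for an MQTT publish or IoT Hub message.
        print(f"upstream event: {event}")


    while True:
        value = read_vibration()
        # The decision happens on the device: only threshold crossings leave
        # the edge, not a constant stream of raw readings.
        if value > VIBRATION_LIMIT:
            send_to_cloud({"type": "vibration_alert", "value": round(value, 2)})
        time.sleep(1)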

Developing for The Edge

Microsoft has developed a suite of tools that make working with edge technologies fairly simple. It’s not outside the realm of possibility that in the next few years Edge developers and Edge Engineers will be in high demand. The technologies used today are IoT Hub, IoT Edge development kits, Edge-enabled devices (which can be as simple as a PC), and containers like Docker. Cognitive Services like Computer Vision can be loaded onto these devices in the form of modules that are managed by the Edge Agent and routed to the cloud or other devices using the Edge Hub and its routes. These developers will need an understanding of developing for IoT devices as well as developing for the cloud. Azure provides many services for making this possible. There are services like IoT Hub and the Device Provisioning Service that make managing devices and integrating with backend services possible. There’s also a SaaS platform called IoT Central that covers the majority of the use cases for specific industries that want to begin their journey into IoT Edge data processing.
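
To make that concrete, here is roughly what a module looks like from the inside: a minimal sketch assuming the azure-iot-device Python SDK and the environment the Edge runtime provides to a module container. The output name and the payload are illustrative; the routes in the deployment manifest decide where the message actually goes:

    import json

    from azure.iot.device import IoTHubModuleClient, Message

    # The Edge runtime injects the connection details into the module container.
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()

    # Pretend a vision model loaded in this module just scored a frame.
    detection = {"label": "truck", "confidence": 0.91}

    # Send the result to a named output; routing in the deployment manifest
    # forwards it to another module or up to IoT Hub.
    client.send_message_to_output(Message(json.dumps(detection)), "detections")

    client.shutdown()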

These are the platforms that enable AI at the Edge using Microsoft Azure technologies, but these certainly aren’t the only tools that are taking advantage of this particular pattern. AWS, Google, NVIDIA, and many other manufacturers and services have staked a claim in this burgeoning field.

A New Type Of DevOps

DevOps is the word that has come to replace agile as the term that best describes the marriage of development and operations in the service of quickly delivering working systems to market. MLOps is a similar term used to describe the process of bringing machine learning and AI to market. I’m not aware of a term specific to Edge development (perhaps EdgeOps should be the label), but what’s important to understand is that this type of development has its own transformative workflow.

For a greenfield engagement, you probably won’t start with AI. You’ll probably start with telemetry in order to gain insights and develop actionable plans based on the data. This means you need a deployment pipeline that allows a device to go from being a simple hub or gateway for telemetry data to a more sophisticated application platform. Intelligence moves from cloud services into Edge containers that run on the Edge devices. This requires an evolutionary Edge network that can grow more responsive and cost-effective as the business evolves its goals and strategies around gathered data.

This is an enormous opportunity for developers and engineers, as well as value-added providers, delivery partners, and definitely the organizations that embrace this new way to capture and manage data and events.

Why Did I Follow this Path?

For most of my career I’ve worked with different organizations that have struggled with their data. There are a large number of anti-patterns deployed by companies that make their data nothing more than a trailing-indicator asset. It’s rarely a form of strategic information and even more rarely an active asset helping to shorten the loop between value delivery and cash flow. I decided not too long ago that I wanted the focus of my career to be toward that goal: helping organizations use their systems and data to shorten that loop. This isn’t always a technology solution. Sometimes achieving this goal requires organizational change. I’m regularly looking for people, tools, and processes that I can use to help deliver on that goal.

Becoming an AI Edge Engineer, as well as diving deeper into the world of Data Science, seems like a natural extension of that goal. If you are interested in exploring this path, you should take a look at Microsoft’s related training materials as well as the Oxford course mentioned on the training page: https://docs.microsoft.com/en-us/learn/paths/ai-edge-engineer/

Additionally, I’ll continue to post my lab work related to IoT Edge development and my thoughts on how this applies to helping businesses use their data to achieve their financial goals on this blog.


Passing the AI-100 Exam

I recently passed the AI-100 Certification Test from Microsoft and wanted to take the opportunity to discuss my experience studying for the test, passing the test, and finally how I plan to apply what I’ve learned to my larger career journey of helping organizations build better systems.

First, I actually had no intention of taking the AI-100 this soon. I had planned to take it, but I was thinking more along the lines of next year, when I would have more time to work on AI projects. My experience was pretty much limited to building out a proof-of-concept Form Recognizer solution for a client. The opportunity to take the test came up last month when it appeared we needed this particular certification to help unlock a benefit around Microsoft Teams.

Basically, one of our team leads reached out to a handful of us with a request to take it and to take it soon. I committed to a month, hoping that would be enough time to get experience with the material in order to pass.

First, I can’t recommend Linux Academy enough. I took Dan Sasse’s AI-100 course on that platform, and it gave me a great overview of everything I needed to study. The practice test for the course was also excellent preparation for the certification.

In addition to Dan’s course, I also worked through a handful of Microsoft’s Learning paths related to Azure Machine Learning Services and Studio. I initially didn’t like ML Studio, but after working with it I began to see the real benefits. These specialized studios are something that have become a trend with Microsoft products. Working with AI you will get used to the various interfaces developed for LUIS, QnA Maker, and so on.

Taking the actual test is always the most nerve-wracking part for me. I’ve been developing for years – I’m an old guy in this field – but I still get sweaty palms when faced with these types of tests. So I had to precondition myself before diving in and taking the test. If you are taking the test, I would suggest you do what I did. Take your time.

So here’s my process for taking the test:

  1. Take your time. Take a few deep breaths before each question.
  2. Read the questions slowly and carefully to make sure you understand exactly what the objective is. Microsoft no longer creates “trick” questions, but it does sometimes give more detail than needed to discover the answer.
  3. If it’s a multiple choice question, begin with the process of elimination. Rule out the answers you’re certain are wrong; that will usually leave you with two.
  4. If you have any doubts about the answers you’ve selected for the question, mark it for review later. There were at least 2 questions I had marked for review that I was able to change because the answer seemed obvious to me when I reviewed them.

Lastly, go into the test with the attitude that everything is going to be fine no matter what. If you pass the test, well, that’s great! That was the goal. It means you studied enough to know the subject material and were lucky enough to get the right combination of questions to match your knowledge – yes, I do believe luck has a hand in passing these tests. You can help luck along by studying and knowing the material, but sometimes the questions just don’t align well with your knowledge. Which leads me to the second point: don’t beat yourself up if you fail. Failure is an opportunity to learn. Be sure to look over your results and focus on the areas you did poorly in.

I did great on all my service knowledge, but even though I passed the test I know I have a weakness in deploying AI services. I need to brush up on certain technologies around that part of the discipline.

I count this as a fun certification. You get to work with a lot of cool technology and dive a little into the world of Data Science and Machine Learning. I would encourage anyone who is interested in working with AI to attempt this certification. Even if you don’t plan to be a data scientist, you will likely one day need to work with the output of the data science pipeline—an AI. And Microsoft has given the AI Engineer a lot of great tools to work with.

How do I plan to use this new certification? Well, by developing AI solutions of course. Right now I have a prototype QnA bot created that I’m building out. I only have a web connection created for it now, but the plan is to build it out to other channels like Facebook Messenger, Twilio SMS, and even email. I have also found many uses for the Form Recognizer and the possibilities for LUIS are endless.

Yesterday I even spent the afternoon building out a residential property assessment AI in TensorFlow – yes, even though I’m not a data scientist. It’s not that hard if you have some development experience and you aren’t too scared of math.

AI will be embedded in everything soon. Edge Computing is on the rise. As a developer, you can’t afford to ignore this sector of the development world. And I think the AI-100 is the perfect way to jump in and start learning more about Artificial Intelligence.