
Understanding Microservices Architecture: Key Concepts, Learning Resources & More

As a new engineer, understanding the concept of microservices architecture is important for several reasons.

First, it is a key component of cloud native application development. It allows for faster development, easier scaling, and more flexible deployment options. By understanding how microservices work, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, microservices architecture promotes modularity, which allows for greater flexibility, scalability, and maintainability of the system. Each microservice can be developed, deployed, and scaled independently, making it easier to handle increasing demand.

Third, microservices architecture facilitates better collaboration and DevOps culture. By breaking down the application into smaller, independent units, different teams and developers can work together on the same application.

Fourth, microservices architecture allows for greater resilience. Because failures are isolated within individual services, a problem in one service does not have to take down the entire system.

In summary, understanding the concept of microservices architecture is essential for any engineer working in the field today. It is a powerful tool for building and deploying applications in a cloud environment, offering faster development, easier scaling, more flexible deployment options, greater maintainability, better collaboration and DevOps culture, and greater resilience.

Learning Materials

Here’s a list to get you started learning about Microservices architecture. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Microservices explained – the What, Why and How?

In this video, Nana explains the concept of microservices architecture and its advantages over a monolithic architecture. She also discusses best practices for microservices communication, such as using API calls, message brokers, and service meshes. Finally, she mentions the importance of a CI/CD pipeline for deploying microservices.

Possible Learning Path

Hands-on experience: Start by experimenting with building simple microservices using technologies such as Node.js, Spring Boot, or Flask. This can be done by following tutorials and guides and deploying these services on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of microservices, you can begin to explore the underlying concepts and technologies. This can be done through online resources such as tutorials, courses, and documentation from microservices frameworks and platforms, as well as books and blogs on the topic.

Understanding the principles and best practices: Microservices architecture is not just about technology; it’s also about principles and best practices. It’s important to understand the key principles and best practices of microservices, such as loose coupling, autonomy, and scalability.

Joining a community: Joining a community of microservices enthusiasts will help you connect with other people who are learning and working with microservices architecture. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice building and deploying microservices, the more comfortable and proficient you will become with the architecture.

A Note from the Architect

I’m not in the camp that thinks monolithic architecture is necessarily bad. However, I believe that, in the long run, microservices have a better chance of success. Are microservices more work? Yes, there is an overhead associated with them, but I think it’s worth it for the added flexibility.

To clarify the difference between microservices and traditional monolithic architectures: a monolithic architecture bundles all the parts of an application, such as the user interface, the database, and the backend, into one package. This can work well for small projects, but as the project grows and becomes more complex, it becomes harder to manage and maintain.

On the other hand, microservice architecture breaks the application down into smaller, individual services. Each service is responsible for a specific task, such as user authentication or payment processing.

The benefits of microservices include the ability to make changes to one service without affecting the others, as well as more flexibility and scalability, since each service can be deployed and scaled independently.

For example, here’s some code in Python that demonstrates a microservice:

from flask import Flask

# Create the Flask application that hosts this microservice
app = Flask(__name__)

# Respond to requests for the root URL
@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    # Start Flask's built-in development server
    app.run()

This code is a simple web service that listens for a request to the root URL and returns the string “Hello, World!”.

That doesn’t do much, but it does show how you can quickly create a service. A series of these services working together could create a more robust solution. I won’t dive into the Actor Pattern, but it’s probably the ideal approach to microservices.
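To make that concrete, here’s a minimal sketch of a second Flask service that composes with the first one over HTTP. The /greeting route, the port numbers, and the requests-based call are my own illustrative choices, not part of any prescribed pattern:

import requests
from flask import Flask

app = Flask(__name__)

# Assumed address of the hello-world service above (Flask's default port is 5000)
HELLO_SERVICE_URL = 'http://localhost:5000/'

@app.route('/greeting')
def greeting():
    # Call the first microservice and build on its response
    hello_text = requests.get(HELLO_SERVICE_URL).text
    return '{} And hello from a second service!'.format(hello_text)

if __name__ == '__main__':
    # Run on a different port so both services can run side by side
    app.run(port=5001)

Each service stays small and independently deployable, and the composition happens over plain API calls, which is exactly the kind of communication discussed earlier.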

Keep in mind, an organization would select microservices over other architectures because they are more flexible, scalable, and easier to maintain as the project grows. Plus, it’s way cooler to say you’re working with microservices than a monolithic architecture. Trust me, it’s like being in a secret club of developers who know how to handle complexity in the best way possible.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.


Understanding Containerization: A Guide for New Engineers

As a new engineer, understanding the concept of containerization is important for several reasons.

First, containerization is a key component of cloud native application development. Containers are a lightweight and portable way to package software, making it easy to run and manage applications on cloud infrastructure. By understanding how containers work, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, containerization allows for greater consistency and portability across different environments. With containers, you can package an application and its dependencies together, ensuring that it will run the same way regardless of where it is deployed. This eliminates the “works on my machine” problem and makes it easier to move applications between different environments.

Third, containerization allows for greater scalability and resource efficiency. Containers use fewer resources than traditional virtual machines and can be easily scaled up or down as needed. This makes it easier to handle increasing demand for computing power and storage.

Fourth, containerization also allows for better collaboration and DevOps culture, as containers can be easily shared and reused, making it easier for different teams and developers to work together on the same application.

In summary, as a new engineer, understanding the concept of containerization is important because it is a key component of cloud native application development, allows for greater consistency and portability across different environments, enables greater scalability and resource efficiency, and promotes collaboration and DevOps culture. It is a powerful tool for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about containerization. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Kubernetes Crash Course for Absolute Beginners

Kubernetes is an open source container orchestration framework designed to manage applications made up of hundreds or thousands of containers across multiple environments. It offers features such as high availability, scalability, and disaster recovery, as well as a virtual network that enables communication between pods. This video provides an overview of the Kubernetes architecture and components, and a use case of a web application with a database to illustrate how it works.

Possible Learning Path

Hands-on experience: Start by installing Docker and Kubernetes on your local machine or in a virtual environment. This can be done by following the official documentation provided by Docker and Kubernetes. After that, you can follow tutorials and guides to build and deploy simple applications in containers using Docker and Kubernetes.

Theoretical learning: Once you have a basic understanding of Docker and Kubernetes, you can begin to explore the underlying concepts and technologies. This can be done through online resources such as tutorials, courses, and documentation provided by Docker and Kubernetes, as well as books and blogs on the topic.

Joining a community: Joining a community of Docker and Kubernetes enthusiasts will help you connect with other people who are learning and working with these technologies. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice building and deploying containerized applications using Docker and Kubernetes, the more comfortable and proficient you will become with these technologies.

Specialize in a specific use case: Docker and Kubernetes can be used in a wide variety of scenarios and use cases, so it is beneficial to specialize in one or two that align with your business or career goals.

A Note from the Architect

I’m trying to think of the first time I worked with a Docker container. I believe it was almost four years ago, which in technology timeframes was forever. I was trying to decide if I wanted to use Angular or React to create components for a plugin framework running on SharePoint. It wasn’t the type of development I was used to doing at the time, but I knew the industry was heading away from Angular and more toward React. So, I installed Docker on my laptop, learned how to check out the images, and eventually started learning the basics of containers. I was hooked. Here was a great way to build apps, host them locally, and get the same experience locally that I could expect in production. No more, “It worked on my machine.”

In the last couple of years, I’ve used the remote connection capabilities of VS Code to run pretty much all of my development in containers. It gives me the freedom to try out different languages, frameworks, and libraries without ever needing to install those on my local operating system. I’m proud to say that I never get bugged for Java or .Net updates now. I just get the latest images and add a volume that connects to a local folder where I manage my Git repositories. It’s made my life as a developer much easier.

If you’re wondering, “What’s the big deal with containers? I just want to write code. Why do I need to use containers?” I’ll try to answer that question. Because we don’t just write code anymore. As developers and as operations engineers, we’re beginning to move into a phase where we are sharing the overall solution. This means that when I create something or have a hand in creating something, I have ownership over that thing. I’m responsible for it.

Now, you might work for an enterprise that’s still a bit behind the times. You may write code, and some other team might test that code, and then some other team might try to deploy that code. In that situation, you probably aren’t using containers or anything that looks like modern DevOps. And in that situation, the team between you and the customers who will derive value from your code is a bottleneck. If you rely on a QA team, they will find bugs, because that’s what they are incentivized to do. It’s their job to compare your code against some form of requirements and fail it if it doesn’t meet those requirements. Operations in this type of environment is incentivized to keep systems running smoothly, so they’ll look for any excuse to deny your code entry into production—that usually looks like a set of meetings designed to make sure you met all the criteria needed for something to go into production.

That is the old way of developing software. Let me tell you, if you’re working someplace like that, get out. No. Seriously. Leave. Find a better job.

I believe this is the way software should be developed:

The Most Powerful Software Development Process Is The Easiest

In an ideal software development process, the only work done is understanding the problem, writing code to solve it, testing to confirm it is solved, and making progress in small steps to retain the ability to change the system when needed. The goal is to minimize work and maximize learning, allowing for changes to be made easily and with confidence.

Containers make this process easier. Your code remains modular, making it easier to version and manage libraries and dependencies. You can even build out the needed infrastructure for a container management system, such as Kubernetes, without involving operations in some cases.

As a developer and an architect, I have found that containers have improved the quality of my development life. They have allowed me to have more control over the solutions I deliver. I believe that if you start working regularly with containers, you will feel the same way too.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.


Why Should a New Engineer Learn the Cloud Native Concepts?

As a new engineer, learning cloud native concepts is important for several reasons.

First, cloud computing is becoming increasingly popular and is now the norm for many organizations. Many companies are moving away from traditional on-premises data centers and migrating their infrastructure and applications to the cloud. Knowing how to build, deploy, and manage cloud-native applications will give you a valuable skill set that is in high demand in the job market.

Second, cloud native concepts and technologies are designed to be flexible, scalable, and efficient. They enable faster development and deployment of applications and make it easier to handle the increasing demand for more computing power and storage. By learning these concepts, you will be able to build applications that can handle large amounts of traffic and data and can easily scale up or down as needed.

Third, cloud native concepts and technologies are designed to work well together. They are all part of a larger ecosystem that is designed to make it easy for developers to build, deploy, and manage applications on cloud infrastructure. By learning these concepts, you will be able to take advantage of the full range of cloud-native tools and services, and will be able to create more powerful and efficient applications.

In summary, as a new engineer, learning cloud native concepts will give you a valuable skill set, allow you to build flexible, scalable, and efficient applications, and enable you to take advantage of the full range of cloud-native tools and services. It is an essential skill set for many companies today and will be essential in the future.

What is Cloud Native?

Cloud native is a term used to describe an approach to building, deploying, and running applications on cloud infrastructure. It involves containerization, microservices architecture, and the use of cloud-native tools and services.

Containerization packages software, its dependencies, and configuration files together in a lightweight and portable container, allowing it to run consistently across different environments.

Microservices architecture designs and builds software as a collection of small, independent services that communicate with each other via well-defined APIs. This approach enables faster development, easier scaling, and more flexible deployment options.

Cloud-native tools and services are designed specifically for cloud environments and provide capabilities such as auto-scaling, load balancing, and service discovery. They allow for faster and more efficient deployment and management of applications.

In summary, cloud native is a way of designing, building, and running applications on cloud infrastructure. It leverages containerization and microservices architecture, and utilizes cloud-native tools and services for faster and more efficient deployment and management of applications. As a new engineer, it is important to understand these concepts and how they work together in order to build cloud-native applications.

Learning Materials

Here’s a list to get you started learning about Cloud Native. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

What is Cloud Native and Why Should I Care?

Wealth Grid is a mid-sized firm that has product and service market fit but is struggling to shorten its time to value and stay out of the front-page news. To do this, it must embrace cloud native technologies, but this is not business as usual. With the help of the Cloud Native Computing Foundation, Wealth Grid can learn from its mistakes and use tools and techniques to achieve its goals.

Expert talk: Cloud Native & Serverless • Matt Turner & Eric Johnson • GOTO 2022

Matt Turner and Eric Johnson discuss the cloud native concepts a new engineer should learn, such as continuous integration and continuous delivery, and the benefits of testing in production to catch certain classes of bugs.

A Possible Learning Path

Hands-on experience: It is important to start by experimenting with different cloud providers, such as AWS, Azure, and GCP, to understand the basic concepts and services offered by each. This can be done by creating a free account and following tutorials and guides to build and deploy simple applications.

Theoretical learning: Once you have a basic understanding of cloud computing, you can begin to explore cloud native concepts such as containerization, microservices, and service discovery. This can be done through online resources such as tutorials, courses, and documentation provided by cloud providers, as well as books and blogs on the topic.

Joining a community: Joining a community of cloud native enthusiasts will help you connect with other people learning and working with cloud native technology. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice building and deploying cloud native applications, the more comfortable and proficient you will become with the technology.

Specialize in a specific cloud provider: Cloud providers each have their own set of services and ways of working, so it is beneficial to specialize in one or two providers that align with your business or career goals.

A Note From the Architect

Don’t be intimidated by the volume of information you’ll need to learn to be proficient in cloud-native technologies, because I have a secret for you from a twenty-five-year veteran: there’s little chance you’ll ever be much more than competent. You may master a few of these subject areas, and that’s great if you do, but it’s not necessary if you truly understand one important thing.

I call this important thing, “The Why”.

In each of these articles, where I present an important topic from the big concepts in cloud-native development, I will give you my opinion, based on personal experience, as to why you should consider using the technology, what the other possibilities are, and what the trade-offs are.

I believe that “The Why” is one of the most important parts of any technology decision. So what is the “why” of Cloud Native? In my opinion, it’s the ability to develop and deliver solutions on a built-to-fit platform. Even though there’s still a huge market for large systems like SAP, Microsoft Dynamics, and Oracle, the future belongs to value-creating solutions running on platforms that best fit their needs.

I’m sure some people are wondering if everything really needs to be containerized. No, there are plenty of alternative options for running your workloads that don’t involve containers.

As a developer, I have come across several alternatives to cloud native technologies. One alternative is using virtual machines (VMs) instead of containers. VMs offer a higher level of isolation and security, but they also have a larger footprint and are less portable. Another alternative is using on-premises infrastructure, which provides greater control over data and security, but also comes with higher costs and maintenance responsibilities.

Another alternative is using a platform-as-a-service (PaaS) instead of containers. PaaS provides a higher level of abstraction and can simplify the deployment process, but it also limits the level of control and customization that you have over the infrastructure.

It’s important to note that, while these alternatives can be viable options depending on the specific use case, they often trade off some of the benefits of cloud native technologies such as scalability, flexibility, and cost-efficiency. Ultimately, it’s important to weigh the tradeoffs and choose the solution that best aligns with the needs of your project and organization.

What I hope to accomplish with this series is to open your eyes to the possibilities of what Cloud Native has to offer.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.


Introduction to Azure IoT Hub and Related Services SDKs

In February 2021, I wrote an article exploring the differences between Azure IoT Hub Devices and Modules. It was well-received and remains one of my most-visited blog posts. Now, with more experience in device and Edge development, I’d like to revisit the topic, but expand it to include other types of Azure IoT SDKs. There have been changes to the Azure IoT Hub device landscape, and I’m better equipped to provide useful information about the various SDKs available.

Microsoft has these divided into three basic SDK types.

  • Device SDKs
  • Service SDKs
  • Management SDKs

To make this as simple as possible, I like to think of the SDKs as serving these purposes:

  • Device – this code runs on the device either as bare metal or in a container. It represents either a leaf device or an Edge gateway. If it connects to IoT Hub to feed it data, it needs this SDK.
  • Service – this code enables me to manage my devices with updates or other forms of communication using IoT Hub. This involves administrative tasks, all of which are focused on using the IoT Hub API to interact with the devices.
  • Management – this is more complex, since these SDKs are designed to manage your IoT Hub or Hubs. I could use them to monitor my IoT Hub, create or delete Hub instances, and keep track of my shared access policies.

This document focuses on the SDKs and their updates to match the latest versions of Azure IoT services. Use this as a starting point for further research. If you spot any mistakes or think I’ve missed something important, please let me know in the comments. I’m always open to feedback from my readers. I’m well aware that I make mistakes, and I’m always learning.

What are Azure IoT Hub Device SDKs?

Azure IoT Hub Device SDKs provide APIs and libraries that allow developers to create applications for IoT devices and connect them to Azure IoT Hub. These SDKs support a wide range of programming languages and platforms, and offer unique sets of features tailored to the language and platform. Common use cases include sending telemetry data, receiving commands, connecting devices using various protocols, authenticating devices, handling messaging, managing device twins, and receiving notifications.

A small code example in Python

import os
from azure.iot.device import IoTHubDeviceClient
# Retrieve the connection string for the device from an environment variable
conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
# Create an IoT Hub client
client = IoTHubDeviceClient.create_from_connection_string(conn_str)
# Send a message to IoT Hub
message = "Hello, I am a device using IoT Hub!"
client.send_message(message)
print("Sent message: {}".format(message))
# Disconnect the client
client.disconnect()

This code imports the necessary modules from the Azure IoT Hub Device SDK and creates an IoT Hub client. It then sends a message to IoT Hub and disconnects the client when finished.

To use this code, you must have the Azure IoT Hub Device SDK installed and have a connection string for a device registered with IoT Hub.
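The device SDK can also receive cloud-to-device messages. Here’s a minimal sketch under the same assumptions (the azure-iot-device package and the same environment variable); the handler body is my own illustration:

import os
import time
from azure.iot.device import IoTHubDeviceClient

conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
client = IoTHubDeviceClient.create_from_connection_string(conn_str)

# The SDK calls this handler whenever a cloud-to-device message arrives
def message_handler(message):
    print("Received message: {}".format(message.data))

client.on_message_received = message_handler

# Keep the process alive so the handler can fire
while True:
    time.sleep(1)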

Unlike working with off-the-shelf vendor devices, the SDKs give you the opportunity to create your own IoT devices. You may have heard of IoT Edge, so let’s address what that is now.

What is Azure IoT Edge?

Azure IoT Edge is a fully managed service that enables developers to deploy cloud intelligence directly onto IoT devices. It allows them to build, deploy, and manage applications that run on edge devices and leverage cloud-based services and data to perform tasks such as data processing, analytics, machine learning, and more.

Azure IoT Edge consists of three main components:

  1. Edge Modules: Small, lightweight pieces of code that run on edge devices and perform specific tasks. Edge modules can be written in a variety of programming languages, including C#, Java, Python, and more.
  2. IoT Edge Runtime: A lightweight container that runs on edge devices and manages the lifecycle of edge modules. The IoT Edge runtime can be installed on a wide range of operating systems and devices, including Windows, Linux, and various embedded platforms.
  3. Azure IoT Hub: A cloud-based service that acts as the central hub for communication between edge devices and the cloud. IoT Hub provides a secure and scalable platform for managing devices, sending commands and notifications to devices, and receiving telemetry data from devices.

Azure IoT Edge enables developers to create and deploy intelligent edge solutions that bring the power of the cloud to the edge of the network. These solutions can perform tasks such as data processing, analytics, machine learning, and more on edge devices, reducing the amount of data that needs to be sent to the cloud and enabling fast decision-making and action. Azure IoT Edge is especially useful for scenarios where latency or connectivity issues make it difficult to rely solely on the cloud for data processing and analysis.

A small code example

from azure.iot.device import IoTHubModuleClient

# Create a module client from the IoT Edge environment
client = IoTHubModuleClient.create_from_edge_environment()
# Connect the module client to the Edge hub
client.connect()
# Send a message to IoT Hub through the module's "channel1" output
message = "Hello from IoT Edge!"
client.send_message_to_output(message, "channel1")
print("Sent message: {}".format(message))
# Disconnect the module client
client.disconnect()

This code imports the module client from the Azure IoT Device SDK for Python and creates a module client from the Edge environment. It then uses the module client to connect to the Edge hub, send a message, and disconnect.

To use this code, you must have the Azure IoT Device SDK for Python installed and an IoT Edge runtime running on your device. Additionally, the module must be deployed to the device with an output binding named “channel1”.

These are basic approaches to the device and modules, but I wanted to give you an idea of how easy they can be to use. Setting up IoT Edge for development involves some complexity, but the ability to run containerized cloud solutions at the edge makes it worthwhile.

What about IoT Central?

Many of the concepts covered in this article also apply to or will work with IoT Central. For this article, I’m focusing on IoT Hub. I will have a future article dedicated to IoT Central and its SDKs and APIs.

Azure IoT Hub Device SDKs

Azure IoT Hub Device SDKs are software development kits (SDKs) that enable developers to create applications that run on IoT devices and connect them to Azure IoT Hub. These SDKs provide a set of APIs and libraries that allow devices to perform basic and complex IoT device to cloud communications and commands.

Azure IoT Hub supports device SDKs for a wide range of programming languages and platforms, including C, C#, Java, JavaScript, Python, and a few that are platform specific. Each device SDK provides a set of APIs and libraries tailored to the programming language and platform, making it easy for developers to create applications that interact with Azure IoT Hub.

Some common use cases for Azure IoT Hub Device SDKs include:

  • Sending telemetry data from devices to IoT Hub
  • Receiving commands from IoT Hub and executing them on devices
  • Connecting devices to IoT Hub using various protocols, such as MQTT, AMQP, HTTP, and more
  • Authenticating devices with IoT Hub using various authentication mechanisms, such as shared access signatures (SAS) tokens and X.509 certificates
  • Handling device-to-cloud and cloud-to-device messaging in a reliable and secure manner
  • Managing device twins, which are JSON documents that store metadata about devices in IoT Hub (see the sketch after this list)
  • Receiving notifications about device connections, disconnections, and other events from IoT Hub
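As one concrete example from that list, here’s a minimal device twin sketch with the Python device SDK. The get_twin and patch_twin_reported_properties calls come from the azure-iot-device package; the firmwareVersion property is a made-up example:

import os
from azure.iot.device import IoTHubDeviceClient

conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
client = IoTHubDeviceClient.create_from_connection_string(conn_str)

# Read the full twin document (desired and reported properties)
twin = client.get_twin()
print("Current twin: {}".format(twin))

# Report a property back to IoT Hub; "firmwareVersion" is an illustrative key
client.patch_twin_reported_properties({"firmwareVersion": "1.2.1"})
client.disconnect()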

Overview of the different device SDKs available

  • C: The C Device SDK is a lightweight, portable library that enables C-based devices to connect to Azure IoT Hub and interact with the cloud. It supports various protocols, including MQTT, AMQP, HTTP, and more, and provides APIs for tasks such as sending telemetry data, receiving commands, and authenticating with the IoT Hub.
#include <stdio.h>
#include <stdlib.h>
#include "iothub_client.h"
#include "iothub_message.h"
#include "iothubtransportmqtt.h"
#include "azure_c_shared_utility/threadapi.h"
#include "azure_c_shared_utility/crt_abstractions.h"
#include "azure_c_shared_utility/platform.h"
int main(void)
{
    // Initialize the platform layer before using the SDK
    if (platform_init() != 0)
    {
        (void)printf("ERROR: platform_init failed\n");
        return 1;
    }
    // Retrieve the connection string for the device from an environment variable
    char* conn_str = getenv("IOTHUB_DEVICE_CONNECTION_STRING");
    // Create an IoT Hub client using the MQTT transport
    IOTHUB_CLIENT_HANDLE client = IoTHubClient_CreateFromConnectionString(conn_str, MQTT_Protocol);
    if (client == NULL)
    {
        (void)printf("ERROR: IoTHubClient_CreateFromConnectionString failed\n");
    }
    else
    {
        // Send a message to IoT Hub
        IOTHUB_MESSAGE_HANDLE message = IoTHubMessage_CreateFromString("Hello, Azure IoT Hub!");
        if (IoTHubClient_SendEventAsync(client, message, NULL, NULL) != IOTHUB_CLIENT_OK)
        {
            (void)printf("ERROR: IoTHubClient_SendEventAsync failed\n");
        }
        else
        {
            (void)printf("Sent message: Hello, Azure IoT Hub!\n");
        }
        IoTHubMessage_Destroy(message);
        // Disconnect the client
        IoTHubClient_Destroy(client);
    }
    platform_deinit();
    return 0;
}

This code includes the necessary header files from the Azure IoT Hub C Device SDK and uses them to create an IoT Hub client. It then sends a message to IoT Hub using the client, before disconnecting the client when complete.

To use this code, you must have the Azure IoT Hub C Device SDK installed on your machine and have a connection string for a device registered with IoT Hub.

  • C#: The C# Device SDK is a fully-featured library that enables C#-based devices to connect to Azure IoT Hub and interact with the cloud. It supports various protocols, including MQTT, AMQP, HTTP, and more, and provides APIs for tasks such as sending telemetry data, receiving commands, and authenticating with the IoT Hub. It also includes support for advanced features such as device twins and direct method invocations.
using Microsoft.Azure.Devices.Client;
using System;
using System.Text;
using System.Threading.Tasks;
namespace ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Retrieve the connection string for the device from an environment variable
            string conn_str = Environment.GetEnvironmentVariable("IOTHUB_DEVICE_CONNECTION_STRING");
            // Create an IoT Hub client
            DeviceClient client = DeviceClient.CreateFromConnectionString(conn_str);
            // Send a message to IoT Hub
            string message = "Hello, Azure IoT Hub!";
            byte[] message_bytes = Encoding.UTF8.GetBytes(message);
            Message iot_hub_message = new Message(message_bytes);
            client.SendEventAsync(iot_hub_message).Wait();
            Console.WriteLine("Sent message: {0}", message);
            // Disconnect the client
            client.CloseAsync().Wait();
        }
    }
}

This code imports the necessary namespaces from the Azure IoT Hub C# Device SDK and creates an IoT Hub client. It then sends a message to IoT Hub, before disconnecting the client.

To use this code, you must install the Azure IoT Hub C# Device SDK on your machine and obtain a connection string for a device registered with IoT Hub.

  • Java: The Java Device SDK is a fully-featured library that enables Java-based devices to connect to Azure IoT Hub and interact with the cloud. It supports various protocols, including MQTT, AMQP, HTTP, and more, and provides APIs for tasks such as sending telemetry data, receiving commands, and authenticating with the IoT Hub. It also includes support for advanced features such as device twins and direct method invocations.
import com.microsoft.azure.sdk.iot.device.DeviceClient;
import com.microsoft.azure.sdk.iot.device.IotHubClientProtocol;
import com.microsoft.azure.sdk.iot.device.Message;
public class Main
{
    public static void main(String[] args) throws Exception
    {
        // Retrieve the connection string for the device from an environment variable
        String conn_str = System.getenv("IOTHUB_DEVICE_CONNECTION_STRING");
        // Create an IoT Hub client
        IotHubClientProtocol protocol = IotHubClientProtocol.AMQPS;
        DeviceClient client = new DeviceClient(conn_str, protocol);
        client.open();
        // Send a message to IoT Hub
        String message = "Hello!";
        Message iot_hub_message = new Message(message);
        client.sendEventAsync(iot_hub_message);
        System.out.println("Sent message: " + message);
        // Disconnect the client
        client.close();
    }
}

This code imports the necessary classes from the Azure IoT Hub Java Device SDK and uses them to create an IoT Hub client. It then uses the client to connect to IoT Hub, send a message, and disconnect when it’s finished.

  • JavaScript: The JavaScript Device SDK is a fully-featured library that enables JavaScript-based devices to connect to Azure IoT Hub and interact with the cloud. It supports various protocols, including MQTT, AMQP, HTTP, and more, and provides APIs for tasks such as sending telemetry data, receiving commands, and authenticating with the IoT Hub. It also includes support for advanced features such as device twins and direct method invocations.
const { Client, Message } = require('azure-iot-device');
const { Mqtt } = require('azure-iot-device-mqtt');
// Retrieve the connection string for the device from an environment variable
const conn_str = process.env.IOTHUB_DEVICE_CONNECTION_STRING;
// Create an IoT Hub client using the MQTT transport
const client = Client.fromConnectionString(conn_str, Mqtt);
// Connect the client
client.open((err) => {
    if (err) {
        console.error('Error opening the client: ' + err.message);
    } else {
        // Send a message to IoT Hub
        const message = new Message('Hello, Azure IoT Hub!');
        client.sendEvent(message, (send_err) => {
            if (send_err) {
                console.error('Error sending the message: ' + send_err.message);
            } else {
                console.log('Sent message: ' + message.getData());
            }
        });
    }
});

This code imports the necessary modules from the Azure IoT Hub JavaScript Device SDK and creates an IoT Hub client. It then connects to IoT Hub and sends a message.

  • Python: The Python Device SDK is a fully-featured library that enables Python-based devices to connect to Azure IoT Hub and interact with the cloud. It supports various protocols, including MQTT, AMQP, HTTP, and more, and provides APIs for tasks such as sending telemetry data, receiving commands, and authenticating with the IoT Hub. It also includes support for advanced features such as device twins and direct method invocations.
import os
from azure.iot.device import IoTHubDeviceClient, Message

# Retrieve the connection string for the device from an environment variable
conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
# Create an IoT Hub client and connect
client = IoTHubDeviceClient.create_from_connection_string(conn_str)
client.connect()
# Send a telemetry message to IoT Hub
payload = "Hello, Azure IoT Hub!"
client.send_message(Message(payload))
print("Sent message: {}".format(payload))
# Disconnect the client
client.disconnect()

Device SDKs for specific programming languages and platforms

Azure RTOS

Azure RTOS (Real-Time Operating System) is a suite of RTOS kernels and middleware components optimized for use with small, resource-constrained devices running on Arm Cortex-M microcontrollers. It provides developers with a robust and scalable platform for building IoT solutions that require real-time performance, low power consumption, and a small footprint.

The Azure RTOS middleware is a set of software components that provide additional functionality and support for developing IoT solutions. These components include:

  • Networking libraries and protocols such as TCP/IP, HTTP, HTTPS, and MQTT, which allow devices to connect to the cloud and other devices over the network.
  • Security libraries and protocols such as SSL/TLS, X.509 certificates, and secure boot, which allow devices to authenticate and encrypt communication with the cloud and other devices.
  • A file system library that allows devices to store and retrieve data on external storage devices such as SD cards.
  • The ThreadX RTOS kernel, which provides a lightweight, scalable, and reliable platform for building real-time applications on Arm Cortex-M microcontrollers.

Azure RTOS is available for a variety of Arm Cortex-M microcontrollers, including those from STMicroelectronics.

FreeRTOS

FreeRTOS is an open-source real-time operating system (RTOS) for building small, resource-constrained devices for the Internet of Things (IoT). It is designed to be lightweight, scalable, and reliable, offering a range of features and services that are beneficial for constructing IoT solutions, including:

  • Task scheduling and management
  • Memory management
  • Timers and software timers
  • Inter-task communication and synchronization
  • Networking libraries and protocols, such as TCP/IP, HTTP, and MQTT
  • Security libraries and protocols, such as SSL/TLS, X.509 certificates, and secure boot
  • File system libraries
  • Device drivers and middleware components for various hardware peripherals

FreeRTOS is available for a wide range of microcontrollers and platforms, including those from Atmel, NXP, and STMicroelectronics. It has a large community of developers and users, and is regularly updated and improved.

Azure IoT provides libraries and tools to connect FreeRTOS-based devices to its services, such as the IoT Hub, IoT Central, and IoT Edge. This enables developers to create IoT solutions that benefit from the scalability, security, and reliability of the Azure cloud.

Bare Metal (Azure SDK for Embedded C)

The Azure SDK for Embedded C is a software development kit (SDK) that enables developers to build IoT solutions using the C programming language and Microsoft Azure cloud services. It provides libraries, tools, and documentation to easily connect C-based devices to Azure IoT services such as IoT Hub, IoT Central, and IoT Edge, and construct IoT solutions that benefit from the scalability, security, and reliability of the Azure cloud.

The SDK supports a variety of microcontrollers and platforms from manufacturers like Atmel, NXP, and STMicroelectronics. It offers support for protocols and features like MQTT, AMQP, HTTP, device twins, and direct method invocations, and provides APIs for tasks like sending telemetry data, receiving commands, and authenticating with Azure IoT services.

The Azure SDK for Embedded C is open source and constantly updated and improved by a community of developers and users. It is lightweight, portable, and easy to use, and can be integrated into various development environments and toolchains.

Connecting devices to Azure IoT Hub using the device SDKs

To connect a device to Azure IoT Hub using a device SDK, you need to:

  1. Install the device SDK on your device or development machine. The device SDK provides the necessary libraries and tools to build IoT solutions that can connect to Azure IoT Hub.
  2. Register your device with Azure IoT Hub. This step involves creating a device identity in IoT Hub and obtaining a connection string that uniquely identifies your device. The connection string includes the device’s ID, hub name, and authentication key, and is used to authenticate the device with IoT Hub when it connects.
  3. Use the device SDK to create an IoT Hub client on your device. The IoT Hub client is responsible for establishing and maintaining a connection with IoT Hub, as well as sending and receiving messages.
  4. Use the IoT Hub client to connect to IoT Hub. This typically involves providing the client with the connection string and specifying the protocol to use (such as MQTT or AMQP). The client will then establish a connection with IoT Hub and authenticate using the information in the connection string.
  5. Use the IoT Hub client to send messages to IoT Hub and receive messages from IoT Hub. The device SDK provides APIs for tasks such as sending telemetry data, receiving commands, and interacting with device twins and direct methods.

This article provided a basic overview of the device SDKs for the languages and platforms supported by Azure IoT Hub. The example code was meant to give a glimpse of how the SDKs work, not of real-world workloads. In most cases, you’ll use local device libraries to connect to sensors and collect data to send to IoT Hub. This usually requires developing an asynchronous listener that captures the telemetry data and sends it in a message to IoT Hub. That functionality is beyond the scope of this article, but I am working on a newer set of agriculture code for one of my devices. I will post it here soon, and it will provide a more complete solution for device development on a Raspberry Pi.

Azure IoT Hub Service and Management SDKs

The Azure IoT Hub Service SDKs are a set of software development kits (SDKs) that enable developers to build applications that interact with Azure IoT Hub. They provide APIs and libraries in various programming languages and platforms, allowing developers to create, manage, and interact with devices and other resources in IoT Hub. The SDKs are available for .NET, Java, Node.js, Python, and Go, and offer functions such as sending and receiving messages, creating and updating devices, managing device twins and direct methods, and monitoring device health and status.

Overview of the different service SDKs available

As noted above, the service SDKs provide APIs and libraries in various programming languages and platforms, allowing developers to create, manage, and interact with devices and other resources in IoT Hub. From the choices available, it’s clear that these SDKs are designed to supplement or amplify the capabilities of IoT Hub.

These are just a few of the possible uses for the Service SDK:

  • Sending and receiving messages to and from devices
  • Creating, updating, and deleting devices in IoT Hub
  • Managing device twins and direct methods
  • Monitoring the health and status of devices and other resources in IoT Hub

A couple of specific use cases I can imagine are:

  • An integration with an application or an enterprise system that is not available through the standard IoT Hub integration channels.
  • Updating device digital twins is a good use of the SDK, especially for automated processes
  • Device Registry is particularly useful when dealing with a large number of devices. Automating the process is highly recommended. I will go into more detail about this in a future blog post.
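As a sketch of that kind of registry automation with the Python service SDK (the azure-iot-hub package), here’s a minimal example; the device ID and key-generation approach are my own illustrative choices:

import base64
import os
import secrets
from azure.iot.hub import IoTHubRegistryManager

# Service (not device) connection string from the hub's shared access policies
conn_str = os.getenv("IOTHUB_SERVICE_CONNECTION_STRING")
registry_manager = IoTHubRegistryManager.from_connection_string(conn_str)

# Generate a pair of SAS keys for the new device
primary_key = base64.b64encode(secrets.token_bytes(32)).decode()
secondary_key = base64.b64encode(secrets.token_bytes(32)).decode()

# Register a hypothetical device named "field-sensor-001"
device = registry_manager.create_device_with_sas(
    "field-sensor-001", primary_key, secondary_key, "enabled"
)
print("Created device: {} ({})".format(device.device_id, device.status))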

The Azure IoT Hub Service SDKs are available for a variety of programming languages and platforms, such as:

  • .NET: The Azure IoT Hub .NET Service SDK
  • Java: The Azure IoT Hub Java Service SDK
  • Node.js: The Azure IoT Hub Node.js Service SDK
  • Python: The Azure IoT Hub Python Service SDK
  • Go: The Azure IoT Hub Go Service SDK

Service SDKs for specific programming languages and platforms

  • .NET: The Azure IoT Hub .NET Service SDK

This is likely the library that most traditional Microsoft developers will gravitate toward. It’s definitely a well-developed, highly supported library. It has been migrated to:

https://github.com/Azure/azure-iot-sdk-csharp

The Azure IoT SDK for C# is a software development kit that enables developers to build IoT solutions using C# and Microsoft Azure cloud services. It provides libraries, tools, and documentation to connect C#-based devices and applications to Azure IoT services, and includes support for a variety of microcontrollers and protocols. The SDK is open source and designed to be lightweight, portable, and easy to use.

I’m not including code samples here because they tend to be a bit larger. However, I think it’s just as important to work with the service SDKs as with the device SDKs. You’ll find they make your administrative work easier.

  • Java: The Azure IoT Hub Java Service SDK

The Java Service SDK offers many of the same functions as the .Net library. It includes the following samples to complete tasks related to device management, data processing and analytics, and integration and orchestration:

  • Device management: Create, update, and delete devices in IoT Hub, manage device twins, and use direct methods.
  • Data processing and analytics: Receive telemetry data from devices in IoT Hub, process and analyze the data, and trigger alerts or create reports.
  • Integration and orchestration: Integrate IoT Hub with other systems and services, such as enterprise applications or cloud platforms, and build complex workflows and processes that span multiple devices and services.

azure-iot-sdk-java/service/iot-service-samples at main · Azure/azure-iot-sdk-java

  • JavaScript: Azure SDK for JS

The IoT Hub Service SDK for Node.js doesn’t appear to have code for the IoT Hub service yet. However, the JavaScript SDK is still available, and it has much of the same functionality as the other SDKs. One thing to note is that many of these samples are in TypeScript, and there appear to be more samples available here. So if you’re looking for functionality that doesn’t exist in the other samples, you may want to search through these code examples:

azure-sdk-for-js/sdk/iothub/arm-iothub/samples-dev at main · Azure/azure-sdk-for-js

  • Python: The Azure IoT Hub Python Service SDK

The Azure IoT Hub Python library is here:

https://github.com/Azure/azure-iot-hub-python

It has much of the same functionality as the other libraries. I would say the client library is a bit more robust than the IoT Hub library, but most of the functionality you’ll want exists. There are also plenty of examples to get you started. And one of the things I love about Python is how much you can accomplish with so little code:

# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import msrest
from azure.iot.hub import DigitalTwinClient
iothub_connection_str = os.getenv("IOTHUB_CONNECTION_STRING")
device_id = os.getenv("IOTHUB_DEVICE_ID")
try:
    # Create DigitalTwinClient
    digital_twin_client = DigitalTwinClient.from_connection_string(iothub_connection_str)
    # Get digital twin and retrieve the modelId from it
    digital_twin = digital_twin_client.get_digital_twin(device_id)
    if digital_twin:
        print(digital_twin)
        print("Model Id: " + digital_twin["$metadata"]["$model"])
    else:
        print("No digital_twin found")
except msrest.exceptions.HttpOperationError as ex:
    print("HttpOperationError error {0}".format(ex.response.text))
except Exception as exc:
    print("Unexpected error {0}".format(exc))
except KeyboardInterrupt:
    print("{} stopped".format(__file__))
finally:
    print("{} finished".format(__file__))

  • Go: Azure SDK for Go

To my knowledge, there is no official Microsoft-sponsored IoT Hub service or device client library for Go. At the time of this writing, the supported services are:

  • Azure Blob Storage
  • Azure Functions
  • Azure Container Registry
  • Azure Container Instance
  • Azure Kubernetes Service
  • Azure Virtual Machines
  • Azure Key Vault
  • Azure Cognitive Services

However, Go’s success as a cloud native language has prompted an Azure SDK for Go, and a community-created IoT Hub package appears to be available here:

https://github.com/amenzhinsky/iothub

There is code for Device and Service here with very few samples, but it’s probably enough to get someone started:

package main
import (
	"context"
	"fmt"
	"log"
	"os"
	"github.com/amenzhinsky/iothub/iotservice"
)
func main() {
	c, err := iotservice.NewFromConnectionString(
		os.Getenv("IOTHUB_SERVICE_CONNECTION_STRING"),
	)
	if err != nil {
		log.Fatal(err)
	}
	// subscribe to device-to-cloud events
	log.Fatal(c.SubscribeEvents(context.Background(), func(msg *iotservice.Event) error {
		fmt.Printf("%q sends %q", msg.ConnectionDeviceID, msg.Payload)
		return nil
	}))
}

Using the management SDKs to communicate with devices and manage IoT Hub resources

Here, I want to discuss some unique ways to use the Azure IoT Hub Management SDKs to work with IoT Hub. The SDKs provide APIs and libraries that enable developers to manage and operate IoT Hub, such as creating and deleting instances, setting and querying properties and quotas, managing shared access policies, monitoring usage and diagnostics, and performing maintenance tasks like repairing or backing up IoT Hub. The SDKs are available for .NET, Python, and Node.js, and can be used on Windows, macOS, and Linux.

I don’t think the average IoT developer will be too concerned with these SDKs. However, if you work in an organization with multiple IoT hubs and millions of devices, these packages could be useful. Additionally, if you want to build products that use IoT Hub as a base service, then you should definitely look into these SDKs.

  • Azure SDK for .Net IoT Management

To get into more detail, because I think the detail matters for this particular SDK, here’s what’s delivered in the primary IoT Hub client for management. This is the code I’m reviewing:

azure-sdk-for-net/IotHubClient.cs at main · Azure/azure-sdk-for-net

The code is a part of the Azure IoT Hub .NET Management SDK, which is a set of libraries and tools for building .NET applications that manage and operate Azure IoT Hub.

The file, IotHubClient.cs, contains the implementation of the IotHubClient class, which is a client object that provides a set of methods and properties for interacting with the Azure IoT Hub service. The IotHubClient class is part of the generated code for the Azure Management Libraries for .NET, which is a set of libraries that provide a consistent, easy-to-use, and high-level API for accessing Azure cloud services.

The IotHubClient class is used to perform various tasks related to managing and operating an IoT Hub instance, such as:

  • Creating and deleting IoT Hub instances
  • Setting and querying IoT Hub properties and quotas
  • Managing IoT Hub shared access policies
  • Monitoring IoT Hub usage and diagnostics
  • Performing maintenance tasks such as repairing or backing up IoT Hub

The IotHubClient class exposes a set of methods that correspond to the different operations that can be performed on an IoT Hub instance. For example, the IotHubClient.CreateOrUpdateAsync method is used to create or update an IoT Hub instance, and the IotHubClient.GetAsync method is used to retrieve the properties of an IoT Hub instance.

  • Azure SDK for Java – Azure Resource Manager IotHub client library for Java

I believe this is a better library than the .Net version. It provides more capabilities and samples.

azure-sdk-for-java/sdk/iothub/azure-resourcemanager-iothub at main · Azure/azure-sdk-for-java

The Java library has many of the same features as other libraries. It is based on the Azure Resource Manager Java library, so if you are already familiar with it, using the IoT Hub Management Library should be straightforward.

  • Azure IotHub client library for JavaScript

This is the same library as the Service library, which makes sense, and it explains why the Service library seems larger than the other service libraries. It is also built on top of Azure Resource Manager.

  • Azure SDK for Python – Azure Management IoT Hub

This library is a bit more like the .Net library: you enter with an IoT Hub client and access your capabilities that way. It offers most of the same capabilities as the other libraries.
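As a minimal sketch of the Python management library (the azure-mgmt-iothub package), here’s how you might read back a hub’s properties; the resource group and hub names are hypothetical:

import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.iothub import IotHubClient

# Authenticate with Azure AD and scope the client to a subscription
subscription_id = os.getenv("AZURE_SUBSCRIPTION_ID")
client = IotHubClient(DefaultAzureCredential(), subscription_id)

# Retrieve an IoT Hub instance; "my-rg" and "my-hub" are made-up names
hub = client.iot_hub_resource.get("my-rg", "my-hub")
print(hub.name, hub.properties.state)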

Overall, these libraries offer similar capabilities. Java and JavaScript have had more development, but the .Net and Python versions are still useful. Additionally, the underlying REST API provides many management capabilities. If the SDKs don’t meet your needs, you can use Azure Resource Manager to manage IoT Hub.

Azure IoT Services SDKs

Azure IoT SDKs provide a range of tools and services for building secure, connected, and location-aware IoT applications and solutions. This section will cover the basics of these products.

Azure Maps, Azure Sphere, Azure Digital Twins, Azure Object Anchors, and Azure Remote Rendering are all included in the Azure IoT resources. Each of these offers unique features and capabilities to enable the development of IoT applications and solutions.

Overview of the Azure IoT services SDKs available

Azure Maps

Azure Maps is a cloud-based mapping and location platform that provides a range of tools and services for building location-based applications and solutions. It is part of the Azure IoT resources, as it can be used to build location-aware IoT applications and solutions that incorporate geospatial data and functionality.

Some of its key features and capabilities include:

  1. Map rendering: Azure Maps offers 2D and 3D maps, street maps, satellite imagery, and terrain maps, which can be rendered and displayed in web-based, mobile, and embedded formats.
  2. Geocoding and reverse geocoding: Convert addresses and location coordinates into geographic locations, and vice versa.
  3. Routing and navigation: Calculate routes and directions between two or more locations.
  4. Location services: Geofencing, location search, and real-time traffic data for building location-based applications.

Overall, Azure Maps is a powerful and flexible platform for building location-based applications and solutions, especially for IoT scenarios.
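To make the geocoding capability concrete, here’s a minimal sketch that calls the Azure Maps Search REST API from Python; the environment variable name and the query address are my own examples:

import os
import requests

# Azure Maps account key; the variable name is an assumed convention
subscription_key = os.getenv("AZURE_MAPS_KEY")

# Forward-geocode a street address into coordinates
resp = requests.get(
    "https://atlas.microsoft.com/search/address/json",
    params={
        "api-version": "1.0",
        "subscription-key": subscription_key,
        "query": "400 Broad St, Seattle, WA",
    },
)
resp.raise_for_status()
position = resp.json()["results"][0]["position"]
print("lat={}, lon={}".format(position["lat"], position["lon"]))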

Azure Maps provides four separate SDKs for developers:

  • REST SDK
  • Web SDK (JavaScript)
  • Android SDK (Java or Kotlin)
  • iOS (Swift)

Azure Maps Documentation

Azure Sphere

Azure Sphere is a secure and connected microcontroller platform designed for building Internet of Things (IoT) applications and solutions. It is part of the Azure IoT resources, providing a range of tools and technologies to build and deploy secure, connected IoT devices.

Key features and capabilities of Azure Sphere include:

  1. Security: Azure Sphere is designed to provide a high level of security for IoT devices. It features a custom-built Linux operating system and a hardware-based security module that provides secure boot, trusted execution, and hardware-level isolation.
  2. Connectivity: Azure Sphere supports various connectivity options, including Wi-Fi and cellular, allowing devices to connect to the internet and communicate with other devices and systems.
  3. Development tools: Azure Sphere includes a software development kit (SDK), libraries and APIs, and integration with Azure IoT Hub and other Azure services, making it easier to build and deploy IoT applications and solutions.

Overall, Azure Sphere is an excellent platform for building and deploying secure, connected IoT devices, especially in scenarios where security and connectivity are paramount. Azure Sphere comes with a complete development kit. There are two basic SDKs, a Windows and a Linux version. The only supported programming language for Sphere appears to be C, which makes sense considering it’s designed for embedded development.

Digital Twins

Azure Digital Twins is different from the device digital twin discussed in this article. It is a cloud-based platform that enables you to construct digital twin models of physical spaces, systems, and assets. A digital twin is a digital representation of a physical object or environment that reflects its real-world characteristics and behavior.

Azure Digital Twins is part of the Azure IoT resources, providing tools and capabilities to build and deploy IoT applications and solutions that incorporate digital twin models. Key features and capabilities of Azure Digital Twins include:

  1. Data modeling and integration: Create a digital twin model by defining objects and properties that represent physical objects and features in the environment, as well as the relationships between them. Integrate data from external sources, such as sensors, devices, and systems, into the digital twin model.
  2. Data storage and querying: Store and query data within the digital twin model. Use various data types and formats to represent data in the digital twin, and use SQL-like queries to retrieve and analyze the data.
  3. Behaviors and logic: Define behaviors and logic in the digital twin model, using tools such as Azure Functions and Azure Stream Analytics. This enables the digital twin to respond to events and conditions in real-time, and perform actions or calculations based on the data in the digital twin.
  4. Visualization and exploration: Use the Azure Digital Twins portal, Azure Grafana, or the Azure Digital Twins REST API to view and interact with the digital twin.

Overall, Azure Digital Twins is a powerful and flexible platform for creating and managing digital twin models, and enabling a wide range of use cases and scenarios in the IoT space.

Azure Digital Twins comes with control plane APIs, data plane APIs, and SDKs for managing instances and its elements. The SDKs available are for .NET (C#), Java, JavaScript, Python, and Go.
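
To give a feel for the data plane SDK, here is a hedged sketch that runs one of those SQL-like queries with the Java client library. The endpoint is a placeholder, and the class and method names follow the azure-digitaltwins-core package as I understand it, so verify them against the current docs:

import com.azure.digitaltwins.core.BasicDigitalTwin;
import com.azure.digitaltwins.core.DigitalTwinsClient;
import com.azure.digitaltwins.core.DigitalTwinsClientBuilder;
import com.azure.identity.DefaultAzureCredentialBuilder;

public class TwinQueryExample {
    public static void main(String[] args) {
        DigitalTwinsClient client = new DigitalTwinsClientBuilder()
                .credential(new DefaultAzureCredentialBuilder().build())
                .endpoint("https://your-instance.api.wus2.digitaltwins.azure.net") // placeholder
                .buildClient();

        // SQL-like query over the twin graph; prints each twin's ID
        client.query("SELECT * FROM digitaltwins", BasicDigitalTwin.class)
              .forEach(twin -> System.out.println(twin.getId()));
    }
}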

I plan to cover Azure Digital Twins in much greater detail in the future. I’m working on a small book on the subject for Solution Architects. I think it’s one of the most interesting technologies available today. I don’t think we’ve scratched the surface of its potential.

Object Anchors

Azure Object Anchors is a service that allows you to create, manage, and interact with digital representations of physical objects in mixed reality applications. It is included in the Azure IoT resources as it can be used to build IoT applications and solutions with mixed reality features.

Some of the key features and capabilities of Azure Object Anchors include:

  1. Object recognition: Azure Object Anchors can recognize and track physical objects in real-time using machine learning algorithms and computer vision techniques. This enables you to create digital representations of physical objects that can be displayed in mixed reality applications.
  2. Digital twin creation: Azure Object Anchors allows you to create digital twin models of physical objects, which can reflect the real-world characteristics and behavior of the objects. You can define the properties and relationships of the objects in the digital twin model, and integrate data from external sources such as sensors and devices.
  3. Mixed reality integration: Azure Object Anchors can be integrated with mixed reality platforms and tools, such as HoloLens and Unity, allowing you to build mixed reality applications that incorporate the digital twin models created with Azure Object Anchors.

Azure Object Anchors is a powerful platform for building mixed reality applications and solutions that incorporate digital representations of physical objects, particularly in IoT scenarios. Development is built around the Unity game development platform and the HoloLens device, though not necessarily restricted to them. The SDKs available are a Conversion SDK for .NET, designed to turn existing 3D model assets into Unity Digital Twin objects, as well as Runtime SDKs for Unity and HoloLens. If you plan to include augmented reality in your IoT solution, Azure Object Anchors is a great option.

Remote Rendering

Azure Remote Rendering is a cloud-based service that enables high-quality 3D graphics and visualization to be rendered in real-time, using the power and scale of the cloud. It is included in the Azure IoT resources because it can be used to build IoT applications and solutions that incorporate 3D graphics and visualization, and that require real-time rendering at scale.

Key features and capabilities of Azure Remote Rendering include:

  1. 3D graphics rendering: Azure Remote Rendering offers a range of tools and technologies for rendering high-quality 3D graphics, such as support for various 3D file formats, realistic lighting and materials, and advanced rendering techniques like ray tracing.
  2. Real-time rendering: Azure Remote Rendering is designed to support real-time rendering of 3D graphics, even at high resolutions and frame rates. This is useful for applications and solutions that need to display dynamic, interactive 3D graphics in real-time.
  3. Cloud-based rendering: Azure Remote Rendering leverages the power and scale of the cloud to enable high-performance rendering of 3D graphics. This is beneficial for applications and solutions that need to support large numbers of users or devices, or that need to process large volumes of data.
  4. Integration with other Azure services: Azure Remote Rendering can be integrated with other Azure services, such as Azure IoT Hub and Azure Stream Analytics, which allows for the building of IoT applications and solutions that incorporate 3D graphics and visualization.

Overall, Azure Remote Rendering is an effective platform for building applications and solutions that incorporate 3D graphics and visualization, and that require real-time rendering at scale. It is particularly suitable for use in IoT scenarios where 3D graphics and visualization are important.

Developing for this service requires making use of the existing REST APIs. Those are:

  • Azure Mixed Reality Resource Management REST API
  • Remote Rendering REST API

There is an extensive Microsoft.Azure.RemoteRendering API library available for C#.

Spatial Anchors

Azure Spatial Anchors is a service that enables you to create and manage digital anchors in physical spaces. It can be used to build IoT applications and solutions that incorporate location-based functionality and support mixed reality scenarios. I think one of the best examples of this technology that many people are probably familiar with is the mobile game Pokémon Go. This massively multiplayer platform allows gamers to track, capture, train, and battle fantasy creatures in a mixed-reality environment.

Its key features and capabilities include:

  1. Digital anchor creation: You can define the properties and relationships of the anchors in the digital model, and integrate data from external sources such as sensors and devices.
  2. Mixed reality integration: Azure Spatial Anchors can be integrated with mixed reality platforms and tools, such as HoloLens and Unity.
  3. Real-time anchor tracking: It supports real-time tracking of the digital anchors, allowing you to update the location and orientation of the anchors in the mixed reality application in real-time.
  4. Cloud-based anchor management: It provides a cloud-based platform for managing the digital anchors, which is useful for applications and solutions that need to support large numbers of users or devices, or handle large volumes of data.

Overall, Azure Spatial Anchors is a powerful platform for building mixed reality applications and solutions that incorporate location-based functionality. It is especially suitable for IoT scenarios where mixed reality and location-based functionality are important.

There are multiple Spatial Anchors SDKs:

  • SDK for Unity
  • SDK for iOS Objective-C
  • SDK for Android Java
  • SDK for Android NDK
  • SDK for HoloLens C++/WinRT
  • Azure CLI
  • Azure PowerShell

There are multiple samples to work through to learn this resource. It's another member of the family of resources that bring real-world data to life in the digital world.

Time Series Insights

Azure Time Series Insights is a cloud-based platform that allows you to store, visualize, and analyze time series data in real-time. It is included in the Azure IoT resources because it can be used to build IoT applications and solutions that need to process, analyze, and visualize large volumes of time series data in real-time.

Some of its key features and capabilities include:

  1. Time series data storage: Azure Time Series Insights provides a scalable and highly available data store for storing time series data. It can handle large volumes of data and supports a variety of data types and formats.
  2. Real-time data visualization: Azure Time Series Insights offers a range of visualization tools and interfaces that allow you to view and analyze time series data in real-time. You can use the Azure Time Series Insights portal, Azure Grafana, or the Azure Time Series Insights REST API to visualize and explore the data.
  3. Query and analysis: Azure Time Series Insights provides a query language and a range of analytical functions that allow you to perform complex queries and analysis on the time series data. This can be useful for applications and solutions that need advanced analysis.
  4. Integration with other Azure services: Azure Time Series Insights can be integrated with other Azure services, such as Azure IoT Hub and Azure Stream Analytics, to build IoT applications and solutions that incorporate time series data and analysis.

Overall, Azure Time Series Insights is a powerful platform for building IoT applications and solutions that need to process, analyze, and visualize large volumes of time series data in real-time. It is particularly well-suited for scenarios where real-time data analysis and visualization are important.

Azure Time Series Insights and Azure Data Explorer are both cloud-based platforms designed to store, process, and analyze large volumes of data in real-time. However, there are key differences between the two.

Data types and formats: Azure Time Series Insights is designed for time series data, which is data collected and recorded over time at regular intervals. Azure Data Explorer is more general-purpose, and can handle structured and unstructured data.

Query language: Azure Time Series Insights uses Kusto Query Language (KQL) for time series data. Azure Data Explorer also uses KQL, but it is more flexible and can query a wider variety of data types and formats.

Visualization: Both platforms offer visualization tools and interfaces for viewing and exploring the data. Azure Time Series Insights is more specialized for time series data, and provides visualization options specifically designed for it.

Integration with other Azure services: Both platforms can integrate with Azure IoT Hub and Azure Stream Analytics, but the specific integration options and capabilities may vary.

Overall, Azure Time Series Insights is specialized for time series data, while Azure Data Explorer is more general-purpose.

Other Services

There are additional services that don’t fit into the IoT resources category, but are often used with Azure IoT Hub to fulfill the backend data processing needs of an IoT platform. These include:

  • Azure Functions
  • Azure Event Hubs
  • Azure Synapse
  • Azure Storage
  • Azure Databricks

Choosing the Right Azure IoT Hub SDK

Azure IoT Hub SDKs are available for multiple languages, platforms, protocols, and devices, and should be chosen based on the specific needs of the IoT solution. Best practices for using the SDKs include using the most recent version, following the recommended architecture, using the async programming model, and using secure communication protocols. Message routing, device twins, and device identities and keys should also be used to optimize the performance and security of the solution.

Factors to consider when choosing an Azure IoT Hub SDK

Consider several factors when selecting the right Azure IoT Hub SDK:

  1. Language support: Choose an SDK that supports the programming language you are using for your IoT solution. Azure IoT Hub SDKs are available for C, C#, Java, Node.js, and Python.
  2. Platform support: Consider the platform you are using for your IoT solution. Azure IoT Hub SDKs are available for Windows, Linux, and macOS.
  3. Protocol support: Choose an SDK that supports the protocol you are using for communication between your IoT devices and the cloud. Azure IoT Hub supports MQTT, AMQP, and HTTP.
  4. Device support: Consider the type of device you are using and choose an SDK that supports the necessary features and capabilities. For example, if you need secure communication, make sure the SDK you choose supports secure communication protocols.
  5. Ecosystem integration: If you are building an IoT solution that integrates with other Azure services or third-party tools, choose an SDK with the necessary integration capabilities.
  6. Ease of use: Consider the learning curve and documentation associated with the SDK. Choose an SDK that is easy to use and has clear documentation to help you get started quickly.
  7. Development environment: Consider the development environment you are using, and choose an SDK that is compatible with your environment.
  8. Performance: Consider the performance needs of your IoT solution and choose an SDK that can meet those needs.
  9. Cost: Consider the cost of the SDK and whether it fits within your budget.

Best practices for using Azure IoT Hub SDKs in different scenarios

Keep these practices in mind:

  • Use the most recent version of the Azure IoT Hub SDKs to take advantage of new features and improvements.
  • Follow the recommended architecture to ensure your solution is scalable, reliable, and secure.
  • Use the async programming model to keep code non-blocking and performance high.
  • Use secure communication protocols, such as TLS, to protect the data transmitted between devices and the cloud.
  • Use device-to-cloud messages for critical data that needs to be stored and analyzed, and cloud-to-device messages to control devices or send commands.
  • Use message routing to filter and route messages based on the content of the message or the device that sent it; this helps scale the IoT solution and process messages more efficiently.
  • Use device twins to manage the state of devices and synchronize that state with the cloud.
  • Keep device identities and keys secure and protected from unauthorized access.
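
To illustrate a few of these practices together (a current SDK, MQTT over TLS, and an async send), here is a minimal device-side sketch. It assumes the classic azure-iot-sdk-java device client API; the connection string is a placeholder, and method names differ between SDK major versions, so verify against the version you install:

import com.microsoft.azure.sdk.iot.device.DeviceClient;
import com.microsoft.azure.sdk.iot.device.IotHubClientProtocol;
import com.microsoft.azure.sdk.iot.device.Message;

public class TelemetryExample {
    public static void main(String[] args) throws Exception {
        // The device connection string comes from the device identity in IoT Hub
        String connString = System.getenv("IOTHUB_DEVICE_CONNECTION_STRING");

        // MQTT in this SDK is negotiated over TLS
        DeviceClient client = new DeviceClient(connString, IotHubClientProtocol.MQTT);
        client.open();

        // Non-blocking device-to-cloud send; the callback reports the hub's status code
        Message message = new Message("{\"temperature\": 21.5}");
        client.sendEventAsync(message,
                (status, context) -> System.out.println("Send status: " + status), null);

        Thread.sleep(2000); // crude wait for the callback in this sketch
        client.closeNow();
    }
}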

Conclusion and Future Directions

The potential of Azure IoT Hub SDKs to drive business value

Azure IoT Hub SDKs offer several ways to drive business value:

  1. Improve efficiency and productivity: Automate processes, gather real-time data, and enable remote monitoring and control with IoT solutions built using Azure IoT Hub.
  2. Increase customer satisfaction: Better understand and meet customer needs, leading to increased satisfaction and loyalty.
  3. Enhance decision making: Collect and analyze real-time data from IoT devices to make more informed decisions and improve operations.
  4. Improve asset management: Track and monitor the performance and maintenance of assets, leading to improved asset utilization and reduced downtime.
  5. Generate new revenue streams: Offer new products and services, create new business models, and open up new revenue streams with IoT solutions.
  6. Reduce costs: Optimize resource usage, improve maintenance schedules, and automate processes with IoT solutions.
  7. Increase competitiveness: Give businesses a competitive edge by providing them with a way to differentiate themselves and offer unique value to their customers.

Looking ahead, several trends are shaping where IoT development is going:

  • Edge computing processes data closer to its source, reducing latency, improving performance, and reducing the amount of data sent to the cloud.
  • 5G networks are enabling faster and more reliable IoT connectivity, opening up new use cases and applications.
  • Artificial intelligence and machine learning are being integrated into IoT solutions, making them more intelligent and autonomous, with the ability to learn and adapt.
  • Blockchain technology is being used to improve security, traceability, and trust.
  • The IoT is being applied in healthcare for remote monitoring, diagnostics, and treatment (IoMT), and in industrial settings to improve efficiency, maintenance, and safety (IIoT).
  • Smart cities are using the IoT to improve public services, transportation, and environmental sustainability.
  • Augmented reality and virtual reality are being integrated into IoT applications, creating immersive and interactive experiences.

Closing

Azure IoT Hub Device SDKs are software development kits that enable developers to create applications that run on IoT devices and connect them to Azure IoT Hub. These SDKs provide a set of APIs and libraries that allow devices to perform basic and complex IoT device-to-cloud communications and commands, and support a wide range of programming languages and platforms. Common use cases include sending telemetry data, receiving commands, connecting devices using various protocols, authenticating devices, handling messaging, managing device twins, and receiving notifications.

Azure IoT Edge is a fully managed service that enables developers to deploy cloud intelligence directly onto IoT devices. It consists of edge modules, an IoT Edge runtime, and Azure IoT Hub. This allows developers to create and deploy intelligent edge solutions that bring the power of the cloud to the edge of the network.

This article has presented some of the major SDKs related to IoT and some of the related Azure IoT resources. It brings these resources together to show the array of options available to the Azure IoT developer. If a favorite was left out, please leave a comment. It could be a subject of a future post.

Posted on Leave a comment

AutoML and Domain Driven Design

Can Domain-Driven Design (DDD) improve AutoML implementations? I believe it can, as many Machine Learning experiments involve the same problem-solving approaches used in software development.

This article provides a brief overview of AutoML and no-code development. It then discusses the most common approach to DDD for software development. With a specific use case in mind, I’ll walk through a scenario with AutoML as the tactical architecture. I’ll explain how DDD should be used to make strategic and tactical decisions regarding AutoML development.

By the end, you’ll have a basic understanding of AutoML and DDD. You’ll also understand how to apply DDD as a framework to build the right ML solution for the domain problem with organizational stakeholders.

Introduction to AutoML

AutoML is the process of automating the tasks of applying machine learning to real-world problems, according to Wikipedia. So, what is the use case for no-code AutoML?

Many organizations struggle to move beyond the proof-of-concept stage. This can be due to a lack of staff or data estate to support the efforts, the technical complexity of building out the infrastructure to support machine learning in production, or an unclear definition of the business objectives they wish to meet in the problem space.

AutoML helps reduce the risk of failure by providing cloud-native, low- or no-code tools to guide users through the process of curating a dataset and deploying a model. No-code development has enabled organizations to reach their goals without the need for experts. Popular platforms like Microsoft’s Power Platform, Zoho Creator, Airtable, BettyBlocks, and Salesforce have made no-code development a regular part of an organization’s IT toolset. This puts development tools closer to domain experts, allowing organizations to meet their objectives without the usual IT project overhead.

Critics of the no-code movement point to limited capabilities compared to traditional software development, dependency on vendor-specific systems, lack of control, poor scalability, and potential security risks. However, some organizations may find these risks worth the opportunities and solutions that no-code provides.

AutoML has both critics and champions. Organizations should be aware that AutoML comes with tradeoffs alongside its benefits. Champions of AutoML will point to the following advantages over traditional machine learning development processes:

  • Accessibility: AutoML requires minimal knowledge of machine learning concepts and techniques, so you don’t need a data scientist or data engineer to guide you through the process.
  • Collaboration: Platforms like Azure ML, Databricks, and Amazon SageMaker Studio enable data collaboration in one place, allowing teams to share data, models, and results with each other.
  • Consistency: Automating the optimization of models reduces the chances of human error, improving the consistency of machine learning model results.
  • Customization: Platforms like Azure ML and Amazon SageMaker Studio make it easy to customize machine learning environments and set specific requirements for models and parameters.
  • Efficiency: AutoML addresses faster ways to preprocess data, select the correct model, and tune hyperparameters, reducing tedious and time-consuming tasks.
  • Scalability: AutoML platforms are typically built on cloud architecture, making it easier to handle large datasets and complex problems.

Critics of AutoML warn that using it instead of traditional machine learning could lead to dependence on data quality, ethical concerns, lack of control, lack of interpretability, and lack of transparency.

Data quality is essential: many AutoML platforms require clean data with no issues. Without data engineers or a data quality process, an organization is unlikely to have clean data. Poor data quality or noisy data can result in inaccurate models.

Ethical considerations must also be taken into account. Algorithms may perpetuate existing biases and discrimination if the data used to train them is unbalanced or biased.

AutoML’s abstraction of the complexities of model creation is beneficial, but it also means users can’t control what happens during the pipeline process. If the algorithms developed from AutoML are difficult to understand, organizations may not have insight into how decisions are being made, and may unknowingly release models with flawed biases.

Without understanding how the model is making decisions, it’s hard to fully grasp the strengths and weaknesses of a model, leading to a lack of transparency.

Additionally, the models generated from AutoML may not be able to handle specialized problems or reach the performance expected of modern ML models.

AutoML is a process of automating the tasks of applying machine learning to real-world problems. It offers cloud-native, low- to no-code tools that help guide users from a curated dataset to a deployed model. There are benefits and tradeoffs to using AutoML: accessibility, collaboration, customization, scalability, and efficiency on one side; potential ethical concerns, lack of control, interpretability, transparency, and limited capabilities on the other. Is there a way to apply a common, well-established framework that helps us better exploit the positive elements of AutoML while reducing the negative side effects raised by its critics? I believe there is, and I think Eric Evans' approach to creating a ubiquitous language shared by the software development team and the domain experts within an organization is the best place to start.

A Quick Overview of Domain-Driven Design for Software Development

DDD is a software development practice that focuses on understanding and modeling the complex domains that systems operate in. It emphasizes the importance of gaining a deep understanding of the problem domain and using this knowledge to guide system design. DDD is a flexible practice based on principles and concepts rather than rigid rules. I use DDD because, as a developer, it encourages me to think more about the domain problem and desired business outcomes than about the technical approach to creating software and infrastructure. It's a lightweight way of building a shared language with domain experts. The best example of this I've found is in the book Architecture Patterns with Python, published by O'Reilly Media.

Imagine that you, our unfortunate reader, were suddenly transported light years away from Earth aboard an alien spaceship with your friends and family and had to figure out, from first principles, how to navigate home.

In your first few days, you might just push buttons randomly, but soon you’d learn which buttons did what, so that you could give one another instructions. “Press the red button near the flashing doohickey and then throw that big lever over by the radar gizmo,” you might say.

Within a couple of weeks, you’d become more precise as you adopted words to describe the ship’s functions: “Increase oxygen levels in cargo bay three” or “turn on the little thrusters.” After a few months, you’d have adopted language for entire complex processes: “Start landing sequence” or “prepare for warp.” This process would happen quite naturally, without any formal effort to build a shared glossary.

I love that this example shows the natural process of discovery and how it creates a shared understanding of the spaceship’s behavior. DDD is a big topic, so I won’t try to cover it all here. The important thing to understand is that DDD is meant to be practiced. It’s a process based on discussions and drives towards building deep, shared knowledge about a specific problem.

Why do I believe that someone wishing to learn machine learning, even as an AutoML user, should begin their own DDD practice? I would point to these key concepts of DDD:

  1. Bounded context: Isolate and clearly define a specific part of the problem domain to manage complexity and prevent misunderstandings between different parts of the domain model. Avoid “boiling the ocean” and taking on more work than can be managed. A bounded context can represent a team, line of business, department, set of related services, data elements, or parameters.
  2. Domain expert: Someone with a deep understanding of the problem domain who can provide valuable insights and guidance to the development team. Without access to a domain expert, it's difficult to build a solution of real value.
  3. Domain model: Representation of the key concepts and relationships in the problem domain, expressed in a shared language. Not an exact replica of reality, but captures the essence of what makes the organization’s model unique.
  4. Event storming: Collaborative technique to identify and model key events and processes in the problem domain. Uncovers hidden complexity and ensures the domain model reflects the needs of the business.
  5. Ubiquitous language: Shared language used by all members of the development team to communicate about the problem domain. Ensures everyone is using the same terminology and concepts.

Why would an AutoML developer want to know DDD?

DDD encourages the use of domain-specific language and concepts in modeling, which can make it easier for domain experts to understand and interpret the results of the models. It also emphasizes the importance of understanding the business context and domain-specific knowledge when solving problems. This can help AutoML developers to build more accurate and effective models.

DDD provides a common language and set of concepts that can help data scientists communicate more effectively with domain experts and other stakeholders. It also emphasizes the importance of designing solutions that are maintainable and adaptable over time, helping AutoML developers to build models that are more robust and resilient to change.

Finally, DDD encourages collaboration between domain experts and technical experts, which can help AutoML developers to better understand the problem they are trying to solve and the impact their solutions will have on the business.

A Use Case: A Machine Learning Model to Diagnose the Flu

I have explored how AutoML and Domain-Driven Design can be used together as a framework to help AutoML developers. Our aim is to take advantage of AutoML’s positive aspects while minimizing its negative tradeoffs. I have discussed why an AutoML developer might choose to use DDD as a framework, so in this section I will explain how to implement the process.

I have chosen a relatively simple use case, one that has been extensively studied in terms of building classifiers for the problem domain. Therefore, I will take a basic approach to a non-novel problem, focusing on the DDD process rather than the complexity of the problem domain.

Using domain-driven design (DDD) to build a machine learning model to help diagnose patients with the flu could involve the following steps:

  1. Identify the bounded context: The first step would be to identify the bounded context, or specific part of the problem domain, that the model will operate in. In this case, the bounded context might be the process of diagnosing patients with the flu. A conversation with the people requesting a solution can help establish scope and general goals. Additionally, tools like Simon Wardley's Wardley Mapping and Teresa Torres' Opportunity Solution Trees can uncover the business outcomes the organization is attempting to address and the service or supply chain associated with meeting customer needs. Ask questions such as: What outcomes do the requesters expect from a solution that can diagnose patients with the flu? Will it lessen time in a waiting room? Will it make patient intake faster?
  2. Identify the domain experts: The development team should then identify domain experts who have a deep understanding of the problem domain and can provide valuable insights and guidance. These domain experts might include medical professionals who are experienced in diagnosing and treating patients with the flu. The goal is to establish clarity around vocabulary, expected behaviors, and begin to build a vision for the existing strategic, business systems.
  3. Define the ubiquitous language: The development team should work with the domain experts to define a shared language, or ubiquitous language, that everyone can use to communicate about the problem domain. This might include defining key terms and concepts related to the flu and its symptoms.
  4. Conduct event storming: The development team should use a collaborative technique called event storming to identify and model the key events and processes involved in diagnosing patients with the flu. This might include identifying the symptoms that are most indicative of the flu and the tests that are typically used to diagnose it. A large whiteboard or a shared online collaboration tool can be used to define domain events, commands, policies, and other important elements that make up a working system.
  5. Build the domain model: Using the insights and knowledge gained through event storming, the development team should build a domain model that represents the key concepts and relationships in the problem domain. This might include building a model that predicts the likelihood of a patient having the flu based on their symptoms and test results.
  6. Use AutoML to build and tune the machine learning model: The development team should then use automated machine learning (AutoML) to build and tune the machine learning model. This might involve selecting an appropriate model type, preprocessing the data, and optimizing the hyperparameters. The AutoML user should build a solid understanding of the settings they want to establish for their model. The type of model should address the primary problem type uncovered during the DDD process. If it doesn't, the AutoML developer should return for additional sessions with the domain experts to tune and solidify the design of the solution.

Overall, by applying DDD principles to the development of the machine learning model, the development team can create a model that is closely aligned with the business needs and can evolve and adapt over time. DDD is not just a process to follow when planning the project, but one to continue throughout the development and deployment of the solution. Involve domain experts in the ML model’s lifecycle as long as it creates value.
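
To ground the domain model step in something concrete, here is a toy sketch of what a ubiquitous-language model for this bounded context might look like in code. Every name here is hypothetical, the kind of term the team would agree on with the clinicians rather than anything dictated by an SDK. Java records (Java 16+) keep it terse:

import java.util.List;

// Terms the team and clinicians agreed on during event storming
record Symptom(String name, int severity) {}            // e.g., "fever", severity 1-10
record TestResult(String testName, boolean positive) {} // e.g., a rapid antigen test

record PatientPresentation(List<Symptom> symptoms, List<TestResult> results) {}

record FluDiagnosis(PatientPresentation presentation, double likelihood) {
    // Threshold chosen with the domain experts, not by the developers alone
    boolean isLikelyFlu() { return likelihood >= 0.5; }
}

The point isn't the code itself; it's that every name in the model maps to a term the domain experts already use, and that vocabulary then carries over into how the AutoML dataset is labeled and curated.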

DDD and AutoML

AutoML is a process of automating machine learning tasks to solve real-world problems. It offers cloud-native, low- to no-code tools to guide users from a curated dataset to a deployed model. Benefits include accessibility, collaboration, customization, scalability, and efficiency. However, there are potential ethical concerns, lack of control, interpretability, transparency, and limited capabilities.

Domain-Driven Design (DDD) is a software development practice that focuses on understanding and modeling complex organization domains. It encourages developers to think more about the domain problem and desired business outcomes than the tactical approach. DDD is a flexible practice built on principles and concepts, not hard rules. It is a lightweight method of building a common language with domain experts.

Does this mean DDD is right for every AutoML endeavor? Not necessarily. When experimenting with data and working with light predictions, bringing the framework of DDD is likely overkill. But when working with complex domains, like healthcare or modern manufacturing, involving domain experts is common. DDD is useful for conducting valuable discussions, capturing important vocabulary, and uncovering unique domain behaviors. It is a tool to include in the professional’s toolbox, even when working with low- or no-code solutions. Understanding how the organization will use the solution to meet desired outcomes is essential for success. DDD helps bridge the gap between desired outcomes and machine learning models.

Posted on Leave a comment

Let’s Learn about Camel K

There are multiple cloud and serverless solutions available for integration. Some of these are proprietary products like Microsoft BizTalk, Azure Logic Apps, or Dell’s Boomi.

There are hybrid options, like Salesforce's MuleSoft, which has a lighter, open source version, but the real functionality is in the licensed version. Then there are a few truly open source solutions, like Apache ServiceMix or Apache Synapse (not to be confused with Azure Synapse).

We’re covering Camel K today, because it looks like an integration winner in the open source community. ServiceMix looked like an interesting competitor to Mulesoft, but it doesn’t seem to be as active as Camel K.

What is Camel?

Apache Camel is an enterprise integration framework based on the famous Enterprise Integration Patterns codified by Gregor Hohpe and Bobby Woolf in their book, Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions.

I’ve been waiting forever for a second edition of that book, but for the most part, that’s not necessary. I first learned the ins-and-outs of distributed systems from that book, and I would still recommend it as required reading today for cloud architects.

I won’t dive into the discipline and the craft needed to be a software integrator, but the fact that Camel focuses on these concepts and patterns makes a developer’s job much easier.

Camel is a small library designed to work with pluggable connectors. There are hundreds of pluggable connector choices for existing, known APIs. And like any good pluggable system, you can create your own plug-ins.

You can create routing and mediation rules in a variety of JVM languages, like Java, Groovy, and Kotlin. There’s also the option to use XML and YAML if you’re into that sort of thing (no judgement!).

Is there a difference between Camel and Camel K?

Camel K is a native version of Apache Camel designed to run containerized on Kubernetes, built with serverless and microservice architectures in mind. Basically, this is the serverless version of Camel.

That's one of the reasons I believe it has thrived more than a product like ServiceMix.

The good news! If you are familiar with Camel, Camel K will just work. If you’ve written code in Camel DSL, that code can run on Kubernetes or OpenShift.

Why use Apache Camel?

I’ve been in IT long enough that my career is no longer carded at the bar when it wants to buy a beer. One practice that never seems to go away is integration. It’s the glue that holds everything together.

As someone working in the technology field, having a minimum understanding and respect for the challenges of integrating disparate systems is necessary. No matter what platform you build on, at some point your platform will need to interface with some other systems.

It’s best to understand the fundamentals of integration if you want successfully operating technology systems that bring multiple chains of value together for your customers.

Camel is happiest as the connection between two or more systems. It allows you to define a Camel Route, which lets you make decisions about what to do with data coming into the route, what processing needs to be done to that data, and what that data needs to look like before it's sent to the other system.

And let me be clear, that data can be almost anything. It could be an event from a piece of manufacturing machinery, it could be a command from one system to another, or it could be the communication between two microservices.

The enterprise integration patterns were designed to help establish what actually happens to a message or data between two or more systems.

As you can probably imagine, having a system between two other systems that allows you to modify the data or take action on that data is a pretty powerful tool.

Why not just directly connect the two systems together?

There are times when this might not be a bad solution. Context is important when building technology solutions. Point-to-point connections aren’t always evil.

However, when you get to the point where you have more than two systems that need to exchange data or messaging, you start to run into a problem. Keeping up with messages, source systems, and data transformations can get painful.

When things get painful, it’s time to use a tool to help stop that pain.

Camel is excellent for this. It's exceptionally flexible and provides multiple patterns, from manipulating XML or JSON files to working directly with cloud services.

Ready to learn!


Let’s do this! Here’s where we start.

Routes

Remember the integration patterns I discussed earlier? Well, this is where we start putting those to work.

Camel Routes are designed with a Domain Specific Language using a JVM programming language like Java, Kotlin, or Groovy. And you can also define these in YAML or XML.

If you are familiar with working on MuleSoft code under the hood, those ugly XML routes will be familiar.

<routes xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="timer:tick"/>
        <setBody>
            <constant>Hello Camel K!</constant>
        </setBody>
        <to uri="log:info"/>
    </route>
</routes>

That’s not too bad. And ideally you want to keep these routes short and concise. If your routing logic starts to look scary, it’s probably time to write code.

I don’t want to dive too far into code here. This article’s goal is to just give you a quick overview, but I do want to show you how easy this is using Java.

import org.apache.camel.builder.RouteBuilder;

public class Example extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:tick")
            .setBody()
              .constant("Hello Camel K!")
            .to("log:info");
    }
}

Ok, so what if you come from the Mulesoft world or one of these other integration platforms that offer a visual interface to make this work? Let’s be honest, if you’ve used Azure Logic Apps, Make, or Zapier, you probably want a similar experience.

Drag and Drop GUI

I don’t want to jump too far ahead, but there is a solution for the low-code folks. And let’s face it, seeing a visual representation of flows is much easier to work with than code.

Introducing Kaoto

There’s a lot to Kaoto. I want to keep this brief, but I do want to assure those who are used to working with visual tools that you aren’t losing that functionality. For the engineers in the room, Kaoto won’t increase the footprint of the deployed code.

Why is using Kaoto a good idea?

  • It’s Cloud Native and works with Camel K
  • We can extend it if needed
  • It's not locked into no-code; we can switch back to code if we need to
  • It’s Open Source, so it will only cost you the time to learn it
  • You can run it in Docker – I’m a big fan of doing all my development in containers, so this is always a plus for me

Routes and Integration Patterns

There are well over 300 connection points available for Camel. Many of these are common, like JDBC, REST, and SOAP. But there are also more specific connectors, like Slack, gRPC, Google Mail, WordPress, and RabbitMQ.

Many of the connectors you are used to seeing in commercial products are available in Camel. If you don’t find something you need, you can create your own connector.

There are also integration patterns for almost any situation, and the patterns can be connected and built upon to create messaging pipelines.

I won’t go into each pattern, but they fit within these categories:

  • Messaging Systems, like a message, a message channel, or message endpoint
  • Messaging Channel, like a point-to-point channel, dead letter channel, message bus
  • Message Construction, like return address, message expiration, and event message
  • Message Routing, like splitter, scatter-gather, and process manager
  • Message Transformation, like content enricher, claim check, and validate
  • Messaging Endpoints, like message dispatcher, idempotent consumer, and messaging gateway
  • System Management, like detour, message history, and step

That is just a short collection of the patterns and their categories. It's well worth any developer's time to read through these and understand the integration problems they address. As a bonus, familiarizing yourself with integration patterns will make you a better programmer and more adept at designing solutions for the cloud.
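
As a small taste of how these patterns look in Camel's Java DSL, here is a hedged sketch of a Content-Based Router. The endpoint names are illustrative:

import org.apache.camel.builder.RouteBuilder;

public class OrderRouting extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:orders")                              // illustrative entry endpoint
            .choice()                                      // Content-Based Router pattern
                .when(header("priority").isEqualTo("high"))
                    .to("log:highPriority")
                .otherwise()
                    .to("log:normalPriority");
    }
}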

The Big K

Camel K allows us to run Camel in Kubernetes. Why is this important?

First, you'll want to understand that Camel K isn't just Camel. It's everything Camel can do, but the Camel K operator is written in Go, while the original Camel is a Java product. Nothing is necessarily wrong with Java and the JVM, but it tends to have a bigger footprint than Go. Go eats less memory. Eating less memory is good for the cloud.

It also doesn’t need the surrounding infrastructure that Camel requires. Camel can run almost anywhere the JVM is supported. Spring Boot is a good way of hosting Camel. And yes, you could containerize that and run it in Kubernetes.

However, Camel K was born for Kubernetes and containers. And there is a custom Kubernetes resource designed for Camel. This means that from a developer’s standpoint, you just need to write your code locally, and then use the Kamel CLI to push your changes to the cloud.

Now, you might want a more defined DevOps process, but the idea is that there is far less friction between the code written and the code that runs in production.

The basic loop is as follows:

  1. You change your integration code
  2. You commit your changes to the cloud
  3. The custom integration resource notifies a Camel K operator
  4. The operator pushes the changes to the running pods
  5. Go back to step 1
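
To make the loop concrete, here is a minimal integration file and the kamel command that kicks it off. This assumes a cluster with the Camel K operator already installed:

// Hello.java, deployed with: kamel run Hello.java
import org.apache.camel.builder.RouteBuilder;

public class Hello extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Fire once a second; the operator builds and runs this in a pod
        from("timer:tick?period=1000")
            .setBody().constant("Hello from the cluster!")
            .to("log:info");
    }
}

Edit the file, run kamel run Hello.java again, and the operator rolls the change out to the running pods.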

Camel K and Knative

What is Knative, and why do I want to use it with Camel K?

Knative is an extension of Kubernetes. It enables Serverless workloads to run on Kubernetes clusters, and provides tools to make building and managing containers easier.

Knative has three primary areas of responsibility:

  1. Build
  2. Serving
  3. Eventing

The idea behind Serverless is that it should just work, and it should just work well in a variety of situations. For instance, it should scale up automatically when workloads increase, and it should scale down automatically when workloads decrease. This is what the serving portion of the solution does.

If you install Camel K on a Kubernetes cluster that already has Knative installed, the Camel K operator will automatically configure itself with a Knative profile. Pretty sweet!

When this is in place, instead of just creating a standard Camel K deployment, Knative and the Camel K operator will create a Knative Service, which gives us the serverless experience.

Knative can also help with events. Event-driven architecture using Camel K is a bit too complex for this quick introduction, but I do want to touch on the possibilities it opens up for developers and architects.

Because Knative allows you to add subscription channels that are associated with your integration services, you can now build pipelines that work with events. This event-based system can be backed by Kafka. Managing events within your Camel K integration service also allows you to employ common integration patterns.

We can accept event sources from any event-producing system. I typically use Mosquitto as my primary MQTT hub, and in this case I could pass all my incoming MQTT messages to Camel K and allow it to manage the orchestration of event messages to its various subscribers.
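
As a sketch of that idea: the Paho component and the Camel K Knative component are real, but the endpoint names, broker URL, and channel name below are illustrative, so verify the URI options against the component docs:

import org.apache.camel.builder.RouteBuilder;

public class MqttFanOut extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("paho:sensors/#?brokerUrl=tcp://mosquitto:1883") // subscribe to all sensor topics
            .to("log:incoming")
            .to("knative:channel/sensor-events");             // hand off to a Knative channel
    }
}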

Camel K and Quarkus

What is Quarkus? Think of Quarkus as the new Java Framework with a funny name. Quarkus is a Kubernetes-native Java framework made for GraalVM and HotSpot. It’s also open source and an Apache project.

Why do we want to use it with our Camel K integrations?

Again, one of the things we want from our cloud native solutions is smaller library sizes. Java frameworks were conceived and built in the age of the monolith. Apps usually ran on powerful hardware in data centers, and they ran continuously. Scaling up or down meant adding or removing hardware.

With Kubernetes and cloud solutions, we want small. The smaller, the better. Quarkus gives us that smaller size, so we can scale up or down as needed.

Basically, we're compiling our Java applications much more like Go: the app is a native binary now, and we don't need the JVM.

Next Steps

Here are a few great resources for learning more about Camel K and how to use it:

https://developers.redhat.com/topics/camel-k

https://camel.apache.org/

https://github.com/apache/camel-k/tree/main/examples


Posted on Leave a comment

A Journey Toward Better Workshops and Analysis

This post is serving two purposes today. The first is to thank all the experts who reached out to me on Twitter to point me toward some important tools, research, and talks related to my request for help. A few days ago I reached out to the Twitter community with that request.


I received a flood of great advice and resources that I’ll go through in this post.

My second purpose for this post is to announce that I’m working on some of my own workshops. Workshops were one of the few meeting types I enjoyed while working at a consulting firm. Done correctly, they’re a wonderful way to work collaboratively with clients and partners. I was fortunate enough to attend and participate in many successful workshops. I also had the opportunity to run and create workshops.

The firm did have its own in-house-developed approaches. Many of these were based either on Microsoft's recommended technical workshops, like the Cloud Adoption Framework, or on Human-Centered Design approaches.

There’s nothing wrong with these workshops, but I always struggled with connecting the workshops directly with the client’s needs. If you know me, I’m never happy going along with something that doesn’t feel like it’s quite the right fit.

I've been on a type of journey, where I'm taking many of the things I've learned from Eric Evans' Domain-Driven Design, from the burgeoning field of sociotechnical systems theory, and from a handful of other disciplines to create something that's uniquely mine. My approach won't be too far off from what's regularly practiced in DDD modeling sessions, but I am using what I learn from these other disciplines to inform my own.

My goal isn't to recreate the wheel. I see these as a handful of tools that help with a continuing conversation. The conversation explores ideas about systems design, operations, development, workplace improvement, and processes that create results. I want to help clients go from desired outcomes to working systems. The best way I've discovered to do this is through directed conversations. I believe great, collaborative conversations and workshops help achieve that goal.

Resources

If you’re interested in this same type of journey, I’ll share the resources that were shared with me on Twitter.



Ruth Malan has a small mountain of information I’ll need to work through. I can’t wait. You can find Ruth’s work here: https://www.ruthmalan.com/


Eduardo da Silva provided additional information on his work as well.


For Eduardo’s consulting services on these topics, you can find more information on his webpage here: https://esilva.net/consulting/


Additionally, Krisztina left some excellent links to Domain-Driven Design resources. DDD is one of the disciplines that has continued to grow. For its age, it's really held up, and it provides a great set of tools for building common understanding about systems between those who build the systems and those who derive value from them.


Eduardo also introduced me to the work of Trond Hjorteland. Trond and I apparently have the same appreciation for post-punk/gothic rock and we share a birthday – I mean, the exact birthday, which is always a little weird.


To close things out:


I have read the Team Topologies book and I love it. Thoughts and concepts from this book will definitely find a place in my processes. From what I can see, DDD modeling has already pointed out where it fits nicely when exploring bounded contexts. I've pre-ordered Susanne Kaiser's book Adaptive Systems with Domain-Driven Design, Wardley Mapping, and Team Topologies: Architecture for Flow. I can't wait to read it.


More to come…

There’s a lot more to this collection of disciplines. I didn’t even touch on Gene Kim, DevOps, Dr. Goldratt’s Theory of Constraints, and so much more.

However, much of what I'm seeing here is that my love for DDD isn't misplaced. It's a great foundation to continue building from. So I'll likely reread my big blue book soon, follow up on all this material, and find better ways to deliver amazing systems that generate value for their users.

Posted on Leave a comment

Matter: a Unifying Standard for Home Devices or Hype?

In this short article I want to discuss the release of the recent standard, Matter, formerly CHIP (Project Connected Home over IP). Will we have a consumer device communication standard that's ideal for the smart home, or are we looking at yet another standard that will further complicate IoT for our smart appliances?

Matter was developed by the Connectivity Standards Alliance and released to the world this month. Version 1 is available now for review by manufacturers, developers, and anyone curious about this new means of device communication.

https://csa-iot.org/developer-resource/specifications-download-request/

The goal of this standard is to make interoperability among home devices easier. And because the standard has backing from Samsung, Google, Apple, and a large number of other organizations with interest in the home device market, there appears to be a fair chance the standard will stick.

The promise seems to be less hassle for the user. When you purchase a product that adheres to the Matter standard, you should be able to easily connect and integrate with other devices in your home using that standard.

But what about security? Even at this moment, I could connect to my neighbor's TV from my iPad if I wanted to. How do we make sure these devices are safe, even for those who aren't the most technically skilled?

According to their security white paper, Matter follows five security tenets:

  • Comprehensive – they use a layered approach
  • Strong – they use well-tested, strong cryptography
  • Easy – security should improve ease of use, not decrease it
  • Resilient – protect, detect, recover
  • Agile – able to respond quickly to threats

How easily will this be adopted by developers? I did take a look at the source code and libraries. It's written in C++, so it should be fast. And there appear to be libraries for most of the common development boards, chipsets, and MCUs. I think that's a good sign. You can see the repo here:

https://github.com/project-chip/connectedhomeip

So is it a viable standard or is it hype?

Matter was a long time in the making, and its creation required manufacturing rivals like Google, Apple, and Samsung to sit across from each other at a table and agree on one direction for device communication. The standard was just released this month, and the official launch is on November 3rd. There is already a large group of early adopters prepared to launch products. Some of the appliances under your Christmas tree this year might be Matter compatible.

I think it’s safe to say that Matter isn’t just hype. It looks to be a great way for devices to communicate easily on a home network using common IP methods and options like WiFi and a new protocol called Thread. Maybe this time next year we’ll see more IoT innovation in our homes because of better device interoperability.

Posted on Leave a comment

Personal Post on My Continuing Journey with IoT Edge Computing


I made the biggest career change of my life recently.

I left the consulting firm I had been employed with for the past four years. It wasn’t an easy decision. I gave up a technical leadership role, I left a lot of people who I loved working with, and I gave up the security of a regular paycheck.

What was behind my decision?

Focus.

Focus was my primary reason for leaving. Two years ago I began a journey to learn and apply everything I would need to know to be a competent IoT Edge Architect. I began that journey with the hopes that my career would be heavily focused on helping organizations solve interesting problems using modern data analytics, IoT systems, and containerized machine learning on the edge.

That never really happened. I had the occasional opportunity to work with machine learning, Kubernetes, and some advanced analytics, but the bulk of interesting data work was done by other people while I focused on platform development.

I didn't let those IoT skills go static, though; I did occasional side work with partners focused on IoT, but my day job always came first. It reached the point where the day job wouldn't allow time for anything other than the day job. I didn't want those skills to go stale, so I had to make a difficult decision: do I stay where I am and try to be happy, or do I pursue a career working with the technology I know actually moves the needle for organizations?

So here I am, completely independent. Ready to focus.

I got pretty good with infrastructure deployment and DevOps, so that’s the majority of the work the firm put me on. And they put me on a lot of it. Systems architecture and design became my everything for a while. Let me be clear that there’s absolutely nothing wrong with systems design work. It can be incredibly rewarding. It’s actually a big part of IoT Edge development. It just became clear to me that I was never going to have the opportunity to help build anything interesting on the infrastructure I was creating.

During my time with the firm, I went from a senior application developer to a cloud systems architect. It took me four years to make it happen, but it was definitely a necessary milestone for the next stage of my journey.

What is the next stage of my journey?

I’m returning my focus to IoT Edge computing.

What I want to do for my career is implement edge and IoT systems, connecting many kinds of devices to multiple cloud and server solutions using modern communication, analytics, and security. I mean, for something that’s supposed to be a focus, that’s pretty broad. However, it all fits together nicely for certain specialized use cases.

I have a lot of experience and a few certifications in Azure, so I have no plans to walk away from Azure any time soon. But I’ve had the opportunity to work with a few other products like InfluxDB, Milesight, ChirpStack, and Eclipse Mosquitto, and I don’t want to limit myself to one cloud or one set of solutions. Much of my focus moving forward will be on IoT Edge system design: the theory, the best practices, and the understanding of why certain technologies are used over others.

Basically, in order to improve my IoT Edge expertise, I need to say no to a lot of other types of work. Am I capable of helping an organization migrate all their SQL data systems from on-premises to the cloud? Yes, it’s completely within my wheelhouse. Could I build a distributed e-commerce solution using .Net applications in Docker containers for high availability using the best security Azure has to offer? Yes, also within my skillset. Will I take on any of that work? No. I won’t. And that’s the point I’ve reached in my career. I’m going to be very selective about the type of work I take on, and the type of clients who I work with.

That’s my goal. Focus. Be one of the best.

What can you expect from this blog?

It won’t change too much, but I will create more content. I do like doing technical deep dives, so you will see more of these. I also like to explore use cases. You’ll see more of that, especially in the Controlled Environment Agriculture industry. I believe this is an area that will need more help as our environment undergoes more changes in the coming years. If we want food in the future, we will need to know how to control our environments in a way that is economical and sustainable.

I will also write more about IoT architectures for multiple clouds and scenarios: sensors, endpoints, and power management. I want to look at how Claude Shannon’s information theory shapes the way we communicate between the cloud and devices. I will probably write far more about networking than you want to read, but it’s an area I used to hate and have now grown unreasonably fond of. Expect lots of discussion around edge computing, protocols, fog computing, augmented reality, machine learning, MLOps, DevOps, EdgeOps, and of course security.

That’s the technical side, but I also want to start working more with the community. DFW, Texas has quite the community of IoT organizations and engineers. I hope to connect with these individuals and organizations, capture some of the ways I can help them (or they can help me), and record those outcomes here.

What about money?

Ah, yes. I do actually have bills to pay, so I will need to make money. Luckily, I’m in a financial position where I don’t need to earn a full-time income immediately. I’m also fortunate to have work from one of my business partners that fits directly within my goals: we’re building an InfluxDB solution for some time series data and using Pandas to build forecasting. I’ve also had my own LLC for a few years now, so the business side of things won’t be a hurdle.
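
To give a flavor of that kind of work, here’s a minimal sketch of pulling time series data from InfluxDB into Pandas and producing a naive baseline forecast. The bucket, measurement, and connection details are placeholders, and the rolling-mean “forecast” is a stand-in for a real model.

```python
import pandas as pd
from influxdb_client import InfluxDBClient  # pip install influxdb-client

# Placeholder Flux query: hourly mean temperature for the last 30 days.
flux_query = '''
from(bucket: "sensor-data")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> aggregateWindow(every: 1h, fn: mean)
'''

# Connection details are placeholders for a real InfluxDB instance.
with InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org") as client:
    df = client.query_api().query_data_frame(flux_query)

# Index by timestamp and compute a rolling-mean baseline.
series = df.set_index("_time")["_value"].astype(float)
baseline = series.rolling(window=24, min_periods=1).mean()

# Naive forecast: carry the latest baseline value forward 24 hours.
forecast = pd.Series(
    baseline.iloc[-1],
    index=pd.date_range(series.index[-1], periods=24, freq="h"),
)
print(forecast.head())
```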

But I do have additional plans. Next month I’m presenting a few ways we can partner together if you are interested in working on an IoT Edge project. I’m opening my calendar to a small group of people for bookings through my company, Green Nanny LLC. That’s a name you’ll see me mention more in the future as I build it out to its full intention.

Here are just a few of the services I’ll offer:

  • Assessments – are you ready for IoT Edge? Do you know what it is and how it applies to modern, distributed solutions? What does it look like operationally? This helps you make better decisions and pick a direction for next steps.
  • Architecture design sessions – let’s think through the art of the possible using machine learning, IoT, and modern data analytics. What does your future system need to look like to support edge workloads?
  • Briefings – if we were to implement a project, what would it take? Do you need a proof of value to start, or are you ready for a well-architected IoT solution?
  • Implementations – how can I help your team implement an IoT Edge solution?
  • Proof of value (POV) – let’s see if we can create a proof of value in a very short period of time.
  • Team training – how can I help your team learn how to implement IoT Edge?
  • Well-architected review and recommendations – can we improve your existing solution? Do you need help with reliability, cost optimization, security, operational excellence, or performance?
  • Managed service – what are some options for managing your IoT products through a managed services provider? I don’t provide this service myself, but I have many connections who can help make it a reality.
  • Retainer – do you just need someone with a lot of IoT Edge knowledge to bounce questions off of? Not sure where to start on your own IoT Edge journey, but would like a guide?

I’m excited for the future

I think and feel that I made the right decision for this move at the right time. My skills as an Azure Architect have put me in an excellent position to transition into something more specialized. The consulting work I did with the firm clarified the type of work I like to do, and the type of work that I’m good at. I see a lot of promise in my area of focus and a lot of financial opportunity for myself, my partners, and the clients who I will serve.

Posted on 1 Comment

Are We Ready for the XR Cloud?

What is the XR Cloud?

To answer this question, I have to define XR. XR is extended reality. It’s actually a term that encompasses Augmented Reality, Virtual Reality, and Mixed Reality. These are all technologies that alter our perception of reality in one way or another. Most people associate these technologies with bulky headsets and memes of people punching their televisions because they think they’re in a real fight with a virtual character.

I believe I’m using the term correctly when I state that XR is the portal to spatial computing. Ok, so what is spatial computing?

As complex as it sounds, spatial computing is a relatively simple concept. In fact, if you’ve ever used Google Maps or played Pokémon Go on your mobile device, you’ve already started using spatial computing. Let’s get the definition from the person who coined the phrase.

human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces

Simon Greenwold
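
To make that definition concrete, here’s a toy illustration of a “referent to a real object or space”: a virtual asset pinned to real-world coordinates. This is my own sketch, not taken from any XR SDK, and every name in it is hypothetical.

```python
# Hypothetical data structure: a virtual asset anchored to a real place.
from dataclasses import dataclass

@dataclass
class SpatialAnchor:
    anchor_id: str
    latitude: float       # real-world position of the referent
    longitude: float
    altitude_m: float     # meters above sea level
    heading_deg: float    # orientation of the anchored content
    content_uri: str      # the virtual asset rendered at this location

# A virtual signpost pinned to a physical street corner in Dallas.
signpost = SpatialAnchor(
    anchor_id="corner-001",
    latitude=32.7767,
    longitude=-96.7970,
    altitude_m=150.0,
    heading_deg=90.0,
    content_uri="https://example.com/assets/signpost.glb",
)
print(signpost)
```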

I’m defining an XR Cloud as a platform as a service that helps developers make XR possible with minimal effort. To me, this looks a lot like the IoT Edge development loop, except that we’re sending real-time changes out to the XR devices. Or are we? I think we can learn a lot from the game development world here. Do we create a fat client, a large executable that resides on the device and sends only small bits of telemetry to the cloud, allowing faster execution on the device? Or do we follow the cloud streaming services that stream games, whose thinner clients can often match the fat client’s execution speed? Many of these problems have already been solved, so building out an XR cloud doesn’t seem outside the realm of possibility.
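
As a sketch of that fat-client telemetry loop (the broker address, topic, and payload are all hypothetical), the device might publish small MQTT messages like this:

```python
# Sketch of a fat client's telemetry loop: heavy rendering stays on the
# device; only small pose/state messages go up to the cloud.
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

# paho-mqtt 1.x style; 2.x also takes a callback_api_version argument.
client = mqtt.Client()
client.connect("broker.example.com", 1883)  # hypothetical broker
client.loop_start()

for _ in range(10):
    telemetry = {
        "device_id": "headset-42",
        "pose": {"x": 1.2, "y": 0.4, "z": -3.1},  # where the user is looking
        "ts": time.time(),
    }
    client.publish("xr/telemetry", json.dumps(telemetry))
    time.sleep(1)  # a real device would send at the display's frame cadence

client.loop_stop()
client.disconnect()
```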

Is it Already Here?

Sort of. There are a few groups working to make this happen.

Open AR Cloud’s mission is to drive the development of open and interoperable spatial computing technology, data and standards to connect the physical and digital worlds for the benefit of all.

Open AR Cloud

https://www.openarcloud.org/

Though it’s focused on AR, this could still be a good solution for an overall XR platform. Personally, I like the open-source approach for a platform like this. It will likely allow us developers to avoid vendor lock-in, as well as participate in the development of the platform. At the moment, there are eleven different working groups tackling the challenges that come with an AR cloud.

https://arcloud.pretiaar.com/

Pretia has created a platform meant to make AR development easy. This looks a lot like game development, which will probably make up the backbone of early XR development. Gaming has tackled the issues around multiple people sharing space, both physically and virtually. And engines like Unity have editors that work well with 3D objects and real spaces.

https://www.augmented.city/

Augmented.City is an augmented reality cloud & platform ecosystem that allows you to capture, enrich with data, and visualize it on location in basically any device.

Augmented.City

This is an interesting approach that seems to take full advantage of the digital twins concept. The platform takes your city (if it has been mapped) and allows you to add content to the map. It feels more like a testbed at the moment, but I can see the potential. Mapping physical objects in real spaces will be a challenge for a while. There’s probably some sort of NFT angle that could be played here to get people to be the first to scan a thing. But from a data asset side, this creates all sorts of interesting opportunities.

At the Start of a Thing

It’s fun to be at the start of a movement or technology, especially when you can see the potential. You can call this the Metaverse, MX, or just client/server with extra steps, but there is definitely a confluence of technology here to help with digital transformation. It’s a journey I’m interested in exploring, and I’ll probably do more of it in this space over the next couple of years.