Getting Started with Cloud-Native App Development: Best Practices, Learning Materials, and Videos to Watch

As a new engineer, understanding the concept of cloud-native app development is important for several reasons.

First, cloud-native app development is a key aspect of building and deploying applications in the cloud. It is the practice of using technologies, tools, and best practices that are designed to work seamlessly with cloud environments. By understanding how cloud-native app development works, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, cloud-native app development allows for better scalability and cost efficiency. Because cloud-native tooling lets resources be allocated and scaled up or down automatically as demand changes, you can absorb spikes in load while paying only for what the application actually uses.

Third, cloud-native app development promotes better collaboration and a DevOps culture. Shared, cloud-friendly tooling and practices make it easier for different teams and developers to work together on the same application.

Fourth, cloud-native app development allows for better security. Tooling and best practices built for cloud environments help protect the application and infrastructure from threats and keep them running in the event of an attack or failure.

In summary, as a new engineer, understanding the concept of cloud-native app development is important because it is a key aspect of building and deploying applications in the cloud, allows for better scalability and cost efficiency, promotes better collaboration and DevOps culture, and allows for better security.

Learning Materials

Here’s a list to get you started learning about cloud-native app development. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Best practices in Kubernetes app development

This video outlines best practices for developing with Kubernetes, such as using a tailored logging interface, debugging with CLI commands, and creating project templates with Cloud Code. Google Cloud DevTools are introduced as a way to simplify the process of incorporating best practices into the Kubernetes development workflow.

A Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different app development tools such as Kubernetes Deployments, Services, and ConfigMaps. This can be done by following tutorials and guides, and deploying a simple application on a cloud platform like AWS, Azure, or GCP.
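To make that concrete, here is a minimal sketch of a Deployment and a Service you could apply while experimenting. The names, image, and ports are placeholders, so swap in whatever sample app you are using.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25        # example image; use your own app here
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello                     # routes traffic to the pods above
  ports:
    - port: 80
      targetPort: 80

Apply it with kubectl apply -f, then inspect the result with kubectl get pods and kubectl get svc.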

Theoretical learning: Once you have a basic understanding of app development, you can begin to explore the underlying concepts and technologies such as Kubernetes pods, services, and volumes. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: App development is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices of app development such as containerization, scaling, and rolling updates.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with app development for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using app development tools in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

Hey there, let’s talk about cloud native application development. So, what is it exactly? Well, it’s all about developing applications that are specifically designed to run in a cloud environment, like Kubernetes, which is one of the most popular container orchestration platforms out there.

What makes cloud native application development different from other approaches is that it’s all about leveraging the benefits of the cloud, like scalability and flexibility, to create applications that can easily adapt to changing workloads and environments.

When it comes to developing applications for Kubernetes, there’s a typical software development workflow that you’ll need to follow. First, you’ll need to choose the right programming languages and frameworks. Some of the most popular languages for cloud native development are Go, Java, Python, and Node.js.

Once you’ve chosen your languages and frameworks, you’ll need to design your application architecture to work well with Kubernetes. Some of the best patterns for Kubernetes include microservices, containerization, and service meshes. On the other hand, monolithic applications and heavily stateful workloads tend to be a poorer fit for Kubernetes.

And finally, let me tell you, we really like VS Code around here. It’s one of the best tools for cloud native application development, especially when working with Kubernetes. It provides a lot of great features for working with containerization and orchestration, like excellent plugins for Kubernetes, debugging, and integration with other popular tools like Helm and Kustomize. So, if you haven’t already, give it a try, you might like it too.

Be sure to reach out on LinkedIn if you have any questions.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Master Cloud Native Governance: The Essential Guide for New Engineers

As a new engineer, understanding the concept of cloud-native governance is important for several reasons.

First, it is a key component of cloud native application development. It uses governance tools and technologies that are native to the cloud and designed to work seamlessly with cloud-native applications. This allows for better deployment and management of cloud-native applications.

Second, cloud-native governance ensures compliance and security. It ensures that the application and infrastructure meet compliance requirements and are protected from threats.

Third, it promotes better collaboration and DevOps culture. Different teams and developers can work together on the same application, and the organization’s policies and standards are followed.

Fourth, it allows for better cost management. Resources can be monitored and controlled, and the organization is not overspending on the cloud.

In summary, understanding the concept of cloud-native governance is important for any engineer working in the field today. It is a powerful tool for building and deploying applications in a cloud environment.

Learning Materials

Here’s a list to get you started learning about cloud-native governance. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

A Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different governance tools such as Open Policy Agent (OPA), Kubernetes Policy Controller, and Kube-bench. This can be done by following tutorials and guides, and deploying these tools on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of governance, you can begin to explore the underlying concepts and technologies such as Kubernetes role-based access control (RBAC), Namespaces, and NetworkPolicies. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.
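As a small illustration of RBAC in practice, here is a sketch of a namespace-scoped Role and RoleBinding that grant read-only access to pods. The namespace and user names are invented for the example.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a                # example namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: User
    name: jane                     # example user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io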

Understanding the principles and best practices: Governance is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices of governance such as security, compliance, and auditing.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with governance for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using governance tools in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

Ok, let’s talk about cloud native governance. So, why do we need to practice it? Well, as we all know, the cloud is a constantly evolving landscape and it can be pretty overwhelming to keep up with all the new technologies and best practices. That’s where governance comes in – it’s all about making sure we’re using the cloud in a consistent and efficient way across our organization.

So what exactly is cloud native governance? It’s all about using policies and tools to manage the resources in our cloud environment. This includes things like setting up guidelines for how our teams use the cloud, automating tasks to keep our environment in check, and monitoring for any potential issues.

Now, you might be wondering why cloud native governance was created. Well, as organizations started moving more and more of their workloads to the cloud, they realized they needed a way to keep everything in check. Without governance in place, it can be easy for teams to create resources in an ad-hoc way, which can lead to wasted resources, security vulnerabilities, and inconsistencies in how the cloud is being used.

Now, let’s talk about the major tools on Kubernetes that help with cloud native governance. One of the most popular is Kubernetes itself, which provides a way to manage and scale containerized applications. Another popular tool is Helm, which helps with managing and deploying Kubernetes resources. There’s also Kustomize, which helps with creating and managing customized resources. And finally, there’s Open Policy Agent (OPA), which lets you define and enforce policies on Kubernetes resources.
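To give a feel for what OPA-style policy looks like, here is a sketch of a Gatekeeper Constraint that requires every Namespace to carry an owner label. It assumes Gatekeeper is installed and that the K8sRequiredLabels ConstraintTemplate from the Gatekeeper policy library has already been applied.

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: namespaces-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]       # apply the policy to Namespace objects
  parameters:
    labels: ["owner"]              # the label every namespace must carry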

It’s important to note that governance, like security, is a continuous practice. Governance policies and tools need to be regularly reviewed and updated to ensure they are still effective and aligned with the organization’s goals and requirements. It’s all about making sure we’re using the cloud in the best way possible.

Be sure to reach out on LinkedIn if you have any questions.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Understanding Cloud-Native Storage for Kubernetes: What It Is and How It Works

As a new engineer, understanding the concept of cloud-native storage is important for several reasons.

First, it is a key component of cloud native application development. It is the practice of using storage solutions that are native to the cloud and designed to work seamlessly with cloud-native applications. By understanding how cloud-native storage works, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, it allows for better scalability and cost efficiency. By using storage solutions that are native to the cloud, resources can be automatically allocated and scaled up or down as needed, without incurring additional costs.

Third, it promotes better collaboration and DevOps culture. By using storage solutions that are native to the cloud, it becomes easier for different teams and developers to work together on the same application.

Fourth, it allows for better data availability. By using storage solutions that are native to the cloud, data is stored in multiple locations and can be accessed from any location.

In summary, understanding the concept of cloud-native storage is important because it is a key component of cloud native application development, allows for better scalability and cost efficiency, promotes better collaboration and DevOps culture, and allows for better data availability. It is a powerful tool for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about cloud-native storage. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

CNCF Webinar – Introduction to Cloud Native Storage

This webinar provides an overview of the evolving cloud native storage requirements and data architecture, discussing the need for layered services that can be seamlessly bound for the user experience. It explores the extensible, open-source approach of abstractions and plugins that give users the choice to select the storage technologies that fit their needs. It also looks at the importance of persistent volumes for containers, allowing for the flexibility and performance needed for stateful workloads.

A Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different cloud-native storage options such as Kubernetes Persistent Volumes (PVs), Persistent Volume Claims (PVCs), and StatefulSets. This can be done by following tutorials and guides, and deploying these options on a cloud platform like AWS, Azure, or GCP.
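For example, a PersistentVolumeClaim is usually the first storage object you create. This minimal sketch assumes a storage class named standard, which varies by cluster and cloud provider.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce                # a single node may mount it read-write
  storageClassName: standard       # example class; depends on your provider
  resources:
    requests:
      storage: 5Gi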

Theoretical learning: Once you have a basic understanding of cloud-native storage, you can begin to explore the underlying concepts and technologies such as Kubernetes storage classes, storage provisioners, and storage backends. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes and storage providers, as well as books and blogs on the topic.

Understanding the principles and best practices: Cloud-native storage is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices, such as data replication, data durability, and data accessibility.

Joining a community: Connect with other people who are learning and working with cloud-native storage for Kubernetes by joining online forums, meetups, and social media groups.

Practice, practice, practice: The best way to learn is by doing. The more you practice deploying and using cloud-native storage options in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

Hey there! I’m glad you’re interested in learning more about cloud native storage. It’s a pretty important concept to understand, especially if you’re working with Kubernetes.

So, what is cloud native storage? Essentially, it’s a type of storage that is built specifically for use in cloud environments. This means that it’s designed to work seamlessly with cloud technologies like Kubernetes, and can handle the unique challenges that come with a cloud environment, like scalability and high availability.

Why is it necessary for certain solutions built on Kubernetes? Well, Kubernetes is a powerful tool for managing containerized applications, but it doesn’t come with its own storage solution. So, if you’re building a solution on Kubernetes, you’ll need to use a separate storage solution that’s compatible with it. That’s where cloud native storage comes in – it’s designed to work with Kubernetes, so it’s the perfect match.

Now, let’s talk about how it works. Essentially, cloud native storage solutions provide a way to store and access data for applications running on Kubernetes. They do this by creating a “volume” that can be mounted to a Kubernetes pod, which allows the pod to access the data stored in the volume. This way, you can store data in a persistent way, even if the pod is deleted or recreated.
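Here is a rough sketch of what that looks like: a Pod that mounts a claim (reusing the example data-claim name from the learning path above), so the data survives the pod being deleted and recreated. The image and mount path are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
    - name: app
      image: nginx:1.25                  # example image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim            # the PVC to mount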

Here are a couple of popular Kubernetes cloud native storage options and the types of solutions they’re used for:

  1. Rook: Rook is an open-source storage solution for Kubernetes. It’s great for use cases where you need to store data in a distributed way, like a big data analytics solution or a large-scale data storage system.
  2. GlusterFS: GlusterFS is another open-source storage solution for Kubernetes. It’s great for use cases where you need to store data in a highly available way, like a web application or a database.

Both of these options are great choices for different types of solutions, and they’re both built specifically to work with Kubernetes.

So, that’s the basics of cloud native storage. It’s a powerful tool that can help you build robust and scalable solutions on Kubernetes. If you have any more questions, feel free to ask on LinkedIn!

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Unlocking the Power of Service Discovery in Kubernetes

As a new engineer, understanding the concept of service discovery is important for several reasons.

First, it is a key component of microservices architecture. Service discovery allows services to find and communicate with each other, regardless of their location or IP address. This makes it easier to build, deploy, and manage microservices-based applications.

Second, service discovery enables greater scalability and flexibility. Services can be added or removed without affecting the rest of the system, and new services can be introduced without changing existing code.

Third, service discovery facilitates better collaboration and DevOps culture. By making it easy for services to find and communicate with each other, different teams and developers can work together on the same application.

Fourth, service discovery allows for better resilience. It enables the system to automatically route traffic to healthy instances of a service, even if some instances are unavailable.

In summary, understanding service discovery is important for any engineer working in the field today. It is a powerful tool for building and deploying applications in a microservices environment and is essential for achieving greater scalability, flexibility, collaboration, and resilience.

Learning Materials

Here’s a list to get you started learning about Service Discovery. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

What is Service Discovery?

Moving from a monolith to a Cloud-based Microservices Architecture presents several challenges, such as Service Discovery, which involves locating resources on a network and keeping a Service Registry up to date. Service Discovery can be categorized by WHERE it happens (Client Side or Server Side) and HOW it is maintained (Self Registration or Third-party Registration). Each approach has its own pros and cons, and further complexities such as Service Mesh can be explored in further detail.

Possible Learning Path (Service Discovery for Kubernetes)

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different service discovery mechanisms such as Kubernetes Services, DNS, and Load Balancing. This can be done by following tutorials and guides and deploying these services on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of service discovery, you can begin to explore the underlying concepts and technologies such as Kubernetes Services, DNS, and Load Balancing. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: Service discovery is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices such as service registration, lookup, and resolution.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with service discovery for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using service discovery mechanisms in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

So, you know how in a microservice architecture, we have all these different services that need to talk to each other? Well, service discovery is kind of like a phone book for those services; in Kubernetes, that phone book is DNS. It helps them find each other and communicate with each other.

In traditional networks, service discovery is often done using a centralized server or load balancer. This means that all the services need to know the IP address or hostname of this central server in order to communicate with other services.

But in Kubernetes, service discovery is built into the platform. Each service gets its own stable IP address and DNS name, and Kubernetes automatically handles routing traffic between them. This means a service only needs to know the name of the service it wants to reach, not where it happens to be running.
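A hedged example: the Service below gives the pods labelled app: orders a stable identity. Inside the same namespace, other pods can reach it simply as orders (or as orders.shop.svc.cluster.local from elsewhere in the cluster). The names and port are invented for illustration.

apiVersion: v1
kind: Service
metadata:
  name: orders                 # this becomes the DNS name
  namespace: shop              # example namespace
spec:
  selector:
    app: orders                # pods carrying this label receive the traffic
  ports:
    - port: 8080
      targetPort: 8080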

And the best part? Kubernetes service discovery is dynamic, which means that it automatically updates when new services are added or removed, so you don’t have to manually update the phone book every time something changes.

But that’s not all. Kubernetes also provides ways to expose your services externally, so they can be reached from outside the cluster, which is useful if, for example, you want to serve traffic from the internet.

So, with service discovery in Kubernetes, you don’t have to worry about keeping track of IP addresses and hostnames, and you don’t have to worry about updating a central server when things change. It’s like having a personal assistant who always knows the latest phone number of your services and also makes sure that they are accessible from anywhere.

Basically, service discovery in Kubernetes provides a way for services to easily find and communicate with each other, and it’s built right into the platform. It’s a game-changer for managing and scaling a microservice architecture.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Unlocking the Benefits of Configuration Management in Kubernetes

As a new engineer, understanding the concept of Configuration Management (CM) is important for several reasons.

First, CM is a key component of cloud native application development. It is the process of maintaining consistent and accurate configurations across all components of an application, including servers, networks, and software. By understanding how CM works, you can build, deploy, and manage cloud-native applications more effectively.

Second, CM ensures greater consistency and reproducibility. By maintaining consistent configurations across all components of an application, it ensures the same configuration is used across different environments, and that the infrastructure can be easily recreated if necessary. This makes it easier to handle the increasing demand for more computing power and storage.

Third, CM promotes better collaboration and a DevOps culture. By maintaining consistent configurations across all components of an application, it becomes easier for different teams and developers to work together on the same application.

Fourth, CM allows for better tracking and version control of changes. By keeping the configurations in a CM tool, it allows for tracking changes and rollback if necessary.

In summary, as a new engineer, understanding the concept of CM is important because it is a key component of cloud native application development, ensures greater consistency and reproducibility, promotes better collaboration and a DevOps culture, and allows for better tracking and version control of changes. It is a powerful tool for building and deploying applications in a cloud environment, and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about configuration management. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Configuration Management in 2020 and Beyond – Eric Sorenson

Cloud Native concepts are based on good architecture principles, such as declarative automation, continuous deployment, and observability. These principles emphasize short life cycles, high cardinality, and immutability, and have changed the way systems administration is done.

Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different configuration management tools such as Kubernetes Resource Configs, ConfigMaps, and Secrets. This can be done by following tutorials and guides, and deploying these tools on a cloud platform like AWS, Azure, or GCP.
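As a starting point, here is a minimal ConfigMap and Secret sketch; the keys and values are placeholders. Note that stringData is just a convenience for writing plain text, and the Secret is still only base64-encoded when stored.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  FEATURE_FLAG: "true"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DB_PASSWORD: "change-me"     # example value; never commit real secrets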

Theoretical learning: Once you have a basic understanding of configuration management, you can begin to explore the underlying concepts and technologies such as Kubernetes API objects, YAML manifests, and declarative configuration. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: Configuration management is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices of configuration management such as separation of concerns, immutability, and version control.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with configuration management for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using configuration management tools in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

In traditional IT systems, configuration management is often done manually, which can be time-consuming and error-prone. It can be a real pain to keep track of all the different configurations and make sure they’re up-to-date across all the servers.

But in Kubernetes, configuration management is built into the platform and can be easily managed using Kubernetes resources like ConfigMaps and Secrets. Just keep in mind that Kubernetes Secrets are only base64-encoded by default, not encrypted, so many teams use a tool like Vault (or enable encryption at rest) for sensitive values. This makes configuration management much more efficient and reliable.

One of the main advantages of using configuration management in Kubernetes is that it allows you to easily manage and update configurations across all the components of your application. This includes environment variables, database connection strings, and other sensitive information.
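Here is a sketch of how a Deployment might consume those values, assuming the app-config and app-secrets objects from the earlier example; the image name is a placeholder.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: my-app:1.0              # example image
          envFrom:
            - configMapRef:
                name: app-config         # every key becomes an environment variable
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: DB_PASSWORD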

Moreover, version control can be used for configurations, which makes it possible to roll back changes if something goes wrong (again, secrets should not be stored in the source code repository). This is especially beneficial in a production environment, where mistakes can have serious consequences.

Additionally, by keeping configurations in code, the process of updating and deploying them can be automated, making it easier to manage and maintain over time.

In conclusion, configuration management in Kubernetes enables efficient and reliable management of configurations across all components of an application, as well as version control and automation of configurations. It is like having a personal assistant who keeps all configurations organized and up-to-date, without needing to manually update them.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Learning Infrastructure as Code: What You Need to Know for Cloud Native Engineering

As a new engineer, understanding Infrastructure as Code (IAC) is important for several reasons.

First, IAC is a key component of cloud native application development. It is the practice of managing and provisioning infrastructure using code, rather than manual configuration. By understanding how IAC works, you can build, deploy, and manage cloud-native applications more effectively.

Second, IAC allows for greater consistency and reproducibility. By using code to manage and provision infrastructure, it ensures the same configuration is used across different environments and that the infrastructure can be easily recreated if necessary. This makes it easier to handle increasing demand for computing power and storage.

Third, IAC promotes better collaboration and a DevOps culture. By using code to manage and provision infrastructure, it becomes easier for different teams and developers to work together on the same application.

Fourth, IAC allows for better tracking and version control of infrastructure changes. By keeping the infrastructure definition in code, it allows for tracking changes in the same way as code changes and reverting to previous versions if necessary.

In summary, understanding IAC is important because it is a key component of cloud native application development, allows for greater consistency and reproducibility, promotes better collaboration and DevOps culture, and allows for better tracking and version control of infrastructure changes. It is a powerful tool for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about Infrastructure as code (IAC). Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Cloud Native Summit – Five ways to manage your Infrastructure as Code at Scale / Ryan Cartwright

This video provides an overview of the challenges of cloud native engineering and the solutions available, such as remote state management, avoiding manual state changes, using CI tools, and implementing a reliable SaaS offering. It also covers the features and security essentials that a Terraform CI platform must include.

Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different IAC tools such as Terraform, Ansible, and Helm. This can be done by following tutorials and guides and deploying these tools on a cloud platform like AWS, Azure, or GCP.
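Terraform uses its own HCL syntax, so as a YAML-flavored illustration here is a small Ansible playbook that declares a namespace through the kubernetes.core.k8s module. It assumes the kubernetes.core collection is installed and that a kubeconfig points at your cluster; the namespace name is a placeholder.

- name: Provision baseline Kubernetes resources
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Ensure the team namespace exists
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: team-a               # example namespace
            labels:
              managed-by: ansible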

Theoretical learning: Once you have a basic understanding of IAC, you can begin to explore the underlying concepts and technologies such as configuration management, version control, and automation. This can be done through online resources such as tutorials, courses, and documentation provided by IAC tools, as well as books and blogs on the topic.

Understanding the principles and best practices: IAC is an important aspect of modern infrastructure management, so it’s important to understand the key principles and best practices of IAC, such as versioning, testing, and rollback.

Joining a community: Joining a community of IAC enthusiasts will help you connect with other people who are learning and working with IAC for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using IAC tools in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

Infrastructure as Code (IaC) is one of the most exciting developments in cloud deployments. It is a way of describing and provisioning infrastructure using code, instead of manual configuration. This makes it easier to manage and maintain over time, as well as enabling faster scaling, testing, and deploying of cloud native solutions.

Version control is essential for modern operations teams, as it allows them to track changes to the infrastructure over time and roll back to a previous version if something goes wrong. Developers should also learn it, as it allows them to collaborate more easily with operations teams and to understand the infrastructure that their code is running on.

IaC helps to ensure quality by allowing for automated provisioning and configuration of infrastructure. It also helps to ensure security, as it enables more controlled and automated access to infrastructure, making it easier to identify and isolate any malicious activity.

In conclusion, IaC is a great way to deploy cloud native solutions, as it makes it easier to manage and maintain over time. Version control is essential for modern operations teams, and developers should also learn it. Writing IaC helps to ensure quality and can help with security by allowing for more controlled and automated access to infrastructure.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Understand Auto-Scaling to Maximize Cloud Native Application Development

As a new engineer, understanding the concept of auto-scaling is important for several reasons.

First, auto-scaling is a key component of cloud native application development. It is the process of automatically adjusting the number of resources (such as servers) used by an application based on the current demand. By understanding how auto-scaling works, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, auto-scaling allows for greater scalability and cost efficiency. By automatically adjusting the number of resources used by an application, it ensures that the system can handle a large number of requests while minimizing the cost of running the application. This makes it easy to handle the increasing demand for more computing power and storage.

Third, auto-scaling allows for better collaboration and DevOps culture. By automating the process of scaling resources, it becomes easier for different teams and developers to work together on the same application.

Fourth, auto-scaling allows for better availability and resilience. By adding resources automatically when traffic increases, it ensures the application stays available to handle requests; when traffic decreases, it scales back down, which saves cost.

In summary, as a new engineer, understanding the concept of auto-scaling is important because it is a key component of cloud native application development, allows for greater scalability and cost efficiency, better collaboration and DevOps culture, and better availability and resilience. It is a powerful tool for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about auto-scaling. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Kubernetes cluster autoscaling for beginners

Kubernetes is a powerful tool for deploying applications, but it’s important to understand the resource requirements of the applications and set resource requests and limits to allow the scheduler to make informed decisions when placing pods onto nodes. This will help ensure that the nodes are not over-utilized and that applications are given the resources they need.

A Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different auto-scaling mechanisms such as Kubernetes Horizontal Pod Autoscaler and Cluster Autoscaler. This can be done by following tutorials and guides and deploying these services on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of auto-scaling, you can begin to explore the underlying concepts and technologies such as Kubernetes Horizontal Pod Autoscaler and Cluster Autoscaler. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: Auto-scaling is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices of auto-scaling, such as scaling policies, monitoring, and alerting.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with auto-scaling for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using auto-scaling mechanisms in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

Before I explain an auto-scaler, let me quickly tell you the story of the Porsches crashing into walls every 15 minutes.

Back when I was a newer developer working for the e-commerce department of an electronics parts seller, we had a web outage. We had no idea why the webserver was down. This was in the olden days when the best monitoring tool we had was named ELMA. It was better for debugging and development than anything you would want to put into production.

The e-commerce website was down and we developers were scrambling to find out why. The president of the company was standing in our cubicles, shaking his head and looking at his watch. Every few minutes he would proclaim loudly, “There goes another one.”

After a while, one of our more senior developers asked, “’Another one,’ what, sir?”

He replied, “You just crashed a brand new Porsche into a brick wall.”

That’s how much money we were losing every 15 minutes while the site was down.

We found out that the problem had nothing to do with our code. It was an issue with our servers. We didn’t have enough. We had recently opened up to new markets overseas and this added traffic was crashing our old Internet Information Server systems. We didn’t have auto-scaling.

Auto-scaling is like having a personal trainer for your infrastructure. It makes sure that your infrastructure is always in tip-top shape by automatically adjusting the number of resources (like servers or containers) based on the current demand.

In traditional IT environments, scaling is often done manually, which can be time-consuming and error-prone. But in a cloud-native environment like Kubernetes, auto-scaling is built into the platform and can be easily configured with Kubernetes resources. This makes it a lot more efficient and reliable.

With Kubernetes, you can set up auto-scaling rules that specify how many resources you want to have available for a given service. For example, you can set a rule that says “if the CPU usage goes above 80% for more than 5 minutes, then add another server.” And when the demand goes down, the system will also automatically scale down the resources. This way, you can ensure that you always have the right amount of resources to handle the current load and also save money by not having unnecessary resources running.
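In Kubernetes, that kind of rule is usually expressed at the pod level with a HorizontalPodAutoscaler (the Cluster Autoscaler handles adding and removing nodes). A minimal sketch, assuming a Deployment named web whose containers have CPU requests set:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                        # example Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80     # add pods when average CPU passes 80%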

Another benefit of using auto-scaling in a cloud-native environment is that it allows you to handle unexpected traffic spikes, such as a viral social media post or a news article. This way, you don’t have to worry about your service going down because of a sudden increase in traffic.

So, using auto-scaling in a cloud-native environment has many benefits, including increased efficiency, reliability and cost savings. Plus, it’s like having a personal trainer that makes sure your infrastructure is always ready for action. It might also save on the number of Porsches you crash into brick walls.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

What is Load Balancing, and How Can it Help Your Cloud-Native Applications?

As a new engineer, understanding the concept of load balancing is important for several reasons.

First, it is a key component of cloud native application development. It is the process of distributing incoming network traffic across multiple servers to ensure that no single server is overwhelmed. By understanding how it works, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, load balancing allows for greater scalability and availability. By distributing traffic across multiple servers, it ensures that the system can handle a large number of requests and can continue to function even if one server goes down. This makes it easy to handle the increasing demand for more computing power and storage.

Third, it facilitates better collaboration and DevOps culture. By making it easy to distribute traffic across multiple servers, it becomes easier for different teams and developers to work together on the same application.

Fourth, it enhances security. By distributing traffic across multiple servers, it removes the single point of failure that attackers could otherwise target.

In summary, as a new engineer, understanding the concept of load balancing is important because it is a key component of cloud native application development, allows for greater scalability and availability, better collaboration and DevOps culture, and better security. It is a powerful tool for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about load balancing. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

How load balancing and service discovery works in Kubernetes

Kubernetes provides a range of features to enable applications to communicate with each other, including service discovery, load balancing, DNS, and port management. These features are enabled through the use of namespaces, labels, services, and Linux networking features such as bridges and IP tables.

A Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different load balancing mechanisms such as Kubernetes Services, Ingress, and External Load Balancers. This can be done by following tutorials and guides and deploying these services on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of load balancing, you can begin to explore the underlying concepts and technologies such as Kubernetes Services, Ingress, and External Load Balancers. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: Load balancing is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices of load balancing such as traffic distribution, service availability, and fault tolerance.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with load balancing for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using load balancing mechanisms in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

A load balancer is like an air traffic controller for your network. It ensures that traffic is evenly distributed across all servers, so that no single server becomes overwhelmed and crashes. This is especially important when there is a lot of incoming traffic or a large number of users accessing the application.

In traditional networks, load balancers are typically dedicated hardware devices that sit in front of the servers and manage the traffic. However, in cloud-native networks, load balancers are usually software-based and run as part of the infrastructure. This is where Kubernetes comes into play.

Kubernetes has a built-in load balancer that can be easily configured with the Kubernetes resources. It automatically distributes traffic to the services running in the cluster. This is a great advantage, as it allows for more flexibility, scalability, availability, and the ability to handle failover automatically.
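For example, a Service of type LoadBalancer is the simplest way to put a cloud load balancer in front of a set of pods; on a managed cloud the external address is provisioned for you. The name, label, and ports below are placeholders.

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer               # asks the cloud provider for an external load balancer
  selector:
    app: web                       # pods carrying this label receive the traffic
  ports:
    - port: 80
      targetPort: 8080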

Another benefit of using a load balancer in a cloud-native network is that it makes it possible to expose services to the outside world. This means that services can be accessed from anywhere, which is especially useful if you want to access them from the internet.

Using a load balancer in a cloud-native network has many advantages, such as increased availability, scalability, and flexibility, as well as the ability to handle failover automatically. Plus, it’s like having a superhero that ensures traffic is evenly distributed and users are happy.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Understanding Containerization: A Guide for New Engineers

As a new engineer, understanding the concept of containerization is important for several reasons.

First, containerization is a key component of cloud native application development. Containers are a lightweight and portable way to package software, making it easy to run and manage applications on cloud infrastructure. By understanding how containers work, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, containerization allows for greater consistency and portability across different environments. With containers, you can package an application and its dependencies together, ensuring that it will run the same way regardless of where it is deployed. This eliminates the “works on my machine” problem and makes it easier to move applications between different environments.

Third, containerization allows for greater scalability and resource efficiency. Containers use less resources than traditional virtual machines, and can be easily scaled up or down as needed. This makes it easier to handle the increasing demand for more computing power and storage.

Fourth, containerization also allows for better collaboration and DevOps culture, as containers can be easily shared and reused, making it easier for different teams and developers to work together on the same application.

In summary, as a new engineer, understanding the concept of containerization is important because it is a key component of cloud native application development, allows for greater consistency and portability across different environments, enables greater scalability and resource efficiency, and promotes collaboration and DevOps culture. It is a powerful tool for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about containerization. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Kubernetes Crash Course for Absolute Beginners

Kubernetes is an open source container orchestration framework designed to manage applications made up of hundreds or thousands of containers across multiple environments. It offers features such as high availability, scalability, and disaster recovery, as well as a virtual network that enables communication between pods. This video provides an overview of the Kubernetes architecture and components, and a use case of a web application with a database to illustrate how it works.

Possible Learning Path

Hands-on experience: Start by installing Docker and Kubernetes on your local machine or in a virtual environment. This can be done by following the official documentation provided by Docker and Kubernetes. After that, you can follow tutorials and guides to build and deploy simple applications in containers using Docker and Kubernetes.
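Once a local cluster (minikube, kind, or Docker Desktop’s built-in Kubernetes) is running, a first deployment can be as small as this single-Pod sketch, applied with kubectl apply -f hello-pod.yaml; the nginx image is just a convenient example.

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: web
      image: nginx:1.25            # any image you have built or pulled
      ports:
        - containerPort: 80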

Theoretical learning: Once you have a basic understanding of Docker and Kubernetes, you can begin to explore the underlying concepts and technologies. This can be done through online resources such as tutorials, courses, and documentation provided by Docker and Kubernetes, as well as books and blogs on the topic.

Joining a community: Joining a community of Docker and Kubernetes enthusiasts will help you connect with other people who are learning and working with these technologies. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice building and deploying containerized applications using Docker and Kubernetes, the more comfortable and proficient you will become with these technologies.

Specialize in a specific use case: Docker and Kubernetes can be used in a wide variety of scenarios and use cases, so it is beneficial to specialize in one or two that align with your business or career goals.

A Note from the Architect

I’m trying to think of the first time I worked with a Docker container. I believe it was almost four years ago, which in technology timeframes was forever. I was trying to decide if I wanted to use Angular or React to create components for a plugin framework running on SharePoint. It wasn’t the type of development I was used to doing at the time, but I knew the industry was heading away from Angular and more toward React. So, I installed Docker on my laptop, learned how to check out the images, and eventually started learning the basics of containers. I was hooked. Here was a great way to build apps, host them locally, and get the same experience locally that I could expect in production. No more, “It worked on my machine.”

In the last couple of years, I’ve used the remote connection capabilities of VS Code to run pretty much all of my development in containers. It gives me the freedom to try out different languages, frameworks, and libraries without ever needing to install those on my local operating system. I’m proud to say that I never get bugged for Java or .Net updates now. I just get the latest images and add a volume that connects to a local folder where I manage my Git repositories. It’s made my life as a developer much easier.

If you’re wondering, “What’s the big deal with containers? I just want to write code. Why do I need to use containers?” I’ll try to answer that question. Because we don’t just write code anymore. As developers and as operations engineers, we’re beginning to move into a phase where we are sharing the overall solution. This means that when I create something or have a hand in creating something, I have ownership over that thing. I’m responsible for it.

Now, you might work for an enterprise that’s still a bit behind the times. You may write code, and some other team might test that code, and then some other team might try to deploy that code. In that situation, you probably aren’t using containers or anything that looks like modern DevOps. And in that situation, the team between you and the customers who will derive value from your code is a bottleneck. If you rely on a QA team, they will find bugs, because that’s what they are incentivized to do. It’s their job to compare your code against some form of requirements and fail it if it doesn’t meet those requirements. Operations in this type of environment is incentivized to keep systems running smoothly, so they’ll look for any excuse to deny your code entry into production—that usually looks like a set of meetings designed to make sure you met all the criteria needed for something to go into production.

That is the old way of developing software. Let me tell you, if you’re working some place like that, get out. No. Seriously. Leave. Find a better job.

I believe this is the way software should be developed:

The Most Powerful Software Development Process Is The Easiest

In an ideal software development process, the only work done is understanding the problem, writing code to solve it, testing to confirm it is solved, and making progress in small steps to retain the ability to change the system when needed. The goal is to minimize work and maximize learning, allowing for changes to be made easily and with confidence.

Containers make this process easier. Your code remains modular, making it easier to version and manage libraries and dependencies. You can even build out the needed infrastructure for a container management system, such as Kubernetes, without involving operations in some cases.

As a developer and an architect, I have found that containers have improved the quality of my development life. They have allowed me to have more control over the solutions I deliver. I believe that if you start working regularly with containers, you will feel the same way too.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Let’s Learn about Camel K

There are multiple cloud and serverless solutions available for integration. Some of these are proprietary products like Microsoft BizTalk, Azure Logic Apps, or Dell’s Boomi.

There are hybrid options, like Salesforce’s MuleSoft, which has a lighter, open source version, but the real functionality is in the licensed version. Then there are a few truly open source solutions, like Apache ServiceMix or Apache Synapse (not to be confused with Azure Synapse).

We’re covering Camel K today, because it looks like an integration winner in the open source community. ServiceMix looked like an interesting competitor to Mulesoft, but it doesn’t seem to be as active as Camel K.

What is Camel?

Apache Camel is an enterprise integration framework based on the famous Enterprise Integration Patterns codified by Gregor Hohpe and Bobby Woolf in their book, Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions.

I’ve been waiting forever for a second edition of that book, but for the most part, that’s not necessary. I first learned the ins-and-outs of distributed systems from that book, and I would still recommend it as required reading today for cloud architects.

I won’t dive into the discipline and the craft needed to be a software integrator, but the fact that Camel focuses on these concepts and patterns makes a developer’s job much easier.

Camel is a small library designed to work with pluggable connectors. There are hundreds of pluggable connector choices for existing, known APIs. And like any good pluggable system, you can create your own plug-ins.

You can create routing and mediation rules in a variety of JVM languages, like Java, Groovy, and Kotlin. There’s also the option to use XML and YAML if you’re into that sort of thing (no judgement!).

Is there a difference between Camel and Camel K?

Camel K is a lightweight version of Apache Camel designed to run natively, in containers, on Kubernetes, and it’s built with microservices and serverless architectures in mind. Basically, this is the serverless version of Camel.

That’s one of the reasons I believe it has thrived more than a product like ServiceMix.

The good news! If you are familiar with Camel, Camel K will just work. If you’ve written code in Camel DSL, that code can run on Kubernetes or OpenShift.

Why use Apache Camel?

I’ve been in IT long enough that my career is no longer carded at the bar when it wants to buy a beer. One practice that never seems to go away is integration. It’s the glue that holds everything together.

As someone working in the technology field, having a minimum understanding and respect for the challenges of integrating disparate systems is necessary. No matter what platform you build on, at some point your platform will need to interface with some other systems.

It’s best to understand the fundamentals of integration if you want to build and operate technology systems that bring multiple chains of value together for your customers.

Camel is happiest as the connection between two or more systems. It allows you to define a Camel Route. The Camel Route lets you make decisions about what you do with data coming into the route, what type of processing might need to be done to that data, and what that data needs to look like before it’s sent to the other system.

And let me be clear, that data can be almost anything. It could be an event from a piece of manufacturing machinery, it could be a command from one system to another, or it could be the communication between two microservices.

The enterprise integration patterns were designed to help establish what actually happens to a message or data between two or more systems.

As you can probably imagine, having a system between two other systems that allows you to modify the data or take action on that data is a pretty powerful tool.

Why not just directly connect the two systems together?

There are times when this might not be a bad solution. Context is important when building technology solutions. Point-to-point connections aren’t always evil.

However, when you get to the point where you have more than two systems that need to exchange data or messaging, you start to run into a problem. Keeping up with messages, source systems, and data transformations can get painful.

When things get painful, it’s time to use a tool to help stop that pain.

Camel is excellent for this. It’s exceptionally flexible and provides patterns for everything from manipulating XML or JSON payloads to working directly with cloud services.
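
To make that concrete, here’s a minimal sketch of the classic content-based router pattern in Camel’s Java DSL. The endpoint URIs and the priority header are hypothetical, made up for illustration; the point is that the route sits between systems and decides where each message goes.

import org.apache.camel.builder.RouteBuilder;

// Hypothetical content-based router: orders arrive on one channel and are
// routed to different downstream systems based on a message header.
public class OrderRouter extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("jms:queue:orders")                                // hypothetical source endpoint
            .choice()
                .when(header("priority").isEqualTo("high"))
                    .to("jms:queue:expedited-orders")           // hypothetical fast-lane system
                .otherwise()
                    .to("jms:queue:standard-orders");           // hypothetical default system
    }
}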

Ready to learn!


Let’s do this! Here’s where we start.

Routes

Remember the integration patterns I discussed earlier? Well, this is where we start putting those to work.

Camel Routes are designed with a Domain Specific Language using a JVM programming language like Java, Kotlin, or Groovy. And you can also define these in YAML or XML.

If you’ve worked on MuleSoft code under the hood, those ugly XML routes will look familiar.

<routes xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="timer:tick"/>
        <setBody>
            <constant>Hello Camel K!</constant>
        </setBody>
        <to uri="log:info"/>
    </route>
</routes>

That’s not too bad. And ideally you want to keep these routes short and concise. If your routing logic starts to look scary, it’s probably time to write code.

I don’t want to dive too far into code here. This article’s goal is to just give you a quick overview, but I do want to show you how easy this is using Java.

import org.apache.camel.builder.RouteBuilder;

public class Example extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:tick")
            .setBody()
              .constant("Hello Camel K!")
            .to("log:info");
    }
}

Ok, so what if you come from the MuleSoft world or one of these other integration platforms that offer a visual interface to make this work? Let’s be honest, if you’ve used Azure Logic Apps, Make, or Zapier, you probably want a similar experience.

Drag and Drop GUI

I don’t want to jump too far ahead, but there is a solution for the low-code folks. And let’s face it, seeing a visual representation of flows is much easier to work with than code.

Introducing Kaoto

There’s a lot to Kaoto. I want to keep this brief, but I do want to assure those who are used to working with visual tools that you aren’t losing that functionality. For the engineers in the room, Kaoto won’t increase the footprint of the deployed code.

Why is using Kaoto a good idea?

  • It’s Cloud Native and works with Camel K
  • We can extend it if needed
  • It’s not locked into no-code; we can switch back to code if we need to
  • It’s Open Source, so it will only cost you the time to learn it
  • You can run it in Docker – I’m a big fan of doing all my development in containers, so this is always a plus for me

Routes and Integration Patterns

There are well over 300 connection points available for Camel. Many of these are common, like JDBC, REST, and SOAP. But there are also more specific connectors, like Slack, gRPC, Google Mail, WordPress, and RabbitMQ.

Many of the connectors you are used to seeing in commercial products are available in Camel. If you don’t find something you need, you can create your own connector.

There are also integration patterns for almost any situation, and the patterns can be combined and built upon to create messaging pipelines.

I won’t go into each pattern, but they fit within these categories:

  • Messaging Systems, like a message, a message channel, or message endpoint
  • Messaging Channel, like a point-to-point channel, dead letter channel, message bus
  • Message Construction, like return address, message expiration, and event message
  • Message Routing, like splitter, scatter-gather, and process manager
  • Message Transformation, like content enricher, claim check, and validate
  • Messaging Endpoints, like message dispatcher, idempotent consumer, and messaging gateway
  • System Management, like detour, message history, and step

That is just a short collection of the patterns and their categories. It’s well worth any developer’s time to read through these and understand the integration problems they address. As a bonus, familiarizing yourself with integration patterns will make you a better programmer and more adept at designing solutions for the cloud.
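
As a small taste, here’s a sketch of the Splitter pattern in Camel’s Java DSL. The file endpoint and the one-order-per-line format are assumptions for illustration; the pattern itself is what matters.

import org.apache.camel.builder.RouteBuilder;

// Hypothetical Splitter: a batch file of orders is split into one message
// per line, so each order can be processed (or routed) individually.
public class SplitOrdersRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file:data/inbox?noop=true")        // hypothetical input directory
            .split(body().tokenize("\n"))        // one exchange per line
            .to("log:order");                    // stand-in for a real downstream endpoint
    }
}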

The Big K

Camel K allows us to run Camel in Kubernetes. Why is this important?

First, you’ll want to understand that Camel K isn’t just Camel repackaged: it gives you everything Camel can do, but the Camel K operator and tooling are written in Go. The original Camel is a Java product. There’s nothing necessarily wrong with Java and the JVM, but it tends to have a bigger footprint than Go. Go eats less memory, and eating less memory is good for the cloud.

It also doesn’t need the surrounding infrastructure that Camel requires. Camel can run almost anywhere the JVM is supported. Spring Boot is a good way of hosting Camel. And yes, you could containerize that and run it in Kubernetes.

However, Camel K was born for Kubernetes and containers. There is a custom Kubernetes resource, the Integration resource, designed for Camel. This means that from a developer’s standpoint, you just write your code locally and then use the kamel CLI to push your changes to the cluster.

Now, you might want a more defined DevOps process, but the idea is that there is far less friction between the code written and the code that runs in production.

The basic loop is as follows:

  1. You change your integration code
  2. You commit your changes to the cloud
  3. The change to the custom Integration resource is picked up by the Camel K operator
  4. The operator pushes the changes to the running pods
  5. Go back to step 1

Camel K and Knative

What is Knative, and why do I want to use it with Camel K?

Knative is an extension of Kubernetes. It enables Serverless workloads to run on Kubernetes clusters, and provides tools to make building and managing containers easier.

Knative has three primary areas of responsibility:

  1. Build
  2. Serving
  3. Eventing

The idea behind Serverless is that it should just work, and it should just work well in a variety of situations. For instance, it should scale up automatically when workloads increase, and it should scale down automatically when workloads decrease. This is what the serving portion of the solution does.

If you install Camel K on a Kubernetes cluster that already has Knative installed, the Camel K operator will automatically configure itself with a Knative profile. Pretty sweet!

When this is in place, instead of just creating a standard Camel K deployment, Knative and the Camel K operator will create a Knative Service, which gives us the serverless experience.

Knative can also help with events. Event-driven architecture using Camel K is a bit too complex for this quick introduction, but I do want to touch on what possibilities this opens up for developers and architects.

Because Knative allows you to add subscription channels that are associated with your integration services, you can now build pipelines that work with events. This event-based system can be backed by Kafka. Managing events within your Camel K integration service also allows you to employ common integration patterns.

We can accept event sources from any event-producing system. I typically use Mosquitto as my primary MQTT hub, and in this case I could pass all my incoming MQTT messages to Camel K and allow it to manage the orchestration of event messages to their various subscribers.
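
To sketch what that could look like, here’s a hypothetical route that consumes MQTT messages from a Mosquitto broker and forwards them to a Knative channel, where other services can subscribe. The broker address, topic, and channel name are all made up for illustration.

import org.apache.camel.builder.RouteBuilder;

// Hypothetical event fan-out: MQTT messages from Mosquitto are pushed to a
// Knative channel so downstream subscribers receive them as events.
public class MqttEventsRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("paho:sensors/temperature?brokerUrl=tcp://mosquitto:1883")  // hypothetical broker and topic
            .to("log:incoming")                                           // simple visibility into the flow
            .to("knative:channel/sensor-events");                         // hypothetical Knative channel
    }
}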

Camel K and Quarkus

What is Quarkus? Think of Quarkus as the new Java framework with a funny name. Quarkus is a Kubernetes-native Java framework made for GraalVM and HotSpot. It’s also open source, released under the Apache License.

Why do we want to use it with our Camel K integrations?

Again, one of the things we want from our cloud-native solutions is smaller library sizes. Java frameworks were conceived and built in the age of the monolith. Apps usually ran on powerful hardware in data centers, and they ran continuously. The concept of scaling up or down meant adding or removing hardware.

With Kubernetes and cloud solutions, we want small. The smaller, the better. Quarkus gives us that smaller size, so we can scale up or down as needed.

Basically, we’re designing our Java applications to compile much more like Go programs. With GraalVM native images, the application ships as a native binary, and we don’t need the JVM at runtime.

Next Steps

Here are a few great resources for learning more about Camel K and how to use it:

https://developers.redhat.com/topics/camel-k

https://camel.apache.org/

https://github.com/apache/camel-k/tree/main/examples
