
Understanding Cloud-Native Collaboration and DevOps Culture for the New Engineer

As a new engineer, understanding cloud-native collaboration and DevOps culture is important for several reasons.

First, it is essential for building and deploying cloud-native applications. These practices emphasize collaboration and communication between teams such as development, operations, and security, and aim to streamline the development and deployment process. By understanding how they work, you can build, deploy, and manage cloud-native applications more effectively.

Second, they allow for faster time-to-market. Closer collaboration and communication between teams speeds up development and deployment, which gets features into users’ hands sooner.

Third, they improve application quality. Closer collaboration and communication between teams enables more thorough testing and debugging, which catches defects earlier.

Fourth, they allow for better scalability. When teams work together, resources can be allocated and scaled up or down as demand changes, avoiding unnecessary cost.

In summary, understanding cloud-native collaboration and DevOps culture is important because it is essential for building and deploying cloud-native applications, shortens time-to-market, improves application quality, and enables better scalability. It is a powerful set of practices for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about cloud-native collaboration and DevOps culture. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Cloud Native DevOps Explained

This video outlines the steps for migrating an application to a cloud-native approach, including breaking the delivery process into a pipeline, building and packaging components, running tests, and scanning for vulnerabilities. It also covers the use of DevOps and continuous delivery, as well as frameworks for test-driven development.

A Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different DevOps tools such as Jenkins, Ansible, and Helm. This can be done by following tutorials and guides, and deploying these tools on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of DevOps, you can begin to explore the underlying concepts and technologies such as continuous integration, continuous delivery, and infrastructure as code. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: DevOps is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices of DevOps, such as collaboration, automation, and monitoring.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with DevOps for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using DevOps tools in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

OK, let’s talk about DevOps. So, what is it exactly? Well, DevOps is all about breaking down the barriers between development and operations teams, and promoting a culture of collaboration and communication. The goal is to speed up the software delivery process, improve quality, and increase efficiency.

Now, you might be wondering why we want to practice DevOps. Well, in the past, development and operations teams often worked in silos, which led to a lot of delays and inefficiencies. With DevOps, we’re able to bring those teams together and get everyone working towards the same goal: delivering high-quality software quickly and efficiently.

There are a few key principles of DevOps that we like to follow. One of the most important is automation. By automating as many processes as possible, we’re able to speed up the delivery process and reduce the likelihood of errors. Another key principle is continuous integration and delivery. By integrating code changes frequently and delivering them to production as soon as they’re ready, we’re able to get feedback and make improvements more quickly.

Now, when it comes to cloud native approaches like developing on Kubernetes, they fit really well within DevOps practices. By using containerization and orchestration, we’re able to automate the deployment and scaling of our applications, which helps us move faster and be more efficient.

As an individual team member, you can contribute to the DevOps culture by being open to feedback and suggestions, and by being willing to work collaboratively with other teams. Some good practices for a new engineer or developer to pick up include learning how to use automation tools, getting familiar with containerization and orchestration, and practicing continuous integration and delivery.

You might be wondering how DevOps is related to agile and SRE. Well, DevOps is closely related to agile, as both focus on delivering software quickly and efficiently. SRE, on the other hand, is all about ensuring the reliability and availability of software in production. Together, these practices help ensure software is delivered quickly, reliably, and with high availability.

Finally, let me tell you, before DevOps became a common practice, IT departments faced a lot of problems. There were often delays in getting new features or updates to production, and there were also a lot of communication issues between development and operations teams. DevOps helps us to overcome all these problems.

Well, dear reader, we’ve come to the end of our series of blog posts on cloud native technologies and practices, and especially Kubernetes. This won’t be my last post on Kubernetes, but it is the last as far as this series is concerned. I hope you’ve found the information we’ve covered to be helpful and informative.

Throughout the series, we’ve talked about a lot of different topics, including security, CI/CD, DevOps, and containerization. These are all critical concepts for anyone working in the cloud, and I want to emphasize that these concepts aren’t just short-lived trends, but tools and ideas that will serve you well for many years of your career.

We’ve covered a lot of ground in these posts, but there’s still so much more to learn about cloud native technologies and practices. As you continue to grow and develop your skills, I hope you’ll keep these concepts in mind and continue to explore the many different ways that they can be applied in your work.

I also want to take a moment to thank you for your time, support, and the conversations we’ve had throughout the series. Your feedback and input have been invaluable in helping to make these posts a success, and I appreciate the time you’ve taken to read and engage with the content.

As you continue your journey in cloud native technologies, keep in mind that it’s a constantly evolving landscape, and there will always be more to learn and explore. But with the concepts and tools we’ve covered in this series, you’ll be well-equipped to navigate this exciting and rapidly changing field.

Thanks again for reading and I hope you enjoyed the series. Let’s continue the conversation and explore even more of this exciting field together!

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.


Getting Started with Cloud-Native App Development: Best Practices, Learning Materials, and Videos to Watch

As a new engineer, understanding the concept of cloud-native app development is important for several reasons.

First, cloud-native app development is a key aspect of building and deploying applications in the cloud. It is the practice of using technologies, tools, and best practices that are designed to work seamlessly with cloud environments. By understanding how cloud-native app development works, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, cloud-native app development allows for better scalability and cost efficiency. By using technologies, tools, and best practices designed to work seamlessly with cloud environments, resources can be automatically allocated and scaled up or down as needed, so you pay only for what you actually use.

Third, cloud-native app development promotes better collaboration and DevOps culture. By using technologies, tools, and best practices that are designed to work seamlessly with cloud environments, it becomes easier for different teams and developers to work together on the same application.

Fourth, cloud-native app development allows for better security. By using technologies, tools, and best practices designed to work seamlessly with cloud environments, it helps protect the application and infrastructure from threats and keeps them operating in the event of an attack or failure.

In summary, as a new engineer, understanding the concept of cloud-native app development is important because it is a key aspect of building and deploying applications in the cloud, allows for better scalability and cost efficiency, promotes better collaboration and DevOps culture, and allows for better security.

Learning Materials

Here’s a list to get you started learning about cloud-native app development. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Best practices in Kubernetes app development

This video outlines best practices for developing with Kubernetes, such as using a tailored logging interface, debugging with CLI commands, and creating project templates with Cloud Code. Google Cloud DevTools are introduced as a way to simplify incorporating these best practices into the Kubernetes development workflow.

A Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different app development tools such as Kubernetes Deployments, Services, and ConfigMaps. This can be done by following tutorials and guides, and deploying a simple application on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of app development, you can begin to explore the underlying concepts and technologies such as Kubernetes pods, services, and volumes. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: App development is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices of app development such as containerization, scaling, and rolling updates.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with app development for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using app development tools in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

Hey there, let’s talk about cloud native application development. So, what is it exactly? Well, it’s all about developing applications that are specifically designed to run in a cloud environment, like Kubernetes, which is one of the most popular container orchestration platforms out there.

What makes cloud native application development different from other approaches is that it’s all about leveraging the benefits of the cloud, like scalability and flexibility, to create applications that can easily adapt to changing workloads and environments.

When it comes to developing applications for Kubernetes, there’s a typical software development workflow that you’ll need to follow. First, you’ll need to choose the right programming languages and frameworks. Some of the most popular languages for cloud native development are Go, Java, Python, and Node.js.

Once you’ve chosen your languages and frameworks, you’ll need to design your application architecture to work well with Kubernetes. Some of the best patterns for Kubernetes include microservices, containerization, and service meshes. Monolithic applications, by contrast, are a poor fit, and stateful applications need extra care (they typically rely on StatefulSets and persistent volumes).

And finally, let me tell you, we really like VS Code around here. It’s one of the best tools for cloud native application development, especially when working with Kubernetes. It provides a lot of great features for working with containerization and orchestration, like excellent plugins for Kubernetes, debugging, and integration with other popular tools like Helm and Kustomize. So, if you haven’t already, give it a try, you might like it too.

Be sure to reach out on LinkedIn if you have any questions.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.


Master Cloud Native Governance: The Essential Guide for New Engineers

As a new engineer, understanding the concept of cloud-native governance is important for several reasons.

First, it is a key component of cloud native application development. It uses governance tools and technologies that are native to the cloud and designed to work seamlessly with cloud-native applications. This allows for better deployment and management of cloud-native applications.

Second, cloud-native governance ensures compliance and security. It ensures that the application and infrastructure meet compliance requirements and are protected from threats.

Third, it promotes better collaboration and DevOps culture. Different teams and developers can work together on the same application, and the organization’s policies and standards are followed.

Fourth, it allows for better cost management. Resources can be monitored and controlled, and the organization is not overspending on the cloud.

In summary, understanding the concept of cloud-native governance is important for any engineer working in the field today. It is a powerful tool for building and deploying applications in a cloud environment.

Learning Materials

Here’s a list to get you started learning about cloud-native governance. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

A Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different governance tools such as Open Policy Agent (OPA), Kubernetes Policy Controller, and Kube-bench. This can be done by following tutorials and guides, and deploying these tools on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of governance, you can begin to explore the underlying concepts and technologies such as Kubernetes role-based access control (RBAC), Namespaces, and NetworkPolicies. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: Governance is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices of governance such as security, compliance, and auditing.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with governance for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using governance tools in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

Ok, let’s talk about cloud native governance. So, why do we need to practice it? Well, as we all know, the cloud is a constantly evolving landscape and it can be pretty overwhelming to keep up with all the new technologies and best practices. That’s where governance comes in – it’s all about making sure we’re using the cloud in a consistent and efficient way across our organization.

So what exactly is cloud native governance? It’s all about using policies and tools to manage the resources in our cloud environment. This includes things like setting up guidelines for how our teams use the cloud, automating tasks to keep our environment in check, and monitoring for any potential issues.

Now, you might be wondering why cloud native governance was created. Well, as organizations started moving more and more of their workloads to the cloud, they realized they needed a way to keep everything in check. Without governance in place, it can be easy for teams to create resources in an ad-hoc way, which can lead to wasted resources, security vulnerabilities, and inconsistencies in how the cloud is being used.

Now, let’s talk about the major tools on Kubernetes that help with cloud native governance. One of the most popular is Kubernetes itself, which provides a way to manage and scale containerized applications. Another popular tool is Helm, which helps with managing and deploying Kubernetes resources. There’s also Kustomize, which helps with creating and managing customized resources. And finally, there’s Open Policy Agent (OPA), which lets you define and enforce policies on Kubernetes resources.
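To make the idea concrete, here’s a toy policy check sketched in plain Python. This is OPA-like in spirit only (real OPA policies are written in its Rego language, not Python), and the specific rules and field names are just illustrative examples mirroring Kubernetes manifest shapes: require an "owner" label, and forbid the :latest image tag.

```python
# Toy governance policy check -- a sketch of the concept, not OPA's real API.
def violations(resource):
    """Return a list of policy violations for a Deployment-like object."""
    problems = []
    labels = resource.get("metadata", {}).get("labels", {})
    if "owner" not in labels:
        problems.append("missing required label: owner")
    containers = (resource.get("spec", {})
                          .get("template", {})
                          .get("spec", {})
                          .get("containers", []))
    for c in containers:
        # Unpinned images make deployments irreproducible.
        if c.get("image", "").endswith(":latest"):
            problems.append(f"container {c.get('name')} uses the :latest tag")
    return problems
```

In a real cluster, checks like this run inside an admission controller, which evaluates every create or update request and rejects non-compliant resources before they land.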

It’s important to note that governance, like security, is a continuous practice. Governance policies and tools need to be regularly reviewed and updated to ensure they remain effective and aligned with the organization’s goals and requirements. It’s all about making sure we’re using the cloud in the best way possible.

Be sure to reach out on LinkedIn if you have any questions.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.


Unlocking the Power of Service Discovery in Kubernetes

As a new engineer, understanding the concept of service discovery is important for several reasons.

First, it is a key component of microservices architecture. Service discovery allows services to find and communicate with each other, regardless of their location or IP address. This makes it easier to build, deploy, and manage microservices-based applications.

Second, service discovery enables greater scalability and flexibility. Services can be added or removed without affecting the rest of the system, and new services can be introduced without changing existing code.

Third, service discovery facilitates better collaboration and DevOps culture. By making it easy for services to find and communicate with each other, different teams and developers can work together on the same application.

Fourth, service discovery allows for better resilience. It enables the system to automatically route traffic to healthy instances of a service, even if some instances are unavailable.

In summary, understanding service discovery is important for any engineer working in the field today. It is a powerful tool for building and deploying applications in a microservices environment and is essential for achieving greater scalability, flexibility, collaboration, and resilience.

Learning Materials

Here’s a list to get you started learning about Service Discovery. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

What is Service Discovery?

Moving from a monolith to a cloud-based microservices architecture presents several challenges, such as service discovery, which involves locating resources on a network and keeping a service registry up to date. Service discovery can be categorized by WHERE it happens (client side or server side) and HOW the registry is maintained (self-registration or third-party registration). Each approach has its own pros and cons, and further complexities, such as service meshes, build on these basics.

Possible Learning Path (Service Discovery for Kubernetes)

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different service discovery mechanisms such as Kubernetes Services, DNS, and Load Balancing. This can be done by following tutorials and guides and deploying these services on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of service discovery, you can begin to explore the underlying concepts and technologies such as Kubernetes Services, DNS, and Load Balancing. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: Service discovery is an important aspect of a microservices architecture, so it’s important to understand its key principles and best practices, such as service registration, health checking, and name resolution.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with service discovery for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using service discovery mechanisms in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

So, you know how in a microservice architecture, we have all these different services that need to talk to each other? Well, service discovery is kind of like a phone book for those services (in Kubernetes, that phone book is mostly DNS). It helps them find each other and communicate with each other.

In traditional networks, service discovery is often done using a centralized server or load balancer. This means that all the services need to know the IP address or hostname of this central server in order to communicate with other services.

But in Kubernetes, service discovery is built into the platform. Each Service gets its own stable IP address and DNS name, and Kubernetes automatically handles routing traffic to it. This means a service only needs to know the name of the service it wants to call, not its IP address or location.
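Here’s a minimal sketch of what that looks like from inside a pod, in Python. The service and namespace names are made up for the example, but the FQDN pattern is the one Kubernetes’ cluster DNS actually uses for Services.

```python
import socket

def service_fqdn(service, namespace="default"):
    """Build the DNS name Kubernetes' cluster DNS assigns to a Service."""
    return f"{service}.{namespace}.svc.cluster.local"

def resolve_service(service, namespace="default", port=80):
    """Resolve a Service name to its IP address(es).

    Only works from inside a cluster, where /etc/resolv.conf points
    at the cluster DNS server.
    """
    infos = socket.getaddrinfo(service_fqdn(service, namespace), port,
                               proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})
```

In practice you rarely call the resolver yourself: inside the cluster, simply connecting to something like `http://orders.shop` (hypothetical names) triggers the same DNS lookup automatically.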

And the best part? Kubernetes service discovery is dynamic, which means that it automatically updates when new services are added or removed, so you don’t have to manually update the phone book every time something changes.

But that’s not all: Kubernetes also provides ways to expose your services externally, so you can access them from outside the cluster. That’s very useful if, for example, you want to reach your services from the internet.

So, with service discovery in Kubernetes, you don’t have to worry about keeping track of IP addresses and hostnames, and you don’t have to worry about updating a central server when things change. It’s like having a personal assistant who always knows the latest phone number of your services and also makes sure that they are accessible from anywhere.

Basically, service discovery in Kubernetes provides a way for services to easily find and communicate with each other, and it’s built right into the platform. It’s a game-changer for managing and scaling a microservice architecture.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.


Learning Infrastructure as Code: What You Need to Know for Cloud Native Engineering

As a new engineer, understanding Infrastructure as Code (IAC) is important for several reasons.

First, IAC is a key component of cloud native application development. It is the practice of managing and provisioning infrastructure using code, rather than manual configuration. By understanding how IAC works, you can build, deploy, and manage cloud-native applications more effectively.

Second, IAC allows for greater consistency and reproducibility. Managing and provisioning infrastructure through code ensures the same configuration is used across different environments and that the infrastructure can be recreated easily, for example when standing up a new environment or recovering from a failure.
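The core idea can be sketched as a toy reconciler in Python: the desired state is plain data, and code converges the actual state toward it. Running it a second time produces no changes, and that idempotence is what makes IAC reproducible. This is a sketch of the concept only, not any real tool’s API; the resource names are invented.

```python
def reconcile(actual, desired):
    """Converge `actual` toward `desired`; return the changes applied."""
    changes = []
    for name, spec in desired.items():
        if actual.get(name) != spec:
            changes.append(("apply", name))   # create or update
            actual[name] = dict(spec)
    for name in list(actual):
        if name not in desired:
            changes.append(("delete", name))  # prune drift
            del actual[name]
    return changes

# Desired infrastructure, declared as data rather than manual clicks:
desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
```

Real tools like Terraform work on the same loop at much larger scale: compare declared state against real state, then apply only the difference.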

Third, IAC promotes better collaboration and a DevOps culture. By using code to manage and provision infrastructure, it becomes easier for different teams and developers to work together on the same application.

Fourth, IAC allows for better tracking and version control of infrastructure changes. By keeping the infrastructure definition in code, it allows for tracking changes in the same way as code changes and reverting to previous versions if necessary.

In summary, understanding IAC is important because it is a key component of cloud native application development, allows for greater consistency and reproducibility, promotes better collaboration and DevOps culture, and allows for better tracking and version control of infrastructure changes. It is a powerful tool for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about Infrastructure as code (IAC). Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Cloud Native Summit – Five ways to manage your Infrastructure as Code at Scale / Ryan Cartwright

This video provides an overview of the challenges of cloud native engineering and the solutions available, such as remote state management, avoiding manual state changes, using CI tools, and implementing a reliable SaaS offering. It also covers the features and security essentials that must be included in a Terraform CI platform.

Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different IAC tools such as Terraform, Ansible, and Helm. This can be done by following tutorials and guides and deploying these tools on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of IAC, you can begin to explore the underlying concepts and technologies such as configuration management, version control, and automation. This can be done through online resources such as tutorials, courses, and documentation provided by IAC tools, as well as books and blogs on the topic.

Understanding the principles and best practices: IAC is an important aspect of modern infrastructure management, so it’s important to understand the key principles and best practices of IAC, such as versioning, testing, and rollback.

Joining a community: Joining a community of IAC enthusiasts will help you connect with other people who are learning and working with IAC for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using IAC tools in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

Infrastructure as Code (IaC) is one of the most exciting developments in cloud deployments. It is a way of describing and provisioning infrastructure using code, instead of manual configuration. This makes it easier to manage and maintain over time, as well as enabling faster scaling, testing, and deploying of cloud native solutions.

Version control is essential for modern operations teams, as it allows them to track changes to the infrastructure over time and roll back to a previous version if something goes wrong. Developers should also learn it, as it allows them to collaborate more easily with operations teams and to understand the infrastructure that their code is running on.

IaC helps to ensure quality by allowing for automated provisioning and configuration of infrastructure. It also helps to ensure security, as it enables more controlled and automated access to infrastructure, making it easier to identify and isolate any malicious activity.

In conclusion, IaC is a great way to deploy cloud native solutions: it makes infrastructure easier to manage and maintain over time, brings version control to operations work, and helps ensure quality and security through more controlled, automated access to infrastructure.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.


An Introduction to Continuous Integration and Continuous Deployment (CI/CD): Understanding the Benefits, Learning Materials, Videos to Watch and a Possible Learning Path

As a new engineer, understanding the concept of continuous integration and continuous deployment (CI/CD) is important for several reasons.

First, CI/CD is a key component of cloud native application development. It is the process of automatically building, testing, and deploying code changes as soon as they are committed to the code repository. By understanding how CI/CD works, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, CI/CD allows for faster development and deployment. By automating the build, test, and deployment process, it enables developers to make changes to the code and have them deployed to production faster. This facilitates faster innovation and time-to-market for new features.

Third, CI/CD promotes better collaboration and DevOps culture. By automating the process of building, testing, and deploying code changes, it becomes easier for different teams and developers to work together on the same application.

Fourth, CI/CD allows for better quality and reliability of the software. By automating the testing process, it ensures that code changes are tested as soon as they are made, which helps to catch any bugs or errors early in the development cycle.
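That build-test-deploy loop can be sketched in a few lines of Python. The stage names and commands below are placeholders, not any real CI system’s configuration; the point is the fail-fast ordering: the first failing stage stops the pipeline, so a broken change never reaches deployment.

```python
import subprocess
import sys

def run_pipeline(stages):
    """Run (name, command) stages in order; stop at the first failure."""
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return f"{name}: failed"
    return "success"

# Placeholder commands standing in for real build/test/deploy steps.
stages = [
    ("build",  [sys.executable, "-c", "print('compiling')"]),
    ("test",   [sys.executable, "-c", "print('running tests')"]),
    ("deploy", [sys.executable, "-c", "print('rolling out')"]),
]
```

A real CI server does the same thing with more machinery: each commit triggers the pipeline, and only commits that pass every stage are promoted toward production.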

In summary, as a new engineer, understanding the concept of CI/CD is important because it is a key component of cloud native application development, allows for faster development and deployment, promotes better collaboration and DevOps culture, and allows for better quality and reliability of the software. It is a powerful tool for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about CI/CD. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

The Foundations of Continuous Delivery

Continuous Delivery is a revolutionary approach to software development that focuses on efficient feedback, strong engineering discipline, reducing the amount of work, and cycle time reduction. Automation is essential to reducing cycle time and the deployment pipeline is used to prove that changes are fit for production.

Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different CI/CD tools such as Jenkins, Travis CI, and GitLab CI/CD. This can be done by following tutorials and guides and deploying these tools on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of CI/CD, you can begin to explore the underlying concepts and technologies such as pipeline management, version control, and testing. This can be done through online resources such as tutorials, courses, and documentation provided by CI/CD tools, as well as books and blogs on the topic.

Understanding the principles and best practices: CI/CD is an important aspect of modern software development, so it’s important to understand the key principles and best practices of CI/CD such as automation, testing, and deployment.

Joining a community: Joining a community of CI/CD enthusiasts will help you connect with other people who are learning and working with CI/CD for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using CI/CD tools in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

I’m about to tell you how I used to publish my websites way back in the day. You see, back then, many of us actually kept a really powerful server under our desks. I had such a server. This server had a network card with two different IP addresses bound to it. One was a Public IP address. Yes, I had a public IP address bound directly onto the network card of the server sitting under my desk. I also had a private IP address so that I could share the website’s folder on the network for the whole company.

Ok, stop laughing, you’re making me feel bad. I haven’t even mentioned the fact that I edited my Active Server Pages directly from that folder. That’s right, changes went directly from development to the internet with no steps in between. I had lots of files with names like “about-us.asp.old,” “about-us.asp.older,” and “about-us.asp.donotuse.”

Well, luckily those days are behind us. Today we use CI/CD (continuous integration and continuous deployment).

CI/CD is a software development practice that involves automatically building, testing, and deploying code changes. The basic idea behind CI/CD is to catch and fix errors as early as possible, which helps to prevent bugs and other issues from making it into production. This is accomplished by integrating code changes into a central repository and then running a series of automated builds and tests.

It’s more popular now because it helps to improve software quality, increase collaboration among developers, and reduce time to market for new features. It also helps to ensure that software is always in a releasable state, which makes it easier to roll out new features and bug fixes.

One of the key enablers of CI/CD is version control. Version control is a necessity for modern development because it allows developers to track changes to the codebase over time, collaborate with other developers, and roll back to a previous version if something goes wrong. There’s no longer a need to append extra extensions onto files you might want to keep as a backup.

Pipelines help to ensure quality by automating the process of building, testing, and deploying code changes. This makes it easier to catch and fix errors early in the development process, which helps to prevent bugs and other issues from making it into production. Pipelines can also help with security by establishing a software supply chain. When code changes are automatically built, tested, and deployed, it’s easier to track, identify, and isolate any malicious code that may have been introduced into the codebase.
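The shape of that pipeline idea can be sketched in a few lines of plain Python. This is a toy illustration only, not a real CI tool; the stage names and the pass/fail checks are hypothetical stand-ins for actual build, test, and deploy steps:

```python
# Toy sketch of a pipeline: run each stage in order and stop at the
# first failure, so a broken change never reaches a later stage.

def run_pipeline(stages):
    """Run (name, func) stages in order; return (completed, failed_stage)."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name  # a failed stage blocks everything after it
        completed.append(name)
    return completed, None

# Hypothetical stages standing in for real build/test/deploy steps.
stages = [
    ("build", lambda: True),
    ("test", lambda: False),   # a failing test...
    ("deploy", lambda: True),  # ...means deploy never runs
]

completed, failed = run_pipeline(stages)
print(completed, failed)  # ['build'] test
```

The point of the sketch is the gate: because “deploy” only runs after “test” passes, a bug caught by the tests stays out of production automatically.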

Basically, CI/CD is a software development practice that involves automatically building, testing, and deploying code changes. It helps to improve software quality, increase collaboration among developers, and reduce time to market for new features. Version control is a necessity for modern development, and pipelines help to ensure quality and can help with security by establishing a software supply chain.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Understand Auto-Scaling to Maximize Cloud Native Application Development

As a new engineer, understanding auto-scaling is important for several reasons. It is a key component of cloud native application development, allowing for greater scalability and cost efficiency, better collaboration and DevOps culture, and better availability and resilience. Auto-scaling automatically adjusts the number of resources used by an application based on current demand, ensuring the system can handle a large number of requests while minimizing cost. It also makes it easier for different teams and developers to work together on the same application. Auto-scaling is a powerful tool for building and deploying applications in a cloud environment, and is essential for any engineer working in the field today.

As a new engineer, understanding the concept of auto-scaling is important for several reasons.

First, auto-scaling is a key component of cloud native application development. It is the process of automatically adjusting the number of resources (such as servers) used by an application based on the current demand. By understanding how auto-scaling works, you will be able to build, deploy, and manage cloud-native applications more effectively.

Second, auto-scaling allows for greater scalability and cost efficiency. By automatically adjusting the number of resources used by an application, it ensures that the system can handle a large number of requests while minimizing the cost of running the application. This makes it easy to handle the increasing demand for more computing power and storage.

Third, auto-scaling allows for better collaboration and DevOps culture. By automating the process of scaling resources, it becomes easier for different teams and developers to work together on the same application.

Fourth, auto-scaling allows for better availability and resilience. By automatically adding resources when traffic increases, it ensures that the application is always available to handle requests; when traffic decreases, it scales resources back down, which saves cost.

In summary, as a new engineer, understanding the concept of auto-scaling is important because it is a key component of cloud native application development, allows for greater scalability and cost efficiency, better collaboration and DevOps culture, and better availability and resilience. It is a powerful tool for building and deploying applications in a cloud environment and is essential for any engineer working in the field today.

Learning Materials

Here’s a list to get you started learning about auto-scaling. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to Watch

Kubernetes cluster autoscaling for beginners

Kubernetes is a powerful tool for deploying applications, but it’s important to understand the resource requirements of the applications and set resource requests and limits to allow the scheduler to make informed decisions when placing pods onto nodes. This will help ensure that the nodes are not over-utilized and that applications are given the resources they need.

A Possible Learning Path

Hands-on experience: Start by setting up a simple Kubernetes cluster and experimenting with different auto-scaling mechanisms such as Kubernetes Horizontal Pod Autoscaler and Cluster Autoscaler. This can be done by following tutorials and guides and deploying these services on a cloud platform like AWS, Azure, or GCP.

Theoretical learning: Once you have a basic understanding of auto-scaling, you can begin to explore the underlying concepts and technologies such as Kubernetes Horizontal Pod Autoscaler and Cluster Autoscaler. This can be done through online resources such as tutorials, courses, and documentation provided by Kubernetes, as well as books and blogs on the topic.

Understanding the principles and best practices: Auto-scaling is an important aspect of a microservices architecture, so it’s important to understand the key principles and best practices of auto-scaling, such as scaling policies, monitoring, and alerting.

Joining a community: Joining a community of Kubernetes enthusiasts will help you connect with other people who are learning and working with auto-scaling for Kubernetes. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice deploying and using auto-scaling mechanisms in a Kubernetes cluster, the more comfortable and proficient you will become with the technology.

A Note from the Architect

Before I explain an auto-scaler, let me quickly tell you the story of the Porsches crashing into walls every 15 minutes.

Back when I was a newer developer working for the e-commerce department of an electronics parts seller, we had a web outage. We had no idea why the webserver was down. This was in the olden days, when the best monitoring tool we had was named ELMAH. It was better for debugging and development than anything you would want to put into production.

The e-commerce website was down and we developers were scrambling to find out why. The president of the company was standing in our cubicles, shaking his head and looking at his watch. Every few minutes he would proclaim loudly, “There goes another one.”

After a while, one of our more senior developers asked, “’Another one,’ what, sir?”

He replied, “You just crashed a brand new Porsche into a brick wall.”

That’s how much money we were losing every 15 minutes while the site was down.

We found out that the problem had nothing to do with our code. It was an issue with our servers. We didn’t have enough. We had recently opened up to new markets overseas and this added traffic was crashing our old Internet Information Server systems. We didn’t have auto-scaling.

Auto-scaling is like having a personal trainer for your infrastructure. It makes sure that your infrastructure is always in tip-top shape by automatically adjusting the number of resources (like servers or containers) based on the current demand.

In traditional IT environments, scaling is often done manually, which can be time-consuming and error-prone. But in a cloud-native environment like Kubernetes, auto-scaling is built into the platform and can be easily configured with Kubernetes resources. This makes it a lot more efficient and reliable.

With Kubernetes, you can set up auto-scaling rules that specify how many resources you want to have available for a given service. For example, you can set a rule that says “if the CPU usage goes above 80% for more than 5 minutes, then add another server.” And when the demand goes down, the system will also automatically scale down the resources. This way, you can ensure that you always have the right amount of resources to handle the current load and also save money by not having unnecessary resources running.
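That “80% for more than 5 minutes” rule can be sketched as a few lines of Python. This is only a hypothetical illustration of the decision logic; the thresholds and names are made up, and in Kubernetes the Horizontal Pod Autoscaler and Cluster Autoscaler implement this kind of logic for you:

```python
# Toy sketch of an auto-scaling decision: if CPU stays above the high
# threshold for a full window, add a replica; if it stays below the low
# threshold, remove one; otherwise hold steady. Values are illustrative.

def decide_scaling(cpu_samples, replicas, high=80, low=30, window=5,
                   min_replicas=1):
    """cpu_samples: one CPU % reading per minute, most recent last."""
    recent = cpu_samples[-window:]
    if len(recent) == window and all(c > high for c in recent):
        return replicas + 1                      # sustained high load: scale up
    if len(recent) == window and all(c < low for c in recent):
        return max(min_replicas, replicas - 1)   # sustained idle: scale down
    return replicas                              # mixed readings: hold steady

print(decide_scaling([85, 90, 88, 92, 95], replicas=2))  # 3
print(decide_scaling([10, 12, 9, 8, 11], replicas=2))    # 1
print(decide_scaling([85, 40, 88, 92, 95], replicas=2))  # 2
```

Requiring the whole window to exceed the threshold is what keeps a single momentary spike from triggering a scale-up, which is the same reason real autoscalers evaluate over a period rather than a single sample.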

Another benefit of using auto-scaling in a cloud-native environment is that it allows you to handle unexpected traffic spikes, such as a viral social media post or a news article. This way, you don’t have to worry about your service going down because of a sudden increase in traffic.

So, using auto-scaling in a cloud-native environment has many benefits, including increased efficiency, reliability and cost savings. Plus, it’s like having a personal trainer that makes sure your infrastructure is always ready for action. It might also save on the number of Porsches you crash into brick walls.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.

Why Should a New Engineer Learn the Cloud Native Concepts?

As a new engineer, learning cloud native concepts is important for several reasons.

First, cloud computing is becoming increasingly popular and is now the norm for many organizations. Many companies are moving away from traditional on-premises data centers and migrating their infrastructure and applications to the cloud. Knowing how to build, deploy, and manage cloud-native applications will give you a valuable skill set that is in high demand in the job market.

Second, cloud native concepts and technologies are designed to be flexible, scalable, and efficient. They enable faster development and deployment of applications and make it easier to handle the increasing demand for more computing power and storage. By learning these concepts, you will be able to build applications that can handle large amounts of traffic and data and can easily scale up or down as needed.

Third, cloud native concepts and technologies are designed to work well together. They are all part of a larger ecosystem that is designed to make it easy for developers to build, deploy, and manage applications on cloud infrastructure. By learning these concepts, you will be able to take advantage of the full range of cloud-native tools and services, and will be able to create more powerful and efficient applications.

In summary, as a new engineer, learning cloud native concepts will give you a valuable skill set, allow you to build flexible, scalable, and efficient applications, and enable you to take advantage of the full range of cloud-native tools and services. It is an essential skill set for many companies today and will be essential in the future.

What is Cloud Native?

Cloud native is a term used to describe an approach to building, deploying, and running applications on cloud infrastructure. It involves containerization, microservices architecture, and the use of cloud-native tools and services.

Containerization packages software, its dependencies, and configuration files together in a lightweight and portable container, allowing it to run consistently across different environments.

Microservices architecture designs and builds software as a collection of small, independent services that communicate with each other via well-defined APIs. This approach enables faster development, easier scaling, and more flexible deployment options.

Cloud-native tools and services are designed specifically for cloud environments and provide capabilities such as auto-scaling, load balancing, and service discovery. They allow for faster and more efficient deployment and management of applications.

In summary, cloud native is a way of designing, building, and running applications on cloud infrastructure. It leverages containerization and microservices architecture, and utilizes cloud-native tools and services for faster and more efficient deployment and management of applications. As a new engineer, it is important to understand these concepts and how they work together in order to build cloud-native applications.

Learning Materials

Here’s a list to get you started learning about Cloud Native. Note that some of these links may not be free and may require a subscription or payment. I receive no affiliate payments for these links.

Beginner:

Intermediate:

Advanced:

Videos to watch

What is Cloud Native and Why Should I Care?

Wealth Grid is a mid-sized firm that has product and service market fit but is struggling to shorten its time to value and stay out of the front-page news. To do this, it must embrace cloud-native technologies, but this is not business as usual. With the help of the Cloud Native Computing Foundation, Wealth Grid can learn from its mistakes and use tools and techniques to achieve its goals.

Expert talk: Cloud Native & Serverless • Matt Turner & Eric Johnson • GOTO 2022

Matt Turner and Eric Johnson discuss the importance of Cloud Native Concepts for the new engineer to learn, such as Continuous Integration and Continuous Delivery, and the benefits of testing in production to catch certain classes of bugs.

A Possible Learning Path

Hands-on experience: It is important to start by experimenting with different cloud providers, such as AWS, Azure, and GCP, to understand the basic concepts and services offered by each. This can be done by creating a free account and following tutorials and guides to build and deploy simple applications.

Theoretical learning: Once you have a basic understanding of cloud computing, you can begin to explore cloud native concepts such as containerization, microservices, and service discovery. This can be done through online resources such as tutorials, courses, and documentation provided by cloud providers, as well as books and blogs on the topic.

Joining a community: Joining a community of cloud native enthusiasts will help you connect with other people learning and working with cloud native technology. This can be done through online forums, meetups, and social media groups.

Practice, practice, practice: As with any new technology, the best way to learn is by doing. The more you practice building and deploying cloud native applications, the more comfortable and proficient you will become with the technology.

Specialize in a specific cloud provider: Cloud providers each have their own set of services and ways of working, so it is beneficial to specialize in one or two providers that align with your business or career goals.

A Note From the Architect

Don’t be intimidated by the volume of information you’ll need to learn to be proficient in cloud-native technologies, because I have a secret for you from a twenty-five-year veteran: there’s little chance you’ll ever be much more than competent in most of these areas. You may be able to master a few of these subject areas, and that’s great if you do, but it’s not necessary if you truly understand one important thing.

I call this important thing, “The Why”.

In each of these articles, where I present an important topic from the big concepts in cloud-native development, I will give you my opinion, based on personal experience, as to why you should consider using the technology, what the other possibilities are, and what the trade-offs are.

I believe that “The Why” is one of the most important parts of a technology consideration. So what is the “why” of cloud native? In my opinion, it’s the ability to develop and deliver solutions on a built-to-fit platform. Even though there’s still a huge market for large systems like SAP, Microsoft Dynamics, and Oracle, the future belongs to value-creating solutions running on platforms that best fit their needs.

I’m sure some people are wondering if everything really needs to be containerized. No, there are plenty of alternative options for running your workloads that don’t involve containers.

As a developer, I have come across several alternatives to cloud native technologies. One alternative is using virtual machines (VMs) instead of containers. VMs offer a higher level of isolation and security, but they also have a larger footprint and are less portable. Another alternative is using on-premises infrastructure, which provides greater control over data and security, but also comes with higher costs and maintenance responsibilities.

Another alternative is using a platform-as-a-service (PaaS) instead of containers. PaaS provides a higher level of abstraction and can simplify the deployment process, but it also limits the level of control and customization that you have over the infrastructure.

It’s important to note that, while these alternatives can be viable options depending on the specific use case, they often trade off some of the benefits of cloud native technologies such as scalability, flexibility, and cost-efficiency. Ultimately, it’s important to weigh the tradeoffs and choose the solution that best aligns with the needs of your project and organization.

What I hope to accomplish with this series is to open your eyes to the possibilities of what Cloud Native has to offer.

Connect with Shawn

Connect with me on LinkedIn. It’s where I’m most active, and it’s the easiest way to connect with me.