For any project, changing the architecture of the server is a major step. Most of our IT Outposts clients take advantage of our Kubernetes services to achieve this goal. Every time, the experience has been unique.
Kubernetes migration is a popular choice because it offers increased stability, unification of the environment, and fast autoscaling.
Microservice architectures are best suited for Kubernetes. Cluster entities have to be distinct to be effective. This allows you to:
- Limit services precisely
- Establish only the necessary connections
- Pick an entity type appropriate to each service (Deployment, ReplicationController, DaemonSet, etc.)
Before we move forward with explaining the process, take a moment to picture the real purpose of your migration:
- Why do you plan to migrate to Kubernetes?
- Will you have the resources to modify your application’s logic if it needs to be changed? Is your application container-native by design?
- In what ways do you expect Kubernetes to benefit you?
There are several questions to consider before you begin to migrate to Kubernetes, as the process can be very difficult for developers and for the business in general. A whole mountain of rework lies just below the tip of the iceberg, so make sure your team is prepared for it and understands what is going on. Otherwise, even a smooth Kubernetes migration may have no practical effect on the design or the business logic of your application.
This article explains some of the reasons to switch from Docker Compose to Kubernetes, offers tips on deploying your applications with other tools, and shows how to simplify Docker to Kubernetes migration using Kompose.
Read also: Microservices Decomposition Strategy in 2021
What Is Kubernetes?
Kubernetes is an open-source orchestration system that automates the deployment, scaling, and management of containerized applications. There are posts from the Sensu team that cover how Kubernetes works, but we won’t go into depth here. Throughout the rest of this article, we’ll assume you’ve decided you’re ready to migrate (or at least, you’re ready to start thinking about it).
How you use Kubernetes depends on your knowledge of its features. The reason for its use is also important. These are general guidelines for the ideas and opinions in this post. They are not a replacement for your judgment in making sure your decisions suit your goals (after all, you are the one who knows your applications best). Consider your unique context when thinking about migration.
Decomposing Your Application and Reinventing It as a Kubernetes-Native App
Now we move, logically, from strategy to tactics. Let’s look at how you can put the answers to the “goals” questions above to work.
Planning and visualization of your current architecture
Documentation, visualization tools, and broad planning let you estimate the migration’s time and resource scope. Draw a schematic of your app’s parts and the connections between them; a deployment chart, a hexagonal-architecture view, or even a simple data flow chart will serve the purpose. The result of this step is a full map of modules and their connections, from which you can understand exactly what is migrating to Kubernetes.
Application architecture must be rethought
Although you may feel enthusiastic at this stage (“Let’s rewrite everything!”), keep your cool and order your modules’ migration from simplest to most difficult. Training your team this way prepares them for even the most challenging projects. Alternatively, organize your plan another way: start with the most important modules, such as those that handle business logic, treat the rest as secondary, and work on them after the core of the app has migrated to K8s.
Your job here will be to solve the following tasks:
- Determine the best approach to logging in your app;
- Select the way your sessions will be stored (e.g., in shared memory);
- Decide how you will handle file storage in a K8s-native app;
- Test, troubleshoot, and reflect on the new challenges you face with your application.
There might be occasions when some of your stages shrink or expand in length, but that’s fine. It might be necessary for you to hire additional staff and increase the expertise of your team. Business migration is unique to each organization.
Here is a brief description of what follows.
- Containerization stage. If your application workloads are not currently running in containers, Docker is likely your tool of choice. It’s the obvious option: intuitive, well supported, and widely used. However, it isn’t the only tool for containerization. Use Docker to create images containing your applications and their dependencies. Ideally, this should happen in an automated continuous integration pipeline that pushes versioned images to a Docker registry (which can be private).
By now, you should be ready to begin using Kubernetes. We won’t go into more detail about this step since this post is focused on Kubernetes migration instead of Docker fundamentals. Getting familiar with Docker can be done using a variety of great resources.
- Choose Kubernetes objects for each module based on your app’s module schematic. There are several types and options for each component, so this stage normally goes smoothly. Then create the mapped Kubernetes objects as YAML files.
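For instance, a stateless module mapped to a Deployment might be declared roughly like this; the names, image, port, and replica count are placeholders, not taken from the original article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend              # hypothetical module name
spec:
  replicas: 2                # run two copies for availability
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: myrepo/backend:1.0.0   # image pushed by your CI pipeline
          ports:
            - containerPort: 5000
```

Other module types would map to other objects, such as a DaemonSet for per-node agents or a Job for one-off tasks.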
- Adapting databases. This is commonly done by simply connecting the new Kubernetes-based application to your existing database. Later, once your Docker containers are up and running, you can make the executive decision to containerize your entire app, database included.
The adoption of Kubernetes should now be clearer to you. Next, we’ll dig deeper into the technical side: how to migrate Docker containers to Kubernetes, application migration best practices, and Kubernetes use cases.
Why Move from Docker Compose to Kubernetes
The following factors explain why you should migrate from Docker Compose to Kubernetes.
Single-cluster limitation of Compose
Docker Compose containers run on a single host. When multiple hosts or cloud providers are used to run an application workload, this presents a network communication challenge. Using Kubernetes, you can manage multiple clusters and clouds more easily.
Single point of failure in Compose
Docker Compose-based applications require that the server running them stays up for the applications to keep working, which makes that server a single point of failure. Kubernetes, by contrast, typically runs in a highly available (HA) configuration, with multiple servers deploying and maintaining the applications, and it also scales nodes based on resource utilization.
Extensibility of Kubernetes
Platforms like Kubernetes are highly extensible, which is why they are popular with developers. Pods, Deployments, ConfigMaps, Secrets, and Jobs are some of the native resource definitions, each serving a different purpose in running clustered applications. The Kubernetes API server also lets you add custom resources via CustomResourceDefinitions.
Using Kubernetes, software teams can create their own operators and controllers. Controllers are processes that run within a Kubernetes cluster following the control loop pattern: they regulate the cluster’s state to keep it in the desired state. By talking to the Kubernetes API, users can create custom controllers and operators that manage CustomResourceDefinitions.
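As an illustration, a minimal CustomResourceDefinition might look like the following; the group, kind, and `schedule` field are invented for the example:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:        # custom field a backup operator could act on
                  type: string
```

Once applied, a custom controller can watch `Backup` objects through the API server and reconcile them like any built-in resource.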
Great ecosystem and open source support of Kubernetes
Kubernetes is a powerful platform that continues to grow rapidly among enterprises. Over the past two years, it has ranked among the most popular platforms and the most desired among software developers. It stands out among container orchestration and management tools.
Cloud-native container orchestration has become synonymous with Kubernetes. In addition to having more than 1,800 contributors, there are more than 500 meetups worldwide, and more than 42,000 users are members of the #kubernetes-dev channel on Slack. The CNCF Cloud Native Landscape also demonstrates the robust ecosystem that Kubernetes has. By using these cloud-native software tools, Kubernetes can run more efficiently and the complexity of the system can be reduced.
Migration Process Step-by-Step
By the end of this section, you will have converted a simple, two-tier containerized application, initially built for Docker Compose, to run in a Kubernetes environment. The frontend is React.js and the backend is Node.js. You can find the source code here.
Docker Compose configuration
Compose orchestrates multi-container applications using a single configuration file. Using this file, you can specify various details about the types of containers you wish to run, including build configurations, restart policies, volume settings, and networking configurations. Please find below the docker-compose file of the application that you will be translating.
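The original compose file is not reproduced here, so below is a minimal sketch of what it could look like for a React.js client and a Node.js backend; the service names, image names, and ports are assumptions, not taken from the article’s repository:

```yaml
version: "3"
services:
  client:
    image: myrepo/client:latest    # hypothetical Docker Hub image
    # build: ./client              # uncomment to build from source instead
    ports:
      - "3000:3000"
    depends_on:
      - backend
  backend:
    image: myrepo/backend:latest   # hypothetical Docker Hub image
    # build: ./backend
    ports:
      - "5000:5000"
```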
In this file, the client and backend containers are built from Docker Hub repository images. To modify the application source code and rebuild the images, comment out the lines related to images and uncomment the build configurations of the respective services.
Run the following command to test the application:
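The exact command is not shown in the original; with a standard Compose setup it would be something like:

```shell
# Start the application in the background (Compose v2 syntax;
# use `docker-compose up -d` with the older standalone binary)
docker compose up -d

# Confirm both containers are running
docker compose ps
```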
Changing the Docker Compose file
Converting the configuration file as-is would not achieve the desired result. Kompose generates a Pod and a Service for each Compose service, but those Services default to the cluster IP type, which only routes traffic within the cluster. To expose the application to external traffic, you need to add a certain label to the services in the Docker Compose file; this label specifies the type of Kubernetes Service that will front the Pods.
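For example, Kompose recognizes the `kompose.service.type` label. Adding it to the service you want exposed (the `client` service name here is an assumption) makes Kompose generate a LoadBalancer Service instead of a ClusterIP one:

```yaml
services:
  client:
    # ...existing configuration...
    labels:
      kompose.service.type: LoadBalancer   # or NodePort on bare clusters
```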
Below are instructions on how to install Kompose, which you can also find on the official website:
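On Linux, installation is a matter of downloading the binary; the version number below is an example, so check the releases page for the latest one:

```shell
# Download the kompose binary for Linux (example version)
curl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose

# On macOS, `brew install kompose` is an alternative

# Verify the installation
kompose version
```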
Building Kubernetes Manifests with Kompose
Next, run the commands below at the same level as docker-compose.yaml to create your Kubernetes manifest files.
The result will be as follows:
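The commands and their output are not reproduced in the original; a typical run looks roughly like this, with file names depending on your actual service names:

```shell
# Translate docker-compose.yaml into Kubernetes manifests
kompose convert

# Typical output (assuming client and backend services):
# INFO Kubernetes file "client-service.yaml" created
# INFO Kubernetes file "backend-service.yaml" created
# INFO Kubernetes file "client-deployment.yaml" created
# INFO Kubernetes file "backend-deployment.yaml" created
```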
Deploying resources to Kubernetes Cluster
Finally, you can declare your desired state in your cluster by using kubectl apply and specifying all files in the composition created by Kompose.
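Concretely, from the directory containing the generated manifests, that amounts to something like:

```shell
# Apply every manifest Kompose generated in the current directory
kubectl apply -f .

# Verify the Pods and Services came up
kubectl get pods,services
```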
Other Options to Consider
When deploying to Kubernetes, you can also use other tools. The following are a few to consider.
DevSpace is an open-source, Kubernetes-specific command-line interface (CLI) tool. It uses the same kube-context that you use for kubectl or Helm. Development can take place directly inside Kubernetes clusters, reducing the chance of configuration drift when deploying an application to a production Kubernetes environment. DevSpace supports Rancher, Amazon EKS, Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP), and other public clouds. You can find instructions on how to install DevSpace here.
Google created Skaffold as a CLI tool that manages developer workflows for building, pushing, and deploying their applications. While Skaffold continually deploys your application to your local Kubernetes cluster or remote Kubernetes cluster, you can focus on the ongoing changes to your application. Follow the steps below to install Skaffold. With the command skaffold init, a project can be configured to deploy to Kubernetes.
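The installation steps are not included in the original; on Linux, the official distribution is a standalone binary, installed roughly like this:

```shell
# Download the latest Skaffold binary for Linux
# (see the official docs for macOS and Windows)
curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
chmod +x skaffold
sudo mv skaffold /usr/local/bin

# Bootstrap deployment configuration for an existing project
skaffold init
```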
Our goal was to help you avoid the same mistakes we made while figuring Kubernetes out.
We recognize that a guide this short cannot describe and anticipate every nuance. That is why, when selecting the technologies and approaches best suited to a particular project, our team pays close attention to its peculiarities.
IT Outposts is available for any questions you might have. For businesses of all sizes, we provide Kubernetes Deployment Services and Infrastructure Migration Services.
Dmitry has 5 years of professional IT experience developing numerous consumer & enterprise applications. Dmitry has also implemented infrastructure and process improvement projects for businesses of various sizes. Due to his broad experience, Dmitry quickly understands business needs and improves processes by using established DevOps tools supported by Agile practices. The areas of Dmitry’s expertise are extensive, namely: version control, cloud platform automation, virtualization, Atlassian JIRA, software development lifecycle, Confluence, Slack, Service Desk, Flowdock, Bitbucket, and CI/CD.