The open architecture of Kubernetes' automated deployment system provides many benefits. You can run clusters on-premises, on your own machine, or with different cloud providers, minimizing network latency and reducing the risk of cluster failure.
You can add hosts and schedule Windows containers, or build an optimized cloud-native setup that unleashes the full potential of 5G applications. You can also implement your own networking model using Kubernetes CNI plugins.
What is a CNI?
CNI (Container Network Interface) is a specification and a set of libraries for writing plugins that configure network interfaces in Linux containers, along with a number of reference plugins.
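For illustration, here is what a typical CNI network configuration might look like: a JSON file the container runtime reads from a directory such as /etc/cni/net.d. The network name and subnet below are made-up examples:

```json
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The `type` field names the plugin binary the runtime will execute; `bridge` and `host-local` are reference plugins maintained in the containernetworking/plugins repository.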
Kubernetes Network Plugin Overview
Instead of customizing the platform's code, you can use an off-the-shelf plugin. There are several types of networking plugins in Kubernetes:
- CNI plugins: conform to the Container Network Interface (CNI) specification and are created with interoperability in mind.
- Kubenet plugin: implements a basic network via a cbr0 Linux bridge, built on the bridge and host-local CNI plugins.
By default, Kubernetes uses the kubenet plugin. It is simple, but limited in functionality. If additional functionality is required, such as IP filtering, isolation between namespaces, different load-balancing algorithms, or traffic mirroring, we recommend using a third-party CNI network plugin.
What is CNI in Kubernetes?
Kubernetes was created to support distributed workloads across clusters of machines. Here, CNI connects containers to the network and cleans up allocated resources when a container is deleted, acting as an intermediary between network providers and the Kubernetes pod network. Because of this, it is widely supported and simple to implement.
In addition to the specification, the CNI repository contains the Go library source code for integrating CNI into applications and an example command-line tool for executing plugins. A separate repository holds the reference plugins and a template for creating new ones. The template code makes it easy to create a CNI plugin for an existing container network, and CNI is also a good foundation for a clean-slate container network project.
Networking is central to any platform that runs multiple applications on shared machines. When applications share a host, each must use a distinct port, and coordinating ports at cluster scale causes all kinds of problems.
Dynamic port allocation is also complex: applications must discover their assigned ports, API servers must be able to insert dynamic addresses into configuration blocks, and services must find each other quickly and accurately. Kubernetes takes a different route to accomplish these tasks.
What Is a CNI Plugin?
The CNI plugin's role is to insert a network interface into the container's network namespace and make any necessary changes on the host. It then assigns the interface a dedicated IP address and configures routes by invoking the appropriate IP Address Management (IPAM) plugin.
CNI provides specifications for a variety of plugins covering different use cases. A plugin typically creates a virtual interface on the host the pods are running on and attaches it to the pod's network namespace. Plugins operate at different OSI network layers depending on the use case, and a plugin's speed and functionality depend on the layer at which it operates.
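To make the contract between runtime and plugin concrete, here is a minimal sketch of the plugin side of the CNI protocol as a POSIX shell function: the runtime invokes the plugin with the operation in the CNI_COMMAND environment variable and the network configuration JSON on stdin, and the plugin replies with a JSON result on stdout. The function name, the hard-coded IP, and the version numbers are illustrative only; a real plugin would configure interfaces inside $CNI_NETNS and delegate addressing to an IPAM plugin.

```shell
# Sketch of a CNI plugin's dispatch logic. A real plugin is an executable
# (e.g. installed into /opt/cni/bin) whose whole body is this dispatch.
cni_handle() {
  case "$CNI_COMMAND" in
    ADD)
      cat > /dev/null  # consume the network config JSON from stdin
      # A real plugin would create/move an interface into $CNI_NETNS and
      # invoke an IPAM plugin; we return a hard-coded illustrative result.
      echo '{"cniVersion": "0.4.0", "ips": [{"address": "10.22.0.5/16"}]}'
      ;;
    DEL)
      cat > /dev/null
      # Tear down the interface and release the IP here; DEL is expected
      # to succeed even if the corresponding ADD never completed.
      ;;
    VERSION)
      echo '{"cniVersion": "0.4.0", "supportedVersions": ["0.3.1", "0.4.0"]}'
      ;;
    *)
      echo "unsupported CNI_COMMAND: $CNI_COMMAND" >&2
      return 1
      ;;
  esac
}
```

Because the interface is just environment variables plus JSON over stdin/stdout, plugins can be written in any language, from shell scripts to Go binaries.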
How Does Kubernetes CNI Work?
Every pod is given a unique IP. There is no need to create explicit links between pods or to map container ports to host ports.
In many ways, pods behave like physical hosts or virtual machines. This applies in particular to port allocation, service discovery, configuration changes, load balancing, and application migration.
The basic requirements for such a network are:
- Pods can communicate with their "neighbors" on all nodes without NAT.
- Agents on a node (such as the kubelet) can communicate with all pods on that node.
This model is simple and makes it easier to move applications from virtual machines to containers.
The containers inside a pod share an IP and MAC address and can coordinate port usage over localhost, just as processes on a virtual machine do.
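As an illustration, consider a hypothetical pod with two containers (the names and images below are just examples). Both containers share one network namespace, so they get the same IP, can reach each other over localhost, and must agree not to bind the same port:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx            # listens on port 80 inside the pod
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox          # could reach the web server at localhost:80
    command: ["sh", "-c", "sleep 3600"]
```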
How to Install CNI for Kubernetes?
Kubernetes requires each container in the cluster to have a unique, routable IP. Kubernetes does not assign these addresses itself; that task falls to third-party solutions, and a large number of third-party projects implement this network model.
We at IT Outposts consider Flannel, an overlay network that provides a layer 3 fabric designed for Kubernetes, one of the most promising options among them. It is a true container networking workhorse from CoreOS!
Overlay networks come in handy when IP address space is scarce, when the underlying network cannot handle extra routes, or when additional management is required. For instance, an AWS VPC routing table formally supports up to 50 routes without performance impact. If you need more than 50 Kubernetes nodes, an overlay network that encapsulates the packets traveling between nodes will be a great help.
Flannel runs a small binary agent called flanneld on each host, and its job is to allocate a subnet lease to every host. It uses the Kubernetes API or etcd to store the network configuration, the allocated subnets, and auxiliary data (such as the hosts' public IP addresses). Data packets are forwarded by one of several backend mechanisms, such as VXLAN or a cloud-specific integration.
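Flannel's network configuration is itself a small JSON document (stored under a key in etcd, or in the kube-flannel ConfigMap when running on Kubernetes). A typical configuration selecting the VXLAN backend might look like this, with the CIDR being an example value:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

Each flanneld agent then carves a per-host subnet out of this cluster-wide network for its node's pods.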
Platforms like Kubernetes assume every container in the cluster has a unique, routable IP address. The advantage of this model is that it removes the port-mapping complications that arise from containers sharing a single host IP.
Flannel is responsible for providing IPv4 layer 3 networking between the nodes in a cluster. It does not control how containers attach to the host network, only how traffic flows between hosts. However, it does provide a CNI plugin for Kubernetes and an integration guide for Docker.
Flannel with the host-gw backend is built for high-performance networking. It has very few dependencies (in particular, it requires neither AWS nor a newer Linux kernel) and is very easy to install. Although the aws-vpc backend performs slightly better than host-gw, its 50-machine limit and tight coupling to Amazon AWS can be a decisive drawback.
Getting started with Kubernetes
The most hassle-free way to deploy Flannel is to pick a deployment tool or distribution that bundles it by default as an add-on. For example, Tectonic from CoreOS configures Flannel on the Kubernetes clusters it creates through the Tectonic Installer.
You can add Flannel to any existing Kubernetes cluster, though it is easiest to add it before any pods that use the pod network have started. When Flannel is deployed outside of Kubernetes, etcd is always used as the datastore. You can find detailed installation instructions here.
In AKS (Azure Kubernetes Service), the integrated network plugin can be used after activating the Advanced Networking option, letting you immediately deploy a Kubernetes cluster into an existing or new virtual network.
The importance of CNI is growing rapidly, and the current state of Kubernetes already points to great prospects for practical use. The ecosystem of plugins for the platform keeps expanding, so there is no point in waiting any longer; early adopters stand to gain the most. If you are looking for a sturdy network solution, feel free to contact us. IT Outposts specialists will be happy to help you create a network whose performance will surprise you.
Dmitry has 5 years of professional IT experience developing numerous consumer & enterprise applications. Dmitry has also implemented infrastructure and process improvement projects for businesses of various sizes. Due to his broad experience, Dmitry quickly understands business needs and improves processes by using established DevOps tools supported by Agile practices. The areas of Dmitry’s expertise are extensive, namely: version control, cloud platform automation, virtualization, Atlassian JIRA, software development lifecycle, Confluence, Slack, Service Desk, Flowdock, Bitbucket, and CI/CD.