Deep Dive into Kubernetes Networking: Part 1

Shiksha Engineering
8 min read · May 5, 2024

Author: Rishabh Dev Saini

Introduction

Kubernetes, born to revolutionize the management of containerized applications, serves as an intricate system designed to streamline deployment, scaling, and orchestration processes. In the realm of Kubernetes, containers emerge as fundamental entities, encapsulating an application’s code alongside all essential dependencies — ranging from the runtime environment to system tools, libraries, and binaries. A Kubernetes cluster orchestrates this system, comprising a master node responsible for overseeing containers across a group of worker nodes. Within this distributed system, networking is a critical facet that warrants careful consideration. A solid grasp of the Kubernetes networking model is therefore essential, paving the way for adept application execution, vigilant monitoring, and effective troubleshooting in the Kubernetes ecosystem.

How does networking work between machines?

Before delving into the intricacies of networking within a Kubernetes cluster, it’s important to grasp the fundamentals of networking between two machines. This knowledge serves as a crucial building block upon which we can expand our understanding and apply it to the realm of pods.

Imagine a scenario where two computers reside within an office building, and they need to communicate by exchanging messages. These computers are equipped with network interfaces that facilitate connectivity through a medium, effectively serving as the conduit for message transmission. When linked via a medium, such as an Ethernet cable, these computers can share messages using their respective MAC (Media Access Control) addresses as identifiers.

However, as we contemplate expanding this network to accommodate additional devices, a straightforward approach of physically connecting each new device to every existing one becomes infeasible and impractical. Imagine the challenges of connecting hundreds, or even thousands, of devices in this manner — it quickly becomes a daunting task.

To address this scalability conundrum, we turn to networking devices, specifically Switches and Routers. A switch, a vital piece of networking hardware, employs packet switching to efficiently receive and transmit data to the intended destination device, facilitating seamless communication within a network.

Now, envision a network where multiple devices can communicate among themselves, forming a local network. To take this a step further and establish connectivity between this local network and another similar network, we introduce a key player — the router. A router serves as the bridge that connects local networks and can also act as a gateway to the broader internet. With this configuration, we have successfully created a network encompassing multiple devices, the ability to interconnect with other local networks, and the means to access the vast expanse of the internet.

Applying the same idea to pods

When contemplating networking between pods within a Kubernetes cluster, the networking paradigm closely mirrors the concept we explored previously. Extending this notion to pods, we can visualize pods as analogous to the devices that formed our network earlier. For effective communication between pods, a crucial prerequisite is the provisioning of network interfaces within these pod entities.

Notably, these network interfaces take on a virtual form, representing virtual devices within the Kubernetes framework. To establish inter-pod connectivity, a virtual medium — comprising virtual Ethernet pairs (veth pairs) — comes into play, seamlessly connecting pods to one another. This virtual networking infrastructure is made possible through facilities built into the Linux operating system.

Moreover, to facilitate connectivity between this virtual networking system and external networks, the system hosting this configuration assumes the role of a gateway. Leveraging its own network interface, it acts as the medium, enabling communication between the Kubernetes pod network and the broader network landscape. This approach harmonizes Kubernetes networking with established networking principles, fostering efficient communication and connectivity within the cluster.

What does Kubernetes provide to enable networking?

Kubernetes does not itself provide a networking solution for pods; it expects cluster administrators to implement one. However, to maintain uniformity, it does define a networking model, which is a set of rules. Anybody can implement a networking solution by following those rules.

Kubernetes Networking Model

The set of rules defined by Kubernetes is:

  1. Pods can communicate with all other pods on any node without NAT
  2. Agents on a node (like system daemons and the kubelet) can communicate with all pods on that node

This model has been defined by Kubernetes in order to enable simple and smooth porting of applications from Virtual Machine (VM) architectures to containers. If an application previously ran in a VM, the VM had an IP address and could communicate with other VMs in the network; pods follow the same basic model.

Network Namespaces

In the realm of networking, whether dealing with physical or virtual machines, each entity possesses an Ethernet device that serves as the medium for incoming and outgoing traffic. When it comes to pods within a Kubernetes cluster, we seamlessly integrate the Ethernet device of each pod into a designated network namespace. This network namespace serves as a logical replica of the host system’s network stack, providing an isolated virtual environment for each pod. Within this encapsulated network namespace, we establish autonomy over essential networking components such as IP addresses, network interfaces, routing tables, and firewalls, mirroring the characteristics of a host machine.
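
To make this concrete, here is a minimal sketch, assuming a Linux host with the iproute2 tools and root privileges, that creates a network namespace and confirms its interface list is isolated from the host’s; the namespace name pod-ns is purely illustrative:

```python
# Minimal sketch: create a network namespace and observe its isolation.
# Assumes a Linux host with iproute2 installed and root privileges.
import subprocess

def run(cmd):
    """Run a command and return its stdout as text."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

run(["ip", "netns", "add", "pod-ns"])        # create an isolated network stack
print(run(["ip", "link", "show"]))           # the host sees its usual interfaces
# Inside the new namespace only a (down) loopback interface exists:
print(run(["ip", "netns", "exec", "pod-ns", "ip", "link", "show"]))
run(["ip", "netns", "delete", "pod-ns"])     # clean up
```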

With the capability to create network isolation for each pod through network namespaces in place, the next imperative step is to enable inter-pod communication. This is achieved through the utilization of veth pairs, a powerful facility furnished by the Linux operating system. For a simple mental model, imagine pods as actual machines, network namespaces as those machines’ network stacks, virtual network interfaces as each machine’s network ports (used to establish connectivity to a network), and veth pairs as the virtual Ethernet cables connecting machines.
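
Under the same assumptions (Linux, iproute2, root), the following sketch wires two namespaces together with a veth pair, the virtual equivalent of running an Ethernet cable between two machines; all names and addresses are illustrative:

```python
# Sketch: connect two network namespaces with a veth pair and verify
# connectivity. Assumes Linux, iproute2, and root privileges.
import subprocess

def run(cmd):
    subprocess.run(cmd.split(), check=True)

run("ip netns add pod-a")
run("ip netns add pod-b")

# One virtual cable with two ends: veth-a <--> veth-b.
run("ip link add veth-a type veth peer name veth-b")
run("ip link set veth-a netns pod-a")
run("ip link set veth-b netns pod-b")

# Address each end on the same subnet and bring the links up.
run("ip netns exec pod-a ip addr add 10.0.0.1/24 dev veth-a")
run("ip netns exec pod-b ip addr add 10.0.0.2/24 dev veth-b")
run("ip netns exec pod-a ip link set veth-a up")
run("ip netns exec pod-b ip link set veth-b up")

# The two isolated "machines" can now exchange packets.
run("ip netns exec pod-a ping -c 1 10.0.0.2")
```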

While network namespaces effectively provide isolation for pods in the networking realm, a significant challenge surfaces. Whenever a new pod is instantiated, the accompanying requirements include creating a network namespace, provisioning virtual network interfaces, and establishing veth pairs. Additionally, the other ends of these veth pairs must be linked to a virtual bridge. Moreover, these virtual networking entities need to be gracefully dismantled upon pod deletion. Given the ephemeral nature of pods, which can be dynamically replaced, automating the creation and removal of virtual networking resources becomes paramount, alleviating the burden of manual intervention in this intricate process.
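
To appreciate how much plumbing each pod demands, here is a hedged sketch of those per-pod steps against one shared bridge (the bridge name, interface names, and addresses are illustrative); every step must also be reversed on pod deletion, and automating exactly this choreography is what motivates the next section:

```python
# Sketch of the per-pod plumbing described above: namespace, veth pair,
# and bridge attachment. Assumes Linux, iproute2, and root privileges.
import subprocess

def run(cmd):
    subprocess.run(cmd.split(), check=True)

run("ip link add cbr0 type bridge")   # one shared virtual bridge on the host
run("ip link set cbr0 up")

def add_pod(name, ip):
    run(f"ip netns add {name}")                               # 1. namespace
    run(f"ip link add veth-{name} type veth peer name tmp0")  # 2. veth pair
    run(f"ip link set tmp0 netns {name}")                     # 3. pod end inside
    run(f"ip netns exec {name} ip link set tmp0 name eth0")
    run(f"ip netns exec {name} ip addr add {ip}/24 dev eth0")
    run(f"ip netns exec {name} ip link set eth0 up")
    run(f"ip link set veth-{name} master cbr0")               # 4. host end on bridge
    run(f"ip link set veth-{name} up")

add_pod("pod-a", "10.244.1.2")
add_pod("pod-b", "10.244.1.3")
# pod-a and pod-b now reach each other through the bridge; deleting a
# pod means undoing every step above, which is why automation matters.
```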

Enter CNI!

As we’ve just observed, the manual creation of network namespaces, veth pairs, and virtual network interfaces, along with the provisioning of routing information for each pod, proves to be a monumental task. The need for automation led to the establishment of a comprehensive set of standards outlining how a system or plugin should be developed to address the networking challenges encountered in container runtime environments. This framework is known as the Container Network Interface (CNI).

The CNI project is maintained by the Cloud Native Computing Foundation (CNCF). Within this project, you’ll find two essential components: the CNI specification and a collection of reference and example plugins. The specification defines the configuration format passed to a CNI plugin when it is invoked, the actions the plugin should take with that information, and the expected result it should return. The reference and example plugins serve as invaluable resources for understanding plugin functionality and for building new ones.
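
As a concrete illustration, the sketch below writes a network configuration of the kind the specification describes, modeled on the project’s reference bridge and host-local plugins; the field values are illustrative, and the CNI specification remains the authoritative reference for the format:

```python
# Sketch: a CNI network configuration modeled on the reference "bridge"
# plugin with "host-local" IPAM. All values are illustrative.
import json

net_conf = {
    "cniVersion": "0.4.0",      # version of the CNI spec the plugin should follow
    "name": "examplenet",       # network name, unique on the host
    "type": "bridge",           # name of the plugin binary to invoke
    "bridge": "cni0",           # Linux bridge the pods attach to
    "isGateway": True,          # give the bridge an IP so it can act as gateway
    "ipMasq": True,             # NAT traffic leaving the pod network
    "ipam": {
        "type": "host-local",   # IPAM plugin that allocates from a local range
        "subnet": "10.244.0.0/16",
    },
}

# Runtimes conventionally read such files from /etc/cni/net.d/.
with open("10-examplenet.conf", "w") as f:
    json.dump(net_conf, f, indent=2)
```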

At the heart of this ecosystem lies the CNI plugin, a software entity entrusted with introducing network interfaces into the container network namespace. It also shoulders the responsibility of implementing any requisite changes, such as maintaining routing tables. Container runtimes such as containerd and CRI-O rely heavily on CNI to facilitate network operations within containers. These runtimes invoke the CNI plugin with a range of actions, such as ADD (to establish new network interfaces for containers), DEL (to remove network interfaces upon container deletion), and CHECK (to verify an existing network setup), all conveyed via a JSON payload.

In practice, whenever a container in Kubernetes needs network-related operations performed, the kubelet directs the container runtime to invoke the CNI plugin with the appropriate command. Additionally, the runtime furnishes pertinent network configuration details and container-specific data to the plugin via a JSON payload. The CNI plugin then performs the prescribed operations and returns the results to the runtime. Notably, for a Kubernetes pod, the CNI plugin is invoked twice: once to set up the pod’s loopback interface and again to configure its eth0 interface.
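
The sketch below illustrates that execution protocol: per the CNI specification, the action and container details travel in environment variables (CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME, CNI_PATH) while the network configuration arrives on stdin; the plugin path, container ID, and namespace path shown are hypothetical placeholders:

```python
# Sketch of how a runtime invokes a CNI plugin binary, following the
# CNI execution protocol. Paths and IDs are illustrative placeholders.
import json
import os
import subprocess

def invoke_cni(plugin, command, container_id, netns, net_conf):
    env = dict(
        os.environ,
        CNI_COMMAND=command,           # "ADD", "DEL", or "CHECK"
        CNI_CONTAINERID=container_id,  # unique ID of the container
        CNI_NETNS=netns,               # path to the container's network namespace
        CNI_IFNAME="eth0",             # interface to create inside that namespace
        CNI_PATH="/opt/cni/bin",       # where plugin binaries live
    )
    result = subprocess.run(
        [plugin], env=env, input=json.dumps(net_conf),
        capture_output=True, text=True, check=True,
    )
    # ADD returns a JSON result (interfaces, IPs, routes); DEL may return nothing.
    return json.loads(result.stdout) if result.stdout.strip() else None

# Illustrative usage with a config like the one sketched earlier:
# invoke_cni("/opt/cni/bin/bridge", "ADD", "abc123",
#            "/var/run/netns/pod-ns", net_conf)
```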

Why are there multiple plugins available?

Within the Kubernetes ecosystem, a plethora of plugins is readily available, each adept at performing analogous functions and typically operated as daemons. These plugins exhibit variations in design and methodology concerning encapsulation, routing, data storage, encryption, and the scope of support they offer. The inherent complexity of networking, coupled with diverse user requirements, drives these CNI (Container Network Interface) plugins to adopt distinct approaches to cater to a broad spectrum of user and system needs.

Two standout CNI plugins that have gained widespread popularity are Calico and Flannel:

Calico utilises the Border Gateway Protocol (BGP) as its default mechanism for routing network packets among nodes, falling back to IP-in-IP encapsulation only where direct routing is not possible. BGP enables native movement of data packets between nodes, delivering superior performance compared to heavier overlay backends like VXLAN. Notably, Calico’s defining feature lies in its robust support for network policies, a crucial asset for enhancing network security within the cluster.

Flannel, on the other hand, deploys a straightforward overlay network across all interconnected nodes. Its default backend relies on VXLAN, although it can be reconfigured to use UDP. While Flannel offers simplicity and efficiency, it lacks some of the advanced features found in other plugins, such as the ability to configure network policies and firewalls.

Broadly categorising CNI plugins, we can distinguish them into two primary groups:

  1. CNI Plugins with Basic Network Setup: These plugins adhere to a fundamental network setup approach and allocate IP addresses from the cluster’s IP pool.
  2. CNI Plugins with Overlay Networking: This category includes plugins that implement overlay networking techniques, often relying on technologies like VXLAN.

Selecting the appropriate CNI plugin hinges on a thorough examination of your specific use case and the feature set offered by each plugin. For instance, if your objective is to construct a large-scale cluster with an extensive number of nodes, a plugin supporting overlay networks like Flannel might be the preferable choice. Conversely, if network security takes precedence in your cluster, opting for a plugin equipped with robust network policies, such as Calico, would be the prudent decision.

In this part of the Kubernetes Networking series, we’ve explored the means of enabling networking within a Kubernetes cluster and the requisite components. In the next part of this series, we will delve into the journey of a data packet as it traverses the Kubernetes landscape, facilitating communication between distinct Kubernetes agents, such as pods and services.
