Calico is a Layer 3 datacenter networking solution that integrates well with orchestrators such as Kubernetes and OpenStack, and it is one of the most commonly used CNI plugins for Kubernetes. This article introduces Calico's core components and architecture.
Basic components
Calico is composed of the following core components:
- Felix, the Calico agent, running on every machine that hosts endpoints
- Orchestrator plugin, integrating Calico into different orchestrator code
- etcd, used for storing data
- BIRD, BGP agent responsible for publishing routing information
- BGP Route Reflector, optionally deployed in large-scale environments to avoid a full BGP mesh
The following sections will introduce these components one by one.
Felix
Felix is a daemon that runs on every machine that provides endpoints, typically hosts running VMs or containers. Felix is responsible for programming routes and ACLs, and in general for everything required on the host to provide the desired connectivity for the endpoints on that host.
Depending on the underlying platform, Felix performs the following tasks:
Network Interface Management
Felix programs information about endpoints' network interfaces into the kernel so the kernel can correctly handle traffic from each endpoint. In particular, it ensures that the host responds to ARP requests from each endpoint with the host's MAC address, and enables IP forwarding on the interfaces it manages.
In addition, Felix monitors newly created and deleted network interfaces and immediately configures the relevant information upon detection.
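As a hedged illustration (the interface name below is hypothetical, and exact settings vary by Calico version), the per-interface kernel state Felix manages resembles the following sysctl entries:

```shell
# Illustrative only: the kind of kernel settings Felix manages on a node.
# "cali1a2b3c4d5e6" is a made-up veth name; real names are auto-generated.
sysctl -w net.ipv4.ip_forward=1                      # let the host forward endpoint traffic
sysctl -w net.ipv4.conf.cali1a2b3c4d5e6.proxy_arp=1  # host answers ARP on the workload's behalf
```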
Route Configuration
Felix programs routes to the endpoints on its host into the Linux kernel FIB (Forwarding Information Base), ensuring that packets destined for those endpoints are forwarded correctly.
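For example (all addresses and interface names below are made up), the resulting routing table contains /32 host routes pointing at each local workload's veth interface, alongside routes to remote nodes learned via BGP:

```shell
# Illustrative `ip route show` output on a Calico node (values are hypothetical):
ip route show
# 10.244.1.12 dev cali1a2b3c4d5e6 scope link          <- local workload, programmed by Felix
# 10.244.1.13 dev cali9f8e7d6c5b4 scope link          <- local workload, programmed by Felix
# 10.244.2.0/24 via 192.168.0.12 dev eth0 proto bird  <- remote node's pods, learned via BGP
```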
ACL Configuration
Felix also configures relevant ACLs within the Linux kernel. ACL configurations are used to ensure that only authorized traffic can be forwarded between endpoints, preventing bypass of Calico’s security policies.
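On Linux these ACLs are realized as iptables rules in Felix-managed chains. A simplified, illustrative sketch of what they look like (the cali- chain names follow Calico's naming convention, but the excerpt is heavily trimmed):

```shell
# Simplified, illustrative excerpt of Felix-managed iptables chains:
iptables -L cali-FORWARD -n
# Chain cali-FORWARD (1 references)
# target                 prot  source     destination
# cali-from-wl-dispatch  all   0.0.0.0/0  0.0.0.0/0   /* policy for traffic leaving workloads */
# cali-to-wl-dispatch    all   0.0.0.0/0  0.0.0.0/0   /* policy for traffic entering workloads */
```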
Status Reporting
Finally, Felix provides network status information to report any configuration issues or problems. This data is stored in etcd so that operators and other components can stay informed about the current state.
Orchestrator Plugin
The orchestrator plugin is not a single component; it varies with the underlying orchestration platform. For example, Kubernetes uses the Calico CNI plugin, while OpenStack uses the Calico Neutron ML2 mechanism driver. These plugins let users manage and operate Calico networking through their platform's native tooling.
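In the Kubernetes case, the integration point is a CNI configuration file on each node that tells the kubelet to invoke the Calico CNI plugin. A trimmed, illustrative example (field values here are placeholders; the real file is installed by Calico and contains more fields):

```shell
# Trimmed, illustrative CNI network config on a Calico-enabled node:
cat /etc/cni/net.d/10-calico.conflist
# {
#   "name": "k8s-pod-network",
#   "cniVersion": "0.3.1",
#   "plugins": [
#     { "type": "calico", "etcd_endpoints": "http://127.0.0.1:2379", ... },
#     { "type": "portmap", "snat": true }
#   ]
# }
```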
Orchestrator plugin responsibilities include:
API Translation
Each platform has its own network-related API. The main function of the orchestrator plugin is to translate these APIs into Calico-specific data models and store them in the backend storage.
Report (Feedback)
If needed, the orchestrator plugin will report Calico network status to the platform, for example, whether Felix is running normally.
etcd
etcd is a distributed key-value store that Calico uses as its datastore and for synchronizing components. Depending on the deployment, etcd may be Calico's primary datastore, or it may hold a lightweight mirror of data owned by the platform's own datastore (for example, the Kubernetes API server).
etcd performs the following roles:
Data Storage
etcd provides Calico with a distributed, consistent, and highly available storage backend. This lets the network always operate from a "known good" state and degrade gracefully when individual components fail.
Communication
etcd also acts as a communication bridge between components. Each component watches etcd's key space so that it receives every update and can react to configuration changes quickly.
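The shape of the key space components watch can be sketched as follows (paths follow Calico's v3 data model; the exact layout varies between Calico versions and the listing is trimmed):

```shell
# Illustrative: listing Calico's key space in etcd (v3 data model, trimmed):
etcdctl get /calico --prefix --keys-only
# /calico/resources/v3/projectcalico.org/felixconfigurations/default
# /calico/resources/v3/projectcalico.org/ippools/default-ipv4-ippool
# /calico/resources/v3/projectcalico.org/workloadendpoints/...
```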
BGP Client (BIRD)
Calico deploys a BGP client on every node that runs Felix. The BGP client reads the routing state that Felix programs into the kernel and distributes it around the data center.
The BGP client performs the following roles:
Route Distribution
When Felix installs routes into the FIB, the BGP client advertises this information to neighboring nodes with established BGP connections. This ensures all traffic flows are efficiently routed within the data center.
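BIRD's peerings are driven by configuration that Calico generates (via confd). A heavily trimmed, illustrative excerpt of what a generated peering stanza looks like (the neighbor address and AS number are made up):

```shell
# Heavily trimmed, illustrative excerpt of a confd-generated BIRD config:
cat /etc/calico/confd/config/bird.cfg
# protocol bgp Mesh_192_168_0_12 {
#   neighbor 192.168.0.12 as 64512;  # node-to-node mesh peer (hypothetical values)
#   import all;                      # accept routes learned from the peer
#   export filter calico_export;     # advertise local workload routes (name illustrative)
# }
```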
BGP Route Reflector (BIRD)
In large deployments, standard BGP becomes unwieldy because every client must peer with every other client, a full mesh whose number of connections grows quadratically with the number of nodes. Large-scale Calico environments therefore deploy a BGP Route Reflector instead.
The BGP Route Reflector provides a centralized point for all BGP clients to connect, avoiding the need for direct connections between every pair of nodes. In Calico, the Route Reflector typically uses BIRD.
The BGP Route Reflector performs the following roles:
Centralized Route Distribution
When a Calico node's BGP client advertises routes from its FIB to the Route Reflector, the Reflector re-advertises those routes to the other nodes in the data center.
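With calicoctl, switching from the full mesh to a reflector amounts to disabling the node-to-node mesh and declaring a global BGPPeer. A hedged sketch (the peer address and AS number below are placeholders):

```shell
# Hedged sketch: disable the node-to-node mesh, then peer every node with a
# route reflector instead (peerIP and asNumber are placeholder values):
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false
EOF
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: global-route-reflector
spec:
  peerIP: 192.168.0.100
  asNumber: 64512
EOF
```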
Summary
This section has described the various Calico components and their respective roles. The next section will discuss several architectural patterns used by Calico in IP Interconnect Fabric.
Reference
https://docs.projectcalico.org/v3.7/reference/architecture/