Cloud Native Declarative OSI Principles
The best practices of cloud native infrastructure writ large should be applied to networking infrastructure in the small. When describing network infrastructure in a cloud native manner, the specific challenges of provisioning each OSI layer need to be addressed in order to minimize the toil usually associated with provisioning, deploying functionality into, and managing complex networks. Cloud native networking addresses provisioning the fabric at layer 1 (physical layer) and some of layer 2 (data-link layer: physical and virtual layer 2 switches), and providing cloud native network functions (CNFs) that are the orchestrated implementation of layer 2 (data-link layer: organization of data units (frames), error detection, and flow control), layer 3 (network layer, including data planes and control planes), layer 4 (transport layer), and the application layers: 5 (session layer), 6 (presentation layer), and 7 (application layer).
P1 - If a pipeline provisions network infrastructure, it will be provisioned and managed using declarative configuration.
Network infrastructure can be separated into the underlying network fabric (underlay) and the application or workload network (overlay). The establishment of an underlay network consists of the provisioning and configuration that resides at the lower OSI layers, such as the implementation of the physical or virtual OSI layer 1 (physical media, interconnects such as buses and layer 1 switches, network adapters, and other mechanisms) or physical or virtual layer 2 (layer 2 switches, bridges, etc.). The application (overlay) network functionality is deployed onto the underlying infrastructure network.
P2 - If a pipeline provisions physical network layer 1 or layer 2 infrastructure, it will be provisioned immutably.
Configuration relating to layers 1 and 2 tends to take the form of operating system boot options, machine-specific BIOS settings (e.g. SR-IOV BIOS settings), or a configuration file for traditional layer 1 devices. These settings can be for physical or virtual hardware. In order to provision physical hardware immutably, it must be taken offline, reset to a known state (e.g. a networking device should get a 'flashed' or 'pushed' complete replacement for its artifact updates), and have any of its patches applied sequentially. At this point new configuration (stored in a templated format and maintained in a version control system) can be applied, with the network infrastructure element then being ready for use. Both artifacts and configuration should be maintained independently of the physical or virtual device.
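The offline-reset-patch-configure sequence above can be sketched in code. This is a minimal, hypothetical model: the Device class and its fields are illustrative stand-ins for a real device-management API, not an existing library.

```python
# Hypothetical sketch of an immutable layer 1/2 provisioning flow.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    state: str = "online"
    firmware: str = ""
    patches: list = field(default_factory=list)
    config: str = ""

def provision_immutably(device, base_image, patches, rendered_config):
    """Reset to a known state, then layer versioned changes in order."""
    device.state = "offline"          # 1. take the element out of service
    device.firmware = base_image      # 2. flash a complete artifact replacement
    device.patches = []               # 3. known state: no stray patches remain
    for patch in patches:             #    apply patches sequentially
        device.patches.append(patch)
    device.config = rendered_config   # 4. apply templated, version-controlled config
    device.state = "ready"            # 5. only now is the element ready for use
    return device

switch = provision_immutably(
    Device("tor-switch-01"),
    base_image="nos-v2.4.1",
    patches=["sec-2024-01"],
    rendered_config="vlan 100\ninterface eth0\n",
)
print(switch.state)  # ready
```

The key property is that no step edits the device in place relative to an unknown prior state: every run starts from the flashed artifact, so the result is reproducible.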
P3 - If a cloud native network is provisioned, it will encompass the provisioning of an infrastructure (underlay) network and an application/workload (overlay) network.
Some decisions, such as how to manage (or avoid) the Ethernet churn of hundreds of thousands of endpoints, must be made with respect to the configuration of the underlying networking hardware and the topology of cloud native networks. This infrastructure (underlay) network serves as the foundation which supports the higher-level (overlay) networks that will be provided to applications. The components of the underlying infrastructure network (whether physical or virtual layer 1 and layer 2) operate at a different rate of change to, have different concerns from, and must not interfere with (e.g. degrade the performance or quality of service of) the use cases of the application/workload network. Another way of stating this is that the underlay network must be provisioned and managed in a way that alterations to its deployment do not conflict with the overlay networks being consumed by applications.
P4 - If a CNF has specified a set of preferred local mechanisms, the infrastructure will provide those mechanisms to the CNF in the order of preference specified, should the infrastructure support the requested mechanism.
Some CNFs may need to declare the mechanisms (Linux interface, memif, etc.) that they support so that the orchestrator can decide the most efficient way to implement the CNF. This may include the selection of a mechanism based on affinity (e.g. the availability of an interface type between two endpoints that reside in the same host). The CNF's preference for a specific type of local mechanism does not supersede the principle of immutability. Mechanisms of any type should be treated like any other resource type. If said resource is not available, then the CNF should not be scheduled.
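The selection logic implied here is a simple ordered match. The sketch below is illustrative only; the mechanism names come from the x-factor CNF discussion in the notes, and the function itself is a hypothetical stand-in for an orchestrator's scheduling step.

```python
# Illustrative only: pick a CNF's interface mechanism from its declared
# order of preference, falling through to "do not schedule".

def select_mechanism(cnf_preferences, infra_supported):
    """Return the first preferred mechanism the infrastructure supports,
    or None, in which case the CNF should not be scheduled."""
    for mechanism in cnf_preferences:
        if mechanism in infra_supported:
            return mechanism
    return None  # treat like any other unavailable resource

cnf_prefs = ["memif", "vhost-user", "linux-interface"]  # CNF's stated order
node_supports = {"sr-iov", "vhost-user", "linux-interface"}

chosen = select_mechanism(cnf_prefs, node_supports)
print(chosen)  # vhost-user
```

Note that the node's support set does not override the CNF's ordering: memif is skipped only because the node lacks it, and vhost-user wins over linux-interface because the CNF listed it first.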
P5 - Regardless of whether a CNF is location dependent, affinity aware, or location agnostic, it should be deployed using either the phoenix or canary deployment patterns.
Some layer 1 and layer 2 cloud native network functions may need location-specific information in order to be provisioned (i.e. they can't be configured to use service discovery). When this is the case, the design of that cloud native network function should support the phoenix or canary deployment patterns in order to do a phased rollout of the equipment with the new changes. The blue-green deployment pattern should not be used, as it implies non-immutability.
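A phased rollout of this kind reduces to replacing elements one at a time with freshly provisioned (phoenix-style) instances and halting on failure. The sketch below is a toy model under stated assumptions: the deploy and health-check functions are hypothetical placeholders, not a real orchestrator API.

```python
# Minimal canary-rollout sketch for location-dependent network functions.

def canary_rollout(elements, new_version, deploy, healthy):
    """Replace elements one by one; stop the rollout on the first failure."""
    rolled_out = []
    for element in elements:
        replacement = deploy(element, new_version)  # fresh immutable instance
        if not healthy(replacement):
            return rolled_out, False  # halt: remaining elements untouched
        rolled_out.append(replacement)
    return rolled_out, True

# Toy deploy/health functions for the example.
deploy = lambda elem, ver: {"name": elem, "version": ver}
healthy = lambda inst: True

done, ok = canary_rollout(["vtep-a", "vtep-b"], "v2", deploy, healthy)
print(ok, len(done))  # True 2
```

Because each replacement is a new instance rather than an in-place edit, this pattern stays consistent with the immutability principles P2 and P7/P8.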
P6 - If a CNF has an API defined, the API will be defined using the most declarative part of the declarative spectrum possible.
When using declarative configuration, the overall outcome is defined. There is a sense in which location is imperative (designating how instead of what) because it encompasses 'how' to get to a destination (e.g. hardcoded IPs or subnets). To a lesser degree, affinity (the property of a component that must be 'close' to another component, such as a special type of network card) is imperative as well. A declarative spectrum for configuration emerges, with no location-specific information on one side (the most declarative) and hardcoded subnets on the other (the least declarative). This is not to say that technologies such as affinity/anti-affinity cannot be declared as a desired end state, just that the CNF's API should not be specifying to the underlying infrastructure how to achieve these ends. When designing cloud native network functions, the configuration should be as declarative as possible.
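The two ends of the spectrum can be shown side by side. Both configurations below are hypothetical (the keys and values are invented for illustration); the point is the shape of the data, not any real CNF schema.

```python
# The declarative end: only the desired outcome and a declared end state.
most_declarative = {
    "connect": {"from": "cnf-firewall", "to": "cnf-router"},
    "affinity": "same-host-preferred",   # a 'what', resolved by the orchestrator
}

# The imperative end: hardcoded location details ('how' leaks into the API).
least_declarative = {
    "connect": {"from": "10.0.1.15", "to": "10.0.2.7"},  # hardcoded IPs
    "route_via": ["10.0.1.1", "10.0.3.1"],               # hardcoded path
}

# A simple check an orchestrator might make: reject configurations that
# leak 'how' details such as literal IPv4 addresses.
import re

def is_location_free(config):
    return not re.search(r"\d+\.\d+\.\d+\.\d+", str(config))

print(is_location_free(most_declarative), is_location_free(least_declarative))
# True False
```

In the declarative case the orchestrator, not the CNF, resolves names and affinity into concrete addresses and paths, which is exactly the separation P6 asks for.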
P7 - If an infrastructure element that is part of a CNF is virtual layer 1, it will be immutable.
Virtualized cloud native networking infrastructure components that are part of the physical layer 1, such as virtual network cards, should have configuration that is immutable.
P8 - If an infrastructure element that is part of a CNF is virtual layer 2 or higher, it will be immutable and orchestrated.
Virtual layer 2 and higher network functions, such as layer 2 MPLS VPNs, should be provisioned immutably. Configuration for said network services should be captured in a template, stored with an associated version, and 'pushed' via the higher level orchestration construct in an atomic fashion.
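The render-then-push flow for P8 can be sketched with the standard library's string.Template. The template text, version string, and the device object are illustrative assumptions; a real deployment would render from version control and push through the orchestrator.

```python
# Sketch: render a versioned template, then 'push' the whole
# configuration atomically (complete replacement, never an in-place edit).
from string import Template

TEMPLATE_VERSION = "v1.3.0"  # the template itself is tracked in version control
vpn_template = Template("l2vpn $name\n  vlan $vlan\n  remote $peer\n")

def render(params):
    return vpn_template.substitute(params)

def atomic_push(device, rendered):
    """Replace the entire configuration in one step."""
    device["config"] = rendered
    device["config_version"] = TEMPLATE_VERSION
    return device

pe_router = {}
atomic_push(pe_router, render({"name": "cust-a", "vlan": 100, "peer": "pe2"}))
print(pe_router["config_version"])  # v1.3.0
```

Recording the template version alongside the pushed configuration is what makes rollback trivial: restating the previous version restates the previous desired state, as note 24 observes.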
P9 - If a CNF is virtual layer 2 or higher, it will expose itself using service discovery.
P10 - If a CNF is virtual layer 1, its provisioning will use a server template.
The infrastructure elements of the lowest-level virtual underlay network (e.g. networking components of a hypervisor that map to the physical components of its node) should have their configuration baked into an artifact that is versioned and managed with an artifact management system. Whatever configuration is not on the image should be applied after the initial artifact is deployed (Day 2), via an orchestrated and versioned process, before the infrastructure element is considered ready for use.
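The split between what is baked into the server template and what is applied on Day 2 can be modeled as two distinct steps. All names below (the image fields, the override keys) are hypothetical, chosen only to show the ordering: image first, then an orchestrated Day 2 step, and only then readiness.

```python
# Illustrative split: baked image configuration vs. Day 2 configuration.

BAKED_IMAGE = {
    "artifact": "hypervisor-net-image",
    "version": "2024.06.1",           # versioned in an artifact repository
    "drivers": ["virtio-net", "sriov"],
}

def day2_configure(instance, overrides):
    """Apply post-boot configuration as a recorded, versioned step."""
    instance = dict(instance, **overrides)  # never mutate the baked image itself
    instance["ready"] = True                # only now is the element usable
    return instance

node = dict(BAKED_IMAGE, host="compute-07")   # instantiate from the template
node = day2_configure(node, {"mtu": 9000})    # orchestrated Day 2 settings
print(node["ready"], node["mtu"])  # True 9000
```

Keeping BAKED_IMAGE itself untouched while each instance receives its own overrides mirrors the server-template pattern: the template is the versioned source of truth, and instances are disposable.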
P11 - If an infrastructure element that is part of a CNF is virtual layer 2, 3, or higher, its deployment will be within a microservice and will be orchestrated.
Virtual layer 2 (e.g. layer 2 MPLS VPNs), layer 3 (e.g. software data planes and control planes) and above should be deployed using coarse-grained packaging (such as containers), orchestrated, and deployed onto a generic host infrastructure element.
P12 - If an application developer consumes a cloud native networking function, it should be consumed using a declarative API.
P13 - If an operator combines cloud native network functions into a service chain, they will be combined using a declarative API and will be exposed as a declarative API.
P14 - If a cloud native network function developer creates networking software, it will expose a declarative API.
LIST OF CONTRIBUTORS
If you would like credit for helping with these documents (for either this document or any of the other four documents linked above), please add your name to the list of contributors.
W Watson Vulk Coop
Taylor Carpenter Vulk Coop
Denver Williams Vulk Coop
Jeffrey Saelens Charter Communications
Bill Mulligan Loodse
- 1.An important property of the OSI Reference Model is that it enables standardization of the protocols used in the protocol stacks, leading to the specification of interfaces between layers. Furthermore, an important feature of the model is the distinction it makes between specification (layers) and implementation (protocols), thus leading to openness and flexibility. Openness is the ability to develop new protocols for a particular layer and independently of other layers as network technologies evolve. Openness enables competition, leading to low-cost products. Flexibility is the ability to combine different protocols in stacks, enabling the interchange of protocols in stacks as necessary. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 14). Elsevier Science. Kindle Edition.
- 2.The vast number of protocols developed for communication at different levels and for meeting requirements of different environments led to the need to organize protocols and their functionalities methodologically. In addition to this structuring, the need to enable free competition in the development of network systems that execute protocols led to development of a standardized reference model for protocols. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 6). Elsevier Science. Kindle Edition.
- 3.The technological progress of physical media, transmission methods, and communication needs over a long period has led to a rich and complex landscape of network architectures and network systems. The different engineering approaches to the problem of networking, the diverse application areas, and the quest for proprietary solutions have resulted in a large number of complex network designs that differ significantly among them. In order to reduce complexity in network design, most networks are organized in layers, where each layer represents a level of abstraction focusing on the communication/networking services it provides. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 11). Elsevier Science. Kindle Edition.
- 4.Each layer provides particular communication functionalities while drawing on the functionalities provided by the layer below. The architectures of network systems reflect this layered protocol architecture. The layer at which a network system operates (i.e., its placement within the network architecture) determines what functionalities need to be built into the system. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 6). Elsevier Science. Kindle Edition.
- 5.The purpose of the OSI reference model has been to specify layers of protocols employed by network nodes to communicate successfully. Thus, two communicating end systems need to have implemented at least one common protocol per corresponding layer. However, communicating systems do not need to implement full seven-layer protocol stacks, as described later. The number of layers implemented in communicating system stacks is influenced by the functionality of the systems, that is, the level of abstraction they provide, depending on their goals. For example, systems that target to deliver packets between two networks do not need to implement end-to-end reliable transmission or application layer protocols because of their specified and intended functionality. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 13). Elsevier Science. Kindle Edition.
- 6.A layered reference model for protocols enables the interconnection of heterogeneous networks, that is, end systems and networks that use different technology, through network systems, [...] reliable end-to-end connectivity is typically achieved at the transport layer (layer 4), while interconnection of networks can be established at lower layers. [...] , an end system transmits data packets to a receiving end system traversing two different networks. The networks are interconnected through a system that implements two protocol stacks, one per network, and delivers packets of lower layer protocols between the networks. This is a typical configuration, following the layered OSI Reference model where different DLC protocols are used to establish two logical links and the network system enables the interconnection of the two links into a single network at layer 3 (network layer). Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (pp. 15-16). Elsevier Science. Kindle Edition.
- 7.Physical layer: These protocols employ methods for bit transmission over physical media and include such typical functions as signal processing, timing, and encoding. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 12). Elsevier Science. Kindle Edition.
- 8.Data Link Control (DLC) layer: Its protocols establish point-to-point communication over a physical or logical link, performing such functions as organization of bits in data units (frames) organization, error detection, and flow control. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 12). Elsevier Science. Kindle Edition.
- 9.[...] network systems operating at the data link layer can switch frames at two levels: (i) MAC or (ii) LLC. The ability to switch at the MAC level seems like a natural choice, as all MAC protocols of the IEEE 802.x family—the predominant family of bridged networks—operate under the same standard 802.2 protocol. However, standardized protocols of the 802.x family present significant differences between them in many parameters, such as frame length, priorities, routing methods, etc. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 90). Elsevier Science. Kindle Edition.
- 10.Network layer: These protocols deliver data units over a network composed of the links established through the DLC protocols of layer 2. Part of these protocols is identification of the route the data units will follow to reach their target. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 12). Elsevier Science. Kindle Edition.
- 11.[...] link layer systems can interconnect end systems at the scale of a local area network. However, scaling a network built of bridges and switches to global scale is not feasible. The filtering database would be very large, broadcast storms would limit the operation efficiency, and routing would be inefficient due to the spanning tree algorithm. Therefore, it is necessary to use systems that are specifically designed to achieve global connectivity. These network layers systems (or “routers”) overcome the limitations of link layer systems. Routers interconnect local area networks, and the resulting network of networks is an Internet that spans the globe. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 111). Elsevier Science. Kindle Edition.
- 12.One important aspect of router systems is that there is a distinction between the data plane and the control plane. [...] Most routers have a dedicated control processor that manages routing computations and error handling. This processor is connected to the switch fabric and thus can be reached by any port. The data path is the data flow and the corresponding sequence of operations that are encountered by a “normal” packet that is simply forwarded from an input port to an output port. The control plane handles the data flow and operations that are performed for traffic that contains routing updates, triggers error handling, etc. Because the vast majority of packets encountered by the system are conventional data packets, router designs are optimized to handle these packets very efficiently. The control plane is typically more complex and not as performance critical. When a port encounters a packet that needs to be handled by the control processor, it simply forwards it through the switch fabric to the dedicated control processor.
- 13.The data plane of a router implements a sequence of operations that are performed for typical network traffic. As discussed earlier, these steps include IP processing of the arriving packet, transmission through the switch fabric to the output port, and scheduling for outgoing transmission. One of the key operations in the data plane is to determine to which output port to send the packet. This process is known as route lookup [...] Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 117). Elsevier Science. Kindle Edition.
- 14.The control plane of a router handles functions that are not directly related to traffic forwarding, but that are necessary to ensure correct operation. Typical control plane operations include: • Exchange of routing messages and routing algorithms • Handling of error conditions Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 129). Elsevier Science. Kindle Edition.
- 15.Transport layer: Transport protocols establish end-to-end communication between end systems over the network defined by a layer 3 protocol. Often, transport layer protocols provide reliability, which refers to complete and correct data transfer between end systems. Reliability can be achieved through mechanisms for end-to-end error detection, retransmissions, and flow control. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 12). Elsevier Science. Kindle Edition.
- 16.The main functionality of the transport layer is to provide a connection between processes on end hosts. Communication between processes is the basis of any distributed application. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 141). Elsevier Science. Kindle Edition.
- 17.The application layer is responsible for implementing distributed applications and their protocols. This layer implements functionality accessed by end users. When considering distributed applications that use the network for communication, numerous examples come to mind: electronic mail, access to Web documents, interactive audio, streaming video, real-time gaming, etc.
- 18.The application layer can be viewed as consisting of several sublayers: session layer, presentation layer, and application layer. In the OSI layered protocol model, these sublayers are numbered layers 5–7, respectively. However, in Internet architecture, they are combined into a single application layer. The reason that they are not treated independently is that these layers often provide functionality that is tuned to higher layers. For example, mechanisms implemented to maintain sessions in layer 5 are often specific to the application used in layer 7. Therefore, it can be justified that these three layers are treated as a single application layer. Note that in some cases this combined application layer is referred to as layer 7, layer 5, or layers 5–7. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 161). Elsevier Science. Kindle Edition.
- 19.Session layer: This layer enables and manages sessions for complete data exchange between end nodes. Sessions may consist of multiple transport layer connections. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 12). Elsevier Science. Kindle Edition.
- 20.The presentation layer, which corresponds to OSI layer 6, handles the representation of information used in the communication between end-system applications. Data can be encoded in a number of different ways, and the presentation layer ensures that they are translated appropriately for transmission on the network and to be useful to the end-system application. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 162). Elsevier Science. Kindle Edition.
- 21.Application layer: The application layer includes protocols that implement or facilitate end-to-end distributed applications over the network. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 13). Elsevier Science. Kindle Edition.
- 22.“I Heard Calico Is Suggesting Layer 2: I Thought You Were Layer 3! What’s Happening?” Project Calico Documentation, docs.projectcalico.org/v3.5/usage/troubleshooting/faq#i-heard-calico-is-suggesting-layer-2-i-thought-you-were-layer-3-whats-happening. It’s important to distinguish what Calico provides to the workloads hosted in a data center (a purely layer 3 network) with what the Calico project recommends operators use to build their underlying network fabric. Calico’s core principle is that applications and workloads overwhelmingly need only IP connectivity to communicate. For this reason we build an IP-forwarded network to connect the tenant applications and workloads to each other, and the broader world. However, the underlying physical fabric obviously needs to be set up too. Here, Calico has discussed how both a layer 2 (see here) or a layer 3 (see here) fabric could be integrated with Calico. This is one of the great strengths of the Calico model: it allows the infrastructure to be decoupled from what we show to the tenant applications and workloads. We have some thoughts on different interconnect approaches (as noted above), but just because we say that there are layer 2 and layer 3 ways of building the fabric, and that those decisions may have an impact on route scale, does not mean that Calico is “going back to Ethernet” or that we’re recommending layer 2 for tenant applications. In all cases we forward on IP packets, no matter what architecture is used to build the fabric.
- 23.“Declarative configuration is different from imperative configuration , where you simply take a series of actions (e.g., apt-get install foo ) to modify the world. Years of production experience have taught us that maintaining a written record of the system’s desired state leads to a more manageable, reliable system. Declarative configuration enables numerous advantages, including code review for configurations as well as documenting the current state of the world for distributed teams. Additionally, it is the basis for all of the self-healing behaviors in Kubernetes that keep applications running without user action.” Hightower, Kelsey; Burns, Brendan; Beda, Joe. Kubernetes: Up and Running: Dive into the Future of Infrastructure (Kindle Locations 892-896). Kindle Edition.
- 24.“The combination of declarative state stored in a version control system and Kubernetes’s ability to make reality match this declarative state makes rollback of a change trivially easy. It is simply restating the previous declarative state of the system. With imperative systems this is usually impossible, since while the imperative instructions describe how to get you from point A to point B, they rarely include the reverse instructions that can get you back. “Hightower, Kelsey; Burns, Brendan; Beda, Joe. Kubernetes: Up and Running: Dive into the Future of Infrastructure (Kindle Locations 186-190). Kindle Edition.
- 25.“Because it describes the state of the world, declarative configuration does not have to be executed to be understood. Its impact is concretely declared. Since the effects of declarative configuration can be understood before they are executed, declarative configuration is far less error-prone. Further, the traditional tools of software development, such as source control, code review, and unit testing, can be used in declarative configuration in ways that are impossible for imperative instructions. “ Hightower, Kelsey; Burns, Brendan; Beda, Joe. Kubernetes: Up and Running: Dive into the Future of Infrastructure (Kindle Locations 183-186). Kindle Edition.
- 26.So declarative definitions lend themselves to running idempotently. You can safely apply your definitions over and over again, without thinking about it too much. If something is changed to a system outside of the tool, applying the definition will bring it back into line, eliminating sources of configuration drift. When you need to make a change, you simply modify the definition, and then let the tooling work out what to do. Morris, Kief. Infrastructure as Code: Managing Servers in the Cloud (Kindle Locations 1275-1278). O'Reilly Media. Kindle Edition.
- 27.Network systems and computing systems employ interconnections to deliver data among their components. In computing systems, an interconnection is necessary to enable data transfer among the processor, the memory system, and input and output (I/O) subsystems. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 35). Elsevier Science. Kindle Edition.
- 28.The most typical interconnection for component communication is the well-known bus, which is composed of a set of wires delivering data, address information, and control information (e.g., timing, arbitration). Busses are shared interconnections among a number of attached components, implementing a point-to-point communication path between any two components. The typical operation of a bus is as follows: components that need to transmit information to another component request access to the bus, an arbiter selects the component that will transmit (in case of several requests), and then the selected component transmits its data. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 35). Elsevier Science. Kindle Edition.
- 29.Switches and networks of switches constitute alternative interconnections to busses, implementing parallel, nonconflicting paths among communicating components and systems. [...] a switch with N inputs and N outputs, employing a typical architecture to implement input-to-output connections dynamically. [...] switch, able to implement any combination of N parallel, nonconflicting input-to-output connections, is called a crossbar switch and constitutes the building block of several switch-based networks. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 36). Elsevier Science. Kindle Edition.
- 30.[...] new network technologies that employ switches were developed for various environments and applications; these technologies and related protocols include ATM [ 103 ], Fiber Channel [ 89 ], and InfiniBand [ 164 ]. Importantly, switches emerged not only for networks but for intersystem interconnection as well. For example, the evolution of multicore processors led to the employment of interconnection networks (multiple data paths) of various types, such as switch interconnects, HyperTransport [ 179 ], and multiple networks, such as the EiB of the Cell BE [ 27 ]. Furthermore, switched backplanes are introduced for network systems, such as routers. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 37). Elsevier Science. Kindle Edition.
- 31.The basic crossbar switch was originally developed for interconnection networks of multiprocessors. It is a 2×2 buffer-less switch, which was named crossbar because it could be in one of two states, cross or bar, as shown in Figure 4-3(a) . The concept of crossbar switching was extended to switches of larger sizes as well, where switches implement any input-to-output permutation with more inputs and outputs. The design of a crossbar switch is simple but expensive, in terms of resources, as it has to implement all potential permutations of inputs to outputs. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (pp. 38-39). Elsevier Science. Kindle Edition.
- 32.Scheduling is necessary in switches because high load and routing conflicts lead to contention for resources. In switches that employ input queuing, scheduling is necessary to choose the input queues that will be served at every clock cycle; in switches that employ output queuing, packets contend for output queues and need to be serialized for buffering and transmission over a link. Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 43). Elsevier Science. Kindle Edition.
- 33.Network adapters are used to enable connectivity on a single network link and typically implement a single protocol stack. Network adapters provide the system implementation where that protocol stack is executed. The dependency of the adapter on the physical medium of the attached network usually influences the specification and naming of the adapter in the market. For example, off-the-shelf adapters are known as Ethernet adapters, Wi-Fi adapters, etc. Importantly, because adapters implement single protocol stacks, they are often considered and used as building blocks for multistack systems, such as bridges, routers, and gateways, implementing stacks of appropriate sizes, Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 63). Elsevier Science. Kindle Edition.
- 34.The lower layers of the stack, including the physical layer, which requires specialized hardware, constitute the portion mapped on the adapter, while the higher layers may be mapped on the end system. Consider, for example, the configuration of a typical personal computer (PC) with an Ethernet adapter. In the general case, the PC with the adapter implements at least a four-layer protocol stack with Ethernet physical and Media Access Control (MAC) protocols as well as Logical Link Control (LLC), Internet Protocol (IP), and Transmission Control Protocol (TCP), from lower to higher layers. However, the protocol stack is implemented partly on the adapter (e.g., the Ethernet physical and MAC) and partly on the PC (e.g., LLC, IP, and TCP as part of the PC's operating system). Serpanos, Dimitrios,Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (pp. 63-64). Elsevier Science. Kindle Edition.
- 35.“The CNF should run without privileges. Privileged actions should be managed by the scheduler and environment”, x-factor-cnfs, Fred Kautz, https://github.com/fkautz/x-factor-cnfs/blob/master/content/process-containers.md To achieve process isolation, X-factor CNFs are designed to run in process containers without privileges. Privileged actions may be requested by the container and performed by privileged delegate. Running unprivileged promotes loose coupling between environments, reduces the overall attack surface, and gives the scheduler the ability to clean up after the pod in the case of the pod failing. The X-factor CNF methodology recognizes the need for hardware which requires additional kernel modules. When possible, kernel modules must follow standard Linux kernel device driver standards [...] and do not affect the kernel's runtime environment beyond enabling the device. These devices must also not be bound directly from the CNF. Instead, they are listed as an interface mechanism and injected into the container runtime by the orchestrator. The existence of a hardware device should not affect other CNFs. Some kernel modifications may be acceptable, e.g. DPDK or drivers. This should be immutable infrastructure with a clean interface for pods. In short, pods should not be allowed to modify their infrastructure.
- 36. “List mechanisms supported in order of preference”, x-factor-cnfs, Fred Kautz, https://github.com/fkautz/x-factor-cnfs/blob/master/content/mechanisms.md A given X-factor CNF lists in order of preference what types of interface mechanisms are supported for both its terminating and initiating interfaces. An interface mechanism is any serial/block device, file or socket that is used to transport data in and out of the container. The most common type of interface mechanism is the Linux interface. Other common mechanisms include SR-IOV, vhost-user, shmem, unix sockets, or serial/block devices. An X-factor CNF may list multiple preferences of what types of interface mechanisms it supports. However, only one mechanism will be wired in for the connection it terminates and only one will be wired in for the connection it initiates. By listing these mechanisms explicitly, the orchestrator can coordinate with both the CNF and data plane to determine what the most fitting interface for the CNF should be. Likewise, the operator may choose to disable certain types of interface mechanisms administratively for a given CNF to preserve resources for other CNFs which are in higher need when resources are scarce, such as hardware devices.
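As a rough sketch of the preference matching described in this note (all function and variable names below are hypothetical illustrations, not part of the x-factor-cnfs text), an orchestrator could wire in the first mechanism that the CNF prefers, the data plane supports, and the operator has not administratively disabled:

```python
# Hypothetical sketch: select the most-preferred interface mechanism that
# is mutually supported by the CNF and the data plane, honoring any
# administrative disable list set by the operator.
from typing import Optional, Sequence, Set

def select_mechanism(
    cnf_preferences: Sequence[str],
    data_plane_supported: Set[str],
    admin_disabled: Set[str] = frozenset(),
) -> Optional[str]:
    """Return the first viable mechanism in preference order, or None."""
    for mechanism in cnf_preferences:
        if mechanism in data_plane_supported and mechanism not in admin_disabled:
            return mechanism
    return None

# A CNF that prefers SR-IOV but can fall back to vhost-user or a Linux interface:
chosen = select_mechanism(
    ["sr-iov", "vhost-user", "linux-interface"],
    data_plane_supported={"vhost-user", "linux-interface"},
)
# SR-IOV is unavailable here, so vhost-user is wired in.
```

Exactly one mechanism is returned per connection, matching the note's constraint that only one mechanism is wired in for each terminating and initiating interface.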
- 37. Considering the need for autonomous operation and high performance, layer 2 switches perform all operations that typical bridges do. However, due to their focus on performance for dedicated segments, they employ specialized hardware for frame forwarding, and some of them even employ cut-through routing techniques instead of the typical store-and-forward technique used in common bridges. Thus, their main difference from bridges is typically the technology used to implement frame forwarding, which is mostly hardware-based, in contrast to typical bridges, which generally are more programmable and accommodate a wider range of heterogeneous LANs. Serpanos, Dimitrios, Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 110). Elsevier Science. Kindle Edition.
- 38. Layer 2 switches can be considered a special implementation of bridges and thus can be viewed as a subset of bridging systems. Serpanos, Dimitrios, Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 89). Elsevier Science. Kindle Edition.
- 39. “Concerns over Ethernet at scale” Calico over an Ethernet interconnect fabric, https://docs.projectcalico.org/v3.5/reference/private-cloud/l2-interconnect-fabric. It has been acknowledged by the industry for years that, beyond a certain size, classical Ethernet networks are unsuitable for production deployment. Although there have been multiple attempts to address these issues, the scale-out networking community has largely abandoned Ethernet for anything other than providing physical point-to-point links in the networking fabric. The principal reasons for Ethernet failures at large scale are: 1. Large numbers of end points. Each switch in an Ethernet network must learn the path to all Ethernet endpoints that are connected to the Ethernet network. Learning this amount of state can become a substantial task when we are talking about hundreds of thousands of end points. 2. High rate of churn or change in the network. With that many end points, most of them being ephemeral (such as virtual machines or containers), there is a large amount of churn in the network. That load of re-learning paths can be a substantial burden on the control plane processor of most Ethernet switches. 3. High volumes of broadcast traffic. As each node on the Ethernet network must use Broadcast packets to locate peers, and many use broadcast for other purposes, the resultant packet replication to each and every end point can lead to broadcast storms in large Ethernet networks, effectively consuming most, if not all resources in the network and the attached end points. 4. Spanning tree. Spanning tree is the protocol used to keep an Ethernet network from forming loops. The protocol was designed in the era of smaller, simpler networks, and it has not aged well. As the number of links and interconnects in an Ethernet network goes up, many implementations of spanning tree become more fragile.
Unfortunately, when spanning tree fails in an Ethernet network, the effect is a catastrophic loop or partition (or both) in the network, and it is, in most cases, difficult to troubleshoot or resolve. While many of these issues are crippling at VM scale (tens of thousands of end points that live for hours, days, weeks), they will be absolutely lethal at container scale (hundreds of thousands of end points that live for seconds, minutes, days).
- 40. “Introduction”, x-factor-cnfs, Fred Kautz, https://github.com/fkautz/x-factor-cnfs/blob/master/content/_index.md X-CNFs also have additional properties not common in 12 Factor Apps which enable their use as a CNF: State their payload type for easy service function chaining orchestration; List their supported mechanisms in order of preference to facilitate wiring to a data plane; Connect to Cloud-Native Microservices over their default orchestration-managed network interface.
- 41. Phoenix replacement is the natural progression from blue-green using dynamic infrastructure. Rather than keeping an idle instance around between changes, a new instance can be created each time a change is needed. As with blue-green, the change is tested on the new instance before putting it into use. The previous instance can be kept up for a short time, until the new instance has been proven in use. But then the previous instance is destroyed. Morris, Kief. Infrastructure as Code: Managing Servers in the Cloud (Kindle Locations 5694-5697). O'Reilly Media. Kindle Edition.
- 42. The canary pattern involves deploying the new version of an element alongside the old one, and then routing some portion of usage to the new elements. For example, with version A of an application running on 20 servers, version B may be deployed to two servers. A subset of traffic, perhaps flagged by IP address or by randomly setting a cookie, is sent to the servers for version B. The behavior, performance, and resource usage of the new element can be monitored to validate that it’s ready for wider use. Morris, Kief. Infrastructure as Code: Managing Servers in the Cloud (Kindle Locations 5724-5728). O'Reilly Media. Kindle Edition.
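A minimal sketch of the canary routing this note describes, assuming traffic is split by hashing a stable client attribute such as IP address (the function name and percentage are illustrative, not from the Morris text):

```python
# Hypothetical sketch: send ~canary_percent of clients to the canary
# version "B" by hashing the client IP, so each client consistently
# lands on the same version across requests.
import hashlib

def route_version(client_ip: str, canary_percent: int = 10) -> str:
    """Return 'B' (canary) for roughly canary_percent of clients, else 'A'."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    bucket = (digest[0] * 256 + digest[1]) % 100  # stable 0..99 bucket per client
    return "B" if bucket < canary_percent else "A"

# The hash is deterministic, so the same client always sees the same version,
# which keeps monitoring of version B's behavior consistent per client.
```

Setting `canary_percent` to 0 or 100 degrades cleanly to all-A or all-B routing, which is one way to roll the canary forward once it is validated.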
- 43. Blue-green replacement is the most straightforward pattern to replace an infrastructure element without downtime. This is the blue-green deployment pattern for software applied to infrastructure. It requires running two instances of the affected infrastructure, keeping one of them live at any point in time. Changes and upgrades are made to the offline instance, which can be thoroughly tested before switching usage over to it. Morris, Kief. Infrastructure as Code: Managing Servers in the Cloud (Kindle Locations 5681-5685). O'Reilly Media. Kindle Edition.
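The blue-green mechanics in this note can be sketched as follows (class and attribute names are hypothetical, chosen only to illustrate the pattern): the upgrade always targets the offline instance, and usage switches only after testing passes, so a failed test leaves live traffic untouched.

```python
# Hypothetical sketch of blue-green replacement: two instances, one live.
# Changes are applied and tested on the offline instance; cutover is a
# single flip of the "live" pointer.
class BlueGreenPair:
    def __init__(self):
        self.instances = {"blue": "v1", "green": "v1"}
        self.live = "blue"

    @property
    def offline(self) -> str:
        return "green" if self.live == "blue" else "blue"

    def upgrade(self, version: str, passes_tests) -> None:
        target = self.offline
        self.instances[target] = version          # change only the offline copy
        if passes_tests(self.instances[target]):  # thoroughly test before cutover
            self.live = target                    # atomic switch of usage
        # on failure, the live instance was never touched: nothing to roll back

pair = BlueGreenPair()
pair.upgrade("v2", passes_tests=lambda v: True)
# live traffic now goes to the green instance running v2
```

Because the previously live instance is kept, switching back is the same one-step flip, which is what makes this the most straightforward zero-downtime replacement pattern.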
- 44. Given two infrastructure elements providing a similar service (for example, two application servers in a cluster), the servers should be nearly identical. Their system software and configuration should be the same, except for those bits of configuration that differentiate them, like their IP addresses. Letting inconsistencies slip into an infrastructure keeps you from being able to trust your automation. If one file server has an 80 GB partition, while another has 100 GB, and a third has 200 GB, then you can’t rely on an action to work the same on all of them. This encourages doing special things for servers that don’t quite match, which leads to unreliable automation. Morris, Kief. Infrastructure as Code: Managing Servers in the Cloud (Kindle Locations 380-384). O'Reilly Media. Kindle Edition.
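The consistency check this note argues for could be sketched as a drift detector (all names and the sample data below are hypothetical): compare each server's configuration against a baseline, ignoring only the keys that are explicitly allowed to differ, such as IP addresses.

```python
# Hypothetical sketch: flag configuration keys that differ across servers
# which are supposed to be identical, excluding allowed differences.
def find_drift(servers: dict, allowed_diff_keys: set) -> dict:
    """Return {key: {server: value}} for keys that differ unexpectedly."""
    _, baseline = next(iter(servers.items()))  # first server as baseline
    drift = {}
    for key, base_value in baseline.items():
        if key in allowed_diff_keys:
            continue  # e.g. IP addresses are expected to differ
        differing = {
            name: cfg.get(key)
            for name, cfg in servers.items()
            if cfg.get(key) != base_value
        }
        if differing:
            drift[key] = differing
    return drift

servers = {
    "fs1": {"partition_gb": 80,  "ip": "10.0.0.1", "os": "ubuntu-22.04"},
    "fs2": {"partition_gb": 100, "ip": "10.0.0.2", "os": "ubuntu-22.04"},
    "fs3": {"partition_gb": 200, "ip": "10.0.0.3", "os": "ubuntu-22.04"},
}
drift = find_drift(servers, allowed_diff_keys={"ip"})
# drift flags 'partition_gb' (the 80/100/200 GB mismatch from the note),
# while the identical OS and the allowed-to-differ IPs are not flagged
```

Running such a check in the provisioning pipeline is one way to keep the "special things for servers that don't quite match" from accumulating.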
- 45. Containerization has the potential to create a clean separation between layers of infrastructure and the services and applications that run on it. Host servers that run containers can be kept very simple, without needing to be tailored to the requirements of specific applications, and without imposing constraints on the applications beyond those imposed by containerization and supporting services like logging and monitoring. So the infrastructure that runs containers consists of generic container hosts. These can be stripped down to a bare minimum, including only the minimum toolsets to run containers, and potentially a few agents for monitoring and other administrative tasks. This simplifies management of these hosts, as they change less often and have fewer things that can break or need updating. It also reduces the surface area for security exploits. Morris, Kief. Infrastructure as Code: Managing Servers in the Cloud (Kindle Locations 1723-1729). O'Reilly Media. Kindle Edition.
- 46. [...] communication paradigms and requirements influence network protocols as well as the systems that execute them. It is important to differentiate, however, protocols from the systems that execute them, for several reasons. Protocols define communication methods, as explained previously, while network systems execute these protocols. In general, protocols include mechanisms that accommodate systems with different performance and reliability characteristics, with methods that regulate traffic flow among systems and mechanisms to detect transmission errors and lead to data retransmission. Thus, the activity of protocol development and specification does not take into account any specifics about the system that will execute a protocol and does not place any specific requirements on it. This characteristic of protocols not only enables the definition of communication methods independently of technology to a large degree, but also enables the development of economically scalable network systems, where manufacturers can develop systems that execute the same protocol on different platforms with different performance, dependability characteristics, and cost. Serpanos, Dimitrios, Wolf, Tilman. Architecture of Network Systems (The Morgan Kaufmann Series in Computer Architecture and Design) (p. 5). Elsevier Science. Kindle Edition.