Essential things to know about container networking

Containers have emerged over the past several years to provide an efficient method of storing and delivering applications reliably across different computing environments. By containerizing an application platform and its dependencies, differences in OS distributions and underlying infrastructures are abstracted away.

Networking has emerged as a critical element within the container ecosystem, providing connectivity between containers running on the same host as well as on different hosts, says Michael Letourneau, an IT architect at Liberty Mutual Insurance. “Putting an application into a container automatically drives the need for network connectivity for that container,” says Letourneau, whose primary focus is on building and operating Liberty Mutual’s container platform. 

Virtualization

Container networking is part of an evolution in the virtualization of storage, compute and networking technologies that began over a decade ago with PC/machine virtualization. “Early on, it was recognized that virtualization of the physical machine had all sorts of benefits around cost, speed and ease of development,” says Thomas Nadeau, technical director of network function virtualization at open-source software provider and IBM subsidiary Red Hat.

With virtualization, hardware resources are shared by virtual machines, each of which includes both an application and a complete operating system instance. A physical server running three VMs, for example, features a hypervisor with three separate operating systems running on top of it. A server supporting three containerized applications, on the other hand, requires just a single operating system, with each container sharing the operating system kernel with its companion containers.

While a VM with its own complete operating system may consume several gigabytes of storage space, a container might be only tens of megabytes in size. A single server can therefore host many more containers than VMs, significantly boosting data-center efficiency while reducing equipment, maintenance, power and other costs.

Following the right container-networking approach is critical to long-term success.

Choosing the right approach to container networking depends largely on application needs, deployment type, use of orchestrators and underlying OS type. “Most popular container technology today is based on Docker and Kubernetes, which have pluggable networking subsystems using drivers,” explains John Morello, vice president of product management, container and serverless security at cybersecurity technology provider Palo Alto Networks. “Based on your networking and deployment type, you would choose the most applicable driver for your environment to handle container-to-container or container-to-host communications.”
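
For a sense of how those pluggable drivers surface in practice, here is a minimal sketch using the Docker SDK for Python (docker-py, an assumption; the article itself names no tooling) that lists each network a Docker host knows about along with the driver behind it. It assumes a local Docker daemon is running.

```python
# Minimal sketch: enumerate a host's Docker networks and their drivers.
# Assumes the docker-py package and a reachable local Docker daemon.
import docker

client = docker.from_env()  # connect to the local daemon

# Each network reports the driver that backs it, e.g. "bridge",
# "host", "overlay" or "macvlan".
for net in client.networks.list():
    print(f"{net.name}: driver={net.attrs['Driver']}")
```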

“The network solution must be able to meet the needs of the enterprise, scaling to potentially large numbers of containers, as well as managing ephemeral containers,” Letourneau explains.

The process of defining initial requirements, determining which options meet those requirements, and then implementing the solution can be as important as choosing the right orchestration agent to provision and load-balance the containers. “In today’s world, going with a Kubernetes-based orchestrator is a pretty safe decision,” Letourneau says. “The question of what to use as the networking layer is a more nuanced conversation, and is driven not only by scale, but by features required.”

When transitioning to containers, the main goal is to create a distributed architecture composed of microservices, which are applications structured as collections of loosely coupled services, says Chris Meyer, a senior integration architect for BlueCat Networks, an IP address-management, DNS, and DHCP services provider. “By utilizing microservices, one can have a more fault-tolerant and easy-to-upgrade application that is broken down into core pieces,” he says.

This is where networking plays an important role. “Traditionally, one would have to connect each container together as if it were a normal networking device, reaching out over the network and paying the expenses of needing to leave the interface and come back in,” Meyer says.

Such an approach introduces additional complexities, such as having to worry about issues created by firewalls. “By utilizing the latest in container networking tech, one can link containers together in such a way that it appears to be running on the same interface,” he says. “This is a huge benefit, because not only can all the pieces of your architecture talk to each other easily and quickly, they can be distributed across different machines in different data centers.”

Some common container-networking options to choose from are bridge, overlay, host and Macvlan, as described in an InfoWorld article by Serdar Yegulalp:

Bridge networks enable containers running on the same host to communicate with each other, but the IP addresses assigned to each container are not accessible from outside the host. A new instance of Docker comes with a default bridge network, and all newly started containers automatically connect to it. Out-of-the-box defaults will require fine-tuning in production. For example, custom bridges enable features that aren’t automatic in default mode, including DNS resolution; the ability to add and remove containers from a custom bridge while they’re running; and the ability to share environment variables between containers.
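
As a minimal sketch of that custom-bridge behavior in docker-py (the network and container names are invented, and the nginx/alpine images are assumed to be available):

```python
# Sketch: containers on a user-defined bridge resolve each other by name,
# something the default bridge network doesn't provide automatically.
import docker

client = docker.from_env()

bridge = client.networks.create("app-bridge", driver="bridge")

# Start a web server attached to the custom bridge.
client.containers.run("nginx:alpine", name="web",
                      network="app-bridge", detach=True)

# From a second container on the same bridge, "web" resolves via the
# bridge's built-in DNS; the ping output is returned as the logs.
output = client.containers.run("alpine", command="ping -c 1 web",
                               network="app-bridge", remove=True)
print(output.decode())
```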

Overlay networks are for containers running on different hosts, such as those in a Docker swarm. In an overlay network, containers across hosts can automatically find each other and communicate by tunneling network subnets from one host to the next; an enterprise doesn’t have to set that up for each individual participating container. Production systems will typically require creating a custom overlay network.
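
A rough docker-py sketch of such an overlay network follows; it assumes the host is already a swarm manager, and the network and service names are illustrative.

```python
# Sketch: an attachable overlay network spanning swarm nodes. Docker
# tunnels traffic between hosts, so tasks find each other automatically.
import docker

client = docker.from_env()

client.networks.create(
    "cross-host-net",
    driver="overlay",
    attachable=True,  # allow standalone containers, not just services
)

# Replicas of this service can land on different hosts and still reach
# peers on the overlay without per-container tunnel setup.
client.services.create(
    "nginx:alpine",
    name="web",
    networks=["cross-host-net"],
)
```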

In a host network, the host networking driver lets containers have their network stacks run side by side with the stack on the host. A web server on port 80 in a host-networked container is available from port 80 on the host itself. Speed is the biggest appeal of host networking, but it comes at the cost of flexibility: If you map port 80 to a container, no other container can use it on that host.
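
In docker-py terms, host networking is a single flag, as in this brief sketch (the image choice is illustrative):

```python
# Sketch: the container shares the host's network stack, so nginx's
# port 80 is the host's port 80. No NAT, but the port is now taken.
import docker

client = docker.from_env()
client.containers.run("nginx:alpine", network_mode="host", detach=True)
```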

Macvlan network is for applications that work directly with the underlying physical network, such as network-traffic monitoring applications. The macvlan driver doesn’t just assign an IP address to a container, but a physical MAC address as well. Macvlan is generally reserved for applications that won’t function without a physical network address.
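
An illustrative macvlan setup in docker-py might look like the following; the parent interface name ("eth0"), subnet and gateway are assumptions that must match the actual physical network.

```python
# Sketch: a macvlan network whose containers get their own MAC addresses
# and appear as physical hosts on the underlying LAN.
import docker

client = docker.from_env()

ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(
        subnet="192.168.1.0/24",   # assumption: must match the real LAN
        gateway="192.168.1.1",
    )]
)

client.networks.create(
    "macvlan-net",
    driver="macvlan",
    options={"parent": "eth0"},  # assumption: the host NIC to attach to
    ipam=ipam,
)
```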

Connectivity isn’t the only consideration. Different modes of container networking support different networking capabilities. For example, a bridge network leverages network address translation (NAT), which comes with a performance cost. A host network eliminates the need for NAT but introduces potential port conflicts. Other features that vary among networking approaches include IP address management (IPAM), IPv6, load-balancing, and quality of service.
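
To make the NAT trade-off concrete, here is a short docker-py sketch of publishing a port on a bridge network; the 8080 host port is an arbitrary choice for illustration.

```python
# Sketch: a bridged container reached through NAT. Docker forwards host
# port 8080 to container port 80; a host-networked container would skip
# the translation but monopolize the port instead.
import docker

client = docker.from_env()
client.containers.run("nginx:alpine", ports={"80/tcp": 8080}, detach=True)
```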

In addition, enterprises need to contend with differences in the ways that container runtimes, orchestrators and plugins handle networking. For example, Docker and Kubernetes have different models for how network resources are allocated and managed. Kubernetes-based Container Network Interface (CNI) plugins that work with Docker’s networking controls can help bridge the gap. CNI plugins are designed to link container runtimes to dozens of different container-network implementations.
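
As a rough illustration of what a CNI plugin actually consumes, the sketch below writes a configuration for the reference bridge plugin with host-local IPAM; the file name, network name and subnet are assumptions, and a runtime such as the kubelet reads such files from /etc/cni/net.d.

```python
# Sketch: a CNI network configuration for the reference "bridge" plugin.
# The runtime invokes the plugin named in "type" to wire up each container.
import json

cni_conf = {
    "cniVersion": "0.4.0",
    "name": "pod-network",
    "type": "bridge",          # CNI plugin binary to invoke
    "bridge": "cni0",
    "isGateway": True,         # give the bridge an IP so pods can egress
    "ipMasq": True,            # NAT pod traffic leaving the host
    "ipam": {
        "type": "host-local",      # simple per-host IP allocation
        "subnet": "10.22.0.0/16",  # assumption: the pod address range
    },
}

with open("/etc/cni/net.d/10-pod-network.conf", "w") as f:
    json.dump(cni_conf, f, indent=2)
```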

Getting started with container networking

Considering the scale that a functioning container ecosystem can eventually grow to, it’s important to prepare for the technology by developing a detailed network strategy. “A sprawl of container ecosystems without a plan will likely cause headaches for network administrators,” Letourneau says. Misconfigured container orchestration solutions can, for example, lead to denial-of-service incidents in upstream services.

Much as a growing business eventually needs a strategy for its enterprise network, a growing container environment needs a networking plan. If a Kubernetes-based orchestration solution is being used, for instance, there are numerous CNI implementations to choose from, Letourneau says. “Each implementation has different functionality and aspects that make it attractive for different use cases.”

As enterprises transition from in-house data centers to cloud providers, they should identify and assess their network architecture and modernization goals, even if a move to container technology isn’t currently being contemplated. “The integration of the cloud-provider network with the data-center network can pose networking complexities for the future use of cloud-provider-managed container solutions,” Letourneau says.

Legacy network concerns

Container adoption requires a ground-up rethinking of an enterprise’s entire network architecture. “One can’t go into container networking assuming that it will be the same as legacy networking, because then you lose the benefits of being able to connect your architecture together in an easy-to-maintain way,” Meyer says.

Legacy networks, for instance, must be manually changed whenever a need arises, such as the addition of a new server. If a change isn’t properly validated, outages will likely occur.

“Legacy data-center network configurations were implemented as static configurations on physical devices, so if a server needs to move, configurations need to change,” says Greg Cox, senior CTO architect at data-recovery services provider Sungard AS. Change validation required many IT departments to invest heavily in labs packed with expensive measuring and monitoring equipment. With container networking, changes and validation become an automated process, he says.

Adopting a containerization strategy and moving to a microservices-based model marks a significant change in traditional data-center operations and practices. “Networking teams are typically familiar with a relatively static and unchanging infrastructure, planned subnets and standard methods for capacity measurement,” Letourneau says. “DHCP and DNS requirements are based on end-user desktops, and systems are designed with caching and static resources in mind.”

Containerization effectively tosses that venerable workload footprint out through the data-center door. In a container ecosystem, changes to network configuration and service locations occur routinely, and there is no direct human control of the network, he explains. “This goes beyond the idea of ‘software-defined’; this is application-defined networking managed by the algorithm of the orchestration scheduler.”

As adoption of container technology continues its relentless growth, wiping away time-tested technologies and practices, network teams will have to adapt and advance. Since container networking is primarily software-controlled, IT staffers will need to take their hands off increasingly obsolete controls and gain a deeper understanding of server systems and processes. “Container networking … when communicating outside of a single server is network traffic from a specific container,” Cox explains. The server encapsulates the traffic and sends it to wherever it needs to go. “It’s that encapsulation that allows entirely new networking architectures to be built without having to touch the physical networks the servers actually exist on,” he says.

Further upending legacy networking is the fact that container networks render traditional network management and monitoring tools obsolete. “The visibility … is going to be limited, as the previous tools used to monitor network performance aren’t likely to help teams navigate problems within a containerized ecosystem,” Letourneau says.

Network security concerns

Networked containers can enhance security, but they can also open the door to new dangers. “With the dynamic nature of containers, it’s important to adopt a security tool that can automatically learn networking behavior of … microservices and provide full visibility into them,” Morello says.

Containers represent a huge change in technology stack and the software-development lifecycle process. Unsurprisingly, enterprises are challenged not only to make sure the systems function properly but also to secure them.

Container networking breaks many of the assumptions that make traditional firewalls and networking security controls work, says Rani Osnat, vice president of strategy at Aqua Security, a container security technology firm. Enterprises need a way to control ingress and egress; to micro-segment containers so that applications don’t interfere with each other; and to have firewalls that can map to container connectivity and not VM connectivity, preventing potentially unsafe east-west network traversal, Osnat says.
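
As one hedged illustration of that kind of micro-segmentation, the sketch below uses the official Kubernetes Python client to apply a NetworkPolicy that admits traffic to "backend" pods only from "frontend" pods; the labels, names and namespace are invented for the example.

```python
# Sketch: micro-segment pods with a Kubernetes NetworkPolicy so only
# "frontend" pods may reach "backend" pods; other east-west traffic
# to the backend is denied once an Ingress policy selects it.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="backend-allow-frontend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(
            match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(
                    match_labels={"app": "frontend"}),
            )],
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=policy)
```

Note that such a policy is enforced only when the cluster's network plugin supports NetworkPolicy, which ties back to the CNI choice discussed earlier.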

Recognizing the growing need for strong protection, various projects are springing up with the goal of making security an integral part of container network technology. “For example, the Cilium project provides low-level security and visibility by utilizing Berkeley Packet Filters to inject security policy into the network layer,” Letourneau says. Istio, meanwhile, is a service mesh that addresses the challenges inherent in a distributed microservice architecture. “[It takes] some of the requirements of service meshes and pushes that functionality down to the networking layer where it, arguably, belongs,” Letourneau notes.

Although both of these projects are relatively new, they offer a view into extending the security layer directly into networking, defined not by network administrators working on separate teams, but by the teams that are actually building the services and applications.

In the big picture, the container networking space is evolving rapidly. “New things are showing up all the time; it’s an interesting space to watch develop,” Meyer says. “The latest and greatest tools are really helping to ease the transition into this new paradigm and to get forward-thinking enterprises deployed with this new architecture.”
