
management difficult, or you had to build deployment tools and scripts to take stock VMs and install the developer's applications, which isn't very flexible and can be fragile.
Hykes believed Docker was the answer to this problem and, looking back, he was on to something. However, his wasn't the first cloud service to look to containers; in fact, it was the needs of a different cloud service, Google, that kick-started the whole idea. In 2006, a Linux kernel patch submitted by Rohit Seth, an engineer at Google, added support for grouping processes together under a common set of resource controls, in a feature he called cgroups. Seth's description of that patch starts off with: "Commodity HW is becoming more powerful. This is giving opportunity to run different workloads on the same platform for better HW resource utilization" (bit.ly/2mhatrp). Although cgroups solved the problem of resource isolation, they didn't solve inconsistent distribution, which is why Docker uses not only cgroups but also another slice of Linux technology: namespaces.
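To make the resource-control idea concrete, here's a minimal sketch (mine, not from the article) of the cgroup filesystem interface that grew out of that patch. It assumes a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory and root privileges; the group name demo and the 64MB cap are arbitrary, and the paths differ under cgroup v2.

/* Sketch: cap the current process's memory via the cgroup v1
 * filesystem interface (assumed mounted at /sys/fs/cgroup/memory). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *value) {
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(1); }
    fputs(value, f);
    fclose(f);
}

int main(void) {
    /* Create a group under the memory controller (ignore EEXIST). */
    mkdir("/sys/fs/cgroup/memory/demo", 0755);

    /* Cap memory for every process in the group at 64MB. */
    write_file("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes",
               "67108864");

    /* Move this process into the group; its children inherit it. */
    char pid[32];
    snprintf(pid, sizeof pid, "%d", getpid());
    write_file("/sys/fs/cgroup/memory/demo/cgroup.procs", pid);
    return 0;
}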
Namespaces were introduced into the Linux kernel in 2002, providing a way to control what resources a process can see and what those resources are called. Namespaces are quite different from access controls because the process doesn't even know the resources exist or that it's using a virtualized view of them. A simple example of this is the process list: there could be 20 processes running on a server, yet a process running within a namespace might see only five of those processes, with the rest hidden from view. Another example might be a process that thinks it's reading from the root directory when that view has in fact been virtualized from a separate location. It's the combination of cgroups, namespaces and Copy-on-Write (CoW) file-system technologies into an easy-to-use open source product that became the foundation of Docker.
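The process-list example maps directly onto Linux PID namespaces. The following minimal sketch (mine, not from the article) uses clone with the CLONE_NEWPID flag: the child starts in a fresh PID namespace and reports itself as PID 1, even though the host kernel tracks it under an ordinary PID. It needs root (or CAP_SYS_ADMIN) to run.

/* Sketch: a child in a new PID namespace sees itself as PID 1. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child(void *arg) {
    (void)arg;
    /* Prints 1: inside the namespace the process can't see the
     * PID the host assigned to it, or any other host process. */
    printf("inside namespace: pid = %d\n", getpid());
    return 0;
}

int main(void) {
    /* The stack grows down, so pass the top of the buffer. */
    pid_t p = clone(child, child_stack + sizeof child_stack,
                    CLONE_NEWPID | SIGCHLD, NULL);
    if (p == -1) { perror("clone"); return 1; }
    printf("on host: child pid = %d\n", p); /* An ordinary PID. */
    waitpid(p, NULL, 0);
    return 0;
}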
By mid-2013, the Docker toolset that Hykes and his team built began to take off, becoming one of the top trending projects on GitHub and formally launching the Docker brand. Hykes’ focus shifted from DotCloud to Docker and he ultimately spun off the DotCloud business while remaining the CTO of Docker Inc.
Windows Server Containers
During the same period that Docker was gaining notice in Linux circles, the Windows Base team had been looking at ways to isolate and increase the efficiency of Microsoft Azure services that executed customer or third-party code. A Microsoft research prototype code-named “Drawbridge” provided one avenue of investigation; the project had built a process isolation container leveraging a
library OS (bit.ly/2aCOQxP). Unfortunately, Drawbridge had limitations relating to maintainability, performance and application compatibility, making it ill-suited as a general-purpose solution. Another even earlier prototype technology referred to as server silos initially seemed worth investigating. Silos expanded on the existing Windows Job Objects approach, which provides process grouping and resource controls similar to cgroups in Linux (bit.ly/2lK1AbI). What the server silos prototype added was an isolated execution environment that included file system, registry and object namespaces (similar to namespaces in Linux). The server silos prototype had been shelved years earlier in favor of VMs but would be reimagined as the foundation of Windows Server Containers.
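For readers unfamiliar with Job Objects, the following minimal sketch (mine, not from the article) shows the kind of grouping and resource control the silos work built on: it creates a job, caps the job's total memory and assigns the current process to it, so the limit also covers any children the process spawns. The 64MB figure is arbitrary.

/* Sketch: Win32 Job Object grouping with a job-wide memory cap. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE job = CreateJobObject(NULL, NULL);
    if (!job) {
        printf("CreateJobObject failed: %lu\n", GetLastError());
        return 1;
    }

    /* Limit the combined memory of all processes in the job. */
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = {0};
    info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_JOB_MEMORY;
    info.JobMemoryLimit = 64 * 1024 * 1024;
    if (!SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                 &info, sizeof info)) {
        printf("SetInformationJobObject failed: %lu\n", GetLastError());
        return 1;
    }

    /* The current process and its future children join the job. */
    if (!AssignProcessToJobObject(job, GetCurrentProcess())) {
        printf("AssignProcessToJobObject failed: %lu\n", GetLastError());
        return 1;
    }
    return 0;
}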
The server silos prototype code hadn't been looked at in years. It didn't even compile, let alone function; it had been written to prove the technique was viable in Windows and was far from production-ready. The team had a choice: start over from scratch, or attempt to resurrect the prototype and build from there. We chose the latter. When the prototype was first developed, only a small team of developers was proving that the technology was viable, but now the full force of the Windows engineering team was behind the project. Architects and engineers from across Windows were drafted to help: the storage team built the file system virtualization; the networking team built the network isolation; the kernel team built the memory management and scheduling abstractions; and so on.
Some big architectural questions remained; in particular, how would we handle system processes? In Linux, a container often runs just a single process that shares the system services in the kernel with the host and other containers. However, to improve serviceability and security, Windows has been moving code out of the kernel and into user mode processes for many years. This represented an issue for the team: Either we could share all the system services, requiring changes to all the system services to make them aware of containers, or we could start a new copy of the user mode system services in each container. This was a difficult decision. We worried about the density and startup-time impact of starting new instances of all the user mode services in each container. On the other hand, we worried about the complexity and ongoing cost of updating all the system services in Windows, both for us and for developers outside of Windows. In the end we landed on a mix of the two approaches: a select set of services was made container-aware, but most services run in each container.





















































































