The impact on density was minimal because the containers share read-only memory with each other and the host, so only the private memory is per-container. Startup time, however, was a significant challenge that called this decision into question many times: when we first demonstrated Windows Server Containers in the keynote of Build 2015, it took several seconds to start, in large part because of the startup time of the system services. The Windows Server performance team was on the case, though. They profiled, analyzed and worked with teams across Windows to make their services faster and reduce dependencies to improve parallelism. The result of this effort not only made container startup faster but actually improved Windows startup time, as well. (If your Xbox or Surface started booting faster last year, you can thank containers.) Container startup went from about seven to eight seconds to sub-second in less than a year, and this trajectory to reduce startup time continues even today.
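If you want a rough sense of that startup time on your own machine, a quick (if unscientific) check is to time a throwaway container from PowerShell. This is only a sketch; it assumes Docker is installed and that the microsoft/nanoserver image (used here purely as an example) has already been pulled:

  Measure-Command {
    docker run --rm microsoft/nanoserver cmd /c echo started
  }

The elapsed time reported covers creating, starting and tearing down the container on an already-warm host, which is the path the team was optimizing.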
Hyper-V Isolation
Often, the first question I get regarding Hyper-V isolation is something like, “Don’t containers provide isolation already? So why do I need Hyper-V?” Containers do provide isolation, and for most scenarios that isolation is likely completely sufficient. However, the risk is that if an attacker is able to compromise the kernel, it could potentially break out of the container and impact other containers or the host. With kernel exploits being relatively common in Windows (typically several per year), the risk for services like Azure Automation or Azure Machine Learning, which consume and execute end-user or third-party code on a shared infrastructure, is too high to rely on kernel isolation alone. Teams building and operating these types of services either had to manage the density and startup cost of full VMs or build different security and isolation techniques. What was needed was a general-purpose isolation mechanism that was hostile to intruders yet multi-tenant safe: Windows Server Containers with Hyper-V isolation.
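On Windows, the Docker command line exposes this choice through the --isolation flag, so the same image can run either as a regular process-isolated container or inside its own lightweight utility VM with its own kernel. A minimal sketch (the image name is only an example):

  # Shared-kernel (process) isolation: the Windows Server default
  docker run --rm --isolation=process microsoft/nanoserver cmd /c echo hello

  # Hyper-V isolation: the container gets its own Windows kernel
  docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c echo hello

Nothing about the image changes between the two commands; the isolation level is simply a runtime decision.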
The team was already hard at work on Windows Server Containers, and this provided a great experience and management model for teams building the services. By coupling the technology with the well-tested isolation of Hyper-V, we could provide the security required. However, we needed to solve the startup time and density challenges traditionally associated with VMs.
Hyper-V, like most virtualization platforms, was designed to run guests with a variety of OSes, both old and new, with the goal of behaving as much like hardware as possible. To achieve that, most virtualization platforms chose to emulate common hardware. As virtualization became commonplace, however, OSes were “enlightened” (specifically modified to operate well as a guest VM) such that much of the emulation was no longer required. A good example of this is Hyper-V Generation 2 VMs, which discard emulation in favor of improved startup time and performance, but still achieve the same objective of behaving as if the guest were running directly on hardware (bit.ly/2lPpdAg).
For containers, we had a different need and different goals. We didn’t need to run any older OSes, and we knew exactly what the workload inside the VM was going to be: a container. So we built a new type of VM, one that was designed to run a container. To address the need for a fast startup time, we built cloning technology. This was always a challenge for traditional VMs because the OS becomes specialized with things like hostnames and identity, which can’t easily be changed without a reboot. But because containers have their own hostname and identity, that was no longer an issue. Cloning also helped with the density challenge, but we had to go further: We needed memory sharing.
Figure 1 Comparing the Basic Architecture of Containers and Docker Across Windows and Linux
[Figure: The platform-independent components (Docker Client, Docker PowerShell, Docker Swarm and Docker Registry, talking through the REST Interface to the Docker Engine with its libcontainerd, libnetwork, graph and plugins components) sit on top of platform-specific functionality. On Windows: Compute Services; Control Groups (job objects); Namespaces (object namespace, process table, networking); Layer Capabilities (registry, union-like file system extensions); Other OS Functionality. On Linux: Control Groups (cgroups); Namespaces (pid, net, ipc, mnt, uts); Layer Capabilities (union file systems: AUFS, btrfs, vfs, zfs*, DeviceMapper); Other OS Functionality.]
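One practical consequence of the split shown in Figure 1 is that the client-facing surface is the same on both platforms; only the layer beneath the Docker Engine differs. For example, the same CLI query reports which platform-specific backend the daemon is running on (the output shown is illustrative):

  docker version --format "{{.Server.Os}}/{{.Server.Arch}}"
  # windows/amd64 on a Windows host, linux/amd64 on a typical Linux host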