Page 31 - MSDN Magazine, April 2017
There are two approaches to sharing memory. You can look for memory that's common across multiple VMs and effectively de-duplicate it (though memory-randomization technology in most kernels makes this difficult). Or you can follow the same approach the kernel does by separating read-only (public) memory from read-write (private) memory. The latter typically requires that the memory managers in the guest VMs interact with one another, which runs counter to the isolation requirement. However, by changing the way the VMs boot and access files, we found a way where the host doesn't have to trust the guests and the guests don't have to trust each other. Instead of booting from and accessing files on a virtual hard disk, the VM boots and accesses its files directly from the host file system. This means the host can provide the same sharing of read-only (public) memory. This was the key to improving density by several orders of magnitude, and it put us on a path to keep improving density for many years to come.
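The principle behind that design can be sketched outside of Hyper-V entirely: on a typical kernel, read-only, file-backed mappings of the same file are served from the same page-cache pages, so additional mappings cost almost no extra RAM. A minimal Python illustration of the idea (this demonstrates the general OS behavior, not the Windows implementation itself):

```python
import mmap
import tempfile

# Write a stand-in for a read-only system file (e.g. a DLL all guests need).
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"shared-dll-bytes" * 256)
tmp.close()

# Two independent read-only, file-backed mappings of the same file,
# as if two containers were loading the same binary from the host.
f1 = open(tmp.name, "rb")
f2 = open(tmp.name, "rb")
m1 = mmap.mmap(f1.fileno(), 0, access=mmap.ACCESS_READ)
m2 = mmap.mmap(f2.fileno(), 0, access=mmap.ACCESS_READ)

# Both mappings see identical bytes; a typical kernel backs both with the
# same physical page-cache pages, so the second mapping is nearly free.
assert m1[:16] == m2[:16] == b"shared-dll-bytes"
```

Booting guests from a virtual hard disk defeats this, because each disk is a separate private file; booting directly from host files is what lets the host hand out the same read-only pages to every guest.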
The other value we discovered in Hyper-V isolation is that, by running a different kernel for the container, developers building containerized applications on their Windows 10 machines could still run the server kernel, ensuring their applications would work the same way in production as they do on their development machines. Thus, with the Windows 10 Anniversary Update, we enabled Windows Server Containers with Hyper-V isolation and worked with Docker on Docker for Windows to take full advantage of the new technology for developers.
Docker and Windows Server Containers
One question remained: How would users interact with this new platform technology? In the Linux world, Docker had been garnering praise and was quickly becoming the de facto standard for container management. Why not enable users to use Windows Server Containers the same way? That fall I flew down to San Francisco to meet with Docker, unsure what the company would think of a Windows-based container and whether it would be interested in building on top of Windows at all. I was in for a surprise: Solomon thought the Windows container idea was great! But would
the company build on top of it? That conversation changed the face of the project completely. Solomon simply said, "You know Docker is open source, you can add the code to make it work on Windows and we'll help," and we did just that. Since then, John Howard, a software engineer on the Hyper-V team, has become a maintainer of the Docker project and, in fact, has climbed to fourth all-time code contributor (bit.ly/2lAmaZX). Figure 1 shows the basic architecture of containers and Docker across Windows and Linux.
Bringing It All Together
Four months ago at Microsoft Ignite, we launched Windows Server 2016 and announced an expanded partnership with Docker, under which Docker provides its commercially supported Docker Engine at no additional charge to Windows Server customers. Since then, it's been a whirlwind of activity. Customers like Tyco have been using Docker and Windows Server Containers to revolutionize the way they build software and to modernize existing applications, all on the same platform (bit.ly/2dWqIFM). Visual Studio 2017 has fully integrated tooling for Windows and
Linux containers, including F5 debugging, and Visual Studio Code has Dockerfile and compose support baked right in. Both Azure and Amazon's container services have added support for Windows Server Containers, and well over 1 million Windows-based container images have been pulled from Docker Hub. For end-to-end security and orchestration, Docker Datacenter gives developers and sysadmins a platform to build, ship and run distributed applications anywhere. With Docker, organizations shrink application delivery from months to minutes, move workloads frictionlessly between datacenters and the cloud, and achieve 20 times greater efficiency in their use of computing resources.
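The workflow behind those numbers is the ordinary Docker one, with Windows images in place of Linux ones. A minimal sketch of a Windows container Dockerfile, using the 2017-era Docker Hub image name (the feature-install step and image tags here are illustrative assumptions, not taken from the article):

```dockerfile
# Base on the Windows Server Core image (2017-era Docker Hub name).
FROM microsoft/windowsservercore

# Illustrative step: add the IIS role inside the container image.
RUN powershell -Command Add-WindowsFeature Web-Server

EXPOSE 80

# Keep the container running so IIS can serve requests.
CMD ["ping", "-t", "localhost"]
```

Built with `docker build -t iis-demo .` and started with `docker run -d -p 80:80 iis-demo`, this runs with process isolation on Windows Server; adding `--isolation=hyperv` to the run command starts the same image under its own kernel, the Hyper-V isolation mode described earlier.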
When I took on containers I knew it was going to be a high-stress project. I knew it was going to take some long nights, some working weekends and a lot of effort, but it has been worth it because it has helped millions of developers build more apps faster. I also knew it was going to be a lot of fun, and that it had the opportunity to really change the way people developed and ran applications on Windows. It's been more fun than I could have ever expected and, while it was also more work than I anticipated, I wouldn't trade this experience for anything. I recall one weekend early in the project, looking out the window of my office at a gorgeous, sunny summer day as I worked, thinking to myself, "I sure hope people are gonna use this stuff ..."
Taylor Brown is a principal program management lead in the Windows and Devices Group at Microsoft. As a member of the Base Windows engineering team, he's responsible for Windows Server developer strategy, focusing specifically on container technologies, including Windows Server Containers. Brown started his career in Windows working on the 1394/FireWire stack for Windows 2003, then on ACPI/power management for Windows Server 2003 SP1, before joining the newly formed virtual machine team. Since then he has contributed to every VM technology shipped by Microsoft, including Virtual PC, Virtual Server and every version of Hyper-V, making him a recognized industry expert in virtualization technologies. Reach him at taylorb@microsoft.com.
Thanks to the following technical expert for reviewing this article: David Holladay