Page 11 - MSDN Magazine, June 2019
that the DOCKER_REGISTRY variable is replaced at run time with the Docker engine running on my development machine. Then, using that image, it will find the Dockerfile (defined in both Part 1 and Part 2 of this series) to get further instructions about what to do with the image when the container instance is created. Because I didn’t provide a value for the DB_PW environment variable directly in the docker-compose file, I can pass a value in from the shell where I’m running the container or from another source, such as a Docker .env file. I used an .env file in Part 2 to store the key-value pair of DB_PW and my password, eiluj.
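Such an .env file is simply a list of key-value pairs that docker-compose reads from the directory containing the compose file. A minimal sketch, matching the password used in Part 2:

```
DB_PW=eiluj
```

Keep in mind that an .env file holding a real password shouldn’t be committed to source control.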
So now I’m going to tell docker-compose that I want it to also spin up a SQL Server container. The SQL Server for Linux image is an official image in the Microsoft Container Registry (MCR), though it’s also listed on Docker Hub for discoverability. By referencing it here, Docker will first look in the local registry (on the dev machine where I’m working) and, if the image isn’t found there, it will then pull the image from the MCR. See Figure 1 for an example of these changes.
But the new service I added, which I named db, does more than just point to the mssql/server image. Let’s, as they say, unpack the changes. Keep in mind that there’s another modification coming after I work through this step.
The first change is within the dataapidocker service—the original one that describes the container for the API. I’ve added a mapping called depends_on, which contains what YAML refers to as a sequence item, named db. That means that before running the dataapidocker container, Docker will check the docker-compose file for another service named db and will use that service’s details to instantiate its container first.
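The depends_on mapping described above can be sketched like this (the rest of the service body is elided):

```yaml
services:
  dataapidocker:
    # ... existing image/build settings for the API service ...
    depends_on:
      - db    # start the db service's container before this one
```

Note that depends_on controls startup order only; it doesn’t wait for SQL Server inside the db container to actually be ready to accept connections.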
The db service begins by pointing to the SQL Server for Linux image using its official Docker name. This image requires that you pass in two environment variables when running a container—SA_PASSWORD and ACCEPT_EULA—so the service description also contains that information. And, finally, you need to specify the port that the server will be available on: 1433:1433. The first value refers to the host’s port and the second to the port inside the container. Exposing the server through the host’s default port 1433 makes it easy to access the database from the host computer. I’ll show you how that works after I’ve gotten this project up and running.
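Because the ports mapping uses the host:container format, you aren’t tied to the default on the host side. A hypothetical alternative that publishes the containerized server on host port 11433 instead would look like:

```yaml
ports:
  - "11433:1433"   # host port 11433 forwards to SQL Server's port 1433 in the container
```

With that mapping, tools on the host would connect to localhost,11433 (SQL Server uses a comma to separate server and port) rather than the default localhost,1433.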
Figure 2 Defining a Data Volume in Docker-Compose
When it’s time to run this docker-compose file outside Visual Studio, I’ll also need to expose ports from the dataapidocker service. The Visual Studio tooling created a second compose file, docker-compose.override.yml, that Visual Studio uses during development. In that file are a few additional mappings, including a ports mapping for the dataapidocker service. So for now I’ll let the tooling take care of allowing me to browse to the Web site when debugging in Visual Studio.
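A sketch of the relevant portion of a tooling-generated docker-compose.override.yml follows; the exact contents vary by Visual Studio version, and the environment entry here is an assumption:

```yaml
version: '3.4'
services:
  dataapidocker:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"   # publish container port 80 on a randomly assigned host port
```

Listing only the container port (rather than host:container) lets Docker pick an available host port at run time.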
Defining a Separate Volume for Persistent Data
There’s still more work to be done in docker-compose, however. With the existing description, any databases and data will be created inside the same container that’s running SQL Server. That’s probably not a problem for testing, if a clean database is desired for each test run. It’s better practice, however, to persist the data separately. There are a few ways to do this, but I’ll be using what’s called a data volume. Take a look at my blog post at bit.ly/2pZ7dDb, where I go into detail about data volumes and demonstrate how they persist even if you stop or remove the container running the SQL Server.
You can leave instructions in the docker-compose file to specify a volume for persisting the data separately from the SQL Server container, as I’ve done in Figure 2. A volume isn’t the same as a container, so it isn’t another service. Instead, you create a new key called volumes that’s a sibling to services. Within the key, you provide your own name for the volume (I’ve called mine mssql-server-julie-data). There’s no value associated with this key. Naming a volume this way allows you to reuse it with other containers if you need. You can read more about volumes in the Docker reference for docker-compose at docs.docker.com/compose/compose-file or, for more detail, check out Elton Stoneman’s Pluralsight course on stateful data with Docker (pluralsight.pxf.io/yoLYv).
Notice that the db service also has a new volumes mapping. This mapping contains a sequence item in which I’ve mapped the named volume to a target path inside the container where the data and log files will be stored.
Setting Up the Connection String for the Containerized SQL Server
As a reminder, in the previous version of the app, I left a connection string to SQL Server LocalDB in appsettings.Development.json for times I want to test the API running in the Kestrel or local IIS server. The connection string to the Azure SQL database, named ConnectionStrings:MagsConnectionMssql, is stored in an environment variable in the API’s Dockerfile, just above the section that defines the build image. Here are the first few lines of that Dockerfile. The connection string has placeholders for the user id
services:
  dataapidocker:
    [etc]
  db:
    image: mcr.microsoft.com/mssql/server
    volumes:
      - mssql-server-julie-data:/var/opt/mssql/data
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
volumes:
  mssql-server-julie-data: {}