Docker Compose enables you to work with multiple images. It's controlled by a docker-compose instruction file that can trigger and run one or more Dockerfiles, with each Dockerfile controlling its own Docker image. Currently, I have only a single image and I'm going to stick with that. But I can still take advantage of Docker Compose to pass values into the image's environment variables.
Why is this so important? It allows me to keep my secrets out of the Dockerfile and out of the image, as well. I can pass in the secrets when the container instance is starting up, along with any dynamic configuration information, such as varying connection strings. There’s an excellent article on this topic at bit.ly/2Uuhu8F, which I found very helpful.
Using a docker-compose file to coordinate multiple containers is referred to as container orchestration. The Visual Studio tooling for Docker can help with this. Right-click on the project in Solution Explorer, then select Add and choose Container Orchestrator Support. You'll be presented with a dropdown from which you should select Docker Compose and, when prompted, choose Linux as the Target OS. Because the Dockerfile already exists, you'll be asked if you'd like to rename it and create a new Dockerfile. Answer No to that question to keep your existing Dockerfile intact. Next, you'll be asked if you want to overwrite the hidden .dockerignore file. Because you haven't touched that file, either option is OK.
When this operation completes, you'll see a new solution folder called docker-compose with two files in it: .dockerignore and docker-compose.yml. The yml extension refers to the YAML language (yaml.org), a very sleek text format that relies on indentation to express the file schema.
The tooling created the following in docker-compose.yml:
version: '3.4'

services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
It defines only one service, for the dataapidocker project. It specifies that the image name is dataapidocker and points to the Dockerfile to use when it's time to build that image.
There are a lot of ways to use docker-compose to pass environment variables into a container (dockr.ly/2TwfZub). I'll start by putting the variable directly in the docker-compose.yml file. First, I'll add an environment section inside the dataapidocker service, at the same level as image and build. Then, within the new section, I'll define that variable.
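Here's a rough sketch of where that section sits; the variable name and value are placeholders of my own, not the ones this column ends up using:

services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      # hypothetical placeholder entry to show the placement
      - SOME_VARIABLE=some-value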
Figure 2 The Dockerfile for the DataAPIDocker Project with the Connection String in Place

FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
ENV ConnectionStrings:MagsConnectionMssql="Server=tcp:msdnmaglerman.database.windows.net ..."

FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY ["DataAPIDocker/DataAPIDocker.csproj", "DataAPIDocker/"]
RUN dotnet restore "DataAPIDocker/DataAPIDocker.csproj"
COPY . .
WORKDIR "/src/DataAPIDocker"
RUN dotnet build "DataAPIDocker.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "DataAPIDocker.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "DataAPIDocker.dll"]
As explained in the article at bit.ly/2F5jfE8, the last setting read overrides earlier settings, and environment variables are read after appsettings. Therefore, even though there are two ConnectionStrings:MagsConnectionMssql values, the one specified in the Dockerfile is the one being used. If you were running this in Kestrel or IIS, the Dockerfile wouldn't be executed and its environment variables wouldn't exist in Configuration.
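To make that ordering concrete, here's a minimal console sketch of my own (not code from the column; it assumes the Microsoft.Extensions.Configuration, .Json and .EnvironmentVariables packages). In an ASP.NET Core 2.2 app, WebHost.CreateDefaultBuilder registers these providers for you in essentially this order:

using System;
using System.IO;
using Microsoft.Extensions.Configuration;

class ConfigOrderDemo
{
  static void Main()
  {
    // Providers registered later override earlier ones when the same key appears twice.
    var config = new ConfigurationBuilder()
      .SetBasePath(Directory.GetCurrentDirectory())
      .AddJsonFile("appsettings.json", optional: true)  // read first
      .AddEnvironmentVariables()                        // read last, so it wins
      .Build();

    // Inside the container, this prints the value from the Dockerfile's ENV instruction.
    Console.WriteLine(config["ConnectionStrings:MagsConnectionMssql"]);
  }
}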
Creating Placeholders for the Secrets
But the ENV variable isn’t variable. Because it’s hardcoded, it’s only static right now. Also, remember, it still contains my secrets (login and password). Rather than having them in the connec- tion string, I’ll begin by extracting these two secrets into their own ENV variables. For the placeholder names, I’ll use ENVID and ENVPW. Then I’ll create two more ENV variables in the Dockerfile for the user ID and password and, as a first pass, I’ll specify their values directly:
ENV ConnectionStrings:MagsConnectionMssql="Server=tcp:msdnmaglerman.database.windows.net,1433;Initial Catalog=DP0419Mags;User ID=ENVID;Password=ENVPW; [etc...]"
ENV DB_UserId="lerman"
ENV DB_PW="eiluj"
Back in Startup.cs ConfigureServices, I'll read all three environment variables and build up the connection string with its credentials:
// StringBuilder requires a using for System.Text
var config = new StringBuilder(
  Configuration["ConnectionStrings:MagsConnectionMssql"]);
string conn = config
  .Replace("ENVID", Configuration["DB_UserId"])
  .Replace("ENVPW", Configuration["DB_PW"])
  .ToString();
services.AddDbContext<MagContext>(
  options => options.UseSqlServer(conn));
This works easily because all the needed values are in the Dockerfile. But the variables are still static, and the secrets are still exposed in the Dockerfile.
Moving the Secrets out of Dockerfile and into Docker-Compose
Because a Docker container can only access environment variables defined inside the container, my current setup doesn't provide a way to supply the value of the password or the other ENV variables specified in the Dockerfile from outside. But there is a way, one that lets you step up your Docker expertise a little bit further: Docker Compose, which can feed those values into the container's environment variables when the container starts up.
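To preview where this is headed (a sketch only; the column walks through the actual steps, and the credentials shown are the same sample values used above), the two secret ENV lines would come out of the Dockerfile and the docker-compose.yml service would gain an environment section that injects them when the container starts:

version: '3.4'

services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      # values injected at container startup instead of being baked into the image
      - DB_UserId=lerman
      - DB_PW=eiluj

Compose can also pull those values from the shell environment or from a .env file, so the literal strings don't have to live in this file, either.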