Environment-Specific Configuration Files with Bind Mounts on ECS

Some applications still rely on configuration files, or config files for short. This can pose a problem in container-based runtime environments: container images are typically built in advance and stored in a container registry. These images are then run in different stages (e.g., test, staging, and production) and configured via environment variables. Ideally, all stages run the same image, so that you can be sure that an image which worked in the test stage will also work in production. Still, each of these stages comes with its own set of resources, like databases or S3 buckets, and therefore needs a different configuration.

Motivation

Oftentimes, this configuration data can be supplied via environment variables, which are then read by the application when it starts. Numerous libraries exist in various programming languages to parse the environment automatically. Unfortunately, we are not living in an ideal world, and some applications still require configuration to be passed to them as files.

You can, of course, add a runner script that takes environment variables and renders the required config files. This means extending an already existing image and adding another layer with a runner script, which, after rendering the config files, launches the actual application. This launcher script does not have to rely on environment variables alone; it could also pull configuration from the AWS Parameter Store or any other source of information.
For most of these tasks, the runner script needs dependencies, e.g., for rendering templates, that have to be installed into the image.

While this works, you might already think: “Wait, I need to install a bunch of software that is not even needed during the runtime of my container?” and I thought the same.
Adding dependencies on libraries that are not even needed during runtime adds an additional security risk: if your application ever gets compromised, an attacker can exploit security issues in any installed software. That is definitely something you want to avoid. You also probably want small images. And have you ever had dependency conflicts because the library that pulls config data from somewhere depends on the same library that your application uses, just in a different, incompatible version? Dependency hell…

Luckily for us, Amazon ECS allows a single task to be composed of more than one container, similar to pods in Kubernetes. These containers can also share volumes with each other. So, can we run the config rendering script in a different container, have it render the config files into the application container, and exit before our actual application starts? Sure enough, we can! 🙌

The Solution

In this article, I will walk you through a few lines of CDK code that create a sidecar which renders config files and provides them to the main container. The complete project is available on GitHub.

Setup

First of all, we are going to define the base of the task: let's create a simple Fargate task definition and then add our main container to it.

    const taskDef = new ecs.FargateTaskDefinition(
      this,
      'TaskDef',
      {
        cpu: 256,
        memoryLimitMiB: 512,
      },
    );
    
    const mainContainerId = 'Main';
    const main = taskDef.addContainer(
      mainContainerId,
      {
        essential: true,
        image: ecs.ContainerImage.fromAsset('main/'),
        // logGroup is a logs.LogGroup created elsewhere in the stack
        logging: new ecs.AwsLogDriver(
          {
            streamPrefix: 'main',
            logGroup: logGroup,
          },
        ),
      },
    );

In our case, the main container is an essential container. This means that if the container stops for whatever reason, the whole task is terminated. Every task must have at least one essential container defined.

Volumes and Mounts

The application provided in the main container, however, requires a configuration file in /etc/awsconfig. As mentioned before, this file is not baked into the main container image, and we would like to use a different file in each environment. For this purpose, we define a volume in the task definition that is mounted into the main container at /etc/awsconfig:

    const volumeName = 'config';
    taskDef.addVolume({
      name: volumeName,
      host: {},
    });
    main.addMountPoints(
      {
        containerPath: '/etc/awsconfig',
        readOnly: true,
        sourceVolume: volumeName,
      },
    );

The Sidecar

Now that our main container is set up, we need to add a second container that will take care of rendering the configuration.

    const sidecar = taskDef.addContainer(
      'Sidecar',
      {
        essential: false,
        image: ecs.ContainerImage.fromAsset('sidecar/'),
        logging: new ecs.AwsLogDriver(
          {
            streamPrefix: 'sidecar',
            logGroup: logGroup,
          },
        ),
      },
    );

This container will also need some volume configuration. As we already have it set up for the main container, we can simply reuse it from there:

    sidecar.addVolumesFrom(
      {
        readOnly: false,
        sourceContainer: mainContainerId,
      },
    );

This will mount the volume with the same configuration as in the main container, so it will be made available under /etc/awsconfig. As the sidecar container will render a file into this volume, it must be writable.

Setting up Dependencies between the Containers

Next, we need to configure the order in which the containers are started. We want the main container to start only after the config file has been rendered successfully. This can be done with a simple container dependency:

    main.addContainerDependencies(
      {
        container: sidecar,
        condition: ecs.ContainerDependencyCondition.SUCCESS,
      },
    );

You can now run the task in the example project. If you set up the parameter in Parameter Store, the sidecar will download it and render it to a file, and the main container will then echo the file to standard out, so it’s visible in the container’s log output.
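
To make this more concrete, here is a minimal sketch of what the sidecar’s rendering script could look like. This is an illustration only: it assumes the AWS SDK for JavaScript v3, a made-up parameter name, and a made-up file name inside the shared volume; the actual script in the example project may differ.

    // render-config.ts (sketch): fetch a string parameter from the
    // Parameter Store and write it into the shared volume.
    import { SSMClient, GetParameterCommand } from '@aws-sdk/client-ssm';
    import { writeFileSync } from 'fs';

    const PARAMETER_NAME = process.env.PARAMETER_NAME ?? '/demo/config'; // assumed name
    const CONFIG_PATH = '/etc/awsconfig/config'; // assumed file inside the shared volume

    async function main(): Promise<void> {
      const ssm = new SSMClient({});
      const result = await ssm.send(
        new GetParameterCommand({ Name: PARAMETER_NAME, WithDecryption: true }),
      );
      // Writing the file and exiting with code 0 satisfies the SUCCESS
      // dependency condition, so the main container is allowed to start.
      writeFileSync(CONFIG_PATH, result.Parameter?.Value ?? '');
    }

    main().catch((err) => {
      console.error('Rendering the config file failed', err);
      process.exit(1); // a non-zero exit keeps the main container from starting
    });

Note that for this API call the task role needs permission to read the parameter (ssm:GetParameter).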

Some Words on Using Secrets

You should not fetch Secrets Manager secrets from within your config rendering script. If you did that, the whole task would need access to the secrets via the IAM task role. Instead, you can inject secrets directly into the sidecar container using the built-in support for Secrets Manager in ECS. The same works with Systems Manager parameters, but for the sake of this example, we access the Parameter Store via the API.
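
To sketch what that could look like in the CDK code from above, the sidecar container definition can receive a secret via its secrets property. The secret name and the environment variable below are made up for illustration and are not part of the example project:

    // Hypothetical secret, for illustration only; assumes
    // import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';
    const apiSecret = secretsmanager.Secret.fromSecretNameV2(
      this,
      'ApiSecret',
      'demo/api-key',
    );

    const sidecar = taskDef.addContainer(
      'Sidecar',
      {
        essential: false,
        image: ecs.ContainerImage.fromAsset('sidecar/'),
        logging: new ecs.AwsLogDriver(
          {
            streamPrefix: 'sidecar',
            logGroup: logGroup,
          },
        ),
        // The secret value is injected as an environment variable into the
        // sidecar only; the main container never sees it, and the task role
        // does not need permission to read it.
        secrets: {
          API_KEY: ecs.Secret.fromSecretsManager(apiSecret),
        },
      },
    );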

Conclusion

This is all the magic needed. 🧙🏼
After these changes, your application container doesn’t need any additional dependencies for reading configuration data. Now it’s up to you to write a config rendering script that could, for instance, fill a template with variables provided in the environment or the Parameter Store. You could also download files from S3 or use a REST service to obtain configurations. In the example project, we download a string parameter from the Parameter Store, but you are free to implement whatever comes to your mind.

The full example project can be found on GitHub.

Robert is a Cloud Consultant at superluminar and holds the AWS Certified Data Analytics Specialty certification. He writes here about AWS-specific topics.