Direct connection to a docker container with SSH

Łukasz Pawłowski
Published in codeburst
7 min read · Apr 27, 2020

Web developers, testers, and ops need to run scripts or check logs on a server. They probably use docker or another virtualization tool for the local environment. In some cases, the same settings and virtualization are used in the test and production environments.

If you have access to the host, you can easily connect to a container with one of the docker commands. Let’s assume that you used an image containing bash, and your container name is “app_container”. You can use:

docker exec -it app_container bash

If you use docker-compose, the situation is similar. Let’s say that your service is named “app_container”. You can connect to it with:

docker-compose exec app_container bash

But what if you don’t have access to the host? How do you connect to a docker container then? Let me explain what we did and why we even had such a problem.

Why we looked at the problem

First, let’s look at some general requirements in our company. In most projects, we have four primary environments: dev, test, preview, and production. Dev is the local developer’s environment. Test is automatically updated by Jenkins when code is merged to the dev branch in the git repo. Preview and production deployments are triggered by hand. Most of this is automated. But sometimes a dev/tester/ops person needs to call some commands on a server or check logs. To do that, he/she needs to ssh to the server and run commands in the terminal.

Now let’s consider our infrastructure. For preview and tests, we have private servers with Ubuntu. We do not deploy an application directly onto our servers. Instead, we use docker to unify server configuration. For each system on our server, we create a separate directory. That root directory contains sub-directories attached to the container as volumes. One of those sub-directories is our codebase. With docker, we run each app separately. This way, we have multiple applications with different server configurations on one physical server. Ops can easily change the configuration for a specific application without interfering with other apps.

So, where is the problem? On each project, we have a different team. Each project is different, and sometimes even on preview we can have sensitive data (clients do bizarre things even on preview servers). We need to restrain access to such data. Also, we want to control what people do on servers. We do not want to give everyone full access to everything. That is why we decided that developers/testers will get access directly to containers instead of the main server with all containers. Ops still have access to the main servers.

Now that we know why we need ssh in docker, let’s make it happen.

Install ssh server on docker

There are two things we need to configure: the host and the container. First, we’ll take care of the container.

We’ll use the php:7.3-apache image. It does not have an ssh server installed, so we need to add it. We create a Dockerfile:

FROM php:7.3-apache
RUN apt-get -y update && \
    apt-get -y --no-install-recommends install --fix-missing openssh-server && \
    rm -rf /var/lib/apt/lists/*

The first line defines our base image. Next, we update the list of repositories and install openssh-server. At the end, we clear the list of repos to make our docker image smaller.

We also want to use docker-compose, so let’s create docker-compose.yml:

version: '3.1'
services:
  webserver:
    build:
      context: ./
    restart: always
    volumes:
      - ./:/var/www/html

We won’t add any more services right now. We want to focus on the webserver.

If we run `docker-compose up`, our web server will start and we will have an ssh server installed.

Run ssh on start of container

But there is an issue: our ssh server does not start automatically, so whenever we restart the container, we would have to start it by hand.

We create ./docker/start.sh file:

#!/bin/bash
service ssh start

Next, we need to adjust our Dockerfile to make sure we run that script when the container starts. Let’s add one line:

CMD chmod 0755 /var/www/html/docker/start.sh && /var/www/html/docker/start.sh && apache2-foreground

Here we do three things: make sure that we can run the script, run the script, and start the apache server.
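A container stops when the process started by CMD exits, which is why everything is chained with `&&` and apache2-foreground comes last. The pattern can be tried outside docker; in this sketch, `echo` stands in for apache2-foreground and the paths are throwaway values for the demo:

```shell
# Write a stand-in start script (our real one starts the ssh service).
cat > /tmp/start-demo.sh <<'EOF'
#!/bin/bash
echo "ssh service would start here"
EOF

# Same chain as in the Dockerfile CMD: make it executable, run it,
# then hand over to the long-running foreground process.
chmod 0755 /tmp/start-demo.sh && /tmp/start-demo.sh && echo "foreground process keeps the container alive"
```

If any step in the chain fails, the steps after it never run, which is exactly the behavior we want on container start.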

Base security concern — disable root ssh

Nice, we configured ssh, and we should be able to connect to it with the root account. But it is not a good idea to make root available via ssh. We’ll add a new user and disable root ssh.

Let’s start with disabling root. We create a new file ./docker/sshd_config; we can copy the original from the container. We need to find the line with PermitRootLogin and change it. If such a line does not exist, we need to add it. You should have a line like this:

PermitRootLogin no

Now you need to change docker-compose.yml. In the volumes for webserver, add:

- ./docker/sshd_config:/etc/ssh/sshd_config

After restarting ssh, root should not be able to log in.
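If you want to lock the daemon down a bit more, sshd_config can also whitelist the single account we are about to create. These extra lines are an optional hardening step, not something the rest of this article depends on (webssh is the user added in the next section):

```
PermitRootLogin no
AllowUsers webssh
PasswordAuthentication yes
```

AllowUsers rejects every account not listed, so remember to extend the list if you ever add more ssh users.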

Add ssh user

After disabling the root user, we need to create a new one for ourselves. We’ll call it webssh. We want to define the password per environment, so we’ll keep it in a .env file, which should be added to .gitignore. We’ll create and configure the user in the image.

Let’s start by creating the .env file:

SSH_PASS=somesshpass

The .env file will be used automatically by docker-compose. We adjust our docker-compose.yml to pass parameters to the build:

version: '3.1'
services:
  webserver:
    build:
      context: ./
      args:
        SSH_PASS: ${SSH_PASS}
    restart: always
    volumes:
      - ./:/var/www/html
      - ./docker/sshd_config:/etc/ssh/sshd_config

In the end, we need to add our user to the Dockerfile. First, we add a new line to accept the argument and give it a default value:

ARG SSH_PASS=somesshpass

Before the last line (the line with CMD), we add:

RUN useradd -m -s /bin/bash -p $(openssl passwd -1 $SSH_PASS) webssh
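The `openssl passwd -1 $SSH_PASS` part turns the plain-text password into an MD5-crypt hash, which is the pre-hashed format `useradd -p` expects. You can inspect the result locally with any throwaway password:

```shell
# -1 selects the MD5-crypt scheme; the output starts with the $1$ marker,
# followed by a random salt and the hash, so every run looks different.
HASH=$(openssl passwd -1 somesshpass)
echo "$HASH"
```

Newer openssl versions also accept `-6` for SHA-512 crypt, which is a stronger choice if your base image supports it.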

After we rebuild our image and run a container, we should be able to ssh in as the webssh user with the password somesshpass.

Make sure it works with automation

In many applications, there are problems connected with users, groups, file ownership, etc. For example, you can run a command-line script that logs errors to a log file. That file will be owned by root or by your new ssh user. When apache later runs scripts and needs to log something to the same file, we can get an error saying the apache user has no permissions for the file.

First of all, you need to check the apache user’s group. In our case, it is www-data. Now we need to add our ssh user and the root user to that group. We add a new line in the Dockerfile:

RUN usermod -g www-data webssh && usermod -a -G www-data,root root

Now our user, root user, and apache user are in the same group.

The next step is to make sure that files have the correct owners and permissions. Because we use auto-update, and we can’t be sure that ownership and permissions on files do not change (or that new files are not created), we need to adjust the script that starts our container. You need to verify the requirements for your application and adjust this to your needs. Let’s simplify it and only change the group ownership of all files. We add a new line in ./docker/start.sh:

chown -R :www-data /var/www/html
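The leading colon in `:www-data` tells chown to change only the group and leave the owner untouched. The effect is easy to see on a scratch directory without root, substituting your own primary group for www-data (the paths here are placeholders for the demo):

```shell
# Create a scratch file, then change only its group ownership;
# the owning user stays the same, only the group column changes.
mkdir -p /tmp/chown-demo && touch /tmp/chown-demo/app.log
chown -R ":$(id -gn)" /tmp/chown-demo
ls -l /tmp/chown-demo/app.log
```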

Host configuration

So we are almost there. Right now, we can ssh to the container as the new user and make changes without breaking the application. There is one last thing. So far, we were able to ssh only from the host. We want to make sure that our team can ssh from their machines directly to the container.

First of all, we need to map a host port to the container port. We’ll again use the .env file to make it independent for each environment. In .env we add a new line:

HOST_SSH_PORT=14403

In our docker-compose.yml, for webserver we add the port mapping:

    ports:
      - "${HOST_SSH_PORT}:22"

When we rebuild the container, we should be able to ssh to it with the host address and port 14403. For example, if our host is available under the domain “my-test-ssh.com”, we should be able to run `ssh -p 14403 webssh@my-test-ssh.com`.

Here are two things worth mentioning:

  • make sure that the selected port is open and available on your host
  • if DNS points to another Internet Gateway (instead of your host), make sure to configure that gateway to redirect traffic on the selected port to your host
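Once the port mapping works, each team member can save themselves from typing the port on every connection by adding a host alias to their ~/.ssh/config. The host name and port below are the example values from this article, and the alias is just a name of your choosing:

```
Host my-test-app
    HostName my-test-ssh.com
    Port 14403
    User webssh
```

With that in place, `ssh my-test-app` is enough.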

Summary

As you can see, we are able to give our team ssh access directly to the container. Of course, in real life our docker files would be much more complex, and you would need to adjust the configuration for your application and security requirements. But I hope this short article explained the basic idea.

Let’s summarize it one more time:

  • we installed an ssh server and made sure it runs when the container is restarted
  • we exposed configuration files to ops and mounted them as volumes
  • we exposed port 22 on the container and mapped it to a selected port on the host
  • we disabled root login over ssh and created a new user with the correct permissions

Here is our Dockerfile:

FROM php:7.3-apache
ARG SSH_PASS=pnvKZE9O9p6M
RUN apt-get -y update && \
    apt-get -y --no-install-recommends install --fix-missing openssh-server && \
    rm -rf /var/lib/apt/lists/*
RUN useradd -m -s /bin/bash -p $(openssl passwd -1 $SSH_PASS) webssh
RUN usermod -g www-data webssh && usermod -a -G www-data,root root
CMD chmod 0755 /var/www/html/docker/start.sh && /var/www/html/docker/start.sh && apache2-foreground

Here is our docker-compose.yml:

version: '3.1'
services:
  webserver:
    build:
      context: ./
      args:
        SSH_PASS: ${SSH_PASS}
    restart: always
    volumes:
      - ./:/var/www/html
      - ./docker/sshd_config:/etc/ssh/sshd_config
    ports:
      - "${HOST_SSH_PORT}:22"

Our example .env would be:

HOST_SSH_PORT=14403
SSH_PASS=somesshpass

Our ./docker/start.sh looks like this:

#!/bin/bash
service ssh start
chown -R :www-data /var/www/html

And here is ./docker/sshd_config:

PermitRootLogin no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server


Web developer, tech advisor, manager, husband & father. Tech Manager at Boozt