PostgreSQL, also referred to as Postgres, is a leading object-relational database system. It’s popular because of its high level of compliance with the SQL standard and inclusion of additional features that simplify working with complex datasets at scale.
PostgreSQL uses a traditional client-server architecture so you need to run it independently of your application’s code. In this guide, you’ll deploy a PostgreSQL server instance as a Docker container. This avoids adding packages to your host machine and helps to isolate your database from the other parts of your stack. Make sure you’ve got Docker installed before you continue.
Getting Started
PostgreSQL has an official image on Docker Hub which is available in several different variants. Tags let you select between major PostgreSQL versions from v9 to v14 and choose the operating system used as the base image. Alpine, Debian Stretch, and Debian Bullseye are offered.
For the purposes of this tutorial, we’ll use the postgres:14 tag which provides PostgreSQL 14 atop Bullseye. You’re free to select a different version to suit your requirements.
Start a PostgreSQL container using the docker run command:
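A command along the following lines matches the flags explained below (the container name is illustrative, and you should substitute your own password):

```shell
docker run -d \
  --name postgres \
  -p 5432:5432 \
  -v postgres:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=<your_password> \
  postgres:14
```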
You must supply a value for the POSTGRES_PASSWORD environment variable. This defines the password which will be assigned to Postgres’ default superuser account. The username defaults to postgres but can be changed by setting the POSTGRES_USER environment variable.
The -v flag mounts a Docker volume to the PostgreSQL container’s data directory. A named volume called postgres is referenced; Docker will create it, or reattach it if it already exists. You should use a volume to store your database outside the container. Without one, you’ll lose your data when the container is deleted.
PostgreSQL listens on port 5432 by default. The container port is bound to port 5432 on your Docker host by the -p flag. The -d flag is used to start the container in detached mode, effectively making it a background service that keeps running until stopped with docker stop.
Supplying the Password as a File
If you’re uncomfortable supplying your superuser password as a plain-text CLI flag, you can inject it as a file via a volume instead. You should then set the POSTGRES_PASSWORD_FILE environment variable to give Postgres the path to that file:
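As a sketch, assuming your password lives in a local file called password.txt (the file name and in-container path are illustrative):

```shell
docker run -d \
  --name postgres \
  -v postgres:/var/lib/postgresql/data \
  -v "$(pwd)/password.txt:/run/secrets/postgres-password" \
  -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-password \
  postgres:14
```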
This technique also works for POSTGRES_USER and other supported environment variables.
Connecting to Your Database
As PostgreSQL was bound to port 5432 above, you could connect to your database on localhost:5432 from any compatible client. Use the credentials you assigned as environment variables when starting the container.
The Docker image also includes the psql binary which you can invoke with docker exec. Use this to quickly interact with your database from a PostgreSQL shell within the container.
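Assuming your container is named postgres and you kept the default postgres username, a shell session can be opened like this:

```shell
docker exec -it postgres psql -U postgres
```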
Connecting From Other Docker Containers
Creating a Docker network is the preferred way to access PostgreSQL from other containers on the same host. This avoids binding the Postgres server’s port and potentially exposing the service to your host’s wider network.
Create a Docker network:
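The network name used here is illustrative; any name will do as long as you use it consistently:

```shell
docker network create postgres-net
```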
Start your Postgres container with a connection to the network by using the --network flag with docker run:
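A sketch of the command, reusing the illustrative network name from above:

```shell
docker run -d \
  --name postgres \
  --network postgres-net \
  -v postgres:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=<your_password> \
  postgres:14
```

Note the absence of a -p flag: the database will only be reachable from other containers on the same network, not from your host.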
Now join your application container to the same network:
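The application container name and image here are hypothetical stand-ins for your own:

```shell
docker run -d \
  --name my-app \
  --network postgres-net \
  my-app-image:latest
```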
The containers in the network can reach Postgres using the postgres hostname, as this is the name assigned to the Postgres container. Use port 5432 to complete the connection.
Configuring PostgreSQL
You can pass PostgreSQL server options using -c flags after the image name in your docker run command:
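For example, to raise the connection limit and shared buffer size (the particular options and values are illustrative):

```shell
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=<your_password> \
  postgres:14 \
  -c max_connections=200 \
  -c shared_buffers=256MB
```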
Everything after the image name gets passed to the command started in the container. For the Postgres image, that command is the PostgreSQL server binary.
You can use a custom config file when you’re setting the values of several options. You’ll need to use another Docker volume to mount your file into the container, then supply one -c flag to instruct Postgres where to look:
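A sketch of the command, mounting a postgres.conf file from your working directory:

```shell
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=<your_password> \
  -v "$(pwd)/postgres.conf:/etc/postgresql/postgres.conf" \
  postgres:14 \
  -c config_file=/etc/postgresql/postgres.conf
```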
This example uses a Docker bind mount to get the postgres.conf file in your working directory mounted into the container’s /etc/postgresql directory. For a reference of the options you can set with binary flags or config file directives, refer to the PostgreSQL documentation.
Seeding the Database
The Docker image supports seed files placed into the /docker-entrypoint-initdb.d directory. Any .sql or .sql.gz files will be executed to initialize the database. This occurs after the default user account and postgres database have been created. You can also add .sh files to run arbitrary shell scripts. All scripts are executed in alphabetical order.
This mechanism means all you need to seed your database is a set of SQL or shell scripts named in the correct sequential order. Mount these into your new container using a -v flag with docker run:
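Assuming your scripts live in a local directory called init-scripts (the directory name is illustrative), the command could look like this:

```shell
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=<your_password> \
  -v postgres:/var/lib/postgresql/data \
  -v "$(pwd)/init-scripts:/docker-entrypoint-initdb.d" \
  postgres:14
```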
The initialization scripts will only be used when the Postgres data directory is empty. For practical purposes, that means they’ll run the first time the container starts with a new empty volume attached.
Creating a Custom Database Image
You could choose to encapsulate your config file and initialization scripts in your own Docker image. This would let anyone with access to the image spin up a new PostgreSQL instance that’s preconfigured for your application. Here’s a simple Dockerfile which you could use:
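A minimal sketch, assuming a postgres.conf file and an init-scripts directory alongside the Dockerfile (both names are illustrative):

```dockerfile
FROM postgres:14

# Embed the custom server config and seed scripts into the image
COPY postgres.conf /etc/postgresql/postgres.conf
COPY init-scripts/ /docker-entrypoint-initdb.d/

# Start the server using the embedded config file
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgres.conf"]
```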
Build your custom image:
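The image tag used here is a hypothetical example:

```shell
docker build -t my-postgres:latest .
```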
The build instructions in the Dockerfile will copy the PostgreSQL config file and initialization scripts from your working directory and embed them into the container image. Now you can start a database container without manually supplying the resources:
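Using the illustrative tag from the build step, starting a container now only requires the runtime flags:

```shell
docker run -d \
  --name postgres \
  -p 5432:5432 \
  -v postgres:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=<your_password> \
  my-postgres:latest
```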
Should You Containerize Your Production Database?
It can be difficult to decide whether to run a database in Docker. Containerizing PostgreSQL makes for an easier setup experience but is sometimes more challenging to maintain. You need to take care when managing your container to avoid data loss in the future. Docker also adds a modest performance overhead which is worth considering when you anticipate your database will be working with very large data volumes.
Docker’s benefits are increased portability, ease of scaling, and developer efficiency. Containerizing your database lets anyone spin up a fresh instance using Docker, without manually installing and configuring PostgreSQL first. Writing a Dockerfile for your PostgreSQL database that adds your config file and SQL seed scripts is therefore a good way to help developers rapidly start new environments.
Summary
PostgreSQL is an advanced SQL-based database engine that adds object-relational capabilities. While you may choose to run a traditional deployment in production, using a containerized instance simplifies setup and helps developers quickly spin up their own infrastructure.
The most critical aspect of a Dockerized deployment is to ensure you’re using a volume to store your data. This will allow you to stop, replace, and update your container to a later image version without losing your database. Beyond storage you should assess how you’re going to connect to Postgres and avoid binding ports to your host unless necessary. When connecting from another container, it’s best to use a shared Docker network to facilitate access.