One of the most frustrating moments in a development project is spending too much time on the environment setup (especially when it fails). Let me share how I'm using Docker to improve this!
Last week, I resumed the development of a project in which I faced an excessively complicated setup.
For starters, the project has some external dependencies (S3 and integrations with other custom APIs) that are not trivial to set up. Even with good documentation, it's easy to accidentally skip a step, causing everything to break. In just one week, I ran into several issues of this kind.
For these reasons (and others, like being able to run MSSQL on any OS), I'm excited about using Docker as a controlled development environment.
I have a confession to make: I have neither Postgres nor MySQL installed on my Mac. Not even Redis. And I don't plan to install them at all.
The first thing I do when starting a project is set up the services using Docker Compose. We start with a docker-compose.yml file:
version: '3.4'
services:
  postgres:
    image: postgres:11.1
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - 5432:5432
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis:/data
    ports:
      - 6379:6379
volumes:
  postgres:
  redis:
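The named volumes at the bottom keep the data around between restarts. If you're curious, you can list them once the services are up; Compose prefixes them with the project name, typically the directory name (my-blog here is just a placeholder):

$ docker volume ls
DRIVER              VOLUME NAME
local               my-blog_postgres
local               my-blog_redis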
Then I add some scripts to package.json:
"scripts": {
"docker:start": "docker-compose up -d",
"docker:stop": "docker-compose down",
"docker:purge": "docker-compose down --volumes",
"docker:logs": "docker-compose logs -f"
},
With this setup, when I want to start working on the project, I run:
$ yarn docker:start
To check that everything is running, I use docker ps:
$ docker ps
CONTAINER ID IMAGE ... PORTS
bcfecc563fd5 redis:3.2-alpine 0.0.0.0:32769->6379/tcp
cb6eb58dfaa9 minio/minio 0.0.0.0:32770->9000/tcp
343aa530bbe9 kartoza/postgis:9.6-2.4 0.0.0.0:32768->5432/tcp
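If you want to check that Postgres is actually accepting connections (and not just that its container is up), you can run pg_isready inside the container; a quick sanity check, assuming the postgres service name from the compose file above:

$ docker-compose exec postgres pg_isready -U postgres
/var/run/postgresql:5432 - accepting connections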
At the end of the day (or when I switch projects, more on that below), I stop the services using:
$ yarn docker:stop
One important thing to notice is that, in this configuration, containers expose their ports to the host machine. You can access the services from your app exactly as if they were installed locally.
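For example, you can connect from plain Ruby just as you would with a local install. A minimal sketch using the pg gem (the postgres:11.1 image accepts the default postgres user without a password unless you configure one):

require "pg"

# Connects to the containerized Postgres exactly as if it were local
conn = PG.connect(host: "localhost", port: 5432, user: "postgres")
puts conn.exec("SELECT version()").first["version"]
conn.close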
But this has two important consequences:
- The ports need to be free on your host, so a locally installed Postgres or Redis would collide with the containers.
- Two projects that expose the same default ports cannot run at the same time.
Both problems can be solved by changing the port mapping in the Docker Compose configuration:
postgres:
  image: postgres:11.1
  volumes:
    - postgres:/var/lib/postgresql/data
  ports:
    - 55432:5432
That maps the container's port 5432 to port 55432 on localhost, which can be used to avoid port collisions.
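Your app then just needs to point at the remapped port; with the environment-based configuration described below, the connection URL would become:

DATABASE_URL=postgres://postgres:postgres@localhost:55432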
In order to work with this setup, the Rails application must implement a twelve-factor compatible config. That means storing the configuration in environment variables. For Rails, I use the dotenv-rails gem:
# Gemfile
group :development, :test do
  gem 'dotenv-rails'
  ...
end
And then create a gitignored .env file with the following configuration:
DATABASE_URL=postgres://postgres:postgres@localhost:5432
REDIS_URL=redis://localhost:6379/
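Since .env is gitignored, it's worth committing a .env.sample with the same keys so new developers know what to fill in (the database.yml comment below points readers to it); for example:

# .env.sample: copy to .env and adjust if needed
DATABASE_URL=postgres://postgres:postgres@localhost:5432
REDIS_URL=redis://localhost:6379/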
With this configuration, no database.yml file is strictly required (although we would need to add the database name to the DATABASE_URL variable). Still, I think it is better to provide one and use it as documentation, so pretty please, add it to git:
# Database is configured using the DATABASE_URL environment variable.
# See the .env or .env.sample file
development:
  database: my_blog_development
production:
  database: my_blog_production
test:
  database: my_blog_test
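With DATABASE_URL pointing at the container, the usual Rails tasks work unchanged; for example:

$ yarn docker:start
$ bin/rails db:create db:migrate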
Finally, if we want to use Redis, we need to install the gem first and load the configuration in an initializer:
# config/initializers/redis.rb
Redis.current = Redis.new(url: ENV["REDIS_URL"])
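A quick way to verify the wiring is from a Rails console (the key and value here are just for illustration):

# rails console
Redis.current.set("greeting", "hello from docker")
Redis.current.get("greeting") # => "hello from docker"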
If you want to replace your S3 instance with Minio, just add it to the services in the docker-compose.yml file:
  minio:
    image: minio/minio
    volumes:
      - minio:/data
    environment:
      MINIO_ACCESS_KEY: my-blog
      MINIO_SECRET_KEY: 2NVQWHTTT3asdasMgqapGchy6yAMZn
    ports:
      - 9000:9000
    command: server /data

volumes:
  postgres:
  redis:
  minio:
And configure Active Storage using the config/storage.yml file (also included in git):
default: &default
service: S3
region: <%= ENV["AWS_REGION"] %>
access_key_id: <%= ENV["AWS_ACCESS_KEY_ID"] %>
secret_access_key: <%= ENV["AWS_SECRET_ACCESS_KEY"] %>
bucket: <%= ENV["S3_BUCKET"] %>
development:
<<: *default
bucket: my-blog-development
endpoint: <%= ENV["S3_ENDPOINT"] %>
force_path_style: <%= ENV["S3_FORCE_PATH"] %>
production:
<<: *default
bucket: l-photo-booth-production
test:
service: Disk
root: <%= Rails.root.join('tmp/storage') %>
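With that in place, Active Storage talks to Minio in development exactly as it would talk to S3 in production; a minimal sketch, assuming a hypothetical User model:

# app/models/user.rb
class User < ApplicationRecord
  has_one_attached :avatar
end

# e.g. from the console: the file ends up in the Minio bucket
user = User.first
user.avatar.attach(io: File.open("photo.jpg"), filename: "photo.jpg")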
Notice that, unlike the real S3, Minio needs two additional variables (the endpoint and the path-style flag). You also need to add the following lines to the .env file:
AWS_REGION=eu-west-1
AWS_ACCESS_KEY_ID=my-blog
AWS_SECRET_ACCESS_KEY=2NVQWHTTT3asdasMgqapGchy6yAMZn
S3_BUCKET=my-blog-development
S3_FORCE_PATH=true
S3_ENDPOINT=http://localhost:9000
Run yarn docker:start and connect to http://localhost:9000 to create your bucket.
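If you prefer not to click through the web UI, you can also create the bucket from code; a sketch using the aws-sdk-s3 gem against the local Minio endpoint:

require "aws-sdk-s3"

client = Aws::S3::Client.new(
  endpoint: ENV["S3_ENDPOINT"],
  access_key_id: ENV["AWS_ACCESS_KEY_ID"],
  secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
  region: ENV["AWS_REGION"],
  force_path_style: true # so the SDK uses http://localhost:9000/bucket URLs
)
client.create_bucket(bucket: ENV["S3_BUCKET"])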
This post is long enough already. In this part, we saw how to create the services your app depends on and how to configure everything. In the next part, I'll talk about how to run the application itself inside a container and what possibilities that opens up.
Stay tuned!