Any software developer will tell you that running applications locally during development is crucial. Due to the wide variety of available operating systems, programming languages, and web frameworks, this isn't always so straightforward. In our development firm alone, we regularly use multiple programming languages and a slew of frameworks; depending on each client's requirements, we typically gravitate towards Django (Python), Phoenix (Elixir), Ruby on Rails (Ruby), or Gin (Go).
Is it really that bad?
You may have Sarah, who loves developing on her Mac, whereas Frank has been a Windows power user since the late 90s. Giving both developers an identical setup is no easy task due to the inherent differences between the operating systems alone, not to mention their code editors of choice: Sarah is all about Vim, while Frank is on the opposite end with a diehard passion for Visual Studio and only Visual Studio. Of course, you could institute a single setup and force all of your developers to use it, but that sure seems tyrannical.
Obviously this is a problem, but what can you do?
We've been there and know the pain that comes along with it. Fortunately, one of our core software development principles is that all of our applications must be able to run locally on any developer's machine, no matter the aforementioned differences in workstation technologies.
We accomplish this by integrating containerization into all of our software development projects. By utilizing containers, we're able to deliver on a "develop anywhere, run anywhere" mantra.
What are containers?
A container packages application code and all of its dependencies so that the application can run quickly and reliably across computing environments. Multiple containers can run on the same machine and share the host's OS kernel without colliding or conflicting with one another, because each container runs as an isolated process.
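You can see that shared kernel for yourself on a Linux host (a quick sketch; the alpine image is just an arbitrary, tiny example):

# On a Linux host, a container reports the same kernel as the host,
# because containers virtualize the OS rather than the hardware.
uname -r                           # host kernel version
docker run --rm alpine uname -r    # same kernel version, from inside a container

(On macOS or Windows, Docker runs containers inside a lightweight Linux VM, so the second command reports that VM's kernel instead.)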
This sure sounds like Virtual Machines (VMs)
Okay, we can see how this can be confusing. While VMs, such as those provided by VirtualBox, have been around for a while, the two technologies really are different.
One of the biggest differences between containers and VMs is the amount of space needed to manage and run each option. A VM image will typically weigh in at many gigabytes, occupying a much larger footprint on a host system. With containers, we're able to work from a comparable image with a much, much smaller footprint, sometimes only a few megabytes in size.
While the two technologies offer essentially the same resource isolation and allocation benefits, they each accomplish it differently: containers virtualize the host operating system, whereas VMs virtualize the hardware itself.
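The size difference is easy to check yourself. Pulling the Alpine Linux base image our example below builds on shows just how small a usable image can be:

docker pull alpine
docker images alpine    # the SIZE column shows an image of only a few megabytes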
Are all containers the same?
Just as there are many different software frameworks to choose from, there are also multiple options for containerization. Our software developers utilize Docker as the container infrastructure of choice; we once ventured down the Vagrant path, but fortunately, the software development industry drifted towards Docker.
With Docker, we get started by creating what's called a Dockerfile, which can be thought of as a set of instructions similar to steps in a recipe. Included below is an example Dockerfile for a Django application.
# Choose the base image (since we're using Django, we'll use Python 3.8 based off of Alpine Linux)
FROM python:3.8-alpine
# Expose the default Django WSGI server port for browser access
EXPOSE 8000
# Set our working directory to our application directory
WORKDIR /app
# Copy over our requirements file containing our desired Python packages
COPY requirements.txt /app
# Install the defined Python packages
RUN pip3 install -r requirements.txt
# Copy our application code from the current directory to our app directory in the container
COPY . /app
# Launch the development server to access the application in the browser
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
We can take it a step further by utilizing Docker Compose to manage multiple services (containers). We'll stick with the default docker-compose.yml file for containing our service configurations. To help illustrate multiple services, we'll also include Mailhog for sandboxing our email delivery on localhost.
version: '3.7'
services:
  app:
    container_name: app
    build:
      context: .
    ports:
      - 8000:8000
  mailhog:
    container_name: mailhog
    image: mailhog/mailhog
    ports:
      - 1025:1025
      - 8025:8025
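One wiring detail worth noting: for Django to actually hand its outgoing mail to Mailhog, the project's settings must point at the mailhog service, which Compose makes resolvable by its service name on the shared network. A minimal sketch, assuming Django's default SMTP email backend:

# settings.py (development settings; assumes the default SMTP backend)
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'mailhog'  # Compose service names double as hostnames
EMAIL_PORT = 1025       # Mailhog's SMTP listener (see the note below)
EMAIL_USE_TLS = False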
Now we can run docker-compose up -d to launch our two defined services in a detached state. We're then able to load our browser and access our new Django application at http://localhost:8000 and Mailhog* at http://localhost:8025.
*You may have noticed the 1025 port definition. This replaces the default SMTP port (port 25) to avoid conflicts with a local mail server, if one is installed.
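From there, day-to-day work comes down to a couple of commands with the same tool:

docker-compose logs -f app   # follow the Django development server's output
docker-compose down          # stop and remove both containers when you're done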
Still have questions?
We've touched on a few concepts above relating to developing and running applications in containers, and more specifically with Docker, our container technology of choice.
If you haven't already integrated containers into your workflow, or have questions about an existing integration, reach out to us today to learn how we can help improve the consistency of your application and software development.