Dockerizing a Django App

To build a Django project, you typically rely on off-the-shelf solutions in the form of libraries and dependencies.

This is usually not an issue, and these dependencies are commonly documented in a requirements.txt file.

The trouble starts when you share the entire project with someone who wants to run and test it: unfortunately, they have to repeat the setup from scratch every time you make significant changes to the libraries and dependencies.

What is Docker?

This is where containerization and Docker come in. Docker is an incredibly popular containerization platform that solves the libraries and dependency issue once and for all.

But its best feature? Regardless of host or underlying infrastructure, your containerized application will always run the same way.

This guide will walk you through setting up a Django project with Docker.

Why should you use Docker?

Docker is a product that offers virtualization at the operating system (OS) level, also known as containerization. This lets developers package and ship software together with its dependencies and distribute it as containers.

In simple terms, you can wrap up all the pieces your software needs in a single unit called a Docker image, then ship or share this image with anyone. As long as the recipient has Docker, they will be able to run or test your project. Gone are the days of, "But it worked on my machine!"

Docker also offers a service called Docker Hub for sharing and managing Docker images amongst developers and communities: essentially, a "GitHub" for Docker images. It shares some similarities with the code repository platform, such as uploading and downloading images via CLI commands (docker push and docker pull) contained within the Docker CLI.

Prerequisites for using Docker

  • Proficiency in Django development
  • Intermediate level with CLI and bash

For this tutorial, Docker is configured in a Dockerfile (and, later, a Docker Compose YAML file), and the files are executed via the Docker and Docker Compose CLIs.

This guide will explore setting up Docker on an Ubuntu machine.

For other common OS platforms:

  1. Windows: follow the official Docker Desktop for Windows installation guide.
  2. macOS: follow the official Docker Desktop for Mac installation guide.

To download and set up Docker, run the commands below (note that this assumes Docker's official apt repository has already been added to your package sources):

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

You can verify the installation with docker --version.

Django App

Because this guide assumes you are already proficient in Django, let's demonstrate the steps of running a basic Django Rest Framework app in Docker and displaying the default page.

Consider it the "Hello, world" of Django and Docker. Once this works, you can dockerize any previous or future Django project you may have, especially one that has its libraries listed in requirements.txt (pip freeze > requirements.txt generates this file from your current environment).

To start, run the command below to set up your Django project.

django-admin startproject dj_docker_drf

Navigate into your project folder, start an app named sample by running python manage.py startapp sample, and add rest_framework and sample to the INSTALLED_APPS list in settings.py, as sketched below.
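For reference, the INSTALLED_APPS list in dj_docker_drf/settings.py should end up looking something like this (a minimal sketch; the other entries come from the default startproject template):

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',  # Django REST Framework
    'sample',  # the app created above
]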

In the views.py file of the sample app, write a view that returns the message "HELLO WORLD FROM DJANGO AND DOCKER":

from rest_framework.views import APIView
from django.http import JsonResponse


class HomeView(APIView):
    def get(self, request, format=None):
        return JsonResponse({"message": "HELLO WORLD FROM DJANGO AND DOCKER"})

Connect the main URL file and the app URL file so that HomeView is the default view when a user opens the app in the browser, as sketched below.
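Here is a minimal sketch of that wiring; the file paths assume the default layout generated by Django for a project named dj_docker_drf with an app named sample:

# sample/urls.py
from django.urls import path

from .views import HomeView

urlpatterns = [
    path('', HomeView.as_view(), name='home'),
]

# dj_docker_drf/urls.py (admin and other routes omitted for brevity)
from django.urls import include, path

urlpatterns = [
    path('', include('sample.urls')),
]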

A critical step that many forget is to set ALLOWED_HOSTS to '*' so the Django application can be accessed from any host or IP. The code snippet is shared below:

ALLOWED_HOSTS = ['*']        
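The wildcard is convenient for this demo, but in production you would normally restrict the hosts. One option is to read them from the environment; the sketch below assumes an ALLOWED_HOSTS variable is injected as a space-separated string, matching the .env file shown later in this guide:

import os

# fall back to localhost when the variable is not set
ALLOWED_HOSTS = os.environ.get('ALLOWED_HOSTS', 'localhost').split(' ')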

Finally, create a requirements.txt file in your root project folder where it normally lives and add the DRF library. Note that the PyPI package for DRF is djangorestframework (the similarly named django-rest-framework package is only a thin wrapper around it), and it is safer to list Django explicitly as well, since the Dockerfile installs everything from this file. Pin whichever versions you actually use, for example:

Django>=3.2
djangorestframework>=3.12

The app is now ready to be dockerized.

Creating the Docker files and Docker CLI

Notice that the file must be named exactly Dockerfile, with no extension; this is the default name the Docker CLI looks for when building an image.

In your project root, create a file named Dockerfile and open it. The Docker directives are explained in the comments:

# base image
FROM python:3.8

# set up an environment variable for the app's home directory
ENV DockerHOME=/home/app/webapp

# create the work directory
RUN mkdir -p $DockerHOME

# where your code lives
WORKDIR $DockerHOME

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install dependencies
RUN pip install --upgrade pip

# copy the whole project to your docker home directory
COPY . $DockerHOME

# install all dependencies
RUN pip install -r requirements.txt

# port where the Django app runs
EXPOSE 8000

# start the server; bind to 0.0.0.0 so the app is reachable from outside the container
CMD python manage.py runserver 0.0.0.0:8000

Running the app in Docker

To run the app, you need to perform only two steps.

  1. Build the image: This is done using the build command, which uses the Dockerfile you just created. To build the image, run docker build . -t docker-django-v0.0 from the directory where the Dockerfile lives. The -t flag tags the image so that it can be referenced when you run the container.
  2. Run the image: This is done using the docker run command, which converts the built image into a running container.

Now, the app is ready for use!

To run the app, execute the command docker run -p 8000:8000 docker-django-v0.0. The -p flag publishes the container's port 8000 on the host, so you can view your app in the browser at 0.0.0.0:8000.

Running multiple containers with Docker Compose

With the proficiency gained in Docker, the logical next step is to learn how to run multiple containers, and in what order.

This is the perfect use case for Docker Compose, a tool for defining and running multi-container applications of any kind. Simply put, if your application has several containers, you use the Docker Compose CLI to run them all in the required order.

Take, for example, a web application with the following components:

  1. A web server container, such as NGINX
  2. An application container that hosts the Django app
  3. A database container that hosts the production database, such as Postgres
  4. A message container that hosts the message broker, such as RabbitMQ

To run such a system, you declare directives in a docker-compose YAML file, stating how the images will be built, the ports on which each will be accessible, and, most importantly, the order in which the containers will start (i.e., which container depends on which for the project to run).

In this particular example, can you take a calculated guess? Which should be the first container to be spun up and which container depends on the other?

To answer this question, we will explore Docker Compose. First, install the CLI tool by following the official Docker Compose installation guide for your host operating system.

With Docker Compose (and similarly to Docker), a particular file with a special name is required. This is what the CLI tool uses to spin up the images and run them.

To create a Docker Compose file, create a YAML file and name it docker-compose.yml. This ideally should exist at the root directory of your project. To better understand this process, let's explore Docker Compose using the scenario demonstrated above: a Django app with a Postgres database, RabbitMQ message broker, and an NGINX load balancer.

Using Docker Compose with a Django app

version: '3.7'

services: # the different images that will be running as containers
  nginx: # service name
    build: ./nginx # location of the Dockerfile that defines the nginx image, used to spin it up during the build stage
    ports:
      - 1339:80 # map external port 1339 to internal port 80, so the app is reached at an address such as 0.0.0.0:1339
    volumes: # static storage provisioned since Django does not serve static files in production
      - static_volume:/home/app/microservice/static # provide a space for static files
    depends_on:
      - web # will only start if web is up and running
    restart: "on-failure" # restart the service when it fails
  web: # service name
    build: . # build the image for the web service from the Dockerfile in the parent directory
    # the command directive passes parameters to the service; here, Django commands executed inside the container
    command: sh -c "python manage.py makemigrations &&
                    python manage.py migrate &&
                    gunicorn microservice_sample_app.wsgi:application --bind 0.0.0.0:${APP_PORT}"
    volumes:
      - .:/microservice # map files from the parent directory on the host to the microservice directory in the container
      - static_volume:/home/app/microservice/static
    env_file: # file where env variables are stored, used as best practice so as not to expose secret keys
      - .env # name of the env file
    image: microservice_app # name of the image
    expose: # expose the port to the other services defined here; for Django this is 8000 by default
      - ${APP_PORT} # retrieved from the .env file
    restart: "on-failure"
    depends_on: # cannot start if the db service is not up and running
      - db
  db: # service name
    image: postgres:11-alpine # postgres image, pulled from Docker Hub during the build and spun up as a container
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - postgres_data:/var/lib/postgresql/data/
    environment: # access credentials from the .env file
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
      - PGPORT=${DB_PORT}
      - POSTGRES_USER=${POSTGRES_USER}
    restart: "on-failure"
  rabbitmq:
    image: rabbitmq:3-management-alpine # image pulled from Docker Hub during the build
    container_name: rabbitmq # container name
    volumes: # assign storage for rabbitmq to run
      - ./.docker/rabbitmq/etc/:/etc/rabbitmq/
      - ./.docker/rabbitmq/data/:/var/lib/rabbitmq/
      - ./.docker/rabbitmq/logs/:/var/log/rabbitmq/
    environment: # environment variables from the referenced .env file
      RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE}
      # auth credentials
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
    ports: # map external ports to this container's internal ports
      - 5672:5672
      - 15672:15672
    depends_on: # can only start if the web service is running
      - web

volumes:
  postgres_data:
  static_volume:
  rabbitmq:
  rabbitmq_logs:

One of the highlights of Docker Compose is the depends_on directive. From the above script, we can deduce that:

  • NGINX depends on web
  • Web depends on db
  • RabbitMQ depends on web

With this setup, db will be the first to start, followed by web, then RabbitMQ, and lastly NGINX. When you destroy the environment and stop the running containers, the order is reversed: NGINX is the first to stop and db the last.

Building and running Docker Compose scripts

The Docker Compose workflow mirrors plain Docker in that it has build and run commands. The build command builds all the images defined under services in the order of the dependency hierarchy, and the run command spins up the containers in that same order.

Luckily, there is a command that combines both build and run. It is called up. To run this command, execute the command below:

docker-compose up

You can also add the --build flag. This is useful when you have run this command before and wish to rebuild the images from scratch.

docker-compose up --build

Once you're done with the containers, you may wish to shut them all down and remove any static storage they were using, such as the postgres data volume. To do this, run the command below:

docker-compose down -v

The -v flag stands for volumes. It ensures that the containers are stopped and that the named volumes declared in the Compose file are removed along with them.

Follow the official documentation to learn more about various Docker Compose commands and their usage.

Supporting files in Docker

Some files referenced in the script above keep the Compose file itself lean and make the code easier to manage. These include the .env file and the NGINX Dockerfile and config file. Below are samples of what each contains:

ENV file
The main purpose of this file is to store variables such as keys and credentials. This is a safe coding practice that ensures your secret keys are not exposed in the codebase. Remember to keep this file out of version control (for example, via .gitignore).

# Django
SECRET_KEY="my_secret_key"
DEBUG=1
ALLOWED_HOSTS=localhost 127.0.0.1 0.0.0.0 [::1] *

# database access credentials
ENGINE=django.db.backends.postgresql
DB_NAME=testdb
POSTGRES_USER=testuser
POSTGRES_PASSWORD=testpassword
DB_HOST=db
DB_PORT=5432
APP_PORT=8000

# superuser details
DJANGO_SU_NAME=test
DJANGO_SU_EMAIL=[email protected]
DJANGO_SU_PASSWORD=mypass123

# rabbitmq (note that .env files use key=value syntax, not colons)
RABBITMQ_ERLANG_COOKIE=test_cookie
RABBITMQ_DEFAULT_USER=default_user
RABBITMQ_DEFAULT_PASS=sample_password
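For context, here is a minimal sketch of how settings.py might consume these variables once Compose's env_file directive injects them into the web container; the variable names match the file above, while the fallback defaults are illustrative assumptions:

import os

SECRET_KEY = os.environ.get('SECRET_KEY')
DEBUG = os.environ.get('DEBUG', '0') == '1'

DATABASES = {
    'default': {
        'ENGINE': os.environ.get('ENGINE', 'django.db.backends.postgresql'),
        'NAME': os.environ.get('DB_NAME'),
        'USER': os.environ.get('POSTGRES_USER'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
        'HOST': os.environ.get('DB_HOST', 'db'),  # the Compose service name doubles as the hostname
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}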

NGINX Docker file

This is hosted in an nginx folder in the root directory. It mainly contains two directives: the image name to be pulled from Docker Hub and the location of the configuration file. Like any other Docker file, it's named Dockerfile.

FROM nginx:1.19.0-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d

NGINX config file

This is where one writes the NGINX configuration logic. It is placed in the same location as the NGINX Docker file in the NGINX folder.

This file is what dictates how the NGINX container will behave. Below is a sample script that lives in a file commonly named nginx.conf.

upstream microservice { # name of our upstream, pointing at the web service
    server web:8000; # default django port
}

server {

    listen 80; # default external port; anything coming from port 80 goes through NGINX

    location / {
        proxy_pass http://microservice; # must match the upstream name defined above
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /home/app/microservice/static/; # where our static files are hosted
    }

}

Conclusion

The Docker tips and tricks in this guide are vital for DevOps and full stack developer positions in any organization, and Docker is also a convenient tool for backend developers.

Because Docker packages dependencies and libraries, new developers do not necessarily need to install several dependencies and lose precious time trying to get libraries and dependencies to work. Thanks for reading.


Source: https://blog.logrocket.com/dockerizing-a-django-app/