Deploying with Docker
The Keystone API can be deployed as a single container using Docker, or as several containers using Docker Compose. Single-container deployments are best suited for those looking to test-drive Keystone's capabilities. Multi-container deployments are strongly recommended for teams operating at scale.
Using Docker Standalone
Danger
The default container instance is not suitable for production out of the box. See the Settings page for a complete overview of configurable options and recommended settings.
The following command will automatically pull and launch the latest API image. In this example, the image is launched as a container called keystone, and the API is mapped to port 8000 on the local machine.
docker run --detach --publish 8000:80 --name keystone ghcr.io/better-hpc/keystone-api
The health of the running API instance can be verified by checking the container status or by querying the API's health endpoint.
curl -L http://localhost:8000/health/json | jq .
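Container status, in turn, can be inspected with standard Docker tooling. For example, the following filters on the keystone container name used above:

docker ps --filter name=keystone --format "table {{.Names}}\t{{.Status}}"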
The default container command executes the quickstart utility, which automatically spins up system dependencies (Postgres, Redis, etc.) within the container. The command also checks for any existing user accounts and, if no accounts are found, creates an admin account with username admin and password quickstart. This behavior can be overridden by manually specifying the Docker deployment command, as sketched below.
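For example, a container can be launched with an explicit entrypoint and command in place of the quickstart utility. The following is a minimal sketch: it mirrors the compose recipe later on this page and assumes the database and cache connections are supplied through an environment file such as the api.env example below.

docker run --detach --publish 8000:80 --name keystone \
  --env-file api.env --entrypoint sh \
  ghcr.io/better-hpc/keystone-api \
  -c "keystone-api migrate --no-input && gunicorn --bind 0.0.0.0:80 keystone_api.main.wsgi:application"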
New administrator accounts are created by running the createsuperuser command from within the container.
docker exec -i -t keystone keystone-api createsuperuser
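For non-interactive provisioning, Django's standard createsuperuser flags and environment variables can also be used, assuming Keystone's management wrapper passes them through unchanged. In this sketch, the username, email, and password values are placeholders:

# Hypothetical account details; replace with your own
docker exec --env DJANGO_SUPERUSER_PASSWORD="changeme123" keystone \
  keystone-api createsuperuser --no-input --username admin2 --email admin2@example.com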
Keystone uses session tokens to manage user authentication and permissions. You can test the new credentials by authenticating against the API and generating a new authentication token.
credentials='{"username": "user", "password": "userpassword"}'
headers='Content-Type: application/json'
curl -s -X POST \
-c cookies.txt \
-H "$headers" \
-d "$credentials" \
http://localhost:8000/authentication/login/
cat cookies.txt
If successful, the cookies file will contain a sessionid token similar to the following:
# Netscape HTTP Cookie File
# https://curl.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_localhost FALSE / FALSE 1731614955 sessionid to8ut2q5l2t3trikm8zaq1yh9vstodyq
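The saved cookie can then be passed back on subsequent requests with curl's -b flag. As a simple check, the health endpoint from earlier can be re-queried with the session attached (that endpoint does not require authentication, but the same flag applies to any protected resource):

curl -s -b cookies.txt http://localhost:8000/health/json | jq .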
Using Docker Compose
The following compose recipe provides a functional starting point for building a scalable API deployment. Application dependencies are defined as separate services, and settings are configured using environment variables in various .env files.
version: "3.7"

services:
  cache: # (1)!
    image: redis
    container_name: keystone-cache
    command: redis-server
    restart: always
    volumes:
      - cache_data:/data

  db: # (2)!
    image: postgres
    container_name: keystone-db
    restart: always
    env_file:
      - db.env
    volumes:
      - postgres_data:/var/lib/postgresql/data/

  api: # (3)!
    image: ghcr.io/better-hpc/keystone-api
    container_name: keystone-api
    entrypoint: sh
    command: |
      -c '
      keystone-api migrate --no-input
      keystone-api collectstatic --no-input
      gunicorn --bind 0.0.0.0:8000 keystone_api.main.wsgi:application'
    restart: always
    depends_on:
      - cache
      - db
    ports:
      - "8000:8000"
    env_file:
      - api.env
    volumes:
      - static_files:/app/static
      - uploaded_files:/app/upload_files

  celery-worker: # (4)!
    image: ghcr.io/better-hpc/keystone-api
    container_name: keystone-celery-worker
    entrypoint: celery -A keystone_api.apps.scheduler worker --uid 900
    restart: always
    depends_on:
      - cache
      - db
      - api
    env_file:
      - api.env

  celery-beat: # (5)!
    image: ghcr.io/better-hpc/keystone-api
    container_name: keystone-celery-beat
    entrypoint: celery -A keystone_api.apps.scheduler beat --scheduler django_celery_beat.schedulers:DatabaseScheduler --uid 900
    restart: always
    depends_on:
      - cache
      - db
      - api
      - celery-worker
    env_file:
      - api.env

volumes:
  static_files:
  uploaded_files:
  postgres_data:
  cache_data:
- The cache service acts as a job queue for background tasks. Note the mounting of cache data onto the host machine to ensure data persistence between container restarts.
- The db service defines the application database. User credentials are defined as environment variables in the db.env file. Note the mounting of database data onto the host machine to ensure data persistence between container restarts.
- The api service defines the Keystone API application. It migrates the database schema, configures static file hosting, and launches the API behind a production-quality web server.
- The celery-worker service executes background tasks for the API application. It uses the same base image as the api service.
- The celery-beat service handles task scheduling for the celery-worker service. It uses the same base image as the api service.
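Once the recipe is saved (e.g., as docker-compose.yml) alongside the api.env and db.env files shown below, the stack can be launched and checked with standard compose commands:

docker compose up --detach
docker compose ps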
The following examples define the minimal required settings for deploying the recipe. The DJANGO_SETTINGS_MODULE="keystone_api.main.settings" setting is required by the application.
Warning
The settings provided below are intended for demonstrative purposes only. These values are not inherently secure and should be customized to meet the needs at hand.
api.env

# General Settings
DJANGO_SETTINGS_MODULE="keystone_api.main.settings"
STORAGE_STATIC_DIR="/app/static"
STORAGE_UPLOAD_DIR="/app/upload_files"
# Security Settings
SECURE_ALLOWED_HOSTS="*"
# Redis settings
REDIS_HOST="cache" # (1)!
# Database settings
DB_POSTGRES_ENABLE="true"
DB_NAME="keystone"
DB_USER="db_user"
DB_PASSWORD="foobar123"
DB_HOST="db" # (2)!
- This value should match the cache service name defined in the compose file.
- This value should match the db service name defined in the compose file.
db.env

# Credential values must match api.env
POSTGRES_DB="keystone"
POSTGRES_USER="db_user" # (1)!
POSTGRES_PASSWORD="foobar123"
- Database credentials must match those defined in api.env.
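As a quick sanity check that both files agree, one option is to open a psql session inside the running db service using the same credentials (the user and database names below come from the examples above):

docker compose exec db psql --username db_user --dbname keystone --command '\conninfo'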