Migrating a Dockerised Django app from .env to AWS Secrets Manager
Step-by-step walkthrough for a DevOps engineer moving a Django + DRF application running on EC2 with Docker from flat .env files to AWS Secrets Manager — using evnx convert to handle the migration safely and repeatably.
This guide follows a single DevOps engineer — let's call her Priya — through a complete,
production-grade migration of a Django + Django REST Framework application. The app runs in
Docker on two EC2 instances (one for dev, one for prod) and currently loads config from
plain .env files. The goal is to move every secret into AWS Secrets Manager and never
write sensitive values to disk on EC2 again.
Before you start
All AWS CLI commands assume AWS CLI v2 and a profile named priya-devops.
Substitute your own profile or use environment credentials as appropriate.
Scenario overview
The stack
GitHub repo
└── myapp/ Django + DRF project
├── Dockerfile
├── docker-compose.yml
├── .env # dev values (on dev EC2)
├── .env.prod # prod values (on prod EC2)
├── .env.example # tracked in git (no values)
└── manage.py
Two EC2 instances:
| Instance | AMI | Role | Docker Compose profile |
|---|---|---|---|
| i-0dev… | Amazon Linux 2023 | Development | docker-compose.yml |
| i-0prod… | Amazon Linux 2023 | Production | docker-compose.yml + docker-compose.prod.yml |
The problem with flat .env files
- .env.prod lives on the prod EC2 filesystem. One cat .env.prod away from a full credential leak if the instance is accessed by the wrong person or compromised.
- Rotating a credential means SSH-ing to each EC2 instance, editing the file, and restarting containers — error-prone and unaudited.
- There is no access log. No way to know who read SECRET_KEY or DATABASE_PASSWORD.
- Dev and prod use the same rotation process (or lack of one).
The goal
Before After
────────────────────── ──────────────────────────────────────
EC2 (dev) EC2 (dev)
.env ─────────────────► IAM role → Secrets Manager
Docker reads env file arn:aws:secretsmanager:…:dev/myapp/config
EC2 (prod) EC2 (prod)
.env.prod ──────────────► IAM role → Secrets Manager
Docker reads env file arn:aws:secretsmanager:…:prod/myapp/config
No .env files on EC2. The app fetches secrets at startup via the AWS SDK.
evnx convert handles the conversion step; the upload is a one-line pipe into the AWS CLI.
Part 1 — Audit and prepare your .env files
Before touching AWS, get your local files into clean shape.
1.1 — Current .env files
# .env (dev)
DJANGO_SETTINGS_MODULE=myapp.settings.dev
DJANGO_SECRET_KEY=dev-insecure-key-change-me
DEBUG=True
ALLOWED_HOSTS=localhost,127.0.0.1,dev.internal.example.com
DATABASE_URL=postgres://django:devpass@db:5432/myapp_dev
DATABASE_HOST=db
DATABASE_PORT=5432
DATABASE_NAME=myapp_dev
DATABASE_USER=django
DATABASE_PASSWORD=devpass
REDIS_URL=redis://redis:6379/0
REDIS_HOST=redis
REDIS_PORT=6379
AWS_STORAGE_BUCKET_NAME=myapp-dev-media
AWS_S3_REGION_NAME=us-east-1
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
CELERY_BROKER_URL=redis://redis:6379/1
CELERY_RESULT_BACKEND=redis://redis:6379/2
SENDGRID_API_KEY=SG.dev_placeholder_key
DEFAULT_FROM_EMAIL=dev@internal.example.com
SENTRY_DSN=
LOG_LEVEL=DEBUG

# .env.prod
DJANGO_SETTINGS_MODULE=myapp.settings.prod
DJANGO_SECRET_KEY=a-real-50-char-random-secret-key-goes-here-prod
DEBUG=False
ALLOWED_HOSTS=api.example.com,www.example.com
DATABASE_URL=postgres://django:Xk9#mP2$vLqR@prod-db.us-east-1.rds.amazonaws.com:5432/myapp_prod
DATABASE_HOST=prod-db.us-east-1.rds.amazonaws.com
DATABASE_PORT=5432
DATABASE_NAME=myapp_prod
DATABASE_USER=django
DATABASE_PASSWORD=Xk9#mP2$vLqR
REDIS_URL=redis://prod-redis.example.com:6379/0
REDIS_HOST=prod-redis.example.com
REDIS_PORT=6379
AWS_STORAGE_BUCKET_NAME=myapp-prod-media
AWS_S3_REGION_NAME=us-east-1
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7PROD0001
AWS_SECRET_ACCESS_KEY=PRODSECRETKEY/K7MDENG/bPxRfiCYPRODKEY
CELERY_BROKER_URL=redis://prod-redis.example.com:6379/1
CELERY_RESULT_BACKEND=redis://prod-redis.example.com:6379/2
SENDGRID_API_KEY=SG.live_real_sendgrid_key_here
DEFAULT_FROM_EMAIL=noreply@example.com
SENTRY_DSN=https://abc123@o123456.ingest.sentry.io/789
LOG_LEVEL=WARNING

1.2 — Validate both files with evnx
Before migrating, confirm there are no placeholder values or obvious misconfigs hiding in your files:
# Validate dev
evnx validate \
--env .env \
--example .env.example
# Validate prod (strict mode — any warning becomes an error)
evnx validate \
--env .env.prod \
--example .env.example \
--strict

Fix any issues reported before continuing. Common findings on a Django project:
- DJANGO_SECRET_KEY still set to the insecure default
- DEBUG=True detected when the example declares it as a bool
- SENTRY_DSN empty in dev (allowed) but also empty in prod (not allowed)
1.3 — Scan for accidentally committed secrets
evnx scan --path .

If evnx scan flags your .env.prod as containing live credentials, that file should never have been committed to git. Rotate any flagged keys immediately before continuing.
1.4 — Decide on a secret naming convention
AWS Secrets Manager uses path-like names. Agree on a convention before you create anything — renaming secrets later is destructive (delete + recreate).
Priya's team uses:
{env}/myapp/{group}
| Secret name | Contains |
|---|---|
| dev/myapp/config | All dev variables |
| prod/myapp/config | All prod variables |
A single JSON blob per environment keeps retrieval simple and minimises API calls at
startup. For projects where different teams own different secret groups, you could split
further: prod/myapp/database, prod/myapp/aws, etc.
Part 2 — AWS setup
2.1 — IAM policy for Secrets Manager
Create a least-privilege IAM policy that grants read access to the app secrets only.
# Save as secretsmanager-myapp-policy.json
cat > secretsmanager-myapp-policy.json << 'EOF'
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadAppSecrets",
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
],
"Resource": [
"arn:aws:secretsmanager:us-east-1:123456789012:secret:dev/myapp/*",
"arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/myapp/*"
]
}
]
}
EOF
aws iam create-policy \
--policy-name myapp-secretsmanager-read \
--policy-document file://secretsmanager-myapp-policy.json \
--profile priya-devops

Replace 123456789012 with your AWS account ID.
Run aws sts get-caller-identity --query Account --output text to retrieve it.
2.2 — IAM roles for EC2 instances
EC2 instances will use IAM instance roles — no static credentials on the filesystem.
# Trust policy — allows EC2 to assume the role
cat > ec2-trust-policy.json << 'EOF'
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com" },
"Action": "sts:AssumeRole"
}
]
}
EOF
# Dev role
aws iam create-role \
--role-name myapp-ec2-dev \
--assume-role-policy-document file://ec2-trust-policy.json \
--profile priya-devops
aws iam attach-role-policy \
--role-name myapp-ec2-dev \
--policy-arn arn:aws:iam::123456789012:policy/myapp-secretsmanager-read \
--profile priya-devops
# Prod role
aws iam create-role \
--role-name myapp-ec2-prod \
--assume-role-policy-document file://ec2-trust-policy.json \
--profile priya-devops
aws iam attach-role-policy \
--role-name myapp-ec2-prod \
--policy-arn arn:aws:iam::123456789012:policy/myapp-secretsmanager-read \
--profile priya-devops

Create instance profiles and attach the roles:
# Dev
aws iam create-instance-profile \
--instance-profile-name myapp-ec2-dev \
--profile priya-devops
aws iam add-role-to-instance-profile \
--instance-profile-name myapp-ec2-dev \
--role-name myapp-ec2-dev \
--profile priya-devops
# Prod
aws iam create-instance-profile \
--instance-profile-name myapp-ec2-prod \
--profile priya-devops
aws iam add-role-to-instance-profile \
--instance-profile-name myapp-ec2-prod \
--role-name myapp-ec2-prod \
--profile priya-devops

Attach each profile to the running instance:
# Dev EC2
aws ec2 associate-iam-instance-profile \
--instance-id i-0dev1234567890abc \
--iam-instance-profile Name=myapp-ec2-dev \
--profile priya-devops
# Prod EC2
aws ec2 associate-iam-instance-profile \
--instance-id i-0prod1234567890abc \
--iam-instance-profile Name=myapp-ec2-prod \
--profile priya-devops

The dev and prod roles both have access to both secret paths in this example because Priya uses a single shared policy. For stricter separation, create two policies: myapp-secretsmanager-dev-read scoped to dev/myapp/* only, and myapp-secretsmanager-prod-read scoped to prod/myapp/* only.
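The two scoped policies differ only in the resource prefix, so they can be rendered from one template. A sketch (account ID and region are the placeholder values used throughout this guide — substitute your own):

```python
import json

def scoped_policy(env: str, account: str = "123456789012",
                  region: str = "us-east-1") -> str:
    """Render a least-privilege read policy limited to one environment's
    secret path. `account` and `region` are placeholders, not real values."""
    doc = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": f"Read{env.capitalize()}Secrets",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
            ],
            # Scoped to exactly one environment's prefix
            "Resource": [
                f"arn:aws:secretsmanager:{region}:{account}:secret:{env}/myapp/*"
            ],
        }],
    }
    return json.dumps(doc, indent=2)
```

Write the output of scoped_policy("dev") and scoped_policy("prod") to two files and pass each to aws iam create-policy as in 2.1.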
2.3 — Create the secrets with evnx convert
This is where evnx convert does the heavy lifting.
Dev secret
# Preview what will be uploaded — review carefully
evnx convert \
--env .env \
--to aws-secrets \
--exclude "SENTRY_DSN" \
--verbose
# Upload to AWS Secrets Manager
evnx convert \
--env .env \
--to aws-secrets \
--exclude "SENTRY_DSN" | \
aws secretsmanager create-secret \
--name "dev/myapp/config" \
--description "Django myapp dev environment variables" \
--secret-string file:///dev/stdin \
--tags '[{"Key":"env","Value":"dev"},{"Key":"app","Value":"myapp"},{"Key":"managed-by","Value":"evnx"}]' \
--profile priya-devops \
--region us-east-1

evnx convert --to aws-secrets produces a flat JSON object:
{
"DJANGO_SETTINGS_MODULE": "myapp.settings.dev",
"DJANGO_SECRET_KEY": "dev-insecure-key-change-me",
"DEBUG": "True",
"ALLOWED_HOSTS": "localhost,127.0.0.1,dev.internal.example.com",
"DATABASE_URL": "postgres://django:devpass@db:5432/myapp_dev",
"DATABASE_HOST": "db",
"DATABASE_PORT": "5432",
"DATABASE_NAME": "myapp_dev",
"DATABASE_USER": "django",
"DATABASE_PASSWORD": "devpass",
"REDIS_URL": "redis://redis:6379/0",
"REDIS_HOST": "redis",
"REDIS_PORT": "6379",
"AWS_STORAGE_BUCKET_NAME": "myapp-dev-media",
"AWS_S3_REGION_NAME": "us-east-1",
"AWS_ACCESS_KEY_ID": "AKIAIOSFODNN7EXAMPLE",
"AWS_SECRET_ACCESS_KEY": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"CELERY_BROKER_URL": "redis://redis:6379/1",
"CELERY_RESULT_BACKEND": "redis://redis:6379/2",
"SENDGRID_API_KEY": "SG.dev_placeholder_key",
"DEFAULT_FROM_EMAIL": "dev@internal.example.com",
"LOG_LEVEL": "DEBUG"
}

SENTRY_DSN is excluded because it is empty in dev — storing an empty string in Secrets Manager is valid but wastes space and creates a confusing dummy entry.
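Excluding keys by name works here because only one variable is empty. If you would rather drop all empty values before upload, the filter is a one-liner (a sketch — evnx's --exclude flag is the supported route):

```python
def drop_empty(env_vars: dict) -> dict:
    """Remove keys whose values are empty or whitespace-only, so dummy
    entries like SENTRY_DSN= never reach Secrets Manager."""
    return {k: v for k, v in env_vars.items() if v and v.strip()}
```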
Prod secret
# Preview
evnx convert \
--env .env.prod \
--to aws-secrets \
--verbose
# Upload
evnx convert \
--env .env.prod \
--to aws-secrets | \
aws secretsmanager create-secret \
--name "prod/myapp/config" \
--description "Django myapp prod environment variables" \
--secret-string file:///dev/stdin \
--tags '[{"Key":"env","Value":"prod"},{"Key":"app","Value":"myapp"},{"Key":"managed-by","Value":"evnx"}]' \
--profile priya-devops \
--region us-east-1

Verify both secrets exist
aws secretsmanager list-secrets \
--filters Key=name,Values=dev/myapp,prod/myapp \
--query "SecretList[*].{Name:Name,ARN:ARN,LastChanged:LastChangedDate}" \
--output table \
--profile priya-devops \
--region us-east-1

---------------------------------------------------------------------------
| ListSecrets |
+---------------------+-------------------------------------------+-------+
| LastChanged | ARN | Name |
+---------------------+-------------------------------------------+-------+
| 2026-03-11T09:12:00 | arn:aws:secretsmanager:us-east-1:…:dev/… | dev/myapp/config |
| 2026-03-11T09:14:22 | arn:aws:secretsmanager:us-east-1:…:prod/… | prod/myapp/config |
+---------------------+-------------------------------------------+-------+
Spot-check a value to confirm the upload succeeded:
aws secretsmanager get-secret-value \
--secret-id "dev/myapp/config" \
--query SecretString \
--output text \
--profile priya-devops \
--region us-east-1 | python3 -m json.tool | head -10

Part 3 — Update the Django application
The app needs to fetch secrets at startup instead of reading them from environment
variables. The cleanest pattern for Django is to do this inside settings/__init__.py
or a settings/base.py before any other settings are evaluated.
3.1 — Install boto3
pip install boto3

Add to requirements.txt:
boto3>=1.34.0
3.2 — Create a secrets loader utility
# myapp/settings/aws_secrets.py
"""
Fetch a Secrets Manager secret and inject all key-value pairs into
os.environ so Django settings can read them with os.environ.get().
Call load_secret() as early as possible — before any other settings
module imports os.environ values.
"""
import json
import logging
import os
import boto3
from botocore.exceptions import ClientError, NoCredentialsError
logger = logging.getLogger(__name__)
def load_secret(secret_name: str, region: str = "us-east-1") -> None:
"""
Fetch a JSON secret from AWS Secrets Manager and populate os.environ.
Existing environment variables are NOT overwritten. This means values
set by Docker Compose (e.g., DJANGO_SETTINGS_MODULE) take precedence
over what is stored in Secrets Manager.
Raises SystemExit on credential or permissions failure so the container
fails fast rather than starting with missing config.
"""
session = boto3.session.Session()
client = session.client(
service_name="secretsmanager",
region_name=region,
)
try:
response = client.get_secret_value(SecretId=secret_name)
except NoCredentialsError:
logger.critical(
"No AWS credentials found. "
"Ensure the EC2 instance has an IAM role attached."
)
raise SystemExit(1)
except ClientError as exc:
error_code = exc.response["Error"]["Code"]
if error_code == "ResourceNotFoundException":
logger.critical("Secret '%s' not found in Secrets Manager.", secret_name)
elif error_code == "AccessDeniedException":
logger.critical(
"IAM role does not have permission to read '%s'.", secret_name
)
else:
logger.critical("Unexpected Secrets Manager error: %s", exc)
raise SystemExit(1)
secret_string = response.get("SecretString", "{}")
try:
secrets = json.loads(secret_string)
except json.JSONDecodeError as exc:
logger.critical("Secret '%s' is not valid JSON: %s", secret_name, exc)
raise SystemExit(1)
injected = 0
for key, value in secrets.items():
if key not in os.environ: # respect existing env vars
os.environ[key] = str(value)
injected += 1
logger.info(
"Loaded %d variables from Secrets Manager secret '%s'.",
injected,
secret_name,
)

3.3 — Update Django settings
Priya's project uses a split settings layout:
myapp/settings/
├── __init__.py (empty, or re-exports base)
├── base.py (shared settings — imports from os.environ)
├── dev.py (extends base, DEBUG=True overrides)
└── prod.py (extends base, DEBUG=False, strict security)
Add the secret loader call at the top of base.py, before any os.environ reads:
# myapp/settings/base.py
import os
# ── AWS Secrets Manager bootstrap ────────────────────────────────────────────
# Load secrets into os.environ before any setting reads the environment.
# SECRET_NAME is set by Docker Compose via the non-sensitive `environment:`
# block — it identifies which secret to fetch, not the secret itself.
_secret_name = os.environ.get("AWS_SECRET_NAME")
if _secret_name:
from myapp.settings.aws_secrets import load_secret
load_secret(
secret_name=_secret_name,
region=os.environ.get("AWS_DEFAULT_REGION", "us-east-1"),
)
# ─────────────────────────────────────────────────────────────────────────────
# Now all settings can use os.environ normally
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
DEBUG = os.environ.get("DEBUG", "False") == "True"
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "").split(",")
DATABASES = {
"default": {
"ENGINE": "django.db.backends.postgresql",
"NAME": os.environ["DATABASE_NAME"],
"USER": os.environ["DATABASE_USER"],
"PASSWORD": os.environ["DATABASE_PASSWORD"],
"HOST": os.environ["DATABASE_HOST"],
"PORT": os.environ.get("DATABASE_PORT", "5432"),
"CONN_MAX_AGE": 60,
"OPTIONS": {"sslmode": "require"} if not DEBUG else {},
}
}
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": os.environ["REDIS_URL"],
}
}
CELERY_BROKER_URL = os.environ["CELERY_BROKER_URL"]
CELERY_RESULT_BACKEND = os.environ["CELERY_RESULT_BACKEND"]
# Email
EMAIL_BACKEND = "sendgrid_backend.SendgridBackend"
SENDGRID_API_KEY = os.environ.get("SENDGRID_API_KEY", "")
DEFAULT_FROM_EMAIL = os.environ.get("DEFAULT_FROM_EMAIL", "noreply@example.com")
# Sentry
import sentry_sdk
_sentry_dsn = os.environ.get("SENTRY_DSN", "")
if _sentry_dsn:
sentry_sdk.init(dsn=_sentry_dsn)

AWS_SECRET_NAME and AWS_DEFAULT_REGION are non-sensitive identifiers — it is safe to set them directly in Docker Compose. They tell the app where to fetch secrets, not what the secrets are.
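The precedence rule at the heart of load_secret() — variables already set by Docker Compose win over Secrets Manager values — is, in isolation, just a guarded merge. A minimal standalone sketch of that rule:

```python
import os

def inject(secrets, environ=None):
    """Merge secret key-value pairs into an environment mapping, skipping
    keys that are already set, so Compose-provided values always win.
    Returns the number of variables actually injected."""
    if environ is None:
        environ = os.environ
    injected = 0
    for key, value in secrets.items():
        if key not in environ:  # existing env vars are never overwritten
            environ[key] = str(value)
            injected += 1
    return injected
```

For example, if the environment already contains DJANGO_SETTINGS_MODULE from Compose, injecting a secret blob that also defines it leaves the Compose value intact and only adds the missing keys.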
Part 4 — Update Docker Compose
4.1 — docker-compose.yml (shared base)
Remove all secret values from the environment: block. Replace with only the two
non-sensitive pointers that tell the app where to find its secrets.
# docker-compose.yml
version: "3.9"
services:
web:
build: .
command: gunicorn myapp.wsgi:application --bind 0.0.0.0:8000 --workers 4
volumes:
- static_volume:/app/static
ports:
- "8000:8000"
environment:
# Non-sensitive: tells the app which secret to fetch, not the values
AWS_SECRET_NAME: "" # overridden per environment below
AWS_DEFAULT_REGION: "us-east-1"
DJANGO_SETTINGS_MODULE: "" # overridden per environment below
depends_on:
- db
- redis
worker:
build: .
command: celery -A myapp worker --loglevel=info
environment:
AWS_SECRET_NAME: ""
AWS_DEFAULT_REGION: "us-east-1"
DJANGO_SETTINGS_MODULE: ""
depends_on:
- redis
db:
image: postgres:16-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
POSTGRES_DB: myapp_dev # only used locally/dev
POSTGRES_USER: django
POSTGRES_PASSWORD: devpass
redis:
image: redis:7-alpine
volumes:
postgres_data:
static_volume:

4.2 — docker-compose.dev.yml (dev overrides)
# docker-compose.dev.yml
version: "3.9"
services:
web:
environment:
AWS_SECRET_NAME: "dev/myapp/config"
DJANGO_SETTINGS_MODULE: "myapp.settings.dev"
ports:
- "8000:8000"
worker:
environment:
AWS_SECRET_NAME: "dev/myapp/config"
DJANGO_SETTINGS_MODULE: "myapp.settings.dev"

4.3 — docker-compose.prod.yml (prod overrides)
# docker-compose.prod.yml
version: "3.9"
services:
web:
restart: always
environment:
AWS_SECRET_NAME: "prod/myapp/config"
DJANGO_SETTINGS_MODULE: "myapp.settings.prod"
ports:
- "80:8000"
worker:
restart: always
environment:
AWS_SECRET_NAME: "prod/myapp/config"
DJANGO_SETTINGS_MODULE: "myapp.settings.prod"

4.4 — Starting the app
# Dev EC2
docker compose \
-f docker-compose.yml \
-f docker-compose.dev.yml \
up -d
# Prod EC2
docker compose \
-f docker-compose.yml \
-f docker-compose.prod.yml \
up -d

The startup sequence for the web container is now:
Container starts
│
▼
DJANGO_SETTINGS_MODULE=myapp.settings.prod (from Compose)
AWS_SECRET_NAME=prod/myapp/config (from Compose)
│
▼
Django imports settings/base.py
│
▼
load_secret("prod/myapp/config") called
│
▼
boto3 calls EC2 instance metadata → gets temporary credentials from IAM role
│
▼
secretsmanager:GetSecretValue("prod/myapp/config")
│
▼
All key-value pairs injected into os.environ
│
▼
Rest of settings.py reads os.environ["DATABASE_PASSWORD"] etc.
│
▼
Gunicorn workers start serving requests
Part 5 — Updating secrets (day-2 operations)
5.1 — Rotating a single value
To rotate DATABASE_PASSWORD without recreating the entire secret:
# Fetch current secret
CURRENT=$(aws secretsmanager get-secret-value \
--secret-id prod/myapp/config \
--query SecretString \
--output text \
--profile priya-devops)
# Update the password field and put the new version
echo "$CURRENT" | \
python3 -c "import sys, json; d=json.load(sys.stdin); d['DATABASE_PASSWORD']='NewStr0ngP@ssword'; print(json.dumps(d))" | \
aws secretsmanager put-secret-value \
--secret-id prod/myapp/config \
--secret-string file:///dev/stdin \
--profile priya-devops

Then restart the affected containers so they fetch the new value at startup:
# On prod EC2
docker compose -f docker-compose.yml -f docker-compose.prod.yml restart web worker

5.2 — Adding a new variable
When the team adds a new variable (e.g., STRIPE_SECRET_KEY), the workflow is:
1. Add to .env.example (in git):
STRIPE_SECRET_KEY=your_stripe_secret_key_here

2. Add to local .env / .env.prod:

STRIPE_SECRET_KEY=sk_test_abc123

3. Use evnx to push only the new variable to Secrets Manager:
# Dev
evnx convert \
--env .env \
--to aws-secrets \
--include "STRIPE_SECRET_KEY" | \
python3 -c "
import sys, json, boto3
new_vars = json.load(sys.stdin)
client = boto3.client('secretsmanager', region_name='us-east-1')
existing = json.loads(
client.get_secret_value(SecretId='dev/myapp/config')['SecretString']
)
existing.update(new_vars)
client.put_secret_value(
SecretId='dev/myapp/config',
SecretString=json.dumps(existing)
)
print('Updated dev/myapp/config with:', list(new_vars.keys()))
"

For frequent updates, Priya's team wraps this pattern in a small shell script
scripts/push-secret.sh that accepts an env file and a secret name as arguments.
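Whatever wraps it, the read-modify-write step reduces to a pure merge over the secret's JSON payload. A sketch of that core (the function name is hypothetical):

```python
import json

def merged_secret(existing_json: str, new_vars: dict) -> str:
    """Merge new or updated variables into an existing secret's JSON
    payload, returning the string to pass to put-secret-value. Unlike
    startup injection, updates here DO overwrite old values — that is
    the point of a rotation or addition."""
    existing = json.loads(existing_json)
    existing.update(new_vars)
    return json.dumps(existing)
```

Keeping the merge pure makes it easy to unit-test the script without touching AWS.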
5.3 — Full re-sync after a bulk .env change
If the local .env.prod has diverged significantly from what is in Secrets Manager
(e.g., after a major deployment with many new variables), re-upload the whole file:
evnx convert \
--env .env.prod \
--to aws-secrets | \
aws secretsmanager put-secret-value \
--secret-id "prod/myapp/config" \
--secret-string file:///dev/stdin \
--profile priya-devops \
--region us-east-1
echo "Secret updated. Restarting containers..."
ssh ec2-user@prod.example.com \
"docker compose -f docker-compose.yml -f docker-compose.prod.yml restart web worker"

Part 6 — Remove .env files from EC2
Once both secrets are live and the containers start cleanly from Secrets Manager, remove the flat files from the EC2 instances.
# Dev EC2
ssh ec2-user@dev.internal.example.com "
shred -u ~/.env 2>/dev/null || true
shred -u ~/myapp/.env 2>/dev/null || true
echo 'Dev .env files removed'
"
# Prod EC2
ssh ec2-user@prod.example.com "
shred -u ~/myapp/.env.prod 2>/dev/null || true
echo 'Prod .env.prod removed'
"

shred overwrites the file contents before unlinking, making forensic recovery from the filesystem harder. Use rm -f if shred is unavailable, but note that neither tool guarantees a true overwrite on SSDs or journaling filesystems.
Keep your local copies of .env and .env.prod in a password manager or encrypted
vault (see evnx backup) — you may need them to recreate secrets after an accidental
delete.
Part 7 — CI/CD integration
If Priya's team deploys via GitHub Actions, the workflow can push updated secrets
automatically on merge to main.
# .github/workflows/deploy-prod.yml
name: Deploy to prod
on:
push:
branches: [main]
jobs:
push-secrets:
name: Sync secrets to AWS Secrets Manager
runs-on: ubuntu-latest
permissions:
id-token: write # required for OIDC
contents: read
steps:
- uses: actions/checkout@v4
- name: Configure AWS credentials (OIDC — no static keys)
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
aws-region: us-east-1
- name: Install evnx
run: cargo install evnx
- name: Reconstruct .env.prod from GitHub secrets
run: |
cat > .env.prod << EOF
DJANGO_SECRET_KEY=${{ secrets.PROD_DJANGO_SECRET_KEY }}
DATABASE_PASSWORD=${{ secrets.PROD_DATABASE_PASSWORD }}
AWS_SECRET_ACCESS_KEY=${{ secrets.PROD_AWS_SECRET_ACCESS_KEY }}
SENDGRID_API_KEY=${{ secrets.PROD_SENDGRID_API_KEY }}
SENTRY_DSN=${{ secrets.PROD_SENTRY_DSN }}
EOF
# Append non-sensitive vars from a committed .env.prod.defaults file
cat .env.prod.defaults >> .env.prod
- name: Validate before upload
run: evnx validate --env .env.prod --example .env.example --strict
- name: Push to Secrets Manager
run: |
evnx convert --env .env.prod --to aws-secrets | \
aws secretsmanager put-secret-value \
--secret-id prod/myapp/config \
--secret-string file:///dev/stdin
- name: Restart containers on EC2
run: |
aws ssm send-command \
--instance-ids i-0prod1234567890abc \
--document-name AWS-RunShellScript \
--parameters 'commands=["cd /home/ec2-user/myapp && docker compose -f docker-compose.yml -f docker-compose.prod.yml restart web worker"]'

.env.prod.defaults is a committed file containing only non-sensitive configuration: ALLOWED_HOSTS, AWS_S3_REGION_NAME, LOG_LEVEL, etc. Sensitive values come exclusively from GitHub Secrets via the reconstruction step above.
Migration checklist
Use this checklist to track progress across environments.
Pre-migration
☐ evnx validate passes on both .env and .env.prod
☐ evnx scan finds no unintended secrets committed to git
☐ Secret naming convention agreed and documented
AWS setup
☐ IAM policy myapp-secretsmanager-read created
☐ IAM role myapp-ec2-dev created and attached to dev EC2
☐ IAM role myapp-ec2-prod created and attached to prod EC2
☐ Secret dev/myapp/config created (via evnx convert | aws secretsmanager create-secret)
☐ Secret prod/myapp/config created (via evnx convert | aws secretsmanager create-secret)
☐ Both secrets spot-checked with aws secretsmanager get-secret-value
Application
☐ boto3 added to requirements.txt
☐ aws_secrets.py loader utility created
☐ settings/base.py calls load_secret() before any os.environ reads
☐ No secret values remain hardcoded anywhere in settings files
Docker Compose
☐ All secret values removed from environment: blocks
☐ AWS_SECRET_NAME and AWS_DEFAULT_REGION set per-environment
☐ docker-compose.dev.yml and docker-compose.prod.yml updated
Validation
☐ Dev containers start cleanly and fetch secrets from Secrets Manager
☐ Prod containers start cleanly and fetch secrets from Secrets Manager
☐ Django /healthz endpoint returns 200 after startup
☐ Database connection works (run manage.py check --deploy on prod)
☐ Celery worker connects to Redis and processes a test task
Cleanup
☐ .env files removed from dev EC2 (shred -u)
☐ .env.prod removed from prod EC2 (shred -u)
☐ Local .env.prod backed up to password manager or evnx backup
☐ .env and .env.prod added to .gitignore (verify with git status)
☐ Team notified of new rotation procedure
Security notes
Why not pass AWS credentials to Docker via environment variables?
EC2 IAM instance roles provide temporary credentials automatically through the instance
metadata service. boto3 finds them without any configuration. Hardcoding
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in Compose or Secrets Manager for the
purpose of fetching other secrets is circular and exposes static long-lived credentials —
exactly the problem this migration is solving.
Why store all variables in one secret rather than one secret per variable?
AWS Secrets Manager charges per secret per month plus per API call. A single JSON blob
per environment costs one secret and one API call at startup, versus 20+ secrets and
20+ API calls. The trade-off is coarser IAM control (all-or-nothing access to the
bundle). If different teams need different access levels, split by concern:
prod/myapp/database, prod/myapp/aws, prod/myapp/thirdparty.
What about secret caching?
The load_secret() implementation above calls Secrets Manager on every container
startup. For high-churn deployments (many restarts per hour), consider the
AWS Secrets Manager caching client for Python,
which caches secrets in memory and refreshes them on a configurable TTL.
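If pulling in the caching client feels like overkill, the same idea fits in a few lines of plain Python. A minimal sketch — the fetch callable is injected rather than hardwired to boto3, so the pattern can be tested without AWS:

```python
import time

class SecretCache:
    """Cache a fetched secret for `ttl` seconds, refetching after expiry.
    `fetch` is any zero-argument callable returning the secret dict
    (e.g. a wrapper around get_secret_value) — injected for testability."""

    def __init__(self, fetch, ttl: float = 300.0):
        self._fetch = fetch
        self._ttl = ttl
        self._value = None
        self._expires = 0.0

    def get(self):
        now = time.monotonic()
        if self._value is None or now >= self._expires:
            # Cache miss or TTL expired: refetch and reset the clock
            self._value = self._fetch()
            self._expires = now + self._ttl
        return self._value
```

Note this caches only within one process; a fresh container still fetches at startup, which is exactly the behaviour load_secret() relies on.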
See also
- evnx convert reference — Full flag reference
- Convert basics — Introductory examples
- Convert reference — Format-by-format output samples
- evnx migrate — Direct migration command (requires the migrate feature)
- evnx scan — Detect secrets before they leave your machine
- evnx backup — AES-256-GCM encrypted backup of .env files