Managing configuration across multiple microservices quickly becomes painful:

- **Scattered property files** - Each service has its own `application.properties`, making it hard to see or change configuration across services
- **Redeployment required** - Changing a single property means rebuilding and redeploying the entire service
- **Environment drift** - Copy-paste errors lead to inconsistent configuration between dev, staging, and prod
This project is a POC showing how these problems can be solved with S3 environment files and automatic service restarts:

- **Single source of truth** - All configuration in one place, organized by environment and service
- **Automatic updates** - Change an env file and ECS services restart automatically with the new configuration
- **Terraform-managed** - Infrastructure-as-code with a clear separation of common vs environment-specific settings, and a predefined hierarchy in which environment configuration overrides common defaults
- **Zero-downtime deployments** - ECS gracefully restarts tasks with rolling updates
- **Native ECS integration** - Uses the built-in `environmentFiles` support for S3
- 2 Spring Boot microservices running on ECS Fargate
- S3 bucket stores `.env` files per service (Terraform-managed)
- All configuration loaded via ECS `environmentFiles` at container startup
- JVM options (`JAVA_OPTS`) included in the env file
- EventBridge watches for S3 object changes (`.env` files)
- Lambda triggers an ECS service restart via the AWS API (`forceNewDeployment`)
- ECS orchestrates a graceful rolling restart of tasks
- ALB routes traffic based on path prefix with zero downtime during restarts
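The EventBridge → Lambda → ECS step can be sketched roughly as below. This is a minimal Python/boto3 sketch, not the project's actual Lambda code: the key-to-service map, function names, and event shape are assumptions.

```python
# Hypothetical sketch of the config-refresh Lambda (assumptions, not repo code).
# Assumes an EventBridge "Object Created" S3 event and a fixed key -> service map.

SERVICE_FOR_KEY = {
    "service-1.env": "ecs-config-demo-service-1",
    "service-2.env": "ecs-config-demo-service-2",
}

def service_for_object(key: str):
    """Map an S3 object key to the ECS service that consumes it."""
    return SERVICE_FOR_KEY.get(key)

def handler(event, context):
    key = event["detail"]["object"]["key"]
    service = service_for_object(key)
    if service is None:
        return {"skipped": key}  # not an env file we manage
    import boto3  # imported lazily so the mapping logic stays testable offline
    # forceNewDeployment makes ECS roll the tasks; new tasks re-read the env file
    boto3.client("ecs").update_service(
        cluster="ecs-config-demo-cluster",
        service=service,
        forceNewDeployment=True,
    )
    return {"restarted": service}
```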
Compared to SSM Parameter Store:

- **Simpler runtime** - A single file load vs multiple SSM API calls
- **Atomic updates** - All config changes applied together
- **Cost effective** - S3 is cheaper than SSM for many parameters
- **Native ECS support** - Built-in `environmentFiles` in task definitions
- **Easy debugging** - Download and inspect the entire config file
The solution uses ECS service restart instead of runtime refresh because:

- **Works with all Spring patterns** - `@Value`, `@ConfigurationProperties`, constructor injection
- **Fresh application state** - No stale caches, connections, or memory state
- **Zero downtime** - ECS handles the graceful rolling deployment
- **Guaranteed consistency** - All configuration is loaded at startup

Runtime refresh (`@RefreshScope`) only works with specific Spring patterns and doesn't reload infrastructure configs like database connections, thread pools, or security settings.
For demonstration purposes:

- service-1 runs with the `dev` profile
- service-2 runs with the `prod` profile
Each service has a `.env` file in S3:

```
s3://ecs-config-demo-dev-config-{account-id}/
├── service-1.env
└── service-2.env
```
Example `service-1.env`:

```bash
# Generated by Terraform
# Environment: dev
# Service: service-1

# JVM Configuration
JAVA_OPTS=-XX:+UseG1GC -Xmx384M -XX:MaxGCPauseMillis=100

# Application Configuration
APP_NAME=ecs-config-demo
APP_VERSION=1.0.0
APP_LOG_LEVEL=DEBUG
APP_FEATURE_FLAG=true
APP_ENVIRONMENT=dev

# Spring Configuration
SPRING_PROFILES_ACTIVE=dev
```

Variables in the `.env` file are automatically available to the container:
| Env Variable | Spring Property | Description |
|---|---|---|
| `JAVA_OPTS` | N/A (JVM args) | JVM options passed to the `java` command |
| `APP_NAME` | `app.name` | Application name |
| `APP_LOG_LEVEL` | `app.log.level` | Log level |
| `APP_FEATURE_FLAG` | `app.feature.flag` | Feature toggle |
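The mapping in the table follows from two mechanical steps: ECS turns each `VARIABLE=VALUE` line into a container environment variable, and Spring Boot's relaxed binding matches an upper-case, underscore-separated name to the lower-case, dot-separated property. A Python illustration of both steps (illustrative only, not project code; real relaxed binding handles more cases than shown here):

```python
def parse_env_file(text: str) -> dict:
    """Step 1: ECS turns VARIABLE=VALUE lines into environment variables."""
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # comments and blank lines carry no variables
        key, _, value = line.partition("=")
        env[key] = value  # everything after the first '=' is the value
    return env

def env_to_property(name: str) -> str:
    """Step 2: Spring relaxed binding, e.g. APP_LOG_LEVEL -> app.log.level."""
    return name.lower().replace("_", ".")

env = parse_env_file("# dev config\nAPP_LOG_LEVEL=DEBUG\nAPP_FEATURE_FLAG=true\n")
print({env_to_property(k): v for k, v in env.items()})
# {'app.log.level': 'DEBUG', 'app.feature.flag': 'true'}
```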
JVM options are defined in `infra/config/common/jvm.tf` (defaults for all services):

```hcl
# infra/config/common/jvm.tf
locals {
  common_jvm_params = {
    "jvm/opts" = join(" ", [
      "-XX:+UseG1GC",
      "-Xmx384M",
      "-XX:MaxGCPauseMillis=100",
      "-XX:+UseStringDeduplication",
      # ... more options
    ])
  }
}
```

To override for a specific environment, add to `infra/config/dev/env.tf`:
```hcl
# infra/config/dev/env.tf
locals {
  env_params = {
    "app/log/level"   = "DEBUG"
    "app/environment" = "dev"

    # Override JVM options for dev (e.g., enable remote debugging)
    "jvm/opts" = "-XX:+UseG1GC -Xmx512M -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"
  }
}
```

To override for a specific service, add to `infra/config/dev/service-1.tf`:
```hcl
# infra/config/dev/service-1.tf
locals {
  service_1_params = {
    # Give service-1 more memory (note: key includes service prefix)
    "service-1/jvm/opts" = "-XX:+UseG1GC -Xmx768M -XX:MaxGCPauseMillis=100"
  }
}
```

After changing, apply and the service will restart automatically:

```bash
cd infra/config/dev && terraform apply
```

Configuration is managed via Terraform in `infra/config/`:
```
infra/config/
├── modules/s3-env-file/     # Module that generates .env files
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── common/                  # Defaults for ALL environments
│   ├── common.tf            # Env-wide defaults (all services)
│   ├── jvm.tf               # JVM options defaults
│   ├── service-1.tf         # Service-1 defaults
│   ├── service-2.tf         # Service-2 defaults
│   └── outputs.tf
├── dev/                     # Dev environment
│   ├── main.tf              # Calls s3-env-file module
│   ├── s3-bucket.tf         # S3 bucket for config files
│   ├── env.tf               # Dev env-wide overrides
│   ├── service-1.tf         # Service-1 dev overrides
│   └── service-2.tf         # Service-2 dev overrides
└── prod/                    # Prod environment
    ├── main.tf
    ├── s3-bucket.tf
    ├── env.tf
    ├── service-1.tf
    └── service-2.tf
```
Parameters are merged with the following priority (later overrides earlier):

| Priority | Source | Example | Description |
|---|---|---|---|
| 1 (lowest) | `common/common.tf` | `APP_LOG_LEVEL=INFO` | Default for all services in all envs |
| 2 | `common/service-X.tf` | `APP_NAME=service-1` | Default for a specific service in all envs |
| 3 | `{env}/env.tf` | `APP_LOG_LEVEL=DEBUG` | Override for all services in this env |
| 4 (highest) | `{env}/service-X.tf` | `APP_LOG_LEVEL=TRACE` | Override for a specific service in this env |
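The four-layer merge behaves like successive map updates where later layers win. A short Python sketch of the precedence (illustrative only; the project's actual merge lives in the Terraform config):

```python
def merge_params(*layers: dict) -> dict:
    """Merge config layers; later layers override earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

common       = {"APP_LOG_LEVEL": "INFO"}   # 1: common/common.tf
service_def  = {"APP_NAME": "service-1"}   # 2: common/service-X.tf
env_override = {"APP_LOG_LEVEL": "DEBUG"}  # 3: {env}/env.tf
svc_override = {"APP_LOG_LEVEL": "TRACE"}  # 4: {env}/service-X.tf

print(merge_params(common, service_def, env_override, svc_override))
# {'APP_LOG_LEVEL': 'TRACE', 'APP_NAME': 'service-1'}
```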
- Create the environment directory:

  ```bash
  mkdir infra/config/staging
  ```

- Copy the structure from an existing environment:

  ```bash
  cp infra/config/dev/*.tf infra/config/staging/
  ```

- Update `staging/env.tf` with environment-specific values
- Update `staging/main.tf` - change the environment name
- Deploy:

  ```bash
  cd infra/config/staging
  terraform init && terraform apply
  ```

To add a new service:

- Add service defaults in `common/service-3.tf`
- Update `common/outputs.tf` to include the new service params
- Add `service-3.tf` to each environment that needs it
- Add `"service-3"` to the `services` list in each env's `env.tf`
Prerequisites:

- AWS CLI configured
- Terraform >= 1.0
- Java 21
- Maven
- Docker

```bash
# Build framework, services, Docker images and push to ECR
./build.sh

# Deploy all infrastructure
./deploy.sh

# Destroy all resources
./destroy.sh
```

Note: Ensure `AWS_REGION` is set (defaults to `eu-west-1`) and AWS credentials are configured.
Click to expand manual steps
```bash
# Build the framework and services
cd app/framework && mvn clean install
cd ../service-1 && mvn clean package -DskipTests
cd ../service-2 && mvn clean package -DskipTests
```

```bash
# Build and push Docker images
export AWS_REGION=eu-west-1
AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com

cd app/service-1
docker build -t $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/service-1 .
docker push $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/service-1

cd ../service-2
docker build -t $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/service-2 .
docker push $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/service-2
```

```bash
# Deploy infrastructure in order
cd infra/platform && terraform init && terraform apply
cd ../config/dev && terraform init && terraform apply
cd ../../lambda-config-refresh && terraform init && terraform apply
cd ../service-1 && terraform init && terraform apply
cd ../service-2 && terraform init && terraform apply
```

Get the ALB endpoint:

```bash
cd infra/platform
terraform output alb_endpoint
```

View the configuration of both services:

```bash
ALB=<your-alb-dns>

# View service-1 config (dev environment)
curl -s http://$ALB/service-1/api/config | jq

# View service-2 config (prod environment)
curl -s http://$ALB/service-2/api/config | jq
```

Notice the differences:

- service-1 (dev): `feature.flag=true`, `log.level=DEBUG`
- service-2 (prod): `feature.flag=false`, `log.level=INFO`
Open two terminals:

Terminal 1 - Watch service-1 config:

```bash
while true; do
  echo "=== $(date) ==="
  curl -s http://$ALB/service-1/api/config | jq '.application.featureFlag'
  sleep 5
done
```

Terminal 2 - Update the env file in S3:

```bash
# Download the current env file
AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
BUCKET="ecs-config-demo-dev-config-$AWS_ACCOUNT"
aws s3 cp s3://$BUCKET/service-1.env /tmp/service-1.env

# Edit the file - change APP_FEATURE_FLAG=true to APP_FEATURE_FLAG=false
sed -i 's/APP_FEATURE_FLAG=true/APP_FEATURE_FLAG=false/' /tmp/service-1.env

# Upload back to S3
aws s3 cp /tmp/service-1.env s3://$BUCKET/service-1.env
```

Within 30-60 seconds, you'll see the configuration update. The flow is:
- S3 object changes
- EventBridge detects the `.env` file update
- Lambda is triggered
- Lambda calls the ECS API: `update_service(forceNewDeployment=True)`
- ECS starts new tasks with the fresh configuration
- ECS waits for the new tasks to pass health checks
- ECS gracefully stops the old tasks
- The service is now running with the updated configuration
Monitor the ECS deployment:

```bash
aws ecs describe-services \
  --cluster ecs-config-demo-cluster \
  --services ecs-config-demo-service-1 \
  --query 'services[0].deployments'
```

You'll see two deployments during the rollout: PRIMARY (new) and ACTIVE (old).
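To script a wait for rollout completion, the same API can be polled until only the PRIMARY deployment remains. A hypothetical boto3 helper, not part of the repo (the client is passed in so the logic is testable without AWS):

```python
import time

def wait_for_rollout(ecs, cluster: str, service: str, poll_secs: int = 10):
    """Poll describe_services until a single PRIMARY deployment remains."""
    while True:
        svc = ecs.describe_services(cluster=cluster, services=[service])["services"][0]
        deployments = svc["deployments"]
        # During a rollout there are two deployments (PRIMARY + ACTIVE);
        # when the old ACTIVE one drains away, the rollout is done.
        if len(deployments) == 1 and deployments[0]["status"] == "PRIMARY":
            return deployments[0]
        time.sleep(poll_secs)

# Usage (assumes configured AWS credentials):
#   import boto3
#   wait_for_rollout(boto3.client("ecs"),
#                    "ecs-config-demo-cluster", "ecs-config-demo-service-1")
```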
For persistent changes, update the Terraform config:

```bash
# Edit infra/config/dev/env.tf or service files to change values
# Then apply:
cd infra/config/dev
terraform apply
```

This regenerates the `.env` file, uploads it to S3, and triggers the automatic restart.
| Endpoint | Description |
|---|---|
| `GET /service-1/api/config` | View service-1 configuration |
| `GET /service-2/api/config` | View service-2 configuration |
| `GET /service-{n}/api/health` | Health check |
If needed, you can manually trigger a service restart:

```bash
# Restart service-1
aws ecs update-service \
  --cluster ecs-config-demo-cluster \
  --service ecs-config-demo-service-1 \
  --force-new-deployment

# Restart service-2
aws ecs update-service \
  --cluster ecs-config-demo-cluster \
  --service ecs-config-demo-service-2 \
  --force-new-deployment
```

To exec into a running container:

```bash
TASK_ID=$(aws ecs list-tasks --cluster ecs-config-demo-cluster --service-name ecs-config-demo-service-1 --query 'taskArns[0]' --output text | cut -d'/' -f3)

aws ecs execute-command \
  --cluster ecs-config-demo-cluster \
  --task $TASK_ID \
  --container service-1 \
  --interactive \
  --command "/bin/sh"
```

Tip: If terminal lines are truncated, run this after connecting:

```bash
stty rows 50 cols 200
```

To tear everything down:

```bash
./destroy.sh
```