From Elastic Beanstalk to Docker on EC2: How I Saved Costs on My Hobby Project (and What I Learned)

Introduction

When I started working on the NikeRuns project, I knew I needed a centralized configuration server to manage settings across multiple microservices. My initial instinct was to use AWS Elastic Beanstalk—it’s managed, it scales automatically, and it seemed like the obvious choice. But after some deliberation and mentorship, I discovered that a simpler, more cost-effective solution was hiding in plain sight: Docker on a single EC2 instance, orchestrated with Terraform.

In this post, I’ll walk you through:

  • Why I initially chose Elastic Beanstalk
  • Why we pivoted to Docker on EC2
  • The architecture we built with Terraform
  • Key learnings from the development process
  • How mentorship shaped the solution

The Initial Problem: Configuration at Scale

The NikeRuns project is a microservices application with 14+ independent services, each needing its own configuration for different environments (dev, staging, production). Without centralization, every configuration change would require rebuilding and redeploying each service—a nightmare for agility.

I decided to implement Spring Cloud Config Server, which offers several advantages:

  • Profile-aware configuration: Each service fetches {service-name}-{profile}.yml from a Git repository at startup
  • RSA encryption: Sensitive values (database passwords, API keys) are encrypted with a {cipher} prefix and decrypted on-the-fly
  • Single source of truth: All configurations live in one auditable, version-controlled repository
  • Zero infrastructure overhead: It’s just a Spring Boot application backed by Git
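
To make the profile-aware part concrete: a service named order-service running with the dev profile would fetch a file like order-service-dev.yml from the Git repository at startup. The snippet below is purely illustrative (the service name, datasource URL, and ciphertext are hypothetical, not from the real repository):

```yaml
# order-service-dev.yml — hypothetical example
spring:
  datasource:
    url: jdbc:postgresql://db.internal:5432/orders
    username: orders_app
    # Encrypted with the config server's RSA key; the server decrypts
    # values carrying the {cipher} prefix before handing them out
    password: '{cipher}AQA4f9x...'
```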

Now came the deployment question: How should I host this?


The First Idea: Elastic Beanstalk

My initial thought was straightforward. Elastic Beanstalk is AWS’s managed service for deploying web applications. It handles:

  • Auto-scaling based on load
  • Load balancing
  • Health monitoring
  • Easy environment management

For a hobby project, it seemed like overkill, but it felt “safe” and AWS-native. The problem? Cost. Elastic Beanstalk itself adds no charge, but its default load-balanced environment provisions an Application Load Balancer and other supporting resources that together cost noticeably more per month than a single t3.micro EC2 instance. For a config server that rarely changes and only serves requests from its 14+ internal services, that extra cost seemed wasteful.


The Pivot: Docker on EC2 with Terraform

Through mentorship and discussion, I realized that a hobby project’s config server doesn’t need to be highly available or auto-scaling. The critical insight was:

  • The config server starts before all other services
  • Its uptime is important, but it doesn’t need multi-region redundancy
  • A single t3.micro instance (~$10/month) is more than sufficient

We decided to:

  1. Build a Docker image of the config server
  2. Deploy it to a single EC2 instance
  3. Use Terraform to define the entire infrastructure as code

This approach gave us:

  • Cost: ~$10/month for a t3.micro instance vs. $50+/month for Elastic Beanstalk
  • Simplicity: Direct control without managing Elastic Beanstalk’s abstractions
  • Reproducibility: Infrastructure defined in code, not clicking through AWS console
  • Learning: Better understanding of AWS networking, security groups, and IAM

The Solution: Terraform Architecture

Here’s what we built:

1. VPC and Networking

resource "aws_vpc" "this" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "config-server-vpc"
  }
}

resource "aws_subnet" "this" {
  vpc_id                  = aws_vpc.this.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = data.aws_availability_zones.available.names[0]
  map_public_ip_on_launch = true

  tags = {
    Name = "config-server-subnet"
  }
}

We created a dedicated VPC with a single subnet, Internet Gateway for public access, and a route table to direct traffic appropriately.
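
The Internet Gateway and route table follow the same pattern. Here is a sketch consistent with the resources above (the resource names are my assumptions, not necessarily what the real module uses):

```hcl
resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id

  tags = {
    Name = "config-server-igw"
  }
}

resource "aws_route_table" "this" {
  vpc_id = aws_vpc.this.id

  # Send all non-local traffic out through the Internet Gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }
}

resource "aws_route_table_association" "this" {
  subnet_id      = aws_subnet.this.id
  route_table_id = aws_route_table.this.id
}
```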

2. Security Groups: The Principle of Least Privilege

resource "aws_security_group" "config_server" {
  name        = "config-server-sg"
  description = "Allow SSH from trusted IP and config server port 8888"
  vpc_id      = aws_vpc.this.id

  ingress {
    description = "SSH from trusted IP only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.allowed_ssh_cidr]
  }

  ingress {
    description = "Config server port"
    from_port   = 8888
    to_port     = 8888
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "config-server-sg"
  }
}

Key security decisions:

  • SSH access is restricted to a single trusted IP (the developer’s IP)
  • Config server port (8888) is open to the world, but only that port; restricting it to known client CIDRs would tighten this further
  • All outbound traffic is allowed (for pulling Docker images, fetching from Git)

3. Secrets Management: SSM Parameter Store

One of the most important decisions was never embedding secrets in user_data. Instead, we use AWS Systems Manager (SSM) Parameter Store:

resource "aws_ssm_parameter" "git_uri" {
  name  = "/config-server/git_uri"
  type  = "String"
  value = var.git_uri
}

resource "aws_ssm_parameter" "encrypt_key" {
  name  = "/config-server/encrypt_key"
  type  = "SecureString"
  value = var.encrypt_key
}

resource "aws_ssm_parameter" "username" {
  name  = "/config-server/username"
  type  = "String"
  value = var.username
}

resource "aws_ssm_parameter" "pass" {
  name  = "/config-server/pass"
  type  = "SecureString"
  value = var.pass
}

Sensitive values are stored as SecureString type, which encrypts them using AWS KMS. Non-sensitive values (like git URI) use String type.

4. IAM Role: Least Privilege Access

resource "aws_iam_role" "config_server" {
  name = "config-server-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "ssm_read" {
  name = "config-server-ssm-read"
  role = aws_iam_role.config_server.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ssm:GetParameter", "ssm:GetParameters"]
      Resource = "arn:aws:ssm:*:*:parameter/config-server/*"
    }]
  })
}

resource "aws_iam_instance_profile" "config_server" {
  name = "config-server-profile"
  role = aws_iam_role.config_server.name
}

The EC2 instance is assigned a role that only allows reading parameters under /config-server/*. Nothing more, nothing less.

5. EC2 Instance with Security Hardening

resource "aws_instance" "config_server" {
  ami                    = data.aws_ami.this.id
  instance_type          = var.instance_type
  key_name               = aws_key_pair.config_server.key_name
  subnet_id              = aws_subnet.this.id
  vpc_security_group_ids = [aws_security_group.config_server.id]
  iam_instance_profile   = aws_iam_instance_profile.config_server.name

  # Enforce IMDSv2 — prevents SSRF attacks from reading instance metadata
  metadata_options {
    http_tokens                 = "required"
    http_put_response_hop_limit = 1
  }

  # Encrypt root volume at rest
  root_block_device {
    encrypted = true
  }

  user_data = file("${path.module}/scripts/user_data.sh")

  tags = {
    Name = "sathish-config-server"
  }
}

Security hardening:

  • IMDSv2 enforcement: Prevents Server-Side Request Forgery (SSRF) attacks that could otherwise extract instance metadata
  • Root volume encryption: All data at rest is encrypted
  • Minimal permissions via IAM: The instance can only access the config server parameters
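
To make the IMDSv2 point concrete, here is a sketch of the two access styles. These commands only work when run on the instance itself: with http_tokens = "required", the old unauthenticated request is rejected, while the token handshake still succeeds.

```shell
# IMDSv1-style request — rejected (HTTP 401) once http_tokens = "required"
curl -s -o /dev/null -w '%{http_code}\n' http://169.254.169.254/latest/meta-data/

# IMDSv2: obtain a short-lived session token via PUT, then present it on
# every metadata request. A typical SSRF bug that can only trigger simple
# GET requests cannot complete this handshake, so it cannot reach the
# instance role's credentials.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/
```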

6. The User Data Script: Secrets Never Hardcoded

#!/bin/bash
set -euo pipefail  # exit on errors, unset variables, and pipeline failures

# Update and install Docker + AWS CLI
yum update -y
yum install -y docker aws-cli
systemctl start docker
systemctl enable docker

# Fetch secrets from SSM Parameter Store — nothing sensitive in this script
REGION=us-east-1

GIT_URI=$(aws ssm get-parameter \
  --name /config-server/git_uri \
  --query 'Parameter.Value' \
  --output text \
  --region "$REGION")

ENCRYPT_KEY=$(aws ssm get-parameter \
  --name /config-server/encrypt_key \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text \
  --region "$REGION")

USERNAME=$(aws ssm get-parameter \
  --name /config-server/username \
  --query 'Parameter.Value' \
  --output text \
  --region "$REGION")

PASS=$(aws ssm get-parameter \
  --name /config-server/pass \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text \
  --region "$REGION")

# Write to root-only env file and clear variables from shell
install -m 600 /dev/null /etc/config-server.env
printf 'GIT_URI=%s\nencrypt_key=%s\nusername=%s\npass=%s\nAPP_PORT=8888\n' \
  "$GIT_URI" "$ENCRYPT_KEY" "$USERNAME" "$PASS" > /etc/config-server.env

unset GIT_URI ENCRYPT_KEY USERNAME PASS

# Pull and run the config server container
docker pull travelhelper0h/sathishproject-config-server:latest

docker run -d \
  --name config-server \
  --restart unless-stopped \
  -p 8888:8888 \
  --env-file /etc/config-server.env \
  travelhelper0h/sathishproject-config-server:latest

What’s happening here:

  1. Docker and AWS CLI are installed
  2. Secrets are fetched from SSM Parameter Store at boot time (not hardcoded anywhere)
  3. An environment file is created with root-only permissions (mode 600)
  4. Variables are explicitly unset from the shell to prevent leakage
  5. The Docker image is pulled and run with the environment variables passed in

The key insight: The script itself contains no secrets, only the logic to fetch them securely.


Key Learnings from This Project

1. Secrets Management is Critical

Never, ever embed secrets in code, Terraform files, or user_data scripts. Use a secrets manager (SSM Parameter Store, Vault, etc.). This took some iteration to get right, but it’s now a non-negotiable practice.
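
On the Terraform side, the variables that feed those SSM parameters can also be declared sensitive, so their values are redacted from plan and apply output. A sketch matching the variable names used earlier (the description text is mine):

```hcl
variable "encrypt_key" {
  description = "RSA key the config server uses to decrypt {cipher} values"
  type        = string
  sensitive   = true # redacted in terraform plan / apply output
}

variable "pass" {
  description = "Config server password"
  type        = string
  sensitive   = true
}
```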

2. Infrastructure as Code Pays Dividends

With Terraform, I can:

  • Destroy and recreate the entire infrastructure in minutes
  • Share the code with team members
  • Review changes through version control
  • Understand exactly what’s deployed

Clicking through the AWS console would have been faster initially, but Terraform wins long-term.
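
In practice, the whole lifecycle comes down to a handful of commands:

```shell
terraform init     # download the AWS provider and set up state
terraform plan     # preview exactly what will change
terraform apply    # create or update the whole stack
terraform destroy  # tear everything down when it is not needed
```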

3. Security Groups Require Thoughtful Design

Understanding CIDR notation, inbound/outbound rules, and the principle of least privilege took some learning, but it’s foundational. Overly permissive security groups are a common vulnerability.

4. Docker Simplifies Deployment

Instead of installing Java, managing Spring Boot configurations, and monitoring the application manually, Docker abstracts all of that. The entire application is a single container that can be rebuilt and deployed consistently.

5. IMDSv2 and Encryption Matter

These aren’t just buzzwords—they’re real security improvements that protect against real attacks. Enforcing IMDSv2 prevents SSRF attacks, and encrypting the root volume protects data at rest.


The Role of Mentorship

This solution wouldn’t have existed without good mentorship. My initial instinct was to use Elastic Beanstalk because it felt “AWS-native” and safe. But through discussion, I was challenged to:

  • Question assumptions (why does a hobby project config server need to be highly available?)
  • Evaluate trade-offs (cost vs. complexity)
  • Learn foundational concepts (VPCs, security groups, IAM, secrets management)

The mentorship didn’t consist of someone handing me a solution; it was collaborative problem-solving that helped me think through the options and ultimately make a better decision.


What’s Next?

This solution is working well for the NikeRuns project. Future improvements might include:

  • Monitoring and alerting: CloudWatch dashboards to track config server health
  • Automated backups: Regular snapshots of the EC2 instance
  • Multi-region failover: For production-grade systems (likely overkill for this hobby project)
  • Configuration hot-refresh: Using Spring Cloud Bus to refresh configurations without restarting services

Conclusion

Sometimes the best solution isn’t the most complex or the most managed. By taking the time to understand the problem, evaluate options, and challenge assumptions, I found a solution that was simpler, cheaper, and more educational than my initial instinct.

If you’re deploying a Spring Cloud Config Server or any other application to AWS, I’d encourage you to:

  1. Define your actual requirements (not theoretical ones)
  2. Consider trade-offs between managed services and DIY approaches
  3. Use Infrastructure as Code from the start
  4. Treat secrets management as non-negotiable
  5. Find a mentor to challenge your thinking

The result is infrastructure that’s not only cost-effective but also secure, reproducible, and maintainable.
