One of our fintech clients came to us with a problem: their AWS bill had quietly grown from EUR 5K to EUR 14K per month over 18 months, with no proportional increase in traffic. The system was not slow — but it was expensive. They asked us to find savings without touching uptime.
We spent two weeks profiling, optimizing, and restructuring their infrastructure. The result: a 41% cost reduction with measurably better response times. Here is what we found and fixed.
The Audit: Where the Money Was Going
The first step was mapping every euro to a service. We used AWS Cost Explorer with resource-level tagging and found three main cost drivers, plus a long tail:
- RDS (PostgreSQL): EUR 4,200/month — a db.r6g.2xlarge instance running 24/7
- EC2/ECS: EUR 5,100/month — 12 always-on containers across 3 services
- Data transfer + S3: EUR 2,800/month — cross-region replication they no longer needed
- Other (CloudWatch, Lambda, etc.): EUR 1,900/month
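For reference, here is a minimal sketch of the kind of Cost Explorer query behind a breakdown like this, using boto3. The date range and the `project` tag key are placeholders, not the client's actual tagging scheme.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Monthly unblended cost, grouped by service and by a cost-allocation tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[
        {"Type": "DIMENSION", "Key": "SERVICE"},
        {"Type": "TAG", "Key": "project"},
    ],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 100:  # skip the long tail of small line items
            print(period["TimePeriod"]["Start"], group["Keys"], round(amount, 2))
```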
The database alone was 30% of the bill — and it was dramatically over-provisioned.
Fix 1: Right-Size the Database
The RDS instance had 64 GB of RAM, but peak memory usage never exceeded 18 GB. CPU averaged 12% with spikes to 35% during batch jobs.
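That kind of headroom can be confirmed straight from CloudWatch. A sketch with a placeholder instance identifier; note that RDS exposes FreeableMemory rather than used memory, so memory headroom is read from the minimum of that metric.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

def rds_metric(instance_id, metric, stat, days=30):
    """Hourly datapoints for one RDS CloudWatch metric over the last `days` days."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        StartTime=now - timedelta(days=days),
        EndTime=now,
        Period=3600,
        Statistics=[stat],
    )
    return [point[stat] for point in resp["Datapoints"]]

# "prod-db" is a placeholder identifier.
cpu = rds_metric("prod-db", "CPUUtilization", "Maximum")
mem = rds_metric("prod-db", "FreeableMemory", "Minimum")
print("peak CPU %:", max(cpu, default=0.0))
print("min freeable memory (GiB):", min(mem, default=0.0) / 1024 ** 3)
```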
We migrated to a db.r6g.large (16 GB RAM) and moved batch processing to off-peak hours using scheduled ECS tasks. We also enabled RDS Proxy to handle connection pooling — which actually improved query latency by reducing connection overhead.
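Scheduled ECS tasks of this kind are typically driven by an EventBridge cron rule that launches the batch task overnight. A sketch with placeholder ARNs and subnet; the schedule expression is in UTC.

```python
import boto3

events = boto3.client("events")

# 01:30 UTC is roughly 02:30 or 03:30 local time, depending on daylight saving.
events.put_rule(
    Name="nightly-batch",
    ScheduleExpression="cron(30 1 * * ? *)",
    State="ENABLED",
)

events.put_targets(
    Rule="nightly-batch",
    Targets=[{
        "Id": "batch-task",
        # Cluster, role, task definition, and subnet below are placeholders.
        "Arn": "arn:aws:ecs:eu-central-1:123456789012:cluster/prod",
        "RoleArn": "arn:aws:iam::123456789012:role/ecs-events-role",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:eu-central-1:123456789012:task-definition/batch-job",
            "TaskCount": 1,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "DISABLED",
                }
            },
        },
    }],
)
```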
Savings: EUR 2,800/month
Fix 2: Auto-Scaling Instead of Always-On
All 12 containers ran 24/7, but traffic analysis showed clear patterns: 80% of requests came between 7:00 and 22:00 CET, with a sharp drop overnight. Weekend traffic was 40% of weekday volume.
We configured ECS auto-scaling based on CPU and request count metrics: minimum 3 containers during off-peak, scaling up to 12 during business hours. We also consolidated two microservices that always scaled together into a single service — reducing baseline container count from 12 to 8.
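A sketch of one way to wire this up, using an Application Auto Scaling target-tracking policy on CPU. The cluster and service names and the 55% target are placeholders; the request-count side is a second target-tracking policy on the ALB request-count metric, configured the same way.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/prod-cluster/api"  # placeholder cluster/service

# Floor of 3 tasks off-peak, ceiling of 12 during business hours.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=3,
    MaxCapacity=12,
)

autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 55.0,  # keep average CPU around 55%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization",
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 300,
    },
)
```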
Savings: EUR 1,900/month
Fix 3: Eliminate Unnecessary Data Transfer
The system was replicating all S3 data to a second region — a disaster recovery setup configured during initial launch. But the client had since moved to a multi-AZ architecture within eu-central-1, making cross-region replication redundant. Nobody had turned it off.
We removed the replication, cleaned up 2.3 TB of duplicate data, and switched remaining S3 access to VPC endpoints to avoid data transfer charges.
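The two API-level changes are short calls; a sketch with placeholder bucket, VPC, and route-table IDs. Deleting the already-replicated objects in the secondary region is a separate cleanup step.

```python
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# Stop replicating new objects to the second region.
s3.delete_bucket_replication(Bucket="prod-data-bucket")

# Gateway endpoint so S3 traffic from the VPC avoids NAT/data-transfer charges.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.eu-central-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```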
Savings: EUR 1,100/month
The Results
- Before: EUR 14,000/month
- After: EUR 8,200/month
- Savings: EUR 5,800/month (41%)
But the numbers only tell part of the story. Response times actually improved by 15% — partly from RDS Proxy connection pooling, partly from the consolidated services having less network overhead.
Lessons for Any Cloud Setup
1. Tag Everything
You cannot optimize what you cannot measure. Every resource should be tagged with project, environment, and team. Without tags, cost attribution is guesswork.
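One way to keep tagging enforceable is a script that lists resources missing a required tag. A sketch using the Resource Groups Tagging API, with `project` as the example key.

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Collect ARNs of resources that have no `project` tag at all.
untagged = []
for page in tagging.get_paginator("get_resources").paginate():
    for resource in page["ResourceTagMappingList"]:
        keys = {tag["Key"] for tag in resource.get("Tags", [])}
        if "project" not in keys:
            untagged.append(resource["ResourceARN"])

print(f"{len(untagged)} resources are missing a project tag")
for arn in untagged[:20]:
    print(" ", arn)
```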
2. Review Reserved Instances Quarterly
Our client was paying on-demand prices for instances that ran 24/7. We moved stable workloads to 1-year reserved instances with no upfront payment — an additional 25% saving on those resources.
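Cost Explorer will do the reservation math for you. A sketch that asks for 1-year, no-upfront RDS recommendations based on the last 30 days; the exact shape of the response is worth checking against the current boto3 docs, so it is printed whole here.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_reservation_purchase_recommendation(
    Service="Amazon Relational Database Service",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

# Each detail includes the recommended purchase and the estimated savings.
for recommendation in response.get("Recommendations", []):
    for detail in recommendation.get("RecommendationDetails", []):
        print(detail)
```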
3. Set Up Cost Alerts Before You Need Them
AWS Budgets with alerts at 80% and 100% of target spend would have caught this growth months earlier. We configured alerts for every cost center as part of the optimization.
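A sketch of one such budget with alerts at 80% and 100% of a monthly target. The account ID, amount (in USD here), and notification address are placeholders.

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "monthly-infra",
        "BudgetLimit": {"Amount": "9000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": threshold,  # percent of the budget limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "ops@example.com"},
            ],
        }
        for threshold in (80.0, 100.0)
    ],
)
```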
4. Schedule Non-Critical Workloads
Development and staging environments do not need to run on weekends. Scheduled start/stop for non-production environments saved another EUR 400/month that is not even in the numbers above.
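A sketch of the stop side, meant to run from a scheduled Lambda or cron job. The `environment` tag values are placeholders, and the morning start job is the mirror image using start_instances and start_db_instance.

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

def stop_non_prod():
    # Stop all running EC2 instances tagged environment=staging or environment=dev.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["staging", "dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

    # Same idea for non-production RDS instances, matched by tag.
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = {
            tag["Key"]: tag["Value"]
            for tag in rds.list_tags_for_resource(
                ResourceName=db["DBInstanceArn"]
            )["TagList"]
        }
        if tags.get("environment") in ("staging", "dev") and db["DBInstanceStatus"] == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])

if __name__ == "__main__":
    stop_non_prod()
```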
The Bottom Line
Cloud cost optimization is not about cutting corners — it is about eliminating waste. Most companies are over-provisioned because they set up infrastructure for peak load and never revisit it. A focused two-week audit typically finds 30-50% savings. The question is not whether you are overspending — it is how much.