MentorMate has clients across a variety of industries, each with very specific technology needs. Our Cloud Center of Excellence helps fintech, healthcare, and education companies — among others — achieve a well-architected environment. The AWS Well-Architected Framework defines five pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.
Cost is important to our clients and often plays a role in their business decisions. Deployments in the public cloud are much easier to audit and review than those in a private data center because the resources in use are far more visible.
Let's go through some of the tools and approaches we recommend to our clients for achieving even better visibility and, with it, better cost optimization.
AWS Tagging Strategy
A recommended first step in optimizing cost is making use of AWS Tags. A tag is a label that you, or AWS, assign to an AWS resource. By using tags, you can organize the resources in use and gain visibility into who is using a particular AWS service and how it's being utilized.
Here’s a guide to AWS tagging strategies that you can adapt and modify according to your organizational structure. The right tagging structure depends on your business context; common tag dimensions include department, organizational unit, owner, cost center, project, environment, application, and region.
Tags can also be used for Cost Allocation, which helps you categorize and track your AWS costs. AWS Cost Explorer (see below) and detailed billing reports support the ability to break down AWS costs by tag.
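To make the idea concrete, here is a minimal sketch of what a tag-based cost breakdown does conceptually. The line items and tag values below are entirely hypothetical; in practice, Cost Explorer and the detailed billing reports do this grouping for you once cost allocation tags are activated.

```python
from collections import defaultdict

# Hypothetical billing line items: (service, cost in USD, tags)
line_items = [
    ("AmazonEC2", 120.50, {"Project": "mobile-app", "Environment": "prod"}),
    ("AmazonRDS",  80.00, {"Project": "mobile-app", "Environment": "prod"}),
    ("AmazonEC2",  35.25, {"Project": "mobile-app", "Environment": "dev"}),
    ("AWSLambda",   4.10, {"Project": "analytics",  "Environment": "prod"}),
]

def cost_by_tag(items, tag_key):
    """Sum costs per value of the given cost-allocation tag."""
    totals = defaultdict(float)
    for _service, cost, tags in items:
        # Untagged resources end up in their own bucket -- in real
        # reports these show up as "No tag key" and are worth chasing down.
        totals[tags.get(tag_key, "(untagged)")] += cost
    return dict(totals)

print(cost_by_tag(line_items, "Environment"))
```

The same data sliced by `Project` instead of `Environment` answers a different business question, which is exactly why a multi-dimensional tagging strategy pays off.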
A recent AWS addition, tag policies, makes it possible to enforce the use of predefined tags at the organization level. Once a tagging strategy is in place, tag usage can be monitored and enforced across the organization, letting you spot non-compliant resources that may be in use.
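As an illustration, a tag policy attached in AWS Organizations might look like the sketch below. The tag key, values, and resource types are hypothetical examples; check the AWS Organizations documentation for the full policy syntax before relying on it.

```json
{
  "tags": {
    "CostCenter": {
      "tag_key": { "@@assign": "CostCenter" },
      "tag_value": { "@@assign": ["Engineering", "Marketing"] },
      "enforced_for": { "@@assign": ["ec2:instance", "ec2:volume"] }
    }
  }
}
```

With `enforced_for` in place, tagging operations that use a non-compliant `CostCenter` value on the listed resource types are rejected rather than merely reported.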
AWS Cost Explorer
AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. As you can see in the screenshot below, there are multiple options for filtering on the right side of the screen. Once tags have been applied, this is the first service to turn to when performing cost optimization.
This video from AWS offers a deeper dive into how AWS Cost Explorer can help you manage your costs.
AWS Budgets
In the AWS Budgets tool, you set custom budgets and receive alerts when your costs or usage exceed (or are forecasted to exceed) a predefined threshold. Budgets can be set per account and refined with the same filters that Cost Explorer supports. We view setting a budget as an extremely important part of every deployment. For instance, one of our clients mistakenly provisioned oversized resources in their development environment, and it was the AWS Budgets alarm that saved them.
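The alerting logic behind budgets is simple enough to sketch. The function below is a hypothetical illustration of how AWS Budgets decides between an "actual" and a "forecasted" alert at each threshold; the dollar figures in the usage example are made up.

```python
def budget_alerts(actual, forecast, budget, thresholds=(0.8, 1.0)):
    """Return the alert thresholds that have been crossed.

    Mirrors the two AWS Budgets alert types: an ACTUAL alert fires when
    real spend crosses a threshold, a FORECASTED alert fires when only
    the projected end-of-period spend crosses it.
    """
    alerts = []
    for t in thresholds:
        if actual >= budget * t:
            alerts.append(("ACTUAL", t))
        elif forecast >= budget * t:
            alerts.append(("FORECASTED", t))
    return alerts

# $850 spent, $1,100 forecasted against a $1,000 monthly budget:
print(budget_alerts(actual=850, forecast=1100, budget=1000))
```

The forecasted alert is the one that catches runaway dev environments early, before the money is actually spent.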
AWS Compute Optimizer
The easiest way to achieve optimization is to review your resources and stop those that are not in use or have simply been forgotten.
If you have dev or feature environments running, scheduling the start and stop of resources is also an option. AWS provides automation for this with its AWS Instance Scheduler solution. And if you take an Infrastructure as Code approach, you can create and destroy entire environments dynamically.
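The core decision a scheduler makes is easy to illustrate. This is a minimal sketch in the spirit of the AWS Instance Scheduler, not its actual implementation; the office-hours window is an assumption you would tune per team.

```python
from datetime import datetime

def should_run(now: datetime, start_hour=8, stop_hour=20, weekdays_only=True):
    """Decide whether a dev/feature environment should be up right now.

    Keeps instances running only during a working-hours window
    (default 08:00-20:00, Monday-Friday).
    """
    if weekdays_only and now.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return False
    return start_hour <= now.hour < stop_hour

print(should_run(datetime(2020, 1, 6, 10)))  # Monday, 10:00
```

A 12-hour weekday window means instances run 60 of 168 weekly hours, roughly a 64% reduction compared to running 24/7.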
EC2 is probably the most widely used AWS service, and there are many options for optimizing your compute resources. Right sizing is a good starting point: AWS even provides integrated right-sizing recommendations as part of its cost optimization tooling.
Scaling based on load is another great way to pay only for what you need. With Amazon EC2 Auto Scaling, you can scale your EC2 instances up and down based on load. This way, you can run many instances during the day when traffic is high and just a single one at night when there may not be much. Scaling can be triggered manually or by a metric like average CPU usage, and it can also be scheduled ahead of time. One of the latest features, Predictive Scaling for EC2, is even powered by machine learning algorithms.
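The arithmetic behind metric-driven scaling is worth seeing once. The sketch below captures the idea of target tracking (keep a per-instance metric near a target by resizing the fleet); the numbers are hypothetical and this is a simplification of what EC2 Auto Scaling actually does.

```python
import math

def desired_capacity(current, metric_value, target, min_size=1, max_size=10):
    """Target-tracking style calculation.

    If average CPU across `current` instances is `metric_value` percent
    and we want it near `target` percent, resize the fleet proportionally,
    clamped to the group's min/max size.
    """
    desired = math.ceil(current * metric_value / target)
    return max(min_size, min(max_size, desired))

# 4 instances at 90% average CPU, targeting 60%:
print(desired_capacity(current=4, metric_value=90, target=60))
```

Rounding up biases the group toward having slightly more capacity than strictly needed, which is the safer direction when traffic is climbing.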
Reserved Instances are another useful option for companies willing to make long-term commitments. They can save you up to 72% compared to the on-demand pricing model. There are 1-year and 3-year commitments with three payment options: All Upfront, Partial Upfront, and No Upfront.
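A quick way to sanity-check an RI quote is to convert it to an effective hourly rate. The prices in this sketch are hypothetical round numbers, not real AWS pricing; plug in the figures from your own pricing page.

```python
def reserved_savings(on_demand_hourly, upfront, monthly, term_years):
    """Effective hourly rate of a Reserved Instance vs. on-demand.

    Returns (effective_hourly, savings_percent). Assumes the instance
    runs 24/7 for the whole term, which is when RIs pay off most.
    """
    hours = term_years * 365 * 24
    effective_hourly = (upfront + monthly * 12 * term_years) / hours
    savings_pct = (1 - effective_hourly / on_demand_hourly) * 100
    return effective_hourly, savings_pct

# Hypothetical: $0.10/hr on-demand vs. a 3-year All Upfront RI at $920.
rate, pct = reserved_savings(on_demand_hourly=0.10, upfront=920,
                             monthly=0, term_years=3)
print(f"${rate:.4f}/hr, {pct:.1f}% savings")
```

The same function prices a Partial or No Upfront option by moving cost from `upfront` into `monthly`, which is a handy way to compare the three payment models side by side.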
Another great service is the newly released AWS Savings Plans, offering up to 72% savings on Amazon EC2 usage in exchange for a commitment to a consistent amount of usage over a fixed term. The service is very easy to use: it recommends a plan based on an automatic calculation of your previous usage. The payment options are the same as for Reserved Instances, and there are 1- and 3-year terms to choose from. The true power of the service is that you commit to compute usage (Amazon EC2, AWS Fargate, and AWS Lambda), not to a specific EC2 instance type or family.
Spot Instances can bring up to 90% cost savings for workloads that are fault-tolerant, stateless, or flexible. Examples include big data, containerized workloads, CI/CD, web servers, and high-performance computing (HPC).
While there are many options for optimizing EC2 costs, changing the infrastructure and shifting your mindset toward containers and Lambda functions in a serverless architecture might have an even greater impact. In this model, compute is started on client request, so no resources sit idle, and there is no need to right-size instances: resource utilization is effectively 100%. These case studies present some of the companies using serverless, such as iRobot, FINRA, and Thomson Reuters.
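The pay-per-request model also makes serverless costs easy to estimate up front. The formula below follows the published Lambda pricing structure (a per-request fee plus a per-GB-second fee); the specific prices and workload numbers are illustrative defaults, so check the current AWS Lambda pricing page for your region, and note that the free tier is ignored here.

```python
def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb,
                        price_per_request=0.20 / 1_000_000,
                        price_per_gb_second=0.0000166667):
    """Rough monthly cost estimate for an AWS Lambda function.

    Cost = request charge + compute charge, where compute is billed
    in GB-seconds (memory allocated x execution time).
    """
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * price_per_request + gb_seconds * price_per_gb_second

# Hypothetical workload: 10M requests/month, 200 ms each, 512 MB memory.
print(f"${lambda_monthly_cost(10_000_000, 200, 512):.2f}/month")
```

For spiky or low-volume workloads, an estimate like this often lands far below the cost of even a single always-on instance, which is where the serverless model shines.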
As MentorMate is designated as an AWS Lambda Delivery Partner, we can help you build a well-architected serverless solution.
Database Cost Optimization
Amazon Relational Database Service (Amazon RDS) offers six database engines to choose from: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. Amazon RDS makes managing databases much easier than an on-premises deployment. The service supports the on-demand pricing model, where you pay only for the time the database is online, and it supports Reserved Instances just like EC2. One point of interest for cost optimization is the licensing model: you may be paying for a higher license tier than your actual needs require.
The biggest cost optimizations, however, come from migrating the database engine to one where you don't pay an additional licensing price. AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases, whether it's a homogeneous migration or one between different platforms.
Cleaning Up Unused Resources
When it comes to cleaning up AWS resources, open-source tools like aws-nuke come in handy. Once configured, the tool is easy to use, and we run it periodically to clean unused resources from an AWS account. If you'd like to optimize the process and create incident response procedures, you can even execute aws-nuke in response to a budget alarm: as soon as you reach a budget threshold, the tool runs and resources are cleaned up.
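A configuration sketch shows why the "once configured" part matters so much. The account IDs, region, and filter below are hypothetical, and the exact keys vary between aws-nuke versions, so verify against the project's README before running anything; the tool is destructive by design.

```yaml
# Illustrative aws-nuke configuration (hypothetical account IDs).
regions:
  - us-east-1

account-blocklist:
  - "999999999999"       # production account -- must never be nuked

accounts:
  "123456789012":        # sandbox account to clean
    filters:
      IAMRole:
        - "OrganizationAccountAccessRole"  # keep the cross-account access role
```

The blocklist and filters are the safety rails: everything not explicitly protected in a listed account is a candidate for deletion.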
AWS provides many tools that help you achieve visibility and cost optimization. AWS tags play a key role in making resource usage visible, and they are a great way to create custom filters over the cost reports. Right sizing and rearchitecting your application with serverless approaches can lower the bill further, and paying less for licenses by migrating to open-source databases is also a win.
You can use all or some of the services reviewed here to cut your costs. But keep in mind that, while numerous tools and strategies exist, investing in a cost-conscious culture is just as important.
The MentorMate team has more than 100 AWS accreditations and certifications, validating our passion, experience, and expertise. We’re also an AWS Advanced Consulting Partner. Learn more about our cloud services and contact us with any questions on how you can implement IaC in your workflows.