Monday morning, monthly IT cost review. The CFO looks at the AWS invoice - 847,000 PLN. “That’s 40% more than a year ago, and we didn’t launch any new projects.” The CTO shrugs - “that’s how cloud is, costs rise.” No one knows which resources generate these costs, which are needed, and which can be turned off. Another month - another 850,000 PLN. And so on.

This situation is the standard, not the exception. Flexera State of the Cloud 2025 shows that organizations waste an average of 32% of public cloud spending. For a company spending 10 million PLN annually on cloud, that’s 3.2 million PLN thrown away. Not due to bad intentions - due to lack of visibility, processes, and competencies to manage this new type of IT expenditure.

Why do cloud costs spiral out of control in most organizations?

“Organizations that implement a robust SAM program can expect to reduce their software spending by up to 30% in the first year.”

Gartner, Market Guide for Software Asset Management Tools

The cloud payment model - pay-as-you-go - was supposed to be an advantage over traditional IT. You pay for what you use, not for unused infrastructure. In theory. In practice, the ease of provisioning resources without upfront cost leads to uncontrolled growth. A developer needs a VM - clicks and has it. Needs a bigger one - clicks and has it. Project ends - the VM stays because “it might be useful.”

Lack of ownership and accountability for costs makes things worse. In traditional IT, buying a server required budget, approval, procurement. In the cloud, a developer with the right permissions can spin up resources costing thousands of zlotys monthly in a minute. Who’s responsible for these costs? Often no one specific - the IT department gets an aggregated invoice and throws up their hands.

Cloud provider pricing complexity intentionally complicates optimization. AWS has over 300 different services, each with its own pricing model. Azure isn’t far behind. Combinations of regions, instance types, billing models, volume discounts create a matrix impossible to grasp without dedicated tools and expertise.

The dynamics of change don’t help. Prices change, new instance types appear, promotions come and go. A Reserved Instance bought a year ago may be unprofitable today because a cheaper VM type appeared. An organization that configured infrastructure once and forgot overpays more with each passing month.

Shadow IT in the cloud is a separate category of problem. Business departments, frustrated with slow IT, set up their own AWS or Azure accounts. Marketing has theirs, Sales has theirs, every pilot project has theirs. Consolidating these costs and optimization is impossible because IT doesn’t even know these accounts exist.

What are the main sources of waste in Azure and AWS?

Zombie resources - resources started and forgotten - are the most common source of waste. VMs from projects completed months ago. Snapshots of disks from non-existent machines. Elastic IP addresses not associated with any resource. Load balancers without backends. In a large organization, such zombies can account for 10-20% of total cloud cost.

Oversized instances - resources larger than needed. The developer doesn’t know what size VM they need, so they take a large one “just in case.” It turns out the application uses 5% CPU and 10% RAM, but we pay for 100%. Case studies show that 40-60% of VMs in a typical organization are oversized by at least one tier.

Unused Reserved Instances and Savings Plans. The organization bought a yearly reservation for a specific instance type, then changed architecture and stopped using that type. The reservation still incurs cost but delivers no savings. Flexera reports that 25% of Reserved Instances in enterprises are unused or underutilized.

Suboptimal storage choices. Data kept on the most expensive tier when it could be on a cheaper one. Archival logs on premium SSD instead of cold storage. No lifecycle policies automatically moving old data to cheaper tiers. Storage is often 20-30% of cloud costs and often the most neglected area.

Development and test environments running 24/7. Environments needed 8 hours a day, 5 days a week, run non-stop. 76% of the time they generate costs with no value. Automatic shutdown outside working hours is a quick win, but rarely implemented.
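The 76% figure follows directly from the weekly hours; the arithmetic can be checked in a few lines (the 2.5 PLN/hour rate below is a made-up example, not a real price):

```python
# Hours a dev/test environment is actually needed vs. hours it runs 24/7.
needed_hours = 8 * 5          # 40 hours per week
always_on_hours = 24 * 7      # 168 hours per week

idle_fraction = 1 - needed_hours / always_on_hours
print(f"{idle_fraction:.0%} of runtime produces no value")  # 76%

# Example monthly saving for one VM at a hypothetical 2.5 PLN/hour rate
hourly_rate_pln = 2.5
monthly_saving = idle_fraction * always_on_hours * 52 / 12 * hourly_rate_pln
print(f"~{monthly_saving:.0f} PLN/month saved by scheduling this VM")
```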

Not using spot instances and preemptible VMs for workloads that tolerate interruption. Batch processing, CI/CD pipelines, rendering - all of this can run on spot instances at a 60-90% discount. It requires interruption-tolerant architecture, but the ROI is enormous.

How to conduct a cloud cost audit from scratch?

The first step is full inventory - gathering all cloud accounts in one place. Sounds trivial, but in large organizations it’s a challenge. Accounts scattered across departments, different credit cards, invoices going to different places. Without consolidation, there’s no optimization.

Enabling Cost Allocation Tags is the foundation of visibility. Every resource should be tagged: project, owner, environment (prod/dev/test), cost center. Without tags, you don’t know who generates costs and why. AWS and Azure offer native tools to enforce tags when creating resources - enforce them.
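A tag-compliance check is easy to sketch. The required-tag list and the resource records below are illustrative, not any provider's real schema; in practice you would feed in an inventory export:

```python
# Minimal tag-compliance check over an exported resource inventory.
# The required-tag set mirrors the article's suggestion: project, owner,
# environment, cost center.
REQUIRED_TAGS = {"project", "owner", "environment", "cost-center"}

def missing_tags(resource: dict) -> set:
    """Return required tags absent from a resource's tag map."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

resources = [
    {"id": "vm-001", "tags": {"project": "crm", "owner": "anna",
                              "environment": "prod", "cost-center": "CC-12"}},
    {"id": "vm-002", "tags": {"project": "crm"}},  # cost is unallocatable
]

non_compliant = {r["id"]: sorted(missing_tags(r))
                 for r in resources if missing_tags(r)}
print(non_compliant)
```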

Using native cost management tools. AWS Cost Explorer, Azure Cost Management - free, built-in, powerful. They allow analyzing costs across different dimensions, identifying anomalies, forecasting future spending. Start with them before buying an external tool.

Identifying zombie resources requires a systematic approach. List of VMs with zero or minimal CPU usage over the last 30 days. Unattached disks. Snapshots older than 90 days. Unassociated IP addresses. Load balancers with zero traffic. Every cloud provider has tools for this (AWS Trusted Advisor, Azure Advisor).
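The criteria above translate into a simple filter. This is a sketch over an invented inventory format (field names are illustrative); real data would come from a provider export or advisor API:

```python
# Zombie-resource filter applying the criteria from the text:
# idle VMs, unattached disks, old snapshots, unassociated IPs.
from datetime import date

TODAY = date(2025, 6, 1)  # example reference date

def is_zombie(res: dict) -> bool:
    kind = res["type"]
    if kind == "vm":
        return res["avg_cpu_30d"] < 1.0          # near-zero CPU for 30 days
    if kind == "disk":
        return not res["attached"]               # unattached volume
    if kind == "snapshot":
        return (TODAY - res["created"]).days > 90
    if kind == "ip":
        return not res["associated"]             # idle Elastic IP
    return False

inventory = [
    {"type": "vm", "id": "vm-7", "avg_cpu_30d": 0.2},
    {"type": "snapshot", "id": "snap-1", "created": date(2024, 1, 15)},
    {"type": "disk", "id": "disk-3", "attached": True},
]
zombies = [r["id"] for r in inventory if is_zombie(r)]
print(zombies)  # vm-7 and snap-1 qualify
```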

Rightsizing analysis - comparing actual usage with provisioned capacity. A VM has 8 vCPU, uses an average of 0.3 vCPU - a candidate for downsizing. Tools like AWS Compute Optimizer or Azure Advisor automatically generate rightsizing recommendations with estimated savings.
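The core of a rightsizing heuristic is a utilization ratio. The 20% threshold below is an illustrative choice, not a value any tool prescribes:

```python
# Rightsizing heuristic: flag VMs whose average vCPU usage is far below
# provisioned capacity. Threshold of 20% is an example policy.
def downsize_candidate(vcpus: int, avg_vcpu_used: float,
                       threshold: float = 0.20) -> bool:
    return avg_vcpu_used / vcpus < threshold

print(downsize_candidate(8, 0.3))   # ~4% utilized -> True
print(downsize_candidate(4, 3.1))   # heavily used -> False
```

In practice you would also check memory, disk, and network before downsizing, which is why the provider tools mentioned above are worth using first.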

Reserved Instance coverage and utilization analysis. How many workloads qualify for RI but run on-demand? How many RIs are unused? What would be the savings with optimal coverage? This requires understanding commitment timelines and flexibility options of different reservation types.

How does the Reserved Instances model work and when is it worth it?

Reserved Instances are a commitment - you commit to using a specific resource type for 1 or 3 years in exchange for a 30-72% discount compared to on-demand. The longer the commitment and larger the upfront payment, the bigger the discount. Sounds simple, but the devil is in the details.

Reserved Instance types differ in flexibility. Standard RI - least flexible, biggest discount, tied to a specific instance family in a specific region. Convertible RI - you can change the instance type during the term in exchange for a smaller discount. Regional Linux RIs additionally offer instance size flexibility - the discount automatically applies across sizes within the same family.

Break-even point - the point from which an RI pays off. If you plan to use the resource for the entire commitment period, the RI always pays off. If you’re not sure - calculate. A 1-year RI with a 30% discount pays off if you use the resource for more than about 8.5 months (0.7 × 12 = 8.4); a 3-year RI with a 50% discount - for more than 18 months. Factor in the risk of change.
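The break-even arithmetic is simple enough to make explicit: at discount d, a reservation costs (1 − d) × term in monthly on-demand units, so it beats on-demand once actual usage exceeds that many months:

```python
# Break-even point for a reservation vs. on-demand pricing.
def ri_break_even_months(discount: float, term_months: int) -> float:
    """Months of on-demand usage at which the reservation starts paying off."""
    return (1 - discount) * term_months

print(f"{ri_break_even_months(0.30, 12):.1f} months")  # 1-year RI at 30% off
print(f"{ri_break_even_months(0.50, 36):.1f} months")  # 3-year RI at 50% off
```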

AWS Savings Plans is an evolution of Reserved Instances - commitment to a specific hourly spending amount (e.g., $10/h) instead of specific resources. Greater flexibility because the discount automatically applies to matching workloads. Compute Savings Plans cover EC2, Fargate, and Lambda - ideal for organizations with mixed workloads.

Azure Reservations work similarly to AWS RI. Additionally, Azure offers Hybrid Benefit - the ability to use owned Windows Server and SQL Server licenses in the cloud instead of paying for them as part of the VM. This can reduce VM cost by 40-80% for Windows workloads.

Managing an RI portfolio requires continuous attention. Tracking utilization - are RIs being used? Monitoring expiration - when do they end and should they be renewed? Rightsizing - does the RI type still fit the workloads? AWS RI Marketplace allows selling unused reservations.

What quick wins can be achieved in the first 30 days?

Turning off zombie resources is immediate savings with no risk. Identify VMs with zero usage - turn them off. Remove unattached disks - savings. Release unassociated Elastic IPs - small savings, but zero effort. Remove old snapshots - often surprisingly large savings.

Automatic shutdown of dev/test outside working hours. Development environments are typically needed 8h x 5 days = 40h weekly. Running 24/7, they cost 168h weekly. Shutdown outside hours is 76% savings. Implementation: AWS Instance Scheduler, Azure Automation, or simple Lambda/Function with cron.
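The decision logic inside such a scheduled function fits in a few lines. The 07:00-19:00 Mon-Fri window below is an example policy, not a default of AWS Instance Scheduler or Azure Automation:

```python
# Working-hours gate for a cron-triggered start/stop function.
from datetime import datetime

def should_be_running(now: datetime, start_hour: int = 7,
                      stop_hour: int = 19) -> bool:
    """True if a dev/test environment should be up at `now`."""
    is_weekday = now.weekday() < 5           # Mon=0 .. Fri=4
    return is_weekday and start_hour <= now.hour < stop_hour

print(should_be_running(datetime(2025, 6, 2, 10, 0)))   # Monday 10:00 -> True
print(should_be_running(datetime(2025, 6, 7, 10, 0)))   # Saturday -> False
```

The scheduled function then starts or stops the tagged instances depending on the result.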

Rightsizing the most oversized instances. Take the top 10 VMs with lowest CPU utilization. Check if they can be reduced by 1-2 sizes. Often a $100/m VM can be changed to $25/m with no impact on performance. Start with non-prod, build confidence, move to prod.

Migration to newer instance generations. AWS regularly introduces new generations (m5 -> m6i -> m7i) that are cheaper and more efficient. Older generations don’t get cheaper - you stay on more expensive, slower hardware. Migration is usually a VM restart - low risk, immediate 10-20% savings.

Enabling automatic storage tiering for S3/Blob Storage. Intelligent-Tiering (AWS) and Cool/Archive tiers (Azure) automatically move rarely used data to cheaper storage. Configuration is an hour of work, savings can be 50%+ on storage costs.
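For S3, such tiering is expressed as a lifecycle configuration. The sketch below uses the shape accepted by boto3's `put_bucket_lifecycle_configuration`; the 30/90/365-day thresholds and the `logs/` prefix are illustrative and should be tuned to your access patterns:

```python
# Example S3 lifecycle configuration: move aging logs to cheaper tiers,
# then delete after a year. Day thresholds are illustrative.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# Sanity check: transitions must be in increasing day order.
days = [t["Days"] for t in lifecycle["Rules"][0]["Transitions"]]
assert days == sorted(days)
print("lifecycle rules:", len(lifecycle["Rules"]))
```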

Negotiating with the cloud provider. If you spend >$100k monthly, you have leverage to negotiate an enterprise agreement with additional discounts. AWS and Azure have dedicated account managers who can offer custom pricing. Don’t ask - don’t get.

How to build a continuous cloud cost optimization process?

FinOps - Financial Operations - is a framework and emerging practice for cloud cost management. It connects Finance, IT, and Business in a cross-functional approach to optimization. Not a one-time project, but a continuous process. FinOps Foundation offers certifications and best practices.

Establish cost visibility as the first step. Dashboard showing costs in real-time, with breakdown per team/project/environment. Anomaly detection alerting on unusual increases. Forecast showing predicted spending at month end. Without visibility, there’s no accountability.

Assign cost ownership. Each team is responsible for costs generated by their resources. Chargeback or showback model - either you actually charge team budgets or at least show how much they generate. When a developer sees that their “temporary” cluster costs 5000 PLN monthly, motivation to optimize grows.
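A showback report is essentially a roll-up of the billing export by team tag. The rows below are invented examples; the important detail is that untagged spend gets its own bucket instead of silently disappearing:

```python
# Showback sketch: aggregate a cost export by team tag.
from collections import defaultdict

cost_lines = [  # illustrative billing-export rows
    {"resource": "vm-1", "cost_pln": 5000, "tags": {"team": "crm"}},
    {"resource": "db-1", "cost_pln": 9000, "tags": {"team": "crm"}},
    {"resource": "vm-9", "cost_pln": 3000, "tags": {}},
]

by_team = defaultdict(float)
for line in cost_lines:
    by_team[line["tags"].get("team", "UNTAGGED")] += line["cost_pln"]

for team, cost in sorted(by_team.items()):
    print(f"{team}: {cost:.0f} PLN")
```

A large `UNTAGGED` bucket is itself a finding: it measures how far you are from full cost allocation.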

Regular review cadence. Weekly review of anomalies and trends. Monthly deep-dive into biggest cost drivers. Quarterly RI/Savings Plans planning. Annual cloud strategy review and provider negotiations. Without regularity, optimization is reactive not proactive.

Automation where possible. Automatic tagging of new resources. Automatic shutdown of dev/test outside hours. Automatic alerts when budget is exceeded. Automatic reporting to stakeholders. People are expensive and have more interesting things to do than manually tracking costs.

Continuous rightsizing. Not a one-time action, but an ongoing process. Workloads change, usage patterns evolve. Rightsizing recommendations should be generated and reviewed regularly. Tools like Spot.io or CloudHealth automate this process.

Which cloud cost optimization tools are worth considering?

Native cloud provider tools are the starting point. AWS Cost Explorer + Cost Anomaly Detection + Compute Optimizer + Trusted Advisor. Azure Cost Management + Advisor + reservations portal. Free (or included in price), integrated, sufficient for many organizations.

Multi-cloud cost management platforms for organizations with workloads in multiple clouds. Flexera One (formerly RightScale) - comprehensive FinOps platform. CloudHealth by VMware - strong in analytics and governance. Apptio Cloudability - focus on FinOps and showback. Spot by NetApp - automatic optimization and spot management.

Specialized tools for specific use cases. Kubecost for Kubernetes costs - shows cost per namespace, deployment, pod. Infracost for Infrastructure as Code - shows cost of changes before deployment. Vantage - modern platform with emphasis on developer experience.

Open source options for cost-conscious organizations. Cloud Custodian - policy engine for automatic resource management (shutdown, tagging, compliance). Komiser - cost and security dashboard for multi-cloud. Steampipe - SQL interface to cloud APIs, including costs.

Tool selection criteria. Multi-cloud support if you need it. Data granularity - per-resource, per-hour, per-tag. Actionable recommendations, not just reports. Workflow integration (Slack, Jira, Terraform). Pricing - some tools cost % of savings, others flat fee.

Beware of tool sprawl. One good tool is better than five mediocre ones. Data scattered between tools makes analysis difficult. Choose a platform that covers most needs and standardize.

How to manage software licenses in the cloud context?

Bring Your Own License (BYOL) is an option for organizations with existing on-premise licenses. Instead of paying for licenses built into VM price (included licensing), you use your own. For Windows Server, SQL Server, Oracle - savings can be 40-80% of VM cost. Requires License Mobility through Software Assurance (Microsoft) or appropriate agreements with other vendors.

Azure Hybrid Benefit allows using Windows Server and SQL Server licenses purchased with Software Assurance in Azure without additional license fees. VM costs only compute, not compute + license. For large Windows environments, this is often the biggest single source of savings.

AWS License Manager helps track license usage in AWS. You define rules (e.g., “I have 100 SQL Server Enterprise licenses”), and the system tracks how many are used. Alerts when exceeded. Integration with Systems Manager for discovery.

Per-core vs per-VM licenses in the cloud is a trap. Many enterprise licenses (Oracle, SQL Server) are licensed per-core. In the cloud, a VM can have many vCPUs, each of which may require a license. vCPU is not the same as physical core - conversion ratios differ between providers and processors. Consultation with a SAM specialist before migration is necessary.
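A back-of-the-envelope conversion looks like this. The 2-vCPUs-per-license factor mirrors Oracle's published cloud licensing policy for hyper-threaded AWS/Azure instances, but ratios vary by vendor and contract - treat it as an assumption to verify with a SAM specialist, exactly as the text advises:

```python
# Per-core license estimate; the conversion factor is an assumption that
# must be confirmed against your specific vendor agreement.
import math

def licenses_needed(vcpus: int, vcpus_per_license: int = 2) -> int:
    return math.ceil(vcpus / vcpus_per_license)

print(licenses_needed(16))   # 16 vCPUs -> 8 licenses at a 2:1 ratio
print(licenses_needed(9))    # odd vCPU counts round up -> 5
```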

Software Asset Management (SAM) in the cloud requires a new approach. Traditional SAM tools focused on on-premise don’t see cloud resources. You need either an extended SAM platform (Flexera, Snow, ServiceNow SAM) or dedicated integration between cloud cost management and SAM.

License audits in hybrid environments are particularly complicated. Oracle, Microsoft, IBM regularly audit and look for non-compliance. An environment that’s partially on-premise, partially in the cloud is difficult to document. Preparation: document everything, have legal review of cloud contracts, consider true-up before audit.

How to avoid the most common mistakes in cloud cost optimization?

Optimization without understanding workloads is a recipe for disaster. You turn off a VM that “does nothing” - it turns out to be a batch job run once a month, critical for financial reporting. Always verify before deleting. Tagging and ownership eliminate such situations.

Over-commitment on Reserved Instances without flexibility planning. You buy 3-year RI because the discount is biggest. A year later, architecture changes, serverless replaces VMs, RIs lie unused. Better: start with 1-year, build confidence in predictability, then 3-year. Use Convertible RI for less certainty.

Ignoring data transfer costs. Compute and storage are visible, data transfer hidden in the invoice. And it can be significant - transfer between regions, to the internet, between availability zones. Architecture affects these costs - e.g., keep related resources in the same region/AZ.

Lack of governance leads to return of chaos. You optimized, savings achieved. Six months later - costs returned to previous level because no one was watching. Optimization without governance is like a diet without changing habits - yo-yo effect guaranteed.

Focusing only on costs, ignoring value. Cost cutting that reduces performance, reliability, or developer productivity may be net negative. The goal is optimization - maximizing value per dollar - not minimizing costs at all costs. Sometimes a bigger VM is worth it.

Working in silos. FinOps requires collaboration of Finance (budgets, forecasting), IT (implementation, architecture), and Business (priorities, value). If only IT optimizes without buy-in from Finance and Business, effort is limited. Cross-functional team with mandate from leadership works best.

How to prepare a business case for a cloud cost optimization initiative?

Quantify current waste - count how much you’re wasting today. The 32% waste benchmark is a starting point, but your organization may be better or worse. Quick assessment from native tools (AWS Trusted Advisor, Azure Advisor) will give specific numbers: “we have $X unused RIs, $Y zombie resources, $Z oversized instances.”
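Turning advisor findings into a baseline is straightforward addition; the amounts below are placeholders to replace with your own Trusted Advisor / Azure Advisor numbers:

```python
# Quick waste baseline from advisor findings (placeholder amounts).
findings_pln_per_month = {
    "unused_reserved_instances": 42_000,
    "zombie_resources": 65_000,
    "oversized_instances": 88_000,
}

monthly_waste = sum(findings_pln_per_month.values())
annual_spend = 10_000_000  # PLN, the article's example scale
print(f"{monthly_waste:,} PLN/month "
      f"({monthly_waste * 12 / annual_spend:.0%} of annual spend)")
```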

Estimate achievable savings - realistically, not optimistically. 10-15% savings in the first year is a safe assumption for a typical organization. More aggressive targets (30%+) require significant investment in tooling and people. Show a range: conservative, realistic, optimistic.

Calculate ROI for investment in optimization. Cost: tools (e.g., $50k/year), people (e.g., 1 FTE FinOps Engineer = $150k/year), consulting (e.g., $30k one-time). Benefit: savings recurring every year. ROI is usually very attractive - payback in months.
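Using the example figures above and a hypothetical $1.2M/year in recovered spend, the ROI and payback work out as follows:

```python
# ROI sketch with the text's example costs: $50k/yr tooling, one FinOps
# engineer at $150k/yr, $30k one-time consulting; annual savings of $1.2M
# is a hypothetical figure for illustration.
tooling = 50_000
fte = 150_000
consulting_once = 30_000
annual_savings = 1_200_000

first_year_cost = tooling + fte + consulting_once
roi = (annual_savings - first_year_cost) / first_year_cost
payback_months = first_year_cost / (annual_savings / 12)

print(f"ROI year 1: {roi:.0%}, payback: {payback_months:.1f} months")
```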

Address non-financial benefits. Better visibility into costs = better architectural decisions. Governance = lower risk of bill shock and security issues. Accountability = culture of responsibility. Competitive advantage = lower cost per customer, ability to invest savings in growth.

Risk mitigation framing for risk-averse leadership. “Without optimization, cloud costs grow 20% YoY. Within four years we’re spending twice as much. With optimization - flat or declining costs with growing workloads.” Show trend extrapolation - without action, the problem worsens.
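The extrapolation itself is one loop, starting from the 850,000 PLN/month bill in the opening scenario:

```python
# Compound growth of an unmanaged cloud bill at 20% year over year.
spend = 850_000  # PLN/month, the figure from the opening scenario
for year in range(1, 5):
    spend *= 1.20
    print(f"after year {year}: {spend:,.0f} PLN/month")
# Spend roughly doubles in about 4 years (1.2**4 ≈ 2.07).
```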

Start small, prove value, scale. Instead of asking for budget for an enterprise platform right away, start with a pilot project. Optimize one department or one account. Show specific savings. Use as a case study to expand the initiative.

What is the role of a SAM provider in cloud cost optimization?

Licensing expertise is key for BYOL and hybrid environments. Can you legally use this license in Azure? What are the core conversion ratios for Oracle in AWS? What limitations does Software Assurance have in GCP? A SAM provider knows the answers or knows where to look.

Tools and processes for managing complexity. SAM platforms like Flexera One integrate data from cloud providers with traditional IT. You see the whole picture - what you have on-premise, what in the cloud, what licenses are used where. Without this visibility, optimization is shooting blind.

Negotiations with vendors and cloud providers. A SAM provider with a portfolio of many clients has leverage that a single company doesn’t. Knows price benchmarks and what’s negotiable. Enterprise Agreement with Microsoft or AWS is a complicated contract - an expert negotiator can save percentage points that count in hundreds of thousands.

Audit defense and compliance. When an audit from Oracle or Microsoft comes, the SAM provider represents and defends you. Knows auditors’ tactics, knows what data to provide (and what not to), can negotiate settlement. In a hybrid cloud/on-premise environment, this expertise is particularly valuable.

Continuous optimization as a service. Instead of a one-time project, an ongoing relationship. Quarterly cost reviews, annual RI/Savings Plans planning, advisory on architectural changes. FinOps as a Service for organizations that don’t want to build competencies in-house.

Strategic perspective on technology roadmap. A SAM provider sees industry trends, knows vendor plans, can advise long-term. “Microsoft is changing the SQL Server licensing model in a year - better to buy now” is valuable insight that requires expertise.

Table: Cloud cost optimization maturity model

| Level | Name | Visibility | Optimization | Governance | Culture | Typical Savings |
|---|---|---|---|---|---|---|
| 0 | Chaos | No tags, scattered accounts | None | None | Ignorance | 0% |
| 1 | Reactive | Basic tagging, consolidated billing | Ad-hoc, fire-fighting | Manual review | Awareness | 5-10% |
| 2 | Active | Full tagging, cost allocation | Systematic rightsizing, RI | Basic policies | Ownership assigned | 15-25% |
| 3 | Optimized | Real-time visibility, anomaly detection | Continuous optimization, Savings Plans | Automated enforcement | FinOps culture | 25-35% |
| 4 | Intelligent | Predictive analytics, ML-driven | Auto-scaling, spot instances, serverless | Proactive, preventive | Cost-aware development | 35-45% |

Cloud cost optimization is not a one-time project - it’s an ongoing practice requiring tools, processes, and culture. Organizations that take it seriously can recover 30% or more of cloud spending without impacting functionality. Those that ignore the problem will pay more and more for the same value.

Key takeaways:

  • Visibility is the foundation - without tagging and consolidation, there’s no optimization
  • Quick wins (zombie resources, dev/test shutdown, rightsizing) give immediate savings
  • Reserved Instances and Savings Plans are significant savings, but require planning
  • FinOps is a framework connecting Finance, IT, and Business - not just an IT task
  • Software licenses in the cloud (BYOL, Hybrid Benefit) are often an overlooked source of savings
  • Continuous process, not project - without governance, savings quickly disappear

The first step is understanding how much you’re wasting today. Run the cloud provider’s native tools, review recommendations, calculate potential savings. The numbers speak for themselves.

ARDURA Consulting offers comprehensive Software Asset Management services covering cloud and on-premise license optimization. We help organizations regain control of IT costs - from audit through tool implementation to ongoing optimization. Contact us to discuss your environment and savings potential.