
3 Harmful Mistakes Companies Are Making In Cloud Cost Management

By applying these lessons learned, you can use the cloud more effectively and keep your cloud spend under control

Public cloud may sound quite new to a lot of people, but in fact public cloud as we know it has been around since 2006, when Amazon Web Services (AWS) launched its S3 storage and EC2 IaaS services. The first major organizations started to move their workloads to the public cloud in the years that followed, before 2010. In the more than ten years of migrating and running workloads in the cloud since then, a lot of lessons have been learned. It is surprising, however, that some of these lessons are still not applied today. Three major lessons that should be applied by everyone migrating to the public cloud are the following:

  1. Don’t just lift and shift your workloads to the public cloud
  2. Cloud management requires new practices
  3. Cloud expertise shouldn’t just be left to application teams

Lesson 1: Don’t just lift and shift your workloads to the public cloud

Lifting and shifting applications does sound attractive at first: a lift and shift migration is relatively fast and cheap. Since the applications themselves are touched as little as possible during the migration, the risk of defects resulting from the migration is low. But then comes the drawback: the added value of the cloud isn't there. By lifting and shifting applications, you also lift and shift the mistakes made in the past and the clutter accumulated from running the application for years in the datacenter. On top of this, public cloud isn't just another datacenter: design decisions that are perfectly suited for the datacenter often do not work out in the cloud. Based on its experience, IDC Metri estimates that a lift and shift migration at best delivers what you already had in the datacenter, but at roughly 30% higher cost.

A cloud migration should be thoroughly prepared, with an application-based approach rather than an infrastructure-based one. For every application it should be determined whether the application will be retired (phased out), retained (kept 'as is' in the datacenter), rehosted (migrated 'as is'), replatformed (reinstalled on a newly designed cloud platform without adapting the source code), refactored (adapted and optimized for the cloud) or repurchased (bought or built from scratch). Tools offering scans at the source code level can greatly enhance the speed and quality of such an inventory, thus helping achieve a successful migration with actual added value.
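As an illustration, the sketch below shows how such an application inventory might be captured in code. It is a minimal Python example; the application names, attributes and decision rules are hypothetical and far simpler than a real assessment.

```python
from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    RETIRE = "retire"
    RETAIN = "retain"
    REHOST = "rehost"
    REPLATFORM = "replatform"
    REFACTOR = "refactor"
    REPURCHASE = "repurchase"


@dataclass
class Application:
    name: str
    business_value: str      # "low" | "medium" | "high"
    still_needed: bool       # False -> candidate for retirement
    cloud_compatible: bool   # e.g. no hard dependency on on-prem hardware
    source_available: bool   # can we change the code at all?
    saas_alternative: bool   # a commercial replacement exists


def propose_disposition(app: Application) -> Disposition:
    """Very simplified rule of thumb; a real assessment weighs many more
    factors (licenses, data gravity, compliance, interdependencies, ...)."""
    if not app.still_needed:
        return Disposition.RETIRE
    if not app.cloud_compatible:
        return Disposition.RETAIN
    if app.saas_alternative and app.business_value == "low":
        return Disposition.REPURCHASE
    if not app.source_available:
        return Disposition.REHOST
    if app.business_value == "high":
        return Disposition.REFACTOR
    return Disposition.REPLATFORM


# Hypothetical inventory entries
inventory = [
    Application("legacy-reporting", "low", False, True, True, True),
    Application("order-portal", "high", True, True, True, False),
]

for app in inventory:
    print(f"{app.name}: {propose_disposition(app).value}")
```

The value of such an exercise lies less in the code than in forcing an explicit, documented decision per application before the migration starts.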

Lesson 2: Cloud management requires new practices

A typical way of working when it comes to a datacenter is that the datacenter and the infrastructure inside it are managed by an infrastructure department (which can also be a managed services provider). The infrastructure department acts as the king of its own castle, with a gatekeeper controlling everything that comes into the datacenter.

The ease of use of public cloud makes it far easier for application teams to manage the full application stack instead of only the software part. Tedious tasks like installing and configuring infrastructure manually are replaced by creating infrastructure through Infrastructure-as-Code scripts and ready-made building blocks. In this way, public cloud has become a driving force behind DevOps and the credo 'you build it, you run it'.
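As a minimal illustration of infrastructure created from code rather than by hand, the Python sketch below launches a single virtual machine with the AWS SDK (boto3). In practice teams typically use a declarative IaC tool such as Terraform, CloudFormation or the CDK; the region, AMI ID and tags here are placeholder values.

```python
import boto3

# Sketch: provision a small virtual machine from code instead of clicking
# through a console. All identifiers below are placeholders.
ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "team", "Value": "order-portal"},
            {"Key": "cost-center", "Value": "1234"},
        ],
    }],
)

print("Launched:", [i.id for i in instances])
```

Note the tags: consistently tagging resources at creation time is what later makes per-team cost reporting and guardrails possible.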

But there is a drawback here too. Knocking down the castle wall and the associated bureaucracy between the application teams and the infrastructure department also means the gatekeeper is no longer there. With that, the natural control mechanism over the rollout of infrastructure and its associated costs is gone as well. Organizations are seeing their cloud costs get out of hand because this control mechanism is missing.

Controlling the cloud without throwing away advantages such as flexibility and scalability requires new practices. Reinstating the gatekeeper is a practice IDC Metri encounters regularly, but this also reinstates the bureaucracy and throws away the advantages of the cloud. The way to go is to make application teams responsible for their own cloud costs: give them insight and control mechanisms, build guardrails with policies that restrict them from instantiating overly expensive resources, create alerts that warn them when costs are getting out of hand, and actively help them analyze and manage their costs.
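What such a guardrail could look like is sketched below: a simple pre-deployment policy check against an allowlist and a monthly budget, which a pipeline could run before applying an IaC change. The instance types, prices and thresholds are made up for the example; real guardrails are typically enforced with the cloud provider's native policy and budgeting services.

```python
# Illustrative guardrail: reject overly expensive resource requests
# before they are deployed. All figures are placeholders.
HOURLY_PRICE_USD = {
    "t3.micro": 0.0104,
    "m5.xlarge": 0.192,
    "p4d.24xlarge": 32.77,
}

ALLOWED_TYPES = {"t3.micro", "m5.xlarge"}   # team-level allowlist
MONTHLY_BUDGET_USD = 500.0                   # per-team guardrail


def check_request(instance_type: str, count: int) -> list:
    """Return a list of policy violations; empty means the request passes."""
    violations = []
    if instance_type not in ALLOWED_TYPES:
        violations.append(f"{instance_type} is not on the team allowlist")
    hourly = HOURLY_PRICE_USD.get(instance_type, 0.0)
    monthly = hourly * 730 * count          # ~730 hours per month
    if monthly > MONTHLY_BUDGET_USD:
        violations.append(
            f"estimated ${monthly:,.0f}/month exceeds the "
            f"${MONTHLY_BUDGET_USD:,.0f} budget"
        )
    return violations


print(check_request("p4d.24xlarge", 2))
# ['p4d.24xlarge is not on the team allowlist',
#  'estimated $47,844/month exceeds the $500 budget']
```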

Lesson 3: Cloud expertise should be bundled and not just left to application teams

Lesson two leads to lesson three. While making application teams responsible for managing their own cloud costs, you can't expect them to know all the ins and outs of the cloud. In-depth knowledge of the cloud should be bundled within a single entity in the organization: the Cloud Center of Excellence (CCoE). The CCoE acts both as an assistance dog and a watchdog. It helps the application teams with their design decisions, so that they use the cloud optimally and pick the right cloud services for the right task. It also monitors cloud costs at the organizational level and alerts the application teams when it sees costs getting out of hand. Besides assisting and watching the application teams, the CCoE manages the discount schemes, such as reserved instances and savings plans, at the organizational level. More complex cost savings, such as license optimization, should also be driven by the CCoE, since license management turns out to be a science in its own right.
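As an example of the kind of organization-wide visibility a CCoE might build, the sketch below pulls one month of spend per team from the AWS Cost Explorer API. It assumes workloads carry a "team" cost allocation tag; the dates and tag name are illustrative.

```python
import boto3

# Sketch: last month's spend per team, grouped by a (hypothetical)
# "team" cost allocation tag, via the AWS Cost Explorer API.
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-05-01", "End": "2021-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        team = group["Keys"][0].split("$", 1)[-1] or "untagged"
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{team}: ${amount:,.2f}")
```

A report like this is only the starting point: the CCoE's added value lies in discussing the numbers with the teams and turning them into concrete savings actions.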

Critical success factors for the CCoE are a proper mandate, backed by senior management, to prevent it from becoming a paper tiger and, of course, the availability of skilled and competent team members. On top of this, the CCoE should take an organizational and financial approach rather than a purely technical one. Application teams running their applications in the cloud already have the technical knowledge; the CCoE enhances this by bringing in knowledge of cloud billing, pricing models and governance.

Conclusion

Cloud brings a lot of opportunities, but also a lot of challenges. Ten years of widespread public cloud usage has taught us valuable lessons on how to use the cloud in a way that actually moves us forward instead of leaving us where we already were, at a higher cost. By applying these lessons, public cloud can be used optimally while keeping spend under control.

Controlling cloud costs and delivering cloud value is within your reach with IDC Metri’s Cloud Economics service. Learn how you can achieve great results.

Paul is an experienced sourcing consultant with a strong focus on supplier selection, contracting and cloud economics. Thanks to his technical background, Paul is able to see through the solutions in suppliers' proposals and to advise on their suitability for the client organization. Combined with IDC Metri's capability to compare pricing against market standards, this provides a total view of proposals. In cloud economics, Paul helps not only to analyze cloud costs and find possible savings, but also to actually realize those savings by working with the client's organization.