
#1: Ops is becoming more tactical

Ten years ago the goal for systems management was to use one set of tooling to manage a diverse set of platforms for the entire lifecycle of a server. Some organizations still have that model, but most have dropped the ambition of one tool to rule them all. Many shops are instead moving toward tactical, “best of breed” tooling and keeping each automation use case in its own silo.

#2: Infrastructure is less important

Infrastructure is becoming less important from an investment perspective and is increasingly viewed as a commodity. This trend started with virtualization and is accelerating with cloud. Ironically, configuration management problems only get worse once resources are easy to provision, yet organizations are still tackling those problems in silos.

#3: DEV is driving DEVops

In most organizations, operations is still viewed as an obstacle to business progress, and that perception has been the driver toward both external and internal cloud. The whole devops trend is being driven by DEV, not ops, so the typical value achieved by, say, server automation is nice to have, but time to market for releases is what’s driving IT spend.

#4: IT is already lean

One of the drivers for server automation 10 years ago was increasing operational efficiency. Are OPS teams running at high efficiency? No, but from a staffing perspective they are running so lean that there is no headcount left to reduce. In an environment where server count is growing, it’s still easy to justify a big automation project; but if server count is going down and headcount stays flat, a large automation spend is a hard sell.

#5: Outsourcing

Depending on how the outsourcing arrangement is set up, it can be either a driver or a roadblock for automation. In many deals the outsourcer needs to invest in automation to meet their SLAs, but it can just as often be the opposite: I’ve seen many cases where the outsourcer resists automation because they are financially rewarded for operating inefficiently.

Cloud computing is the most overhyped, misunderstood computing trend since “Web 2.0.” In recent polling it’s also the #2 CIO initiative for 2010, with virtualization at #1. As with any popular IT fix-all buzzword, people tend to ignore the prerequisites for a successful implementation.

Cloud computing is really just an evolution of virtualization, and like virtualization it has prerequisites. For virtualization, the prerequisite is a sound SAN strategy: a hypervisor sharing one local disk controller among 10-20 virtual machines is a recipe for disaster.
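Some rough back-of-the-envelope arithmetic shows why. The IOPS figure and VM count below are illustrative assumptions, not benchmarks:

```python
# Why 10-20 VMs on one local disk controller is a recipe for disaster.
# Assumption: a single 7,200 rpm SATA spindle delivers on the order of
# 100 random IOPS; exact numbers vary, but the order of magnitude holds.
SPINDLE_IOPS = 100   # assumed throughput of the lone local disk
VM_COUNT = 15        # assumed consolidation ratio (middle of the 10-20 range)

iops_per_vm = SPINDLE_IOPS / VM_COUNT
print(f"~{iops_per_vm:.0f} random IOPS per VM")  # -> ~7 IOPS per VM
# One busy database guest can consume the entire controller, which is why
# shared SAN storage (many spindles plus cache) is treated as a prerequisite.
```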

In the case of the cloud, whether it’s internal or external, full stack OS provisioning is a requirement for any true cloud computing initiative. What is full stack OS provisioning? It’s the ability to take a server (physical, virtual, or cloud) all the way to a production-ready state without any manual software configuration handoffs along the way.

When you look at most organizations, there is generally a large gap between the time a server is requested and the time that server is ready for business. Base OS installation is generally not the problem; it’s everything that happens after the operating system is laid down: monitoring, backup, middleware, applications, and application configurations. Each of those items usually requires human handoffs and manual configuration before the server finally reaches a business-ready state.
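To make the idea concrete, here is a minimal sketch of a full stack provisioning pipeline. The stage names and the build_node helper are hypothetical, standing in for whatever kickstart %post hooks, Puppet/Chef runs, or vendor automation jobs an organization actually uses; the point is that every post-OS step becomes just another automated stage:

```python
# A minimal sketch, with hypothetical stage names rather than any real
# tool's API: every post-OS step is an automated stage in one pipeline,
# so the node reaches business-ready with no human handoffs in between.
from typing import Callable

def stage(name: str) -> Callable[[str], None]:
    # Placeholder for a real action (kickstart %post, Puppet run, API call).
    def run(host: str) -> None:
        print(f"{host}: {name} complete")
    return run

PIPELINE = [
    stage("base OS install"),      # PXE/kickstart or cloud image
    stage("monitoring agent"),     # the usual manual-handoff steps
    stage("backup registration"),
    stage("middleware install"),
    stage("application deploy"),
    stage("application config"),
]

def build_node(host: str) -> None:
    """Run every stage in order; the node is business ready only at the end."""
    for run in PIPELINE:
        run(host)
    print(f"{host}: business ready")

build_node("web01.example.com")  # hypothetical hostname
```

In a real pipeline a failed stage would halt the build and alert, rather than queueing a ticket and waiting on a human.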

If your server OS provisioning process is not producing business-ready compute nodes, then any cloud initiative is going to suffer from the same problems your organization already experiences with regular servers. Cloud computing and virtualization can dramatically speed up provisioning new compute nodes, but the end result is only as fast as your provisioning process.