Greetings everyone. I'm Erik Svenson, a business value strategist on the System Center team at Microsoft, and I focus on application platform management costs. I posted a couple of weeks ago about the different types of apps out there (web, RIA, rich client, etc.). In all cases, they suffer from one common cultural issue that plagues the consistent management of enterprise applications: developers aren't paid to design and build applications with manageability in mind.
In my 25 years in the industry, I've consistently gotten the message that building manageability into the application is, at best, an afterthought. When I was a developer back in the mid '80s and early '90s, the only thought we had around the management aspect of an app was to put in somewhat meaningful messages when an error occurred. There were no conversations with IT about how the app should perform or even a document produced about what the application did. Nope. We just tested it and pushed it out to IT with a request for the right amount of disk and processing capacity ("right" being defined by us developers, by the way!).
Part of that was due to the fact that there simply were no management tools out there; for the distributed computing platform, they have only become mainstream in data centers in the past fifteen years or so.
Now, there's no excuse. With System Center Operations Manager (SCOM), WMI and the .NET Framework, we have a rich platform for easily building management capability into applications, through custom alerts that are fed into Ops Manager (or any WMI consumer) as well as custom management packs. This is all wrapped in a strategic bow we call the Dynamic Systems Initiative, also known as "Dynamic IT".
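To make that concrete, here's a minimal sketch of the WMI route in C#. The OrderProcessingFailure event class is a hypothetical example of mine, not a platform type, and I've left out the one-time plumbing (the assembly-level Instrumented attribute and registering a ManagementInstaller with installutil) that publishes the schema:

using System.Management.Instrumentation;

// Hypothetical application event. The InstrumentationClass attribute
// marks it as a WMI event that any WMI consumer -- including an
// Ops Manager WMI event rule -- can subscribe to.
[InstrumentationClass(InstrumentationType.Event)]
public class OrderProcessingFailure
{
    public string Component;
    public string Details;
}

public static class OpsEvents
{
    public static void ReportFailure(string component, string details)
    {
        // Fires the WMI event; subscribers see it as it happens.
        OrderProcessingFailure evt = new OrderProcessingFailure();
        evt.Component = component;
        evt.Details = details;
        Instrumentation.Fire(evt);
    }
}

From there, a rule in a custom management pack subscribes to the event and raises an alert with whatever severity and product knowledge the operations team needs.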
And yet, few developers do this at all, or it's an afterthought. Why? Well, I think its roots are primarily cultural, reinforced by a lack of incentives. Developers simply aren't paid to build proactive management capabilities into their applications. Even though it may take just a few lines of C# to build an alert these days, in the crush of trying to get an app out the door these tasks are considered nice-to-haves and generally don't get done, much the same way commenting code isn't a requirement.
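Those "few lines" really are few. Here's an equally minimal sketch of the other common route: writing to the Windows event log, which an Ops Manager rule in a custom management pack can turn into an alert. The "ContosoOrderApp" source and event ID 1001 are hypothetical placeholders:

using System.Diagnostics;

public static class OpsAlert
{
    private const string Source = "ContosoOrderApp";

    public static void RaiseFailure(string details)
    {
        // Normally an installer registers the event source once,
        // since creating it requires administrative rights.
        if (!EventLog.SourceExists(Source))
        {
            EventLog.CreateEventSource(Source, "Application");
        }

        // Ops Manager (or any event log consumer) picks this entry up;
        // the management pack rule keys on the source and event ID.
        EventLog.WriteEntry(Source,
            "Order processing failed: " + details,
            EventLogEntryType.Error,
            1001);
    }
}

A catch block calls OpsAlert.RaiseFailure(ex.Message), and operations gets a meaningful, actionable alert instead of a support call.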
So what's to be done? Now that developers have the tools to build manageability in easily, how do we make sure it actually happens?
First, business stakeholders have to see the link between their need for agile, reliable applications and the capabilities offered by the management platform. This is the old "an ounce of prevention is worth a pound of cure" adage.
Second, development teams need to be incented not only on delivering applications on time but also on the quality of those applications. And that quality metric needs to extend beyond bug counts: quality has to also be a function of how costly it is to recover from an app failure. Of course, this requires that those costs be tracked as part of a set of key performance indicators (KPIs) managed by IT.
Finally, IT operations needs to be "in the room" with development teams early enough in the development lifecycle to provide requirements as well as to understand the nature of the application.
In the coming months, I'll be studying what it costs to manage a "bad app" and a "good app" across the different types of applications out there. In the meantime, what do you think? Does this ring true for you and your organization? Let me know.
All the best,