Happy holidays! Your Ops team can pack their bags. IT management and IT management tools are dead.
Or at least that’s the word according to a new TechTarget blog on AWS’s new Managed Services (MS) offering. According to the blog, AWS is launching its AWS MS program to expedite the adoption of cloud by Fortune 1000 and Global 2000 companies. The article, published last week, notes AWS’s belief that companies want:
[T]o add additional automation, make use of standard components that can be used more than once, and to relieve their staff of as many routine operational duties as possible.
Further explanation is provided in AWS’s announcement of the new product, which they claim is designed to take over system monitoring, incident management, change control, provisioning and patch management. Indeed, these are functions that usually fall under the auspices of IT Ops. And as the TechTarget article goes on to note:
After all of this, the only ones left standing could be application developers, despite — or thanks to — Amazon’s vast array of development tools.
So, if we follow AWS’s logic, we might think they have sunk their claws into the whole IT management lifecycle. The question then becomes: has AWS driven the stake that sends IT management to meet its maker? Umm, not so fast, cowboy.
The first thing to note in reading the TechTarget blog is that, for the moment, AWS is focusing only on large enterprises; Fortune 1000 and Global 2000 companies are the first targets. So you could say they are leaving small MSPs alone … for now. In typical AWS style, though, they will probably go after smaller targets in the months and years to come.
So is it only a matter of time before AWS tries a ‘one cloud to rule them all’ approach? Will AWS become the Ma Bell of years past? We don’t think so. It seems that most companies have adopted a multi-cloud approach. According to HyTrust:
Most companies are uncomfortable being locked into a single cloud provider. Nor do they believe a fully serverless environment is a smart idea; they want the ability to test on local servers rather than put all their eggs in one basket. And because of security fears and the risk of downtime, companies often want multiple cloud providers. So you will definitely still need your IT department to manage those servers.
While there’s no problem with putting Devs on call, you cannot simply farm out all of your IT concerns and have Devs alone run the show. At the risk of sounding like an evangelist: Ops are people too. More to the point, they have an important role in the DevOps process.
Ops plays an important role in sprint planning to ensure that quality of service, tools, resource management and security are prioritized along with the other components of the business. Furthermore, Ops provides support to development as well as to customers. Ops are, as one article put it, in charge of “building the highway so the rest of us can use fast cars.”
The requirement for Ops is as strong as ever. You create a lot of technical debt when you don’t reinforce code with strong Ops and security practices, and that debt rarely comes due at a time or in a way of your choosing. Ops teams put the brakes on Dev’s rapid automation to keep the ship orderly, and they pay down the team’s technical debt by writing solid scripts that keep the operating environment healthy.
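As one illustration of what “paying down debt with strong scripts” can look like, here is a minimal, hypothetical housekeeping sketch. The paths, thresholds and directory names are placeholders we made up for the example, not taken from any real environment.

```python
#!/usr/bin/env python3
"""Illustrative Ops hygiene script: warn on low disk space and prune stale logs.

All paths and thresholds below are placeholders -- adjust for your environment.
"""
import shutil
import time
from pathlib import Path

DISK_WARN_THRESHOLD = 0.85        # warn when a mount is more than 85% full
LOG_DIR = Path("/var/log/myapp")  # hypothetical application log directory
LOG_MAX_AGE_DAYS = 14             # prune logs older than two weeks


def check_disk(mount: str = "/") -> None:
    """Print a warning if the given mount point is running out of space."""
    usage = shutil.disk_usage(mount)
    used_ratio = usage.used / usage.total
    if used_ratio > DISK_WARN_THRESHOLD:
        print(f"WARNING: {mount} is {used_ratio:.0%} full")


def prune_old_logs(log_dir: Path = LOG_DIR, max_age_days: int = LOG_MAX_AGE_DAYS) -> None:
    """Delete *.log files older than max_age_days to keep disks healthy."""
    cutoff = time.time() - max_age_days * 86400
    if not log_dir.exists():
        return
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()
            print(f"Removed stale log: {log_file}")


if __name__ == "__main__":
    check_disk("/")
    prune_old_logs()
```

Small scripts like this are unglamorous, but they are exactly the kind of routine care that keeps technical debt from piling up quietly in the background.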
Ops is more than just your backup team, and if you think of them as merely provisioning servers, you have it wrong. Beyond the roles and responsibilities mentioned above, they also play a central role in responding to critical alerts. AWS MS does intend to provide incident monitoring and resolution. But AWS is not the architect of your product, so it cannot know which incidents are real issues and which are noise. More importantly, it is not positioned to define incident alerting best practices for your product.
You really need Ops and Dev on call together to ensure proper alert management. You cannot expect AWS or its engineers to intelligently investigate the issues your product alerts on, for the simple reason that they are not the product’s architects.
In last week’s blog, we highlighted OnPage’s alerting best practices. If they were worth noting once, they are worth repeating here.
While alerting lets you know when things have gone wrong, it also tells you where your code and systems need improvement. Much of that improvement may surface in the test environment, but you cannot always predict how a product will behave under the real-world pressures of production. You simply cannot farm out that function and expect a solid outcome.
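To make the point concrete, here is a minimal, hypothetical sketch of the kind of alert routing only your own team can write, because it depends on knowing which services actually matter to the product. The service names, severities and the notify_on_call() stub are illustrative placeholders, not a real OnPage or AWS integration.

```python
#!/usr/bin/env python3
"""Illustrative sketch: route alerts using product knowledge only your own team has."""
from dataclasses import dataclass

# Only the people who built the product know which services are truly critical
# and which alerts are noise -- that context lives in a map like this.
SERVICE_CRITICALITY = {
    "checkout-api": "critical",   # revenue path: page on-call immediately
    "search-indexer": "warning",  # degraded search is tolerable for a while
    "nightly-report": "info",     # can wait until business hours
}


@dataclass
class Alert:
    service: str
    message: str


def notify_on_call(alert: Alert) -> None:
    """Stand-in for a real paging integration (e.g., a webhook to your alerting tool)."""
    print(f"PAGE on-call: [{alert.service}] {alert.message}")


def route(alert: Alert) -> None:
    """Escalate based on how critical the affected service is to the product."""
    severity = SERVICE_CRITICALITY.get(alert.service, "warning")
    if severity == "critical":
        notify_on_call(alert)
    elif severity == "warning":
        print(f"Ticket queued: [{alert.service}] {alert.message}")
    else:
        print(f"Logged for review: [{alert.service}] {alert.message}")


if __name__ == "__main__":
    route(Alert("checkout-api", "error rate above 5% for 10 minutes"))
    route(Alert("nightly-report", "job finished 20 minutes late"))
```

An outside managed service can run code like this, but it cannot write the criticality map; that judgment belongs to the Dev and Ops teams who built and run the product.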
The AWS argument that we no longer need anything other than AWS is more vision than reality. For the many reasons discussed above, we at OnPage think IT still has a bright future ahead of it. That’s not just because we provide alerting for IT and hope teams will need our services for years to come. Critical alerting is an important part of the development lifecycle that cannot be farmed out, and we think it will stay that way for a good long time.
Learn more about how OnPage critical alerting can help your stack. Contact us.