There's an interesting piece of fallout to having an Operations team become more DevOps-supportive, and it's that the Operations team becomes a kind of special-purpose Development team. This fallout, in fact, creates one of the biggest misunderstandings about DevOps: the belief that DevOps means "Operations turning into coders."
DevOps does not mean Operations turning into coders. It means Operations working to smooth the path between coder and user. It turns out that the most common way for Operations to do that is by providing automation, and the most common way to provide automation usually involves some coding. So DevOps usually results in Operations turning into coders, at least to some degree.
Most operating systems that Operations will deal with include a scripting language designed to facilitate operational automation. On Linux, for example, Perl and Python are extremely common; on Microsoft Windows, Windows PowerShell has taken on that role. So this isn't programming a la C++, C#, or another "deep" programming language; it's "scripting," usually in a higher-level language that's purpose-built for the task of operational automation. As I noted in the previous chapter, the main skill that Operations needs to bring to the DevOps picnic is skill in an environment-appropriate scripting language.
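To make "operational automation" concrete, here's a minimal sketch of the kind of task a scripting language handles well - checking disk usage against a threshold. The 90% threshold and the paths are hypothetical examples, not anyone's standard; the point is that a few lines of script replace a manual GUI check.

```python
# A minimal sketch of task-focused Ops automation. The threshold and
# path below are hypothetical examples.
import shutil

def check_disk_usage(path="/", threshold=0.90):
    """Return True if the volume holding 'path' is below the usage threshold."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    return used_fraction < threshold

if __name__ == "__main__":
    if check_disk_usage("/"):
        print("Disk usage OK")
    else:
        print("Disk usage above threshold - alert Operations")
```

A script like this can be scheduled, version-controlled, and tested - which is exactly what makes scripted administration different from clicking through a console.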
But once Operations begins producing units of automation - that is, code - Operations itself needs to start acting like a DevOps shop. Those units of automation are the application that the coder (Ops) delivers to the user (in this case, other roles in the IT team). So Operations needs the tools and management approaches that let them quickly iterate their code, test it, and deliver it to production. As users (in this case, that's probably developers) define new needs (such as the ability to deploy code to end-users), Operations must deliver.
This entire concept is often one of the biggest obstacles to an organization-wide DevOps mentality, especially in shops that are heavily built on Microsoft Windows. The hurdle exists because Windows administrators, in general, haven't had decades of investment in coding and automation, in large part because the OS only started offering the capability in 2006, and didn't offer a significant capability until 2012. Administrators in that space simply haven't had the tools, and so they haven't learned the techniques. Change is always scary for some people (and for some organizations), and so the switch from GUI-based administration (which doesn't support DevOps) to code-based administration (which does) can be scary.
Many administrators - again, the Windows space perhaps has this the most - are accustomed to getting fully fledged tools for administering their environments. They may complain that the tools don't work quite the way they'd like, but the tools are close enough. Moving into a DevOps-centric world, though, introduces too many variables. What kind of code are you delivering? What methodology do your developers use? What are the production concerns around stability and availability? How much room is there for error? What sort of maintenance windows are available? How do you communicate with the user base? The sheer number of variables means that pretty much every organization is unique, which means no off-the-shelf tool can be "close enough." As a result, DevOps almost demands that Operations build its own tools and processes, usually by "gluing" together off-the-shelf platform technologies. That's what I covered in the previous chapter, albeit from a slightly different perspective.
That "gluing" process is where Operations strays into its own development. You might be using Microsoft System Center Virtual Machine Manager to manage your virtual infrastructure - but you'll be writing some code to make it do what you want in accordance with your particular processes. You might use Chef to handle declarative configuration of virtual machine environments - but you'll be writing some code to tell Chef exactly what it is you want, and to manage custom elements that exist only in your environment.
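That glue code is usually small and unglamorous: it encodes your local conventions and feeds them to a general-purpose tool. Here's a sketch of what that might look like - the `vmtool` CLI, the naming convention, and the template names are all invented for illustration; in practice this logic might wrap System Center or Chef invocations instead.

```python
# "Glue" code: Operations-written logic that adapts a general tool to a
# local process. The tool name, VM naming convention, and template names
# here are hypothetical.
def provision_commands(app_name, environments):
    """Build the per-environment commands our (hypothetical) CLI tool expects."""
    commands = []
    for env in environments:
        vm_name = f"{app_name}-{env}-01"  # local naming convention
        commands.append(f"vmtool create --name {vm_name} --template {env}-base")
    return commands

for cmd in provision_commands("webstore", ["dev", "test", "prod"]):
    print(cmd)
```

Notice that none of this replaces the vendor tool - it just captures *your* process in code, which is exactly where Operations strays into development.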
Another result of this DevOps approach is that, once you get really good at it, you start to treat your infrastructure as code, and you start approaching infrastructure management in a more agile (if not Agile) manner. Virtualization in particular has made this tremendously easy, because we can tear down and re-create entire massive environments with the push of a button. Don't like the current environment configuration? No problem - modify the declarative configuration document and recycle the environment. Not happy with the result? Repeat. Reconfiguring the environment can (and should) be as easy as modifying a code-like structure, just as modifying an application is as easy as changing the code. In other words, once you're using code, or something like it, to describe how your environment should look, then you're basically treating the infrastructure as code. Development methodologies like Agile and Lean start to become an option for managing the infrastructure... and suddenly, you're looking a lot more DevOps-ish.
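The "modify the document, recycle the environment" idea rests on declarative, idempotent configuration: you describe the desired end state as data, and tooling computes only the changes needed to converge on it. Here's a toy illustration of that pattern - the data structure and action names are invented, and real tools like Chef do this at vastly larger scale - but the shape of the logic is the same.

```python
# Toy illustration of declarative configuration: desired state as data,
# plus a function that computes the actions needed to converge.
# The field names and action tuples are invented for this sketch.
desired = {"web01": {"role": "web", "memory_gb": 4},
           "db01":  {"role": "database", "memory_gb": 16}}

def plan_changes(current, desired):
    """Return the actions needed to converge 'current' onto 'desired'."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("reconfigure", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("destroy", name, None))
    return actions
```

Running the plan twice against a converged environment yields no actions - that idempotence is what makes "repeat until happy" safe.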
Mind-melding completely with all of these concepts - infrastructure as code, Operations as glue-coders - really opens up some possibilities. You're no longer constrained to the "big vendor" approach, where you have to find one vendor stack that meets all of your needs (which was never really practical anyway). Instead, you become comfortable dragging in components from multiple vendors as needed, because you grow confident in your ability to glue them all together into the Franken-structure you need.