If you’re not sure what “DevOps” means, or whether you need a DevOps team in your organization, this article is for you. Here, I provide an overview of DevOps and its various facets, discuss why you probably want a dedicated DevOps team in your company, and cover those edge cases where you might not need one.
What Is “DevOps”?
“DevOps” is a workplace culture that merges “development” and “operations.” Before the DevOps methodology was established, engineers worked in silos, focusing solely on their particular area of expertise and usually unwilling to learn about other fields. DevOps eliminates silos by ensuring collaboration between developers and operations engineers throughout the software development lifecycle (SDLC). Teams can thus deliver optimized products much faster.
The traditional siloed work environment was made up of developers on one side—responsible for writing the software code and making sure it worked on their machines—and operations on the other side, trying their best to run that software in a production environment. From the developer’s perspective, their responsibility ended when the software was released: any issue that arose in production would be the operations team’s problem. The operations engineers, on the other hand, felt it was not up to them to investigate the code if any bugs manifested in the deployed software, so they would just throw the ball back to the developers. The truth is, in most cases, operations engineers wouldn’t have had the necessary skills to debug the software anyway.
“DevOps” has sought to bridge this gap and, in practice, has taken on a much wider meaning, embracing continuous integration, continuous deployment, automation, observability, cloud architecture, and more.
As a result, you might have noticed that there aren’t too many sysadmins anymore. That’s because many of them became DevOps engineers! In some cases, “DevOps engineer” is little more than a new title for a sysadmin, while in others the role genuinely spans both development and operations.
At its essence, I believe DevOps is a philosophy: use a wide array of tools and techniques to deliver a software product efficiently through means such as:
- Automation (arguably DevOps’ greatest contribution to software engineering)
- Security
- Reliability
- Reproducibility
- Scalability
- Elasticity
- Observability
Automating Software Delivery
We are now entering the realm of continuous integration (CI) and continuous delivery/deployment (CD), which are at the heart of DevOps. I will discuss each separately below.
Continuous Integration
Technically speaking, CI is not part of DevOps, but a technique that is part of agile software development (although DevOps engineers can contribute, for example, by automating the running of static analysis or unit tests as part of a CI pipeline). CI essentially means that developers commit their changes to the main branch of code quickly and often.
In the past, teams of developers would often spend weeks or months working separately on different features. When the time to release the software came, they would need to merge all their changes. Usually, the differences would be very large and lead to the dreaded “big bang merge,” where teams of developers would sometimes spend days trying to make each other’s code work together.
The main advantage of CI is that it avoids individual pieces of work diverging too much and becoming difficult to merge. If a CI pipeline is created with unit tests, static analysis, and other such checks, it allows for quick feedback to developers and thus lets them fix issues before they cause further damage or prevent other developers from working.
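The fail-fast feedback loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the check names and callables are stand-ins for real tools such as a linter or a unit-test runner), not any particular CI system's API:

```python
# Minimal sketch of a CI pipeline runner: each check is a callable that
# returns True on success, and the pipeline fails fast so developers get
# quick feedback on the first broken stage.
def run_pipeline(checks):
    for name, check in checks:
        if not check():
            return f"FAILED at: {name}"
    return "PASSED"

# Hypothetical stand-ins for real checks (static analysis, unit tests, ...).
checks = [
    ("static analysis", lambda: True),
    ("unit tests", lambda: True),
]
print(run_pipeline(checks))  # prints "PASSED"
```

Real CI services (Jenkins, GitHub Actions, GitLab CI, etc.) follow this same shape: an ordered list of stages, each of which can halt the pipeline and report back to the committer.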
Continuous Delivery/Deployment
CD can be considered part of DevOps and builds on CI. A CD pipeline automates the delivery of software by building software automatically whenever changes are committed to a code repository and making the artifacts available in the form of a software release. When the pipeline stops at this stage, we call it “Continuous Delivery.” Additionally, a CD pipeline can automatically deploy artifacts, in which case it is called “Continuous Deployment.”
In the past, building and deploying software were typically manual processes, tasks that were time-consuming and prone to errors.
The main advantage of CD is that it automatically builds deliverables in a sanitized (and thus entirely controlled) environment, freeing up valuable time for engineers to work on more productive endeavors. Of course, the ability to automatically deploy software is certainly attractive too, but this may be one step outside the comfort zone for some engineers and managers. CD pipelines can also include high-level tests, such as integration tests, functional and non-functional tests, etc.
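The delivery-versus-deployment distinction can be made concrete with a small sketch. All of the function names and the artifact format below are hypothetical, chosen only to illustrate where the two variants diverge:

```python
# Sketch of the continuous delivery vs. continuous deployment distinction:
# the same pipeline either stops after publishing a release artifact
# (delivery) or carries on and deploys that artifact (deployment).
def build(commit):
    return {"artifact": f"app-{commit}.tar.gz"}

def release(artifact):
    return {"released": artifact["artifact"]}

def deploy(artifact):
    return {"deployed": artifact["artifact"]}

def pipeline(commit, continuous_deployment=False):
    artifact = build(commit)
    result = release(artifact)          # continuous delivery stops here
    if continuous_deployment:
        result.update(deploy(artifact))  # continuous deployment goes further
    return result

print(pipeline("abc123"))
print(pipeline("abc123", continuous_deployment=True))
```

The single boolean flag is, of course, a simplification; in practice the "deploy" step is often gated by approvals, environments, or progressive rollout strategies.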
Automating Software Security
This sub-branch of DevOps is sometimes called DevSecOps. Its goal is to automate security and best practices in software development and delivery. It also makes it easier to comply with security standards, as well as to produce and retain the evidence required to prove adherence to such standards.
Often, in software development, security is an afterthought, something that has to be done at some point but often left to the last moment when there is no time to properly do it. Developers are under pressure to perform and deliver within timeframes that can typically be very tight. Introducing a DevSecOps team may thus be a positive contribution, in the sense that it will establish which security aspects must be met and will use a variety of tools to enforce those requirements.
DevSecOps can operate at all levels of the software lifecycle, for example:
- Static analysis of code
- Automatic running of tests
- Vulnerability scanning of the produced artifacts
- Threat detection (and possibly automated mitigation) when the software is running
- Auditing
- Automatically checking that certain security standards are followed
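To give one concrete flavor of the list above, here is a sketch of a dependency vulnerability scan. The "database" of vulnerable versions is entirely made up for illustration; a real pipeline would query a CVE-backed scanner rather than a hard-coded set:

```python
# Sketch of one DevSecOps check: scanning declared dependencies against a
# (hypothetical) database of known-vulnerable versions. A real scanner
# would consult an up-to-date CVE feed instead of this hard-coded set.
KNOWN_VULNERABLE = {("leftlib", "1.0.2"), ("parseit", "2.3.0")}  # made-up entries

def scan(dependencies):
    """Return the (name, version) pairs that have known vulnerabilities."""
    return [dep for dep in dependencies if dep in KNOWN_VULNERABLE]

deps = [("leftlib", "1.0.2"), ("requestish", "4.1.0")]
findings = scan(deps)
if findings:
    # In a pipeline, this is where the build would be failed and the
    # findings retained as compliance evidence.
    print("vulnerabilities found:", findings)
```

Run automatically on every commit, a check like this turns security from a last-minute scramble into a routine, enforced gate.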
Automating Reliability
DevOps is often tasked with ensuring that a given system is highly available, which is achieved using load balancers, service meshes, and other tools that automatically detect failed instances and take remedial action. Autoscaling is also an important aspect and is often implemented as an automated process by DevOps engineers.
The key to all of this is that the whole system must be designed so that each of its components is ephemeral. In this way, any component can instantly be replaced by a new, healthy one, rendering a system that is self-healing. Designing such a system is usually not the remit of developers, but that of the DevOps team.
Traditionally, organizations used snowflake servers running monolithic software stacks, with everything on that single server. Such a design is very fragile, with everyone living in fear of the next breakdown and engineers on duty 24/7. Admittedly, you also need engineers on duty in an automated system, just in case, but they would typically seldom be needed.
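The self-healing loop built on ephemeral components can be sketched as follows. The instance representation and the `heal` function are hypothetical simplifications of what orchestrators such as Kubernetes or an autoscaling group do for you:

```python
# Sketch of a self-healing loop: instances are treated as ephemeral, so any
# instance failing its health check is discarded and replaced, and the fleet
# is kept at the desired size.
import itertools

_ids = itertools.count(1)

def new_instance():
    return {"id": next(_ids), "healthy": True}

def heal(fleet, desired):
    """Drop unhealthy instances, then scale back up to the desired size."""
    survivors = [i for i in fleet if i["healthy"]]
    while len(survivors) < desired:
        survivors.append(new_instance())
    return survivors

fleet = [new_instance(), new_instance(), new_instance()]
fleet[1]["healthy"] = False        # simulate a failed instance
fleet = heal(fleet, desired=3)
print([i["id"] for i in fleet])    # instance 2 is gone, a fresh one replaces it
```

The essential design point is that no instance holds irreplaceable state: because any component can be recreated on demand, replacement is always a safe remedial action.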
Automating Reproducibility
There are various tools out there that let you automate the configuration of servers and systems and the provisioning of infrastructure elements (networks, databases, servers, containers). Examples of these are configuration management and infrastructure-as-code (IaC) tools.
Leveraging these, you can ensure that an exact mirror of a given system can be automatically instantiated at the press of a button. They also let you deploy new versions of software or keep the configuration of servers or serverless services up to date.
IaC often integrates with CD. Indeed, one of the final stages of a CD pipeline can be the deployment of a software release in a production environment using IaC.
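The "press of a button" reproducibility above rests on a declarative, idempotent model. Here is a toy sketch of that model (the resource names and counts are invented; real IaC tools like Terraform implement this plan/apply cycle against actual cloud APIs):

```python
# Sketch of the declarative model behind IaC tools: compare a desired state
# against the actual state, apply only the difference, and get the same
# environment no matter how many times you press the button (idempotence).
desired = {"web-server": 3, "database": 1}   # hypothetical resource counts

def plan(actual, desired):
    """Compute what must change to make 'actual' match 'desired'."""
    return {name: desired[name] - actual.get(name, 0)
            for name in desired if desired[name] != actual.get(name, 0)}

def apply(actual, changes):
    """Apply the planned changes, returning the new actual state."""
    updated = dict(actual)
    for name, delta in changes.items():
        updated[name] = updated.get(name, 0) + delta
    return updated

actual = {"web-server": 1}
actual = apply(actual, plan(actual, desired))
print(plan(actual, desired))   # empty plan: a second run changes nothing
```

Because the second `plan` is empty, re-running the pipeline is harmless, which is exactly what makes IaC a safe final stage for a CD pipeline.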
When to Avoid DevOps Practices
Compared to traditional, manual software development, DevOps practices require a significant amount of work upfront. This initial investment usually pays for itself many times over in the long term, but if your project is short-lived, it is probably a bad business decision.
So, in any situation where you want to achieve “good enough” software that won’t be used in production, blindly applying DevOps practices isn’t likely a great idea and will only increase your development time for little added benefit. Typical examples include:
- Minimum viable product
- Demonstration
- Experiments
- Proof of concept
In any of the above cases, moving to a production-ready product would usually require re-writing the software from scratch, in which case the DevOps practices can then be planned as part of the overall effort.
Conclusion
The most recurring word in the DevOps world is “automation,” as you probably noticed in this article. As a DevOps engineer, my motto is: “If you can’t reproduce it, you don’t own it.”
Compared to traditional development, DevOps usually requires more work upfront in order to establish the automation patterns. After this initial period, the productivity of developers is improved, and the effort required by the operations team is greatly reduced.
Perhaps, you have also noticed that I didn’t mention anything about the cloud. This is intentional because DevOps practices apply to both cloud and on-premises environments. However, in the case of cloud-based workloads, DevOps practices are pretty much mandatory for software teams today. This is because manually provisioning and managing cloud resources is cumbersome and, of course, prone to human error. Many aspects of cloud engineering are also intrinsically tied to DevOps practices.
In conclusion, it is fair to assume that unless you’re rushing to develop a minimum viable product, a DevOps team will allow you to structure your workloads in a way that is more efficient for both your developers and your operations team—and will definitely make both groups happier. Remember: “DevOps” is a philosophy that encompasses both your development and operations teams, so “just” introducing a DevOps team won’t be enough. You need to implement the necessary cultural changes across your company to make it—and your cloud environment—work.
This article was originally published on IOD Blog.