DevOps Glossary of Terms
A/B Testing
A technique in which a new feature, or different variants of a feature, are made available to different sets of users and evaluated by comparing metrics and user behavior.
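A key property of an A/B test is that each user sees the same variant on every visit, so the metrics per variant stay comparable. A minimal sketch of deterministic bucketing in Python (the function name, salt, and variant labels are illustrative, not from any particular tool):

```python
import hashlib

def assign_variant(user_id, variants=("control", "treatment"), salt="checkout-test"):
    """Deterministically bucket a user into one experiment variant.

    Hashing (salt + user_id) keeps each user's assignment stable across
    sessions, so behavior and metrics can be compared fairly per variant.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket:
assert assign_variant("user-42") == assign_variant("user-42")
```

Changing the salt starts a fresh experiment with a new, independent assignment of users to variants.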
Acceptance Testing
Typically high-level testing of the entire system carried out to determine whether the overall quality of both new and existing features is good enough for the system to go to production.
Agile
A precursor to DevOps. Agile is a software development and, more broadly, business methodology that emphasizes short, iterative planning and development cycles to provide better control and predictability and to support changing requirements as projects evolve.
Application Release Automation
Tools, scripts, or products that automatically install and correctly configure a given version of an application in a target environment, ready for use. Also referred to as “Application Release Automation” (ARA) or “Continuous Delivery and Release Automation” (CDRA).
Behavior-Driven Development
A development methodology that asserts that software should be specified in terms of the desired behavior of the application, with syntax that is readable by business managers.
Black Box Testing
A testing or quality assurance practice that assumes no knowledge of the inner workings of the system being tested, and which thus attempts to verify external rather than internal behavior or state.
Blueprints
Blueprints enable you to on-board projects, applications, and teams across the enterprise to the DevOps toolchain, without a lot of administrative overhead.
Build Agent
A type of agent used in Continuous Integration that can be installed locally or remotely in relation to the Continuous Integration server. It sends and receives messages about handling software builds.
Build Artifact Repository
A tool used to organize artifacts with metadata constructs and to allow automated publication and consumption of those artifacts.
Build Automation
Tools or frameworks that allow source code to be automatically compiled into releasable binaries. Usually includes code-level unit testing to ensure individual pieces of code behave as expected.
Canary Release
A go-live strategy in which a new application version is released to a small subset of production servers and heavily monitored to determine whether it behaves as expected. If everything seems stable, the new version is rolled out to the entire production environment.
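The go/no-go decision for a canary is typically driven by comparing the canary's health metrics against an agreed threshold. A minimal sketch in Python (the function name, the error-rate metric, and the threshold value are illustrative assumptions, not from any particular monitoring tool):

```python
def canary_decision(error_rates, threshold=0.02):
    """Decide whether to promote or roll back a canary release.

    error_rates: error-rate samples collected while the new version
    runs on the small subset of production servers.
    """
    if not error_rates:
        return "keep-watching"  # no data yet; never promote blindly
    avg = sum(error_rates) / len(error_rates)
    # Promote to the full fleet only if the canary looks healthy;
    # otherwise roll back to the previous version.
    return "promote" if avg <= threshold else "rollback"
```

For example, `canary_decision([0.01, 0.015])` returns `"promote"`, while `canary_decision([0.05, 0.08])` returns `"rollback"`. Real rollouts would weigh several metrics (latency, saturation, business KPIs), but the promote/rollback gate works the same way.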
Configuration Drift
A term for the general tendency of software and hardware configurations to drift from, or become inconsistent with, the template version of the system due to manual ad hoc changes (like hotfixes) that are not introduced back into the template.
Configuration Management
A term for establishing and maintaining consistent settings and functional attributes for a system. It includes tools for system administration tasks, such as IT infrastructure automation.
Containers
Similar to, but more lightweight than, virtual machines, containers are stand-alone, executable packages containing everything needed to run a piece of software: code, runtime, system tools, system libraries, settings, and so on.
Continuous Delivery
A set of processes and practices that radically remove waste from your software production process, enable faster delivery of high-quality functionality, and set up a rapid and effective feedback loop between your business and your users.
Continuous Integration
A development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
Dark Launch
A go-live strategy in which code implementing new features is released to a subset of the production environment but is not visibly, or only partially, activated. The code is exercised, however, in a production setting without users being aware of it.
Delivery Pipeline
A sequence of orchestrated, automated tasks implementing the software delivery process for a new application version. Each step in the pipeline is intended to increase the level of confidence in the new version to the point where a go/no-go decision can be made. A delivery pipeline can be considered the result of optimizing an organization’s release process.
Deployment Automation
The automated deployment of applications and configurations to the various environments used in the SDLC. Using a deployment automation solution ensures that teams have secure, self-service deployment capabilities for Continuous Integration, environment provisioning, and testing. A deployment automation solution can help you to deploy more often while greatly reducing the rate of errors and failed deployments.
DevOps
A portmanteau of development and operations, DevOps is a set of processes, practices, and tools that improve communication, collaboration, and processes between the various roles in the software development cycle, resulting in delivery of better software with speed and stability.
DevOps Intelligence
Software delivery in an enterprise environment encompasses many different functional teams and tools. To improve the software delivery process from end to end, you need to analyze and correlate data from every part of the delivery pipeline. A DevOps Intelligence tool can provide this visibility, combining detailed metrics on past activity, real-time visibility into present status, and insightful analytics to provide early warning of problems and predict future performance.
DevSecOps
The practice of integrating security into the DevOps process.
Everything as Code
Refers to a development technique where all of the components needed to build and deliver software––deployment packages, infrastructure, environments, release templates, dashboards––are defined as code. Defining your delivery pipeline as code gives you a standardized, controlled way to on-board projects, applications, and teams.
Fast Feedback
Creating fast and continuous feedback between Operations and Development early in the software delivery process is a major principle underpinning DevOps. Doing so not only helps to ensure that you’re giving customers what they actually want, it lightens the load on development, reduces the fear of deployment, creates a better relationship between Dev and Ops, and heightens productivity.
Functional Testing
Testing of the end-to-end system to validate functionality. With executable specifications, Functional Testing is carried out by running the specifications against the application.
Governance
In IT, governance refers to the process by which organizations evaluate and ensure that their tech investments are performing as expected and not introducing new risk. A formal governance process also helps companies ensure that IT activities are aligned with business goals, while also ensuring that everything is compliant with common standards, such as OWASP, PCI 3.2, and CWE/SANS.
Hybrid Cloud
A cloud computing environment that uses a mix of cloud services––on-premises, private cloud, and third-party. As enterprises scale their software delivery processes, their usage needs and costs change. Using a hybrid cloud solution offers greater flexibility and more deployment options.
Infrastructure as a Service (IaaS)
Cloud-hosted virtualized machines, usually billed on a “pay as you go” basis. Users have full control of the machines but need to install and configure any required middleware and applications themselves.
Infrastructure as Code
A system configuration management technique in which machines, network devices, operating systems, middleware, and so on are specified in a fully automatable format. The specification, or “blueprint,” is regarded as code that is executed by provisioning tools, kept in version control, and generally subject to the same practices used for application code development.
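The defining idea is that the blueprint declares a desired state and a provisioning tool computes the actions needed to reach it, idempotently: applying the same blueprint twice changes nothing. A minimal sketch in Python (the data layout and function names are illustrative, not the format of any real tool):

```python
# Desired state declared as data (the "blueprint"), kept in version control.
DESIRED = {
    "packages": {"nginx", "git"},
    "services": {"nginx": "running"},
}

def plan(current):
    """Compare the machine's current state to the blueprint and emit
    the actions needed to converge. Because only differences produce
    actions, applying the plan is idempotent."""
    actions = []
    for pkg in DESIRED["packages"] - current.get("packages", set()):
        actions.append(f"install {pkg}")
    for svc, state in DESIRED["services"].items():
        if current.get("services", {}).get(svc) != state:
            actions.append(f"ensure {svc} is {state}")
    return actions

# A machine that already matches the blueprint needs no changes:
assert plan({"packages": {"nginx", "git"}, "services": {"nginx": "running"}}) == []
```

Real provisioning tools work the same way at much larger scale, which is why the blueprint can be code-reviewed, versioned, and tested like application code.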
Jenkins
Jenkins, the open source automation server written in Java, has long been the de facto standard for Continuous Integration. With Jenkins, developers can integrate their code into a shared repository several times a day. As organizations look to scale their software delivery processes, they often find that Jenkins requires too much scripting and/or maintaining of workflows, and that they need to expand to Continuous Delivery. Continuous Delivery not only leverages tools for Continuous Integration, but also for end-to-end release orchestration, test automation, security, IT service management, and more.
Kubernetes
Container-based applications require the same release process as other enterprise applications. In fact, things may even get more complicated as applications evolve to rely on more and more microservices and more and more containers—across Dev, Test, Staging, and Production environments. Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
Lean
“Lean manufacturing,” or “lean production,” is an approach or methodology that aims to reduce waste in a production process by focusing on preserving value. Largely derived from practices developed by Toyota in car manufacturing, lean concepts have been applied to software development as part of agile methodologies. The Value Stream Map (VSM), which attempts to visually identify valuable and wasteful process steps, is a key lean tool.
Microservices
A software architecture design pattern in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled, and focus on doing a small task.
Non-Functional Requirements
The specification of system qualities, such as ease-of-use, clarity of design, latency, speed, and ability to handle large numbers of users, that describe how easily or effectively a piece of functionality can be used, rather than simply whether it exists. These characteristics can also be addressed and improved using the Continuous Delivery feedback loop.
NoOps
A type of organization in which the management of systems on which applications run is either handled completely by an external party (such as a PaaS vendor) or fully automated. A NoOps organization aims to maintain little or no in-house operations capability or staff.
Open Source
Refers to a program or application with source code that can be modified by anyone. There are a variety of open source frameworks, like AngularJS and React, open source tools, like Gradle and Jenkins, and open source libraries, like JHipster, that can be used to improve specific software development and deployment processes. In a complex enterprise environment, a DevOps platform can integrate open source tools and streamline them into the delivery pipeline.
Orchestration
Tools or products that enable the various automated tasks that make up a Continuous Delivery pipeline to be invoked at the right time. They generally also record the state and output of each of those tasks and visualize the flow of features through the pipeline.
Platform as a Service (PaaS)
Cloud-hosted application runtimes, usually billed on a “pay as you go” basis. Customers provide the application code and limited configuration settings, while the middleware, databases, and so on are part of the provided runtime.
Product Owner
A person or role responsible for the definition, prioritization, and maintenance of the list of outstanding features and other work to be tackled by a development team. Product Owners are common in agile software development methodologies and often represent the business or customer organization. Product Owners need to play a more active, day-to-day role in the development process than their counterparts in more traditional software development processes.
Provisioning
The process of preparing new systems for users. In a Continuous Delivery scenario, this work is typically done by development or test teams. The systems are generally virtualized and instantiated on demand. Configuration of the machines to install operating systems, middleware, and so on is handled by automated system configuration management tools, which also verify that the desired configuration is maintained.
Quality at Speed
A continuous delivery pipeline enables you to act and deliver more quickly, but you still need to deliver a quality product to your users. You can build the expectation of quality into your software development process from the start. Design your tests before a line of code is written. Create a test architecture that can be woven into your CD pipeline. Build a self-adjusting system that applies the right tests at the appropriate time in development, covering unit tests through to performance testing. That way you always have the real-time insight you need into your software quality.
Regression Testing
Testing of the end-to-end system to verify that changes to an application did not negatively impact existing functionality.
Release
The definition and execution of all the actions required to take a new feature or set of features from code check-in to go-live. In a Continuous Delivery environment, this is largely or entirely automated and carried out by the pipeline.
Release Management
The process of managing software releases from the development stage to the actual software release itself.
Release Orchestration
Helps enterprises efficiently manage and optimize their release pipelines and is necessary for enterprises that want to realize the benefits of Continuous Delivery and DevOps. Enterprise-focused release orchestration solutions offer crucial real-time visibility into release status and, through detailed reporting and analytics, provide the intelligence needed to make the best decisions. Release orchestration tools offer control over the release process, enforcing compliance requirements and also making it easy to modify release plans in an auditable manner. And they manage a mixture of manual and automated tasks that need to be coordinated across multiple teams, both business and technical.
Shift Left
With increasing delivery speed comes increasing security risks and compliance issues across different applications, teams, and environments. Shifting left refers to integrating risk assessment, security testing, and compliance evaluation processes earlier in the delivery pipeline. Doing so makes it cheaper and easier to address potential release delays or failures, security vulnerabilities that threaten Production, and IT governance violations that result in expensive fines.
Software Chain of Custody
Software Chain of Custody provides the evidence about everything that happens in your software delivery pipeline. Just as the chain of custody for a piece of evidence involved in a legal case proves that that evidence was handled properly, the software chain of custody proves what happened, when it happened, where it happened, and who made it happen. Without this information, it’s impossible to meet compliance and security requirements as you develop and deliver software at scale.
Test-Driven Development (TDD)
A development practice in which small tests to verify the behavior of a piece of code are written before the code itself. The tests initially fail, and the aim of the developer(s) is then to add code to make them succeed.
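The write-test-first rhythm can be sketched in a few lines of Python (the `slugify` function and its expected behavior are invented for illustration):

```python
import unittest

# Step 1: write the test first. Run it, and it fails, because
# slugify does not exist yet ("red").
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello DevOps World"), "hello-devops-world")

# Step 2: add just enough code to make the test pass ("green").
def slugify(title):
    return "-".join(title.lower().split())

# Step 3: refactor freely; the test keeps the behavior pinned down.
```

Running `python -m unittest` on this module now passes; each new behavior starts the red-green-refactor cycle again.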
Toolchain
From source code management and continuous integration to environment provisioning and application deployment, there are tons of tools that handle specific processes in an enterprise DevOps practice. A DevOps toolchain refers to the set of tools that work together in the delivery, development, and management of an application.
Unit Testing
Code-level (i.e., does not require a fully installed end-to-end system to run) testing to verify the behavior of individual pieces of code. Test-driven development makes extensive use of unit tests to describe and verify intended behavior.
Value Stream Mapping
A process visualization and improvement technique used heavily in lean manufacturing and engineering approaches. Value Stream Maps are used to identify essential process steps vs. “waste” that can be progressively eliminated from the process.
Virtualization
A systems management approach in which users and applications do not use physical machines, but simulated systems running on actual, “real” hardware. Such “virtual machines” can be automatically created, started, stopped, cloned, and discarded in a matter of seconds, giving operations tremendous flexibility.
Waterfall
A software development methodology based on a phased approach to projects, from “Requirements Gathering” through “Development” and so on, to “Release.” Phases late in the process (typically related to testing and QA) tend to be squeezed, as delays put projects under time pressure.
White Box Testing
A testing or quality assurance practice that is based on verifying the correct functioning of the internals of a system by examining its (internal) behavior and state as it runs.
YAML
YAML, an acronym for “YAML Ain’t Markup Language,” is a human-readable data serialization language. In software delivery, YAML files can be used to specify and automate deployment and release processes. With YAML files, you can leverage the configurations from your existing applications and pipelines to represent familiar constructs for use in your development environment.
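As a feel for the format, here is a hypothetical pipeline definition in YAML; the keys and structure vary by tool, but the indentation-based, human-readable shape is typical:

```yaml
# A hypothetical release pipeline; names and fields are illustrative.
pipeline:
  name: checkout-service
  stages:
    - name: build
      tasks: [compile, unit-tests]
    - name: deploy
      environment: staging      # target environment for this stage
      tasks: [provision, release]
```

Because the file is plain text, it can be version-controlled and code-reviewed alongside the application it delivers.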
Zero Tolerance
As in, zero tolerance for failures in Production. Customers have zero tolerance for failure. A deployment failure or any kind of service interruption to customer-facing software can have a catastrophic impact on an organization, especially those in highly regulated industries.