Intro to using YAML Deployment Jobs in Azure DevOps
A while back, Omnia's Managed Extensions team got our CI/CD pipelines into a workable shape, and I thought it might be worthwhile to share some of the things we found valuable along the way. Consider this a 101 to Azure DevOps Deployment Jobs and Environments.
Out with the Classic, bring me the code!
The whole idea of Deployment Jobs came with YAML Pipelines. As opposed to the Classic pipelines with a proper GUI, YAML pipelines are defined in code (with some exceptions).
Deployment Jobs are a concept in YAML pipelines, and as opposed to "normal" Jobs, they introduce a few useful features, namely:
- Environments
- Deployment strategies
Going deep into Deployment Strategies is beyond the scope of this article, but Environments are kinda neat and worth a quick look!
With Environments, one can define additional features for deployment "targets". They more or less replace the "Deployment Groups" functionality.
However, an Environment doesn't actually need to be linked to any real environment. It's essentially just a name you give to a certain set of deployment parameters. It's neat, but can lead to confusion.
Code samples
Let me throw a curve ball and show you the code already halfway through the article!
To illustrate the differences, let's look at a couple of examples. First, a (highly parameterized) "normal" Job.
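As a minimal sketch - the parameter names, pool and steps here are illustrative, not our actual Omnia template:

```yaml
# A minimal sketch of a "normal" Job - names and values are illustrative only.
parameters:
  - name: buildConfiguration
    type: string
    default: 'Release'

jobs:
  - job: Build
    displayName: 'Build (${{ parameters.buildConfiguration }})'
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - script: echo "Building with configuration ${{ parameters.buildConfiguration }}"
        displayName: 'Build'
```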
And then how a Deployment Job (again, highly parameterized) looks.
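Again as a minimal sketch, with an illustrative environment name rather than our real one:

```yaml
# A minimal sketch of a Deployment Job - the environment name and steps are illustrative only.
parameters:
  - name: environmentName
    type: string
    default: 'my-test-environment'

jobs:
  - deployment: Deploy
    displayName: 'Deploy to ${{ parameters.environmentName }}'
    pool:
      vmImage: 'ubuntu-latest'
    environment: ${{ parameters.environmentName }}
    strategy:
      runOnce:
        deploy:
          steps:
            - script: echo "Deploying to ${{ parameters.environmentName }}"
              displayName: 'Deploy'
```

The visible differences are the `deployment:` keyword instead of `job:`, the `environment:` reference, and the `strategy:` block (here just a simple `runOnce`).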
The real templates have a lot more Omnia magic in them - hope the idea is still clear :)
The way my team uses environments is to configure additional approvals for different Omnia Cloud instances (like Prod/Test or Enterprise Clouds) and optionally, different tenants as well. The number of required approvals from different people can be configured, and the deployment won't proceed before the requirements are met. Additionally, we get some very basic-level tracking of what got pushed and deployed where and when (by the pipeline).
When the pipeline is running and hits a Deployment Job with an Environment that requires approval, you'll see something like this:
And when you navigate to the particular run of the pipeline, if you have the permission to approve the deployment, you'll see something like this:
Ok - so that's the approvals aspect. But there's more!
By navigating to Environments, you get a list of all the different Environments Azure DevOps is aware of. The list might contain surprising values, as Azure DevOps will kindly create an entry for you if you deploy to an Environment it doesn't already know about.
Azure DevOps will helpfully show you the different Environment names, the last known deployment, and an incredibly misleading "Last Activity" value that you should in most cases simply disregard.
Digging in, you can see all matching "deployments" for each environment. Below is a snapshot of our pushes (note the differing nomenclature - a "push" for one of our extensions is a deployment for Azure DevOps, just like a "deploy" for said extension would be, as both are executed by a Deployment Job!) to Omnia Production Cloud:
Clicking through, you can access each "deployment's" associated pipeline run's logs and artifacts. And depending on how you name your runs, you can surface customer/tenant/extension names and versions either directly in the listing, or in the details accessible by clicking through.
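As a rough sketch of what that run naming could look like - the parameter names here are made up for illustration, not the ones we actually use - the pipeline-level `name` property lets you bake compile-time parameters and date/revision counters into the run name:

```yaml
# Illustrative sketch: put tenant/version info into the run name via the
# pipeline-level `name` property. Parameter names are examples only.
parameters:
  - name: tenantName
    type: string
    default: 'some-tenant'
  - name: extensionVersion
    type: string
    default: '1.2.3'

name: '${{ parameters.tenantName }}_v${{ parameters.extensionVersion }}_$(Date:yyyyMMdd)$(Rev:.r)'

trigger: none

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: echo "The run name now carries the tenant and version"
    displayName: 'Placeholder step'
```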
Last thoughts
Is this perfect yet? No, not at all!
There's plenty that Microsoft could and should do to make the pipelines easier to use. YAML is sometimes tricky.
And there's probably a lot we could do to make managing this easier - right now, for example, we have hardcoded which clouds allow automatic deployments and which don't (the configuration comes from yet another parameterized, but kind of horrible, YAML template), and adding new tenants to the deployment targets requires a code change.
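For the curious, here's a hedged sketch of the kind of configuration template I mean - the cloud names and the variable are made up for illustration, not our actual setup:

```yaml
# Illustrative variables template: conditionally decide whether a cloud
# allows automatic deployments. Names here are examples only.
parameters:
  - name: cloud
    type: string
    values:
      - test
      - prod

variables:
  ${{ if eq(parameters.cloud, 'test') }}:
    allowAutomaticDeployment: 'true'
  ${{ if eq(parameters.cloud, 'prod') }}:
    allowAutomaticDeployment: 'false'
```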
But the pipeline is code - and it's something that can be changed any time, so it's always a work in progress!