In my infrastructure, I am using munki to take care of all my software distribution needs. The majority of software packages are built with the excellent AutoPkg on a dedicated Mac Mini. Until now, this was a manually triggered process: I maintained a list of recipes, and from time to time (usually daily) I would run something like autopkg run --recipe-list=my_recipes.list. Afterwards, I would manually rsync the local munki repository onto the distribution infrastructure.
While this process was working alright, it is not exactly elegant and also a bit tedious. Since I already have a GitLab installation, and inspired by a post from Rick Heil¹, I decided to leverage GitLab CI for all my package building and deployment needs.
Preparation
Before the actual CI fun can start, there is quite a bit of groundwork to be laid. In particular, the munki repository needs to be put into Git, and a CI runner configured to use the shell executor needs to be installed on the package-building Mac.
Gitify munki
Putting a munki repository into Git is straightforward. As Git is not exactly suited to versioning large binary files such as packages, I will be using git-lfs to take care of the binaries.
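In case git-lfs is not present on the package-building Mac yet, it can be installed via Homebrew (assuming Homebrew is in place) and activated once for the build user:
$ brew install git-lfs
$ git lfs install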
The first step is to create a repository in GitLab. In this example, we will be using the following repository URLs:
- SSH: git@gitlab:tig/munki_repo.git
- HTTPS: https://gitlab/tig/munki_repo.git
Make sure that LFS is enabled in Settings -> General -> Permissions at the project level. Admin rights might be needed for that.
Also, the GitLab server must be configured for LFS.
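On an Omnibus installation, this boils down to something like the following in /etc/gitlab/gitlab.rb (the storage path shown is the default and only included for illustration), followed by a gitlab-ctl reconfigure:
gitlab_rails['lfs_enabled'] = true
gitlab_rails['lfs_storage_path'] = "/var/opt/gitlab/gitlab-rails/shared/lfs-objects"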
Time to initialise the Git repository directly in my “master” munki repo on the package-building Mac. To make sure that nothing unwanted finds its way into the Git repository, I am creating a .gitignore file first:
$ cat .gitignore
catalogs
.DS_Store
Now it is time to initialise the actual Git stuff:
$ git init
$ git remote add origin git@gitlab:tig/munki_repo.git
$ git lfs track "*.pkg"
$ git lfs track "*.dmg"
$ git add .
$ git commit -m "Initial import"
$ git push -u origin master
Depending on the current repository size, the initial push might take quite a while, as all of the packages are now uploaded to the Git server.
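As a side note: the git lfs track calls record their patterns in a .gitattributes file that gets committed along with everything else, so it should end up looking like this:
*.pkg filter=lfs diff=lfs merge=lfs -text
*.dmg filter=lfs diff=lfs merge=lfs -text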
Install the GitLab runner
Installing the gitlab-runner is straightforward. I am installing it in the context of the package-building user, following the instructions on the GitLab website. I am assigning the tag autopkg so I can later control where my builds run.
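For reference, a non-interactive registration with the shell executor and the autopkg tag could look like this (URL and registration token are placeholders, of course):
$ gitlab-runner register --non-interactive \
    --url https://gitlab/ \
    --registration-token REGISTRATION-TOKEN \
    --executor shell \
    --tag-list autopkg \
    --description "autopkg build Mac"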
What we have achieved so far:
- The munki repository is Git-versioned and on GitLab
- gitlab-runner is running on the package-building Mac
Setting up the autopkg repository
This will be the Git repository that actually runs the CI pipeline to build the packages. The structure is as follows:
$ mkdir autopkg
$ cd autopkg
$ mkdir build overrides
$ git init .
$ git remote add origin git@gitlab:tig/autopkg.git
The build directory will hold our build scripts, while the overrides directory contains the recipe overrides for all packages to be built:
$ ls ./build
./build:
autopkg_tools.py run-autopkg-recipes.sh
Let’s have a look at the two build scripts.
run-autopkg-recipes.sh
$ cat build/run-autopkg-recipes.sh
#!/bin/bash
# This script triggers a GitLab CI run for every override in the
# 'overrides' directory
# Originally written by Rick Heil (https://rickheil.com)
# Usage: ./run-autopkg-recipes.sh CI-TOKEN

# bail if no token was passed
if [ $# -eq 0 ]; then
    echo "No arguments supplied - please pass the CI token to this script."
    exit 1
fi

# make the actual calls
for OVERRIDE in "$(pwd)"/overrides/*.recipe; do
    [ -e "$OVERRIDE" ] || continue
    # get the name of the recipe instead of full file path
    RECIPE_NAME=$(basename "$OVERRIDE" .recipe)

    # we need to skip some of these recipes
    if [ "$RECIPE_NAME" = "AdobeAfterEffectsCC2018.munki" ] ||
       [ "$RECIPE_NAME" = "AnimateCC.munki" ] ||
       [ "$RECIPE_NAME" = "AuditionCC.munki" ] ||
       [ "$RECIPE_NAME" = "BridgeCC.munki" ] ||
       [ "$RECIPE_NAME" = "CharacterAnimatorCC.munki" ] ||
       [ "$RECIPE_NAME" = "DimensionCC.munki" ] ||
       [ "$RECIPE_NAME" = "DreamweaverCC.munki" ] ||
       [ "$RECIPE_NAME" = "EdgeAnimateCC.munki" ] ||
       [ "$RECIPE_NAME" = "ExperienceDesignCC.munki" ]; then
        continue
    fi

    echo "Triggering CI for $RECIPE_NAME..."
    /usr/bin/curl -s -w "Response code: %{http_code}\n\n" -o /dev/null --request POST \
        --form token="$1" \
        --form ref=master \
        --form variables[RECIPE_NAME]="$RECIPE_NAME" \
        https://gitlab/api/v4/projects/$CI_PROJECT_ID/trigger/pipeline
done

echo "Updating munki build repo"
(cd /path/to/munki/build/repo && git checkout master && git pull && /usr/local/munki/makecatalogs .)

exit 0
When run, this script creates a new CI pipeline for each recipe override found in the overrides directory. This means that each package is built in a pipeline of its own. The main advantage is that if a package fails to build, the logs are easily accessible and not cluttered with output from other package builds.
autopkg_tools.py
Thankfully, Facebook’s CPE team did most of the work for me.² Their autopkg_tools script takes care of running AutoPkg and handling the Git operations.
Here is a diff that highlights the differences (with some modifications from Rick Heil included as well):
$ diff -u autopkg_tools.py src/autopkg/build/autopkg_tools.py
--- autopkg_tools.py	2018-05-02 13:55:46.000000000 +0100
+++ src/autopkg/build/autopkg_tools.py	2018-04-30 08:41:28.000000000 +0100
@@ -334,6 +334,20 @@
     gitcommitcmd.append(message)
     git_output = git_run(gitcommitcmd)
 
+def push_branch(branchname):
+    """Pushes the passed branch to origin"""
+    # make sure we are on the feature branch
+    change_feature_branch(branchname)
+
+    # and push
+    gitcmd = ['push', '--set-upstream', 'origin', branchname]
+    try:
+        git_run(gitcmd)
+        timeprint('Pushed feature branch to origin.')
+    except GitError as e:
+        raise BranchError(
+            "Couldn't push %s to origin: %s" % (branchname, e)
+        )
 
 # Task functions
 def create_task(task_title, task_description):
@@ -383,6 +397,7 @@
 # Autopkg execution functions
 def run_recipe(recipe, report_plist_path, pkg_path=None):
     """Execute autopkg on a recipe, creating report plist."""
+    #cmd = ['/usr/local/bin/autopkg', 'run', '-v', '--override-dir', './overrides']
     cmd = ['/usr/local/bin/autopkg', 'run', '-v']
     cmd.append(recipe)
     if pkg_path:
@@ -446,6 +461,7 @@
         # Item failed, so file a task
         failed_task(run_results['failed'])
         cleanup_branch(branchname)
+        sys.exit(-1)
         return
     if run_results['imported']:
         # Item succeeded, so continue.
@@ -454,15 +470,22 @@
     # 7. If any changes occurred, create git commit
     create_commit(run_results['imported'][0])
     # 8. Rename branch with version
-    rename_branch_version(
+    sw_exists = rename_branch_version(
         branchname,
         str(run_results['imported'][0]['version'])
     )
-    # 9. File a task
-    imported_task(run_results['imported'][0])
-    # 10. Switch back to master
-    change_feature_branch('master')
-
+    if sw_exists:
+        timeprint('Branch already exists, not importing duplicate software')
+        cleanup_branch(branchname)
+        return
+    else:
+        # 9. File a task
+        imported_task(run_results['imported'][0])
+
+        new_branch_name = branchname + "-%s" % str(run_results['imported'][0]['version'])
+        push_branch(new_branch_name)
+        change_feature_branch('master')
+    return
 
 def parse_recipe_list(file_path):
     """Parse a recipe list from a file path. Supports JSON or plist."""
Full file here
The script does (roughly) the following:
- create a feature branch for the recipe and switch to it
- run AutoPkg on the given recipe
- on failure, file a task, clean up the branch and exit non-zero
- on success, commit the imported item and rename the branch to include the new version
- skip the import if a branch for that version already exists
- push the feature branch to origin and switch back to master
Setting up the CI pipeline
The last step for the package building is to setup the actual CI pipeline in GitLab through .gitlab-ci.yml
:
$ cat .gitlab-ci.yml
autopkg:
  stage: build
  tags:
    - autopkg
  script:
    - echo $RECIPE_NAME
    - (if [[ ! -z "$RECIPE_NAME" ]]; then /usr/bin/python build/autopkg_tools.py -r $RECIPE_NAME; else /bin/bash build/run-autopkg-recipes.sh $TRIGGER_TOKEN; fi);
  only:
    - triggers
    - schedules
This sets up a pipeline with one build job. If $RECIPE_NAME is empty, it calls run-autopkg-recipes.sh to generate a new CI pipeline for each recipe in the overrides directory. If $RECIPE_NAME is set, autopkg_tools.py is called to do the actual package building. The only statement ensures that this pipeline is run only on schedules and triggers, not on pushes.
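The schedule itself is created in the autopkg project under CI / CD -> Schedules, with TRIGGER_TOKEN defined as a variable there. For testing, the parent pipeline can also be kicked off by hand via the trigger API (the project ID 42 being a placeholder):
$ curl --request POST \
    --form token=TRIGGER-TOKEN \
    --form ref=master \
    https://gitlab/api/v4/projects/42/trigger/pipeline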
Publishing and deploying the changes
So now we have built all those new packages, and they are merrily sitting in their own branches, for example like this:
$ git branch -a
Cyberduck-6.5.0
* master
remotes/origin/Cyberduck-6.5.0
remotes/origin/master
The next step is to get branches such as Cyberduck-6.5.0 merged onto master and then deployed to the distribution infrastructure. For this, we will set up some more CI pipelines, this time in the GitLab munki repository.
Merging the branches
Whenever a new branch is pushed, this pipeline is run:
---
stages:
  - verify
  - test
  - deploy

lint:
  stage: verify
  tags:
    - osx
  script:
    - build/munki_lint.sh

testinstall:
  stage: test
  tags:
    - munki
  script:
    - makecatalogs .
    - build/add_to_local_manifest.py $CI_COMMIT_REF_NAME
    - sudo /usr/local/munki/managedsoftwareupdate --checkonly
    - sudo /usr/local/munki/managedsoftwareupdate --installonly
  except:
    - master

merge:
  stage: deploy
  tags:
    - osx
  script:
    - /usr/bin/python build/create_and_merge.py $PRIVATE_TOKEN
  except:
    - master
The ‘lint’ job
This job runs some simple checks on the pkginfo files to catch common problems early.
$ cat build/munki_lint.sh
#!/bin/bash
set -e

echo "Checking pkginfo syntax"

# let the for loop handle paths containing spaces
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")

for i in `find ./pkgsinfo -name \*plist`; do
    /usr/bin/plutil -lint "$i"
    /usr/bin/python build/pkginfo_check.py "$i"
done

IFS=$SAVEIFS
First, all pkginfo files are syntax-checked, and then a little Python script is run that conducts some consistency checks on the actual pkginfo:
$ cat build/pkginfo_check.py
#!/usr/bin/python
import sys
import os.path

# FoundationPlist is munki's plist library
import FoundationPlist

plist_file = sys.argv[1]
plist = FoundationPlist.readPlist(plist_file)

categories = ["Adobe", "Audio", "Communication", "Development", "Drivers",
              "Entertainment", "Graphics", "Image Processing", "Media",
              "Multimedia", "Office", "Operating System", "Printers",
              "Productivity", "Specialist", "Statistics", "System",
              "Text Editors", "Text processing", "Utilities", "Video",
              "Web Browsers"]

print "Checking package: " + plist['name']

sys.stdout.write("Checking if display_name is set... ")
if plist.has_key('display_name') and plist['display_name'] != '':
    print "%s" % plist['display_name']
else:
    print 'Error: display_name is not set'
    sys.exit(1)

sys.stdout.write("Checking if installer item exists ... ")
if plist.has_key('installer_item_location'):
    if os.path.isfile("./pkgs/" + plist['installer_item_location']):
        print "%s" % plist['installer_item_location']
    else:
        print 'Error: package is missing'
        sys.exit(1)
else:
    print "nopkg plist"

sys.stdout.write("Checking if category is sane ... ")
if plist.has_key('category') and plist['category'] in categories:
    print "%s" % plist['category']
else:
    print 'Error: category is missing or not in the allowed list'
    sys.exit(1)
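To illustrate, a manual run against a single pkginfo file (file name and values are made up for this example) looks like this:
$ /usr/bin/python build/pkginfo_check.py pkgsinfo/apps/Cyberduck-6.5.0.plist
Checking package: Cyberduck
Checking if display_name is set... Cyberduck
Checking if installer item exists ... apps/Cyberduck-6.5.0.dmg
Checking if category is sane ... Utilities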
The script checks three things:
- Is display_name set?
- Does the installer item actually exist?
- Is the category in the set of allowed categories?
I might come up with more checks in the future.
The ‘testinstall’ job
Now that we know that the pkginfo file is alright, we can conduct an actual test installation. For this purpose, I have set up another GitLab runner on a dedicated Mac Mini. The new software item is added to a local-only munki manifest and then installed/upgraded through munki. While this is only very basic testing, at least we can be sure that munki is able to install/upgrade the package without errors.
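On the test Mac, the local-only manifest is hooked into munki via its LocalOnlyManifest preference, so the items listed there are processed in addition to the assigned server-side manifest:
$ sudo defaults write /Library/Preferences/ManagedInstalls LocalOnlyManifest test_packages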
A Python script adds the package to the local manifest:
$ cat build/add_to_local_manifest.py
#!/usr/bin/python
import sys
import re
import os.path

import FoundationPlist

plist_file = '/Library/Managed Installs/manifests/test_packages'
plist = FoundationPlist.readPlist(plist_file)

# strip the version suffix from the branch name, e.g. Cyberduck-6.5.0 -> Cyberduck
package_name = re.sub('-.*', '', sys.argv[1])

print "Adding %s to local manifest if necessary" % package_name
if package_name not in plist['managed_installs']:
    plist['managed_installs'].append(package_name)
    FoundationPlist.writePlist(plist, plist_file)
    print "done"
else:
    print "already included"
The ‘merge’ job
As a last step, a Python script is called that talks to the GitLab API to create a merge request and merge it right away upon successful completion of the pipeline:
$ cat build/create_and_merge.py
#!/usr/bin/python
import gitlab
import os
import sys

token = sys.argv[1]

gl = gitlab.Gitlab('https://gitlab', private_token=token)

project_id = os.environ.get('CI_PROJECT_ID')
source_br = os.environ.get('CI_COMMIT_REF_NAME')
project = gl.projects.get(project_id)

# create the merge request for the feature branch...
mr = project.mergerequests.create({'source_branch': source_br,
                                   'target_branch': 'master',
                                   'remove_source_branch': True,
                                   'title': source_br})
# ...and accept it right away
mr.merge(should_remove_source_branch=True, merge_when_pipeline_succeeds=True)
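One prerequisite worth noting: the script relies on the python-gitlab module, which has to be installed on the runner, and $PRIVATE_TOKEN is expected to be a personal access token with API scope, stored as a secret CI variable:
$ pip install python-gitlab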
This concludes the pipeline, and the branch is actually merged onto master. The next step is to deploy master to the distribution infrastructure.
Deploy to distribution
Whenever a branch is merged onto master, the following pipeline is triggered:
---
stages:
  - verify
  - build
  - deploy

lint:
  stage: verify
  tags:
    - osx
  script:
    - build/munki_lint.sh

makecatalogs:
  stage: build
  tags:
    - osx
  script:
    - makecatalogs .
  artifacts:
    expire_in: 1 day
    paths:
      - catalogs/
  only:
    - master

deploy_distribution:
  stage: deploy
  tags:
    - osx
  script:
    - rsync -arvz --delete --exclude='.git/' . munki@mac-distribution:/srv/munki/repo/
  only:
    - master
Deployment to distribution is straightforward. We run the lint job again, just to make sure. Afterwards, the catalog files are created, and finally rsync handles the file transfer to the distribution server. It should be noted that my actual distribution infrastructure is a layer 4 load balancer that forwards requests to the closest distribution server based on the source IP, so I have multiple distribution jobs, one for each backend distribution server.
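Since those jobs differ only in the target host, their common parts can be shared through a YAML anchor; a minimal sketch with made-up host names:
.deploy_template: &deploy_template
  stage: deploy
  tags:
    - osx
  only:
    - master

deploy_distribution_1:
  <<: *deploy_template
  script:
    - rsync -arvz --delete --exclude='.git/' . munki@mac-distribution-1:/srv/munki/repo/

deploy_distribution_2:
  <<: *deploy_template
  script:
    - rsync -arvz --delete --exclude='.git/' . munki@mac-distribution-2:/srv/munki/repo/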
Wrapping it up
As GitLab only reads one .gitlab-ci.yml per repository, we need to combine both YAML snippets into one file.
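A combined version could look roughly like this, a mechanical merge of the two snippets above with a shared lint job:
---
stages:
  - verify
  - test
  - build
  - deploy

lint:
  stage: verify
  tags:
    - osx
  script:
    - build/munki_lint.sh

testinstall:
  stage: test
  tags:
    - munki
  script:
    - makecatalogs .
    - build/add_to_local_manifest.py $CI_COMMIT_REF_NAME
    - sudo /usr/local/munki/managedsoftwareupdate --checkonly
    - sudo /usr/local/munki/managedsoftwareupdate --installonly
  except:
    - master

merge:
  stage: deploy
  tags:
    - osx
  script:
    - /usr/bin/python build/create_and_merge.py $PRIVATE_TOKEN
  except:
    - master

makecatalogs:
  stage: build
  tags:
    - osx
  script:
    - makecatalogs .
  artifacts:
    expire_in: 1 day
    paths:
      - catalogs/
  only:
    - master

deploy_distribution:
  stage: deploy
  tags:
    - osx
  script:
    - rsync -arvz --delete --exclude='.git/' . munki@mac-distribution:/srv/munki/repo/
  only:
    - master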
Final notes
So this describes my CI/CD process for AutoPkg as it currently stands; I am sure there is still room for improvement. Some summarising bullets:
- I am using separate Git repositories for autopkg and munki. It might be possible to unify these into one Git repository.
- New packages are currently put into the testing catalog. Staging is still done manually through munki-staging and a Trello board.
- It would be nice to integrate some sort of auto-staging in the future.
If you managed to read this far, you might have comments or suggestions. I am available on the MacAdmins Slack as @tig.