Cloud-based continuous integration for macOS with Concourse and Anka

Pix4D creates professional photogrammetry software for drone mapping. The people who use our products include surveyors, farmers, and researchers, and they use the software on a range of platforms including desktop, mobile, and cloud. Pix4D’s continuous integration (CI) team is based in the headquarters in Lausanne, Switzerland, where we design and maintain the systems used by our developers to automatically build, test, and package our suite of products.

Our CI system is designed around the paradigm of infrastructure as code. We manage the system’s resources using text-based, version-controlled configuration files and automation tools such as Terraform and Salt. All resources are located in a virtual private cloud (VPC) and hosted by Amazon Web Services (AWS). We use Concourse CI as our CI software because it is fully configurable by YAML files and scripts that may be shared, reviewed, and tracked in version control as an application code base would be managed. In addition, each task that is run on a Linux platform is executed inside a Docker container which provides the isolation that is necessary for reproducible builds.
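For readers unfamiliar with Concourse, a pipeline is a plain YAML file. The following minimal sketch shows a Linux task running inside a Docker image; the job, resource, and repository names are illustrative, not taken from our actual pipelines:

```yaml
resources:
- name: source-code
  type: git
  source: {uri: "https://example.com/our-product.git"}   # placeholder URI

jobs:
- name: unit-tests
  plan:
  - get: source-code
    trigger: true
  - task: run-tests
    config:
      platform: linux
      image_resource:                 # the Docker image that isolates the task
        type: registry-image
        source: {repository: python, tag: "3.11"}
      inputs:
      - name: source-code
      run:
        path: sh
        args: ["-exc", "cd source-code && ./run_tests.sh"]
```

Because the whole pipeline lives in a file like this, it can be code-reviewed and versioned exactly like application code.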

Recently our developers began building some of our software products for macOS. This meant that we had to find a way to incorporate macOS builds into our CI environment with Concourse.

Integrating macOS builds into our existing CI

The first challenge we faced in incorporating macOS builds into our CI was finding the right infrastructure. The services provided by our current CI run as AWS Elastic Compute Cloud (EC2) instances. Unfortunately, macOS images are not available for EC2. This meant that we would need to either subscribe to a separate service or self-host our own machines. We ultimately decided that we would have the most flexibility by hosting a number of on-premises Mac minis and integrating them into our existing cloud.

The second (and more important) problem was determining how we could create an isolated build environment on the Mac. Concourse uses Docker containers for isolation on Linux. A number of Linux instances are registered with the Concourse task scheduler, and the scheduler sends them jobs to do in the form of Docker containers and volumes. On the Mac, however, Concourse executes builds directly on the host machine. This increases the chances of environmental drift and flakiness, ultimately reducing our ability to maintain the system and scale up our services. The solution to this problem was provided by Anka, a macOS virtualization technology for container DevOps by Veertu, Inc.

Anka and Concourse

Infrastructure – The primary components of our CI system are displayed in the following figure.



Our local cluster of Mac minis is connected to the VPC through a single entry point – the reverse proxy. The individual Macs register directly with the Anka controller and registry inside the VPC, both of which are hosted on a Linux EC2 instance. The Anka Build controller spawns and destroys VM instances on the Mac minis; requests to spawn or destroy an instance are made through the controller’s REST API.
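As an illustration, a spawn request to the controller could look like the sketch below; the endpoint path and JSON fields are assumptions modeled on the controller’s REST style, so consult the Anka controller API documentation for the exact contract:

```shell
#!/bin/sh
# Sketch: ask the Anka Build controller to spawn a VM instance.
# The endpoint path and payload fields are assumptions; check the Anka
# controller REST API documentation for the exact contract.
CONTROLLER="${CONTROLLER:-http://anka-controller.internal:8090}"   # placeholder host
TEMPLATE="${TEMPLATE:-concourse-mac-worker}"                        # hypothetical template name

# Build the JSON body separately so it can be logged or inspected.
PAYLOAD=$(printf '{"vmid": "%s", "count": 1}' "$TEMPLATE")
echo "$PAYLOAD"

# Uncomment to actually issue the request:
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$PAYLOAD" "$CONTROLLER/api/v1/vm"
```

Keeping the payload in a variable also makes it easy to log every spawn/destroy request for auditing.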

The Anka Build Registry acts as a store of VM images. We pre-bake images with both the Concourse binary and the necessary tools and libraries to build our software, then push the images to the registry. If a Mac mini does not yet have an image saved locally, it will automatically download the image from the registry when a new VM is requested.

A startup script registers each newly created Anka VM instance as a worker with the ATC, which is the term that Concourse uses to refer to its task scheduler. Once registered, the worker is ready to receive data and build instructions from the ATC and stream the artifacts back into the VPC where they are ultimately stored in Amazon S3 buckets.

Creating new macOS VM templates

We streamline the creation of new Anka VMs with Veertu’s own builder extension for Packer by HashiCorp. A single Mac mini in our cluster is reserved for building new images. The creation of a new VM image starts with the execution of a script that handles the transfer of the necessary configuration files to the Mac builder via SSH. The same script then launches the Packer build process directly on the Mac builder. Packer derives each new VM image from a base VM that mimics a fresh install of macOS. The result is a new Anka VM image which is pushed to the Anka registry inside the VPC, thereby making it available to each Mac host within the cluster.
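A minimal sketch of that wrapper script follows; the host name, user, paths, and template file are placeholders, and the script only prints the commands it would run so the flow can be inspected safely:

```shell
#!/bin/sh
# Sketch of the wrapper that ships configuration to the reserved Mac
# builder and launches Packer there. All names are illustrative.
BUILDER_HOST="${BUILDER_HOST:-mac-builder.example.com}"

# Compose the commands first so they can be reviewed (or executed).
COPY_CMD="scp -r packer/ salt/ ci@${BUILDER_HOST}:/tmp/vm-build/"
BUILD_CMD="ssh ci@${BUILDER_HOST} 'cd /tmp/vm-build && packer build macos-worker.json'"

echo "$COPY_CMD"
echo "$BUILD_CMD"
```

In the real script the two commands are executed rather than echoed; the echo form is useful as a dry run when changing the Packer template.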

This method of creating Anka VMs follows the infrastructure as code paradigm because the provisioning of the VM is configured with files that are parsed and executed by Salt. Another advantage is that the configuration of the VMs is stored in version control and old VM images may be stored on the registry in case we need to revert any changes to our CI.

Dynamic provisioning of the worker VMs

We faced another engineering challenge when it came to provisioning: how should dynamic information be provided to the newly spawned VMs? Such information includes the Concourse worker name and the cryptographic keys that are necessary to register with the ATC. We especially did not want to bake the keys into the VM images as this would pose a security risk. Fortunately, the Anka Controller REST API allows us to pass a startup shell script to each VM. A condensed version of the startup script’s Jinja template follows.

cat << EOF | sudo tee /srv/pillar/bootstrap.sls
{{ anka.anka_vm_pillar }}
EOF

sudo INSIDE_ANKA_VM="TRUE" salt-call --local state.apply


Two important events occur in the lifetime of this template. The first event is the rendering of the template which occurs inside the VPC on the Anka controller instance. The cryptographic keys and ATC endpoint are dynamically pulled from a secure parameter store when the Anka controller is provisioned on AWS. Importantly, this data is interpolated directly into the script at the line {{ anka.anka_vm_pillar }}.

The second event occurs immediately after a new VM is spawned. The script is passed from the controller to the new VM and executed, which directly injects the dynamic data into the VM instance’s pillar. (Pillar is the term used by the Salt provisioner for a static data store.) The Salt states that register the worker with Concourse are then applied to the VM by setting the INSIDE_ANKA_VM environment variable to TRUE and running salt-call --local state.apply.
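Under the hood, the registration those Salt states perform boils down to launching the Concourse worker binary with the injected credentials. A minimal sketch follows, with a placeholder TSA host and key paths; in the real VM the keys come from the pillar and the command runs as a service rather than being echoed:

```shell
#!/bin/sh
# Sketch: the command the applied Salt states ultimately run inside the VM
# to register it as a Concourse worker. Host and paths are placeholders.
WORKER_NAME="${WORKER_NAME:-mac-worker-1}"

CMD="concourse worker \
  --name $WORKER_NAME \
  --work-dir /opt/concourse/worker \
  --tsa-host ci.example.com:2222 \
  --tsa-public-key /etc/concourse/tsa_host_key.pub \
  --tsa-worker-private-key /etc/concourse/worker_key"

echo "$CMD"   # printed for inspection; Salt would run it as a launchd service
```

The `--tsa-*` flags are how a Concourse worker authenticates with the ATC’s SSH gateway, which is why the keys must never be baked into the VM image.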

After a successful salt-call, the new Concourse Mac worker should be registered with the Concourse ATC and ready to work.

Current limitations

There are currently two limitations to the setup described above.
  • On Linux, where each Concourse task runs in a container, data is streamed between containers within the VPC. In the case of Anka and the Mac, the workers need to receive data from and send data to the VPC. This can incur latency and bandwidth issues due to the Mac minis’ physical separation from the AWS hardware. We were able to work around this limitation by shaping the traffic around our Mac cluster in our office’s router. We also expect changes to streaming behavior in upcoming Concourse releases to further improve streaming performance.
  • Concourse currently does not support on-demand VMs for the Mac. To prevent drift of the build environment, a cron job on the Anka controller respawns the VMs once per day. We found that this frequency was enough to keep our build environment fresh.
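The daily respawn can be expressed as an ordinary cron entry on the controller host; the script name, schedule, and log path below are illustrative, not from the actual setup:

```
# Respawn the Anka worker VMs once per day at 03:00 (illustrative entry)
0 3 * * * /usr/local/bin/respawn-anka-workers.sh >> /var/log/anka-respawn.log 2>&1
```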


Summary

In summary, we use Veertu, Inc.’s Anka virtualization technology to achieve well-isolated build environments for macOS builds. The unique circumstances regarding Apple license agreements meant that we needed to design a custom solution that integrated the Macs into our current CI system with Concourse on AWS. Despite this challenge, we were able to launch Concourse Mac workers inside Anka VMs running on an on-site cluster of Mac minis. The VMs are connected to the Concourse ATC inside our private cloud using credentials that are dynamically provisioned by the Anka controller.

Overall, Anka has provided us with symmetry between our Linux and Mac builds in Concourse.

    iOS 12 USB pairing with Anka VM for iOS CI

In iOS 12 the lockdown procedure (mutual authentication of the USB host and the iOS device) was modified. This change affects the workflow of attaching iOS devices to Anka VMs with USB pass-through for iOS CI real-device testing using Anka Build or Anka Flow.

We have fixed this issue in Anka version 1.4.3 and also determined that a specific set of instructions is required to pair an iOS 12 device with the Anka VM template; once paired, the device can be used in a fully automated manner as part of CI.

Step 1 – If the device was previously connected to the host (on which the VM is going to run), remove the host record from the device, i.e., make the device forget the host:
  • Detach the device from the host.
  • On the device, select Settings -> General -> Reset -> Reset Network Settings.

Step 2 – Attach the device to the host and tap “Don’t Trust” in the dialog box that appears on the device.



    If a dialog box also appears on the host, don’t ‘trust’ the device on the host.



    Step 3 – In the terminal window on the host, claim the device with these commands.

    sudo anka usb list

    sudo anka usb claim -n ios12iphone iphone/location

    The “-n” flag associates a name with the claimed device. You can claim multiple devices connected to the same host under the same name, thereby creating a group.



    Step 4 – Attach the device to the VM.

    anka start -d iPhone VMNAME

    The trust dialog will appear again on the device. This time, tap ‘Trust’ to trust the VM,



    and allow access to the device in the VM (trust the device).



    Now the VM and the device are paired, and the device is accessible in Xcode and iTunes. You can detach the device, or suspend or stop the VM. Subsequent sessions of the VM, or of its clones on any other host, should silently connect the device without additional prompts.



    Important – To pair another iOS 12 device or other VMs, repeat Steps 2 – 4, making sure to choose “Don’t Trust” on the host on which you are performing the pairing. Contact us in our Slack channel with additional questions on how to use real devices with dynamically provisioned Anka Build VMs.

    Announcing Anka Build version 1.4.3 – Supports Docker inside Anka VMs

    Anka Build and Anka Flow version 1.4.3 are now available for all users. Key new features in version 1.4.3 include support for running Docker and other hypervisors inside Anka VMs, groups/clusters in an Anka Build macOS cloud, priority-based provisioning, and core-based licensing. For more details, see the version 1.4.3 release notes here.

    Run hypervisors inside Anka VMs

    Starting with version 1.4.3, you can run Docker and other hypervisors, such as Anka itself or the Android Emulator, inside an Anka VM. See the documentation to enable this feature on existing or newly created Anka VMs.

    Anka VM running Android Emulator and VirtualBox hypervisors



    Create groups/clusters to run VMs on specific Anka Build nodes (host machines)

    When your Anka Build host machines are a mix of different types, you can create groups of similar machine types and provision specific Anka VMs to execute certain iOS CI jobs on specific groups.

    You can also use the groups/clusters feature to assign predetermined capacity to different groups of the Anka Build iOS CI infrastructure. Grouping and priority-based provisioning are available in the Anka Build Enterprise tier.

    Groups



    Creating Groups



    Core Based Licensing

    Going forward, Anka Build licenses will be generated for the total cores in an Anka Build cloud instead of the total machines and machine types. This makes it easier to add, remove, and manage license keys across multiple machines.

    Machine/host-based licensing will be supported until the expiration of existing subscriptions to maintain compatibility with existing user setups. Sign up at https://veertu.com/getting-started-anka-trials/ to try Anka for iOS and macOS CI.

    Quick setup of Anka Build macOS cloud on Mac hardware

    In this blog post, we discuss how you can very quickly set up a macOS cloud using Anka Build exclusively on Mac hardware. The Anka Build cloud management components are packaged as Linux Docker containers, and the most common implementation architecture is to run them on a Linux instance. However, if you don’t have access to a Linux instance, you can set up a test/POC environment for Anka Build entirely on Mac hardware. The Mac hardware can be a single Mac machine (your MBP) or a cluster of Mac machines. We recommend this setup for quick proof-of-concept purposes, but you can use it for small-scale iOS CI operations.

    Setup on a single Mac machine

    Anka Build runs on all Mac hardware, including the latest 6-core 2018 Mac minis. Follow these steps to get started.
  • Step 1 – Download the Anka Build and Anka Build Controller and Registry for Mac packages from the Anka Build download page after trial registration. Note – Make sure there is enough free space on the Mac machine (at least 60 GB).
  • Step 2 – Install the AnkaBuild.pkg application on the Mac. https://ankadoc.bitbucket.io/getting-started/#anka-package-installation-and-upgrade-on-mac-hardware
  • Step 3 – Install the anka-controller-registry-mac-X.X.XX package. This package contains the management pieces of the Anka Build macOS cloud software. The controller is installed as a Docker container and the registry as a Mac application. Follow the instructions under Mac installation in https://ankadoc.bitbucket.io/getting-started/#registry-and-controller-installation-and-upgrade
  • Step 4 – Join the Anka Build node to the controller using the ankacluster join command. https://ankadoc.bitbucket.io/using-controller/. For controller-address[:port], use http://localhost:8090.
  • Step 5 – Create your macOS VM template on the Mac machine using the Anka Build CLI. https://ankadoc.bitbucket.io/creating-vms/#using-macos-installer-application-recommended-to-create-mojave-hisierra-sierra-vms.
  • Step 6 – Connect the Mac machine to the registry using the anka registry add [OPTIONS] REG_NAME REG_URL command. https://ankadoc.bitbucket.io/using-registry/#adding-a-new-registry. For REG_URL, specify the IP of your Mac; it is the same value that you specified in step 3 in the anka-controller Docker file’s ENV REGISTRY_ADDR variable.
  • Step 7 – Push your macOS VM template to the registry using the anka registry push command. https://ankadoc.bitbucket.io/using-registry/#pushing-vms-to-registry
  • Step 8 – You will now see this VM template under Templates in the Controller Portal dashboard. You have now completed the setup of a single-node macOS cloud and have one macOS VM template that you can use to provision on-demand VMs.
  • Step 9 – Go to the Instances page on the portal dashboard and test using “Create Instances”.
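Condensed into a terminal transcript, the command-line portion of the steps above looks roughly like this; the registry IP and port are placeholders (use the REGISTRY_ADDR value from your anka-controller Docker file), and the exact flags may differ between Anka versions:

```
$ sudo ankacluster join http://localhost:8090              # Step 4
$ anka registry add my-registry http://192.168.1.10:8089   # Step 6 (placeholder IP/port)
$ anka registry push mojave-ci                             # Step 7 (template name is illustrative)
```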


Setup on multiple Mac machines

    In this setup, it is assumed that you want to set up a multi-node Anka Build macOS cloud. You will install the AnkaBuild package on all the Mac machines.

    Install the anka-controller-registry-mac-X.X.XX package on only one Mac machine, where you want to run the cloud management services. Follow the same instructions as Step 3 outlined in the earlier section.

    Join all the Mac machines running the AnkaBuild package to the controller to create a multi-node cloud.

    Select one of the Mac machines to create VM templates and execute steps 5, 6, and 7 from the earlier section.

    You should now see multiple nodes in the portal dashboard.

    For additional questions, you can join the slack channel at https://slack.veertu.com/.

    T2 Chip and Anka co-existence

    Since the release of the new 2018 Mac minis, we have received several questions from users on how the T2 security chip included in the new Mac minis affects an Anka Build setup. This blog describes in detail how Anka’s technology foundation is agnostic to T2 and can co-exist with it.

    There is plenty written about the new T2 Mac security chip, but as a quick summary, let’s look at the areas where it impacts existing Mac administration and management workflows. The T2 security chip is a security processor that is in charge of securing the following components of Mac hardware.

  • It validates the boot process (secure boot).
  • It performs on-the-fly encryption of storage: full encryption with no performance loss, since there is no software encrypt/decrypt in the I/O path.
  • It protects the hardware from malicious usage scenarios with a hardware disconnect that disables the microphone whenever the lid closes.


    By default, all new Mac hardware, including the 2018 Mac mini and newer iMac Pros, has the T2 security chip enabled. So, what does this mean?

    1. Only certified operating systems, macOS and Windows, can boot on this hardware.

    2. NetBoot is not possible with Macs that have T2 hardware, even when it is disabled.

    How does this affect the existing workflows?

      Installing another OS on top of Mac bare metal
    Before the introduction of the T2 chip, it was possible to install software like ESXi on Mac hardware. Now, even after disabling T2, access to the internal SSD is still restricted, so booting from an external USB device is the only way to install another OS on the hardware. In this scenario, the OS can’t use the fast SSD inside the Mac and has to run on top of the slow USB device.

      Imaging
    NetBoot has primarily been used to implement imaging in order to manage a group of Macs. With NetBoot not working on the newer Mac hardware, it is impossible to image a large group of Macs consistently, which makes it quite challenging to set up and administer Mac infrastructure for iOS CI.

    Anka installation on newer Macs is not impacted by T2 being enabled, because Anka virtualization for Macs is built on top of macOS’s Hypervisor.framework. Anka works like any other Mac application: it runs with full security on while utilizing all of the Mac hardware resources, including the fast internal SSD, for maximum performance. Anka VMs can therefore run unaffected by T2 on the host machines.

    Anka VM on 2018 mac mini


    Anka Build is an alternative to explore if you wish to set up agile and scalable Mac infrastructure to provision immutable, container-like macOS VMs on demand for iOS/macOS CI. Anka Build now supports Mojave and APFS.

    iOS build and test in Anka Build VMs on the new 2018 Mac mini – no performance compromise compared to running natively on a Mac Pro or Mac mini

    The new Mac mini 2018, released this month with an 8th-generation Intel CPU going up to six cores and support for up to 64 GB of memory, is now a powerful contender for enterprise-grade macOS workloads. Here at Veertu, we had been looking forward to this new Mac hardware release to run Anka Build. Anka Build works out of the box on the new Mac mini hardware because it uses macOS’s Hypervisor.framework for virtualizing macOS VMs on top of macOS. What excites us is the performance of Anka Build on this new hardware and how it can change the game for iOS CI infrastructure. With the new, powerful CPUs, virtualization gets a boost, and we are seeing Anka Build VMs perform faster than the same workload running natively on the older twelve-core Mac Pro.

    An Anka virtualized build/test iOS workload on the new 2018 Mac mini runs faster than the same workload running natively on a twelve-core Mac Pro

    We took the open-source Kickstarter iOS application and executed an xcodebuild build-and-test job in one Anka VM with 6 vCPUs (HTT enabled) on the 6-core 2018 Mac mini, and also natively on a 12-core Mac Pro host. The Anka VM performed about four percent better than the native 12-core Mac Pro:
  • One Anka VM with 6 vCPU(HTT) on 2018 6-core mac mini – 1m21s
  • Bare metal Mac Pro 2013 with 12 cores (SSD) – 1m24s


    Multiple Anka virtualized build/test iOS workloads on the new 2018 Mac mini run as fast as the same workload running natively on a quad-core Mac mini

    Again, we took the same open-source Kickstarter iOS project and executed the xcodebuild build-and-test job in two concurrent Anka VMs with 6 vCPUs each on the 6-core 2018 Mac mini, and also natively on a quad-core Mac mini. The two concurrent Anka VMs performed the same as the native quad-core Mac mini:
  • Two Anka VM with 6 vCPU on 2018 6-core mac mini – 2m6s
  • Bare metal 4 core mac mini 2012 (2.3ghz SSD) – 2m4s


    We also noticed that concurrent Anka VMs on the new 2018 Mac mini perform significantly better than the same workload running natively on a dual-core Mac mini:
  • Two Anka VM with 6 vCPU on 2018 6-core mac mini – 2m6s
  • Bare metal dual-core 2014 (2.6 GHz, SSD) – 3m5s


    So, what’s our takeaway from this?
    1. DevOps and iOS software development teams are usually concerned about the performance impact of moving their iOS CI workloads to a virtualized environment. Since Anka Build virtualization on the new 2018 Mac minis performs better than or as well as native hardware (better than a native Mac Pro, significantly better than a native dual-core, and as good as a native quad-core), DevOps teams should explore running their workloads on an Anka Build macOS cloud on the new Mac minis.
    2. Most teams running virtualized environments for their iOS workloads are using Mac Pros (6 or 12 cores). They should explore gaining a significant increase in the performance of their existing virtualized workloads by running them on Anka Build on top of the new 2018 Mac minis.
    Contact us at info@veertu.com or https://slack.veertu.com/ to get more details on these benchmarks or with specific questions on running your iOS/macOS CI workloads.

    Anka Build Runner for GitLab CI

    Dynamically provision iOS build/test environments from GitLab CI on Anka Build macOS Cloud

    Anka Build software is used to configure an AWS-like macOS cloud on Mac hardware. Once the Anka Build setup is complete, DevOps teams can create multiple build/test images for their iOS CI and dynamically provision container-like instances on the Anka Build macOS cloud from these images.

    GitLab CI is part of GitLab and is used by many dev teams as their platform of choice for CI. A GitLab CI setup includes the GitLab Runner and build cloud infrastructure to provision build containers or VMs.

    Today, we are announcing the availability of the GitLab CI Runner for Anka Build. This enables teams using GitLab CI to execute their iOS CI pipelines on their Anka Build macOS cloud. Check out the detailed setup instructions at https://github.com/veertuinc/gitlab-runner.

    Setup and Architecture

    Anka Build Controller – the management module of the Anka Build cloud software.

    Anka Build Registry – a Docker-registry-like module to store, manage, and distribute Anka macOS images.

    Anka Nodes – the Mac hardware on which the Anka VMs are provisioned and on which build and test jobs execute.

    Anka GitLab CI Runner – an Anka Build plugin that communicates with the GitLab CI APIs and the Anka Build Controller.

    Steps
    1. Set up the Anka Build cloud on your Mac build and test hardware.
    2. Prepare your macOS build Anka image/VM template with project dependencies.
    3. Install git on it.
    4. Push the Anka image/VM template to the Anka registry.
    5. Set up and configure the GitLab CI Anka Runner. We recommend running it on the same instance where the Anka Build controller is running. You can set it up as a Docker container or install it manually. The GitLab CI Anka runner should be able to communicate with both your GitLab CI service/server and the Anka Build controller.
    6. Run the GitLab CI Anka Runner.


    The setup is now complete, and your GitLab CI iOS pipeline should dynamically provision ephemeral Anka VMs on the Anka Build cloud and execute builds.
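As a sketch of what a project pipeline might then contain, the .gitlab-ci.yml fragment below routes a job to the Anka runner; the runner tag, stage name, and xcodebuild arguments are placeholders that depend on how you registered the runner and on your project:

```yaml
stages:
  - test

ios_unit_tests:
  stage: test
  tags:
    - anka                      # must match the tag you gave the Anka runner
  script:
    - xcodebuild -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 8' test
```

Each such job runs in a freshly provisioned Anka VM, which is deleted when the job completes.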

    Project CI Pipeline in GitLab CI



    Anka GitLab Runner provisioning on-demand VM from an Image template on Anka Build macOS Cloud



    GitLab CI Pipeline job executing inside Anka macOS VM



    Job completed and the Anka macOS VM is deleted

    Shared storage or local storage based macOS Cloud for iOS CI

    Mobile development is now very much mainstream, and as a result the size and number of mobile development projects continue to grow. However, as projects and teams scale and the number of dependencies increases, ensuring a consistent and stable build for all developers while maintaining code and test quality is a much bigger challenge.

    Mobile CI Infrastructure Requirements

    For a mobile CI system to scale, it needs to enable management of project dependencies and of build and test environment dependencies, as well as faster build times. While strategies for scaling build systems vary widely across use cases, most implementations focus on ephemeral approaches to managing the mobile job environment. This means the use of self-contained, immutable build environments to ensure proper versioning and verified stability. For many of these environments, typically virtual machines or Docker containers, parity of performance becomes the chief concern. Management of these VMs and containers can prove challenging, however, and requires scalable architecture and a reduction in the number of system dependencies.

    Container and Container registry for iOS CI Cloud?

    In the container world, there are ample resources for building, scheduling, and deploying stateless applications and batch processes. As you know, containers cannot be used for the typical macOS CI use case, but there are lessons to take from this highly scalable technology. These include projects like Mesos, Kubernetes, and others, which enable easy container management across very large environments, on-premises or in public clouds like AWS or GCP. An additional hallmark of the container technology space is the use of registries to host base and additional container layers, adding continued scalability through incremental approaches to infrastructure and application development.

    When scaling out infrastructure, registries can be used to download a container as an artifact, enabling workloads to be executed using local system resources, no matter the resource need. Further, the layering and artifacting of images allow this scale to expand across massive pools of resources, executing computations and compiles in a more distributed fashion when required. Typically, containers downloaded by servers for running distributed compute, applications, or compiles can perform moderately to extremely well using local CPU, memory, and especially storage. By using local storage to perform I/O operations within the container, directly on disk, the performance of these containers remains lightweight and easily scalable. The Anka registry architecture is purpose-built around these concepts, and particularly well-suited for mobile CI systems. In contrast to this topology, the virtual machine world holds many different challenges.

    Storage setup for iOS CI Cloud

    Virtual machines, often used to manage large stateful applications, can be a bit more daunting to manage. Contrasted with containers, their on-disk size is typically far greater, often exceeding 20 GB for a simple base operating system layer. In the macOS world, these images are often more difficult to manage because of limitations or requirements of the Apple ecosystem. Add the additional dependencies, security tools, or large projects that must be included, and these images can exceed 30 GB, growing beyond that when leaving room for job or test execution, results exports, or caches to make builds faster. With a higher storage footprint, VMs become difficult to update and distribute at scale and typically generate significantly higher I/O for compile jobs and other tasks. In traditional virtual machine architectures, stateful applications running inside VMs are managed on a shared storage array, connected to the compute pool running the virtual machines through various high-speed network connections. These storage arrays can comprise any number of technologies, including traditional spinning disks, flash storage, or a combination of both with an in-memory cache as well.

    For the majority of use cases (like providing high availability and virtual machine migration when underlying hardware fails), this networked storage array can perform adequately for the workloads exhibited. However, in build and CI environments, the I/O patterns of project compiles and some test tasks can constrain these platforms and harm build performance. Because CI system scalability is achieved through a large number of ephemeral instances executing build and test environments, the compile workloads, in addition to VM distribution, can add considerable load to the shared storage layer regardless of technology. This is because communication with the filesystem takes place both on the filesystem and over the network, adding latency and reducing throughput compared to local disk.

    The Anka Registry allows for the download and distribution of VM images across a large number of underlying hosts. These hosts can be easily updated and can launch additional instances from local caches without requiring that the image be downloaded again. This architecture removes the network and storage dependencies during compiles run by virtual machines hosted on shared arrays, and allows mobile build systems to scale more easily and perform faster. If you want the simplicity and the performance of running your iOS CI environment atop local SSDs, Anka is the platform for you.

    Provisioning on-demand macOS virtual machines with Jenkins and Anka Build for iOS CI

    Guest Blog Post By Peter Wiesner, Senior Software Engineer @Skyscanner

    Every year Apple releases a new version of Xcode. CI systems for iOS application development need to adopt it, so developers can take advantage of the new iOS features. CI systems using Anka Build have a head-start here. For folks not familiar with Anka Build, read more details here.

    You only need to create a new tag of the current Anka macOS virtual machine with the new Xcode installed. Anka ships with features that help automate this process.

    In this blog post, I will show you how to do this. We will use the following solutions:
    • anka create to create the macOS virtual machine from a script
    • Jenkins pipeline to run the jobs on demand
    • anka run to execute batch commands on the virtual machines
    About the prerequisites:
    • Jenkins with pipeline plugin installed
    • a Mac with the anka-create label connected to Jenkins. We will use this native node to create and provision the virtual machine
    • the Anka Build package, version 1.4 (the current release), installed on the Mac
    Jenkins pipeline

    Let’s create a Jenkins pipeline project with the following script. The key tricks are:
    • use Jenkins parameters to provide configuration (Xcode version to install)
    • use a separate repo for the actual provisioning files
    • always save the state of the VM on pipeline failure so next iteration is faster
    pipeline {

        agent none

        options {
            timeout(time: 240, unit: 'MINUTES')
        }

        environment {
            // Provide credentials here (anka username/pwd, keychain passwords,
            // credentials for tool packages, etc.)
        }

        parameters {
            // Provide mutable configuration for scripts (anka VM name, anka tag,
            // Xcode version)
            string(name: 'ANKA_VM_NAME', defaultValue: 'anka_VM_node',
                   description: 'Name of the Anka VM')
            //...
        }

        stages {
            stage("anka-create") {
                agent {
                    node {
                        // There is a native mac connected to Jenkins with this label
                        label 'anka-create'
                    }
                }
                steps {
                    create_task()
                }
                post {
                    always {
                        // Create a VM that stores the state of the VM on exiting.
                        // When fixing failures, this can speed up the process a lot
                        sh """#!/bin/bash
                        if [ \$(anka list | grep current_state_vm | wc -l) == 1 ]
                        then
                            anka delete --yes current_state_vm
                        fi
                        anka stop \$ANKA_VM_NAME
                        anka clone -c \$ANKA_VM_NAME current_state_vm
                        if [ \$(anka list | grep \$ANKA_VM_NAME | wc -l) == 1 ]
                        then
                            echo "Deleting Anka VM: \$ANKA_VM_NAME"
                            anka delete --yes \$ANKA_VM_NAME
                        fi
                        """

                        cleanWs()
                    }
                }
            }
        }
    }
    
    def checkout_provisioning_scripts() {
        // It is useful to decouple the actual provisioning script from this
        // pipeline script: it makes updates easy and helps reading the pipeline script
        // Clone the provisioning script from a GitHub repo
        timeout(15) {
            checkout([$class: 'GitSCM',
                branches: [[name: 'master']],
                extensions: [[$class: 'CloneOption', depth: 1, noTags: false,
                              shallow: true, timeout: 30]],
                userRemoteConfigs: [[name: 'origin',
                                     refspec: '+refs/heads/*:refs/remotes/origin/*',
                                     url: 'SSH_GIT_URL']]
            ])
        }
    }

    def create_task() {
        cleanWs()
        credentials.withAllCredentials {
            checkout_provisioning_scripts()
            // Once we have cloned the repository, print the environment variables
            // to see if job parameters and variables are correctly passed to this task
            // provision.sh comes from the repo
            sh """#!/bin/bash
            printenv
            sh ./provision.sh
            """
        }
    }
    


    Creating the VM

The Jenkins pipeline calls out to a provision script, which is responsible for driving anka during provisioning. The key tricks are:
• collecting all necessary packages beforehand
• allowing the script to use a previously saved virtual machine as the starting point (`anka create` can take up to 20 mins)
    #!/bin/bash

    # Let's fail on error
    set -e

    # Retrieve the macOS installer and necessary packages
    # It's beneficial to store them on a NAS that can be connected to the mac node

    # We can start the process from a previous state of the VM if we would like to
    # This saves a lot of time
    if [ "$START_FROM_BEGINNING" -eq 0 ]
    then
        # Create VM from the installer on the NAS
        ANKA_DEFAULT_USER=$USER_NAME anka --debug create --ram-size $VM_RAM \
            --cpu-count $VM_CPU --disk-size $VM_DISK \
            --app "$PATH_TO_OS_INSTALLER" $ANKA_VM_NAME

        anka modify $ANKA_VM_NAME add port-forwarding \
            --host-port 0 --guest-port 22 --protocol tcp ssh
    else
        # Clone VM
        anka clone current_state_vm $ANKA_VM_NAME
    fi

    # Iterate on the setup/install scripts
    for file in $files_to_execute
    do
        # Run script as root
        anka run --env $ANKA_VM_NAME sudo -E bash $file

        # Run script as the created anka user
        #anka run --env $ANKA_VM_NAME sudo -E -H -u $USER_NAME bash $file
    done

    # Stop the VM after we are done, then start it up and suspend it,
    # so CI can use the super fast start-up feature of Anka VMs
    anka stop -f $ANKA_VM_NAME
    anka run $ANKA_VM_NAME ls
    anka suspend $ANKA_VM_NAME

    # Save the new baseline virtual machine
    anka registry -a $REGISTRY_IP_ADDRESS push $ANKA_VM_NAME -t $ANKA_TAG

Installing necessary tools

The actual provisioning happens in the `for` loop, where we use `anka run`. These scripts install tools such as Xcode, the Xcode Command Line Tools, Ruby, git, or any other packages. The content and order of the scripts are company/case specific, but let me list some general tricks. Keep in mind that when running the `anka run` command, the current working directory is mounted into the VM, so files can be accessed with a relative path (sweet!).
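For illustration, the helper below composes the `anka run` invocation used for each script in the loop; it is a dry-run sketch (the VM name and script path are hypothetical), and thanks to the working-directory mount a relative path is all the script argument needs.

```shell
#!/bin/bash
# Dry-run helper that composes the 'anka run' invocation from the loop
# above. It only prints the command; vm/script values are illustrative.
anka_run_cmd() {
  local vm="$1" script="$2"
  # --env forwards environment variables; sudo -E preserves them as root.
  echo "anka run --env ${vm} sudo -E bash ${script}"
}

# A relative path works because the cwd is mounted inside the VM:
anka_run_cmd build_vm scripts/install_ruby.sh
```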

    Installing certificates

$P12_PATH contains the path to the exported .p12 file.

    echo $USER_PWD | sudo -S security import $P12_PATH -k "/Library/Keychains/System.keychain" -P "$P12_PASSWORD" -A
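One companion step worth mentioning (an assumption on my part, not from the original post): CI runs are non-interactive, so the keychain typically has to be unlocked before the imported identity can be used for signing. The dry-run helper below only prints the `security unlock-keychain` command it would run; the keychain path is a placeholder.

```shell
#!/bin/bash
# Dry-run sketch of unlocking a keychain for non-interactive CI use.
# The helper echoes the command instead of executing it; in a real run
# you would substitute the user's password for the literal $USER_PWD.
unlock_cmd() {
  local keychain="$1"
  echo "security unlock-keychain -p \$USER_PWD ${keychain}"
}

unlock_cmd "/Library/Keychains/System.keychain"
```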
    Installing Xcode CLI

    $XCODE_CLI_PATH contains the path to the Xcode Command Line Tools package location.
    MOUNTDIR=$(echo `hdiutil mount $XCODE_CLI_PATH | tail -1 | awk '{$1=$2=""; print $0}'` | xargs -0 echo)
    echo $USER_PWD | sudo -S installer -pkg "${MOUNTDIR}/"*.pkg -target /
    hdiutil unmount "${MOUNTDIR}"
    Installing Xcode

You need to install the xcode-install gem for this. $XCODE_APP_VERSION contains the version number of the Xcode you want to install (like 10.0). $XCODE_REMOTE_URL is a URL from which the Xcode .xip can be downloaded (it is worth downloading it once and uploading it to a private remote server for faster downloads).
    xcversion install $XCODE_APP_VERSION --url="$XCODE_REMOTE_URL" --verbose
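After the install finishes, a quick sanity check helps catch a broken Xcode before the VM is pushed to the registry. This is a sketch of such a check (assuming `xcodebuild` is on PATH inside the VM once Xcode is installed); it degrades gracefully on machines without Xcode.

```shell
#!/bin/bash
# Post-install sanity check: print the installed Xcode version, or a
# fallback message when xcodebuild is not available on this machine.
check_xcode() {
  if command -v xcodebuild >/dev/null 2>&1; then
    xcodebuild -version
  else
    echo "xcodebuild not found"
  fi
}

check_xcode
```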

    Build cache for faster iOS builds with Jenkins CI and on-demand macOS VM slaves

Most existing iOS CI environments are configured to run build and test jobs directly on Mac hardware. In such installations with large projects, data like Xcode derived data and the code repository get cached on the hardware, and subsequent builds use these caches. In this blog, we will describe how you can set up your Anka Build macOS cloud in a Jenkins CI environment to build caches inside the VM and use them for on-demand provisioned Jenkins iOS CI slaves. This provides the advantages of isolated, reproducible environments together with build caches for faster build times.

We have implemented a Jenkins plugin called the “Anka Slave Template Prepare” plugin. It enables DevOps engineers to load and update build caches on existing Anka macOS VM templates in an automated manner. They can then use the Anka Build Cloud Jenkins plugin and set up their primary iOS CI Jenkins jobs to always run using the VM templates pre-loaded with caches.

    Visit https://ankadoc.bitbucket.io/using-jenkins/#using-the-anka-slave-template-prepare-plugin for detailed documentation to configure the plugin.

    Anka Slave Template Prepare Jenkins Plugin



    Post Build Action Configuration