Our encapsulated builds give us a lot of flexibility in where we run them.
I previously floated the idea of not needing Jenkins at all, perhaps triggering an AWS Lambda using git hooks.
While digging around the Lambda options, I started thinking ahead to the process of retrieving the repo contents and all the associated hassle.
Then I had an even better idea: GitHub Actions!
Ten minutes later I had it working. Well done, GitHub.
Let’s take a look at the process from start to finish.
1 – Push a commit to your GitHub application repo, in my case this sample app
2 – A file in the repo configures the GitHub actions to run when a push or PR happens.
There’s a huge library of actions available to choose from here, but you must channel your inner Odysseus and resist temptation. We’ve only just wrested control of our pipelines back from random servers.
All we need is to instruct it to check out the project and run the Maven wrapper against our pipeline pom. From that point onwards, the rest will be handled by our encapsulated and fully portable build.
This could be optimised a bit with a sparse checkout: we only need the pipeline module, which will handle checking out the full repo itself.
name: CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run the build
        run: ./mvnw -f pipeline/pom.xml compile
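The sparse-checkout optimisation mentioned above could look something like the following. This is an untested sketch using plain git commands in place of the checkout action; `GITHUB_REPOSITORY` and `GITHUB_SHA` are standard environment variables provided by the Actions runner, and `.mvn` is included so the Maven wrapper still works (cone-mode sparse checkout keeps top-level files like `mvnw` automatically):

```yaml
steps:
  - name: Sparse checkout of the pipeline module
    run: |
      git clone --no-checkout --filter=blob:none "https://github.com/${GITHUB_REPOSITORY}.git" .
      git sparse-checkout set pipeline .mvn
      git checkout "${GITHUB_SHA}"
  - name: Run the build
    run: ./mvnw -f pipeline/pom.xml compile
```

The blob filter means only the objects actually needed for the `pipeline` module are downloaded, which should keep the runner's checkout small even as the application repo grows.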
3 – pipeline/pom.xml uses the exec-maven-plugin to compile and execute our main Pipeline class. The `<classpath/>` element expands to the module's full runtime classpath, so the plugin effectively runs `java -classpath <module classpath> co.databeast.pipeline.Pipeline`.
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>3.0.0</version>
      <executions>
        <execution>
          <phase>compile</phase>
          <goals>
            <goal>exec</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <executable>java</executable>
        <arguments>
          <argument>-classpath</argument>
          <classpath/>
          <argument>co.databeast.pipeline.Pipeline</argument>
        </arguments>
      </configuration>
    </plugin>
  </plugins>
</build>
4 – Pipeline.java uses Conveyor to define all the remaining stages, jobs and tasks that make up our build.
public class Pipeline {

    public static final String REPOSITORY_URI = "https://github.com/Davetron/sample_multi_module.git";

    public static void main(String[] args) {
        String repository = System.getProperty("repository", REPOSITORY_URI);
        conveyor("co.databeast.pipeline.Pipeline",
            stage("Build",
                job("Application Build",
                    gitClone(repository),
                    maven("install")
                )
            )
        ).start();
    }
}
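The `repository` system property gives the pipeline a small escape hatch: the same build can be pointed at a fork without editing the class. A minimal standalone sketch of that lookup pattern (the `RepositoryConfig` class name is mine, not part of Conveyor):

```java
public class RepositoryConfig {

    static final String DEFAULT_REPOSITORY =
            "https://github.com/Davetron/sample_multi_module.git";

    // Same lookup Pipeline uses: a -Drepository=<url> flag overrides the default
    static String repository() {
        return System.getProperty("repository", DEFAULT_REPOSITORY);
    }

    public static void main(String[] args) {
        // e.g. java -Drepository=https://github.com/you/your-fork.git RepositoryConfig
        System.out.println("Building from: " + repository());
    }
}
```

Note that because exec-maven-plugin's `exec` goal spawns a separate `java` process, a `-D` flag passed to Maven itself won't automatically reach the Pipeline class; it would need to be forwarded as an extra `<argument>` in the plugin configuration.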
Now we have a way to define our builds in a portable manner that doesn’t rely on any specific servers.
I expect it would be fairly straightforward to get this working on AWS Lambda or perhaps Google Cloud Functions, though I’m not sure the latter gives us access to a shell to run the Maven wrapper.
For previous entries in this series see:
1 – Breaking Down Barriers
2 – Pipelines as code…not text
3 – Pipelines as code (part 2)…the API
4 – Conveyor (part 3): Encapsulated Builds
5 – Conveyor (part 4): Where the rubber meets the road