Jenkins Out, GitHub Actions In

How We Made the Leap

(Plone-focused CI/CD modernization)
Lots of Plone products, one aging Jenkins box...
Time to evolve.

PloneConf 2025 · PyCon Finland · iMio

Who?

Benoît

DevOps Engineer at iMio · 15 years in Plone & open source
Automation, Docker, Kubernetes, IaC
Plone contributor, Plone Foundation member
GitHub: bsuttor

Rémi

16 years in municipal IT · SmartWeb @ iMio since 2022
DevOps since 2024
Open-source & learning mindset
GitHub: remdub
PloneConf & PyCon Finland 2025 • iMio • Jenkins Out → GitHub Actions In
iMio garçon

What is iMio?

  • Public company in Belgium
  • Provides IT services to ~400 local authorities
    • Municipalities, Public Centres for Social Welfare, provinces, police zones, rescue zones
  • 11 different applications → 1200+ instances
  • Python is in our DNA
    • Plone, Odoo and Django projects
  • Our missions
    • Mutualise IT solutions
    • Support digitalisation

Agenda

  1. Context & Legacy Pain
  2. Why Migrate?
  3. Strategy & Process
  4. Technical Architecture
  5. Demo: Deployment Flow
  6. Fun Fact
  7. Future / WIP
iMio calendar

Throwback: PloneConf 2022

How we created, deployed and updated over 200 websites at iMio with no downtime.

Key difference today:

  • 2015: GitHub Actions did NOT exist
  • Ecosystem maturity (2022 → 2025): composite actions, Actions Runner Controller (ARC)
iMio server

The Legacy Setup

  • Single physical server (Ubuntu 14.x)
  • Jenkins + a lot of plugins
  • Groovy pipelines of... varying quality
  • Hard to upgrade (plugin dependency hell)
  • Credential sprawl
  • Snowflake state (originally deployed with infrastructure as code, but no longer maintained)

Risk ↑ / Confidence ↓ / Bus factor = 1.5

Fun facts

Why Migrate? (High-Level)

  • Jenkins server was on its last legs
  • Consolidate around where code lives (GitHub)
  • Align with Plone community practices
  • Remove plugin fragility
  • Horizontal scale via Kubernetes
  • Isolation per job
iMio interrogation
iMio interrogation

Why Not GitLab CI?

We already had GitLab internally BUT

  • All Plone products already on GitHub
  • Would require migration, retraining, and changes to developers' local setups (& minds 🤡)
  • Marketplace ecosystem (actions)
  • ARC (actions-runner-controller) maturity

Timeline

Migration phases

Talking to teams

Questions we asked:

  • Does the current workflow still suit your needs?
    • What triggers the CI/CD?
  • How would you improve it?
  • Should we keep explicit environments: staging / prod?
  • Which metrics do you need?
iMio meeting

The 'gha' Repository

Reusable composite actions (examples)

  • Build and push a Docker image
  • Run Plone tests
  • Call a Rundeck job
  • Build a deb package
  • Release a Helm chart
  • Notify on Mattermost

Encapsulate complexity

→ Keep workflows thin
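As a sketch (the action path and inputs here are hypothetical, not iMio's actual names), a thin caller workflow that delegates all build/push complexity to a composite action from the shared repo could look like:

```yaml
# Hypothetical caller workflow: the complexity lives in the shared composite action
name: Build and push image
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v4
      # Composite action from the central 'gha' repo (path and inputs are illustrative)
      - uses: IMIO/gha/build-push-docker@main
        with:
          image-name: my-plone-app
          registry: ghcr.io
```

The same pattern applies to the other actions listed above: tests, deb packages, Helm charts, notifications.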

github-gha
argocd

Runner Strategy

  • Self-hosted via ARC
    • Auto-scaling ephemeral runners (security + cleanliness)
    • Resource quotas per namespace
    • Fast spin-up (prebaked image)
    • Same network (reach internal services / servers)
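A minimal sketch of how such a setup can be declared with ARC's gha-runner-scale-set Helm chart (org URL, secret name, image tag, and limits are illustrative values, not our production config):

```yaml
# Hypothetical values.yaml for an ARC runner scale set
githubConfigUrl: https://github.com/IMIO
githubConfigSecret: github-app-secret   # pre-created Kubernetes secret
minRunners: 0        # scale to zero when idle (ephemeral runners)
maxRunners: 10       # bounded by the namespace resource quota
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/imio/actions-runner:latest  # prebaked image for fast spin-up
        command: ["/home/runner/run.sh"]
        resources:
          requests:
            cpu: "1"
            memory: 2Gi
```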

Runner Docker Image

Includes

https://github.com/IMIO/docker-bases/actions-runner
  • Python (multiple versions)
  • Plone buildout deps (C libs: libxml2, libjpeg, zlib...)
  • Structured cache directories (/cache/buildout, pip)
  • Versioned & scanned (Trivy)
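For example, a workflow step can reuse those directories with actions/cache (the paths and cache key below are a sketch assuming a buildout-based project):

```yaml
# Hypothetical caching step reusing the pre-created cache directories
- uses: actions/cache@v4
  with:
    path: |
      /cache/buildout
      ~/.cache/pip
    key: buildout-${{ runner.os }}-${{ hashFiles('**/*.cfg') }}
    restore-keys: |
      buildout-${{ runner.os }}-
```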
imio-container
Branch / Deploy Flow imio-container
release-process
Demo imio-demo

Rundeck Jobs (Legacy Tie-In)

  • Some long-running operations still live in Rundeck:
    • Docker image pulls
    • Instance reboots
    • Upgrade steps
  • GHA triggers them via REST API
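Such a trigger is typically a single workflow step against Rundeck's job-run endpoint (host, API version, job id, and option names below are placeholders, not our real values):

```yaml
# Hypothetical workflow step triggering a Rundeck job over the REST API
- name: Trigger Rundeck job
  run: |
    curl --fail -X POST \
      -H "X-Rundeck-Auth-Token: ${{ secrets.RUNDECK_TOKEN }}" \
      -H "Content-Type: application/json" \
      -d '{"options": {"image_tag": "${{ github.ref_name }}"}}' \
      "https://rundeck.example.org/api/41/job/<JOB-ID>/run"
```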
rundeck

Observability

Includes

  • Lightweight Mattermost notifications: short status + link (no noisy full logs)
  • Raw Actions logs in the GitHub web UI
  • Plone logs: via Rundeck
  • Container-level metrics (Prometheus + Grafana)
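The short-status Mattermost notification can be a single curl against an incoming webhook (the secret name and message format are illustrative):

```yaml
# Hypothetical notification step: short status + link, no full logs
- name: Notify Mattermost
  if: always()
  run: |
    curl --fail -X POST -H 'Content-Type: application/json' \
      -d "{\"text\": \"${{ github.workflow }}: ${{ job.status }} | ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\"}" \
      "${{ secrets.MATTERMOST_WEBHOOK_URL }}"
```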
imio-container
imio-onfire

Fun Fact (Timing Was Perfect)

  • The physical Jenkins server died (disk failure)
  • BEFORE the migration completed
  • No data salvage possible (but none was needed)
  • Migration ROI validated instantly 🙂

Current WIP / Future

  • Shared reusable workflows (org-level)
  • Kubernetes-native Plone (full containerization & scaling)
  • Align with plone/meta best practices
  • Dashboard replacing noisy Mattermost spam
imio-architect
imio-ripjenkins
Farewell Jenkins, you won’t be missed.
Thank you
Any questions?

Feel free to scan the QR code to download the slides

imio-ripjenkins

Resources


Thank you for being here. We are going to talk about our journey from Jenkins to GitHub Actions.

Benoît & Rémi

Rémi

Benoît

Benoît: I gave a talk 3 years ago about how we deployed our instances. We still use the same process, but no longer the same tools. The main difference compared with 2022 is that GitHub Actions did not exist in 2015, when we started using Jenkins.

Benoît: With our old Jenkins setup, we had a single physical server. It was very old and was deployed, at the beginning, with our infrastructure as code, Puppet. But over the years and new versions of Puppet, it was no longer synchronized with our Puppet code because of, basically, the plugins. Also, for convenience, everybody was admin. I was the only one who maintained the server.

Rémi: When we told our sysadmins that we were doing a talk about Jenkins, here was Cédric's reply...

Rémi: This brings us to the next slide: why migrate?

Rémi: We had two obvious choices: GitHub Actions or GitLab CI.

Benoît: So in 2021 we started using GitHub Actions on some repos for testing our code, like the community does. By 2024 we had Jenkins, GHA, and GitLab CI in our infrastructure, and we chose to remove one of the three... This was the beginning of the end for Jenkins. In June, we made an inventory of all Groovy-based Jenkins pipelines. In July, we deployed Docker-based runners on our fresh new Kubernetes cluster and created a central GHA repo. In August, Jenkins and GitHub Actions ran in parallel, but GHA did not deploy anything. In September, we made the switch and decommissioned Jenkins. It was a quick migration: only 3 months.

Benoît: One of the big parts was verifying whether our "old" deployment steps were still used and still suited our needs. We had some questions to ask ourselves.

Rémi: Keep workflows thin and easy to understand.

Rémi: We host the runners ourselves and orchestrate them with the Actions Runner Controller, so we can benefit from auto-scaling, and so on.

Benoît: We now have 3 or 4 different actions-runner images, used depending on our needs. One for Plone with all batteries included, by which I mean the libraries. Another one to ...

Benoît: Here we see what devs have to understand to deploy an app. We chose to use zest.releaser because we already work with this package to release our eggs, so devs know zest.releaser and use it often.

Benoît & Rémi: Same as Jenkins:
- Staging: every merge to `main` auto-deploys to staging instances (copies of some prod instances)
- Prod: only an annotated tag on `main` + schedule (3 AM the next day)
- Rollback: git tag revert + redeploy (immutable images retained) (! database)
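The triggers described in this note can be sketched as a workflow `on:` block (the cron time and tag pattern are illustrative):

```yaml
# Hypothetical trigger block: main -> staging, tags/schedule -> prod
on:
  push:
    branches: [main]     # every merge to main auto-deploys to staging
    tags: ["*"]          # annotated release tags deploy to prod
  schedule:
    - cron: "0 3 * * *"  # scheduled prod rollout at 3 AM
```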

Rémi: As you saw in the demo, the last step is often a call to a Rundeck job. This lets us run the operations needed to promote a new app version, like image pulls, instance reboots, and so on. FYI, this will be deprecated when we migrate to Kubernetes.

Rémi: During the process, we kept an eye on observability, with things like Mattermost notifications, Actions logs, Plone logs, and so on.

Benoît: One week before the end of the migration, a disk crashed on our old OVH server (11 years old). We were not able to recover it.

Benoît & Rémi

Benoît & Rémi

END