When transitioning to a microservice architecture, it is crucial to have a reliable and effective source control system in place. By combining this system with automation pipelines, a whole new world of possibilities opens up. Given that HBPS prioritizes both functionality and security, and strongly believes in data sovereignty through self-hosting, the obvious choice for us was GitLab. We are not against using public cloud services, but we believe that hosting your intellectual property in a public cloud increases the associated risks. So why do it when you have the infrastructure to deploy it on-premises?
We take pride in solving complex problems with simple solutions. However, don’t mistake GitLab for being simplistic: it offers a comprehensive implementation of every aspect of a complete source control system, which made it the clear choice. Our philosophy is to keep things as simple as possible while still solving the problem at hand. Why opt for a source control system that needs an external pipeline handler, package registry, and container registry when you can get all of these components in one place? Let’s not add unnecessary complexity. It’s as simple as that.
For this particular project, we had to consider that the customer currently uses Bitbucket, Jira, and, for their pipelines, primarily Jenkins. Eventually all of these systems will need to be migrated: repositories into GitLab and Jenkins jobs into GitLab CI/CD pipelines, which means additional work. However, we chose to focus on the positive aspects. Having everything consolidated in one secure on-premises location made it all worthwhile.
Since GitLab comes with a built-in container registry, it made sense to use Docker containers for testing code in our pipelines. The customer already uses Ansible roles to deploy virtual machines in their environments, so we could reuse those same roles within the pipelines to build Docker containers tailored to each project’s requirements. For example, we can build a container with Java 11 and a specific version of Maven, so that it resembles the virtual machine the code will eventually be deployed on (at least for now). Remember, microservices aren’t built in a “big bang” approach; along the way, interim plans are necessary to address the current problem.

Now, back to the containers. We can create Docker containers that match the customer’s existing environment, ensuring that code tested in the pipeline won’t break when it is deployed to production. These containers are built in a central project (repository) and added to that project’s container registry. In their CI/CD pipelines, developers can then pull the specific image they need to execute their code. Since this container registry is shared, any user granted access to GitLab via LDAP can use the containers within it.
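As a minimal sketch of how such a central image project could publish to its own registry, the job below builds and pushes a hypothetical Java 11 + Maven image. The job name, Docker versions, and directory layout are illustrative assumptions, not the customer’s actual configuration; the `CI_REGISTRY*` variables, however, are predefined by GitLab CI/CD.

```yaml
# .gitlab-ci.yml in the central "build images" project (illustrative sketch).
# CI_REGISTRY, CI_REGISTRY_IMAGE, CI_REGISTRY_USER, and CI_REGISTRY_PASSWORD
# are predefined GitLab CI/CD variables; everything else is an assumption.
build-java11-maven:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/java11-maven:latest" docker/java11-maven/
    - docker push "$CI_REGISTRY_IMAGE/java11-maven:latest"
```

Other projects would then reference the shared image in their own jobs with something like `image: "<registry-host>/<central-project-path>/java11-maven:latest"`, with access controlled through the same LDAP-backed GitLab permissions.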
Although GitLab offers integration with various external systems, we believe it’s generally best to use the tools provided within the product. This approach improves long-term maintainability, and even if an external vendor drops its GitLab integration, the customer won’t be left with failing pipelines. Therefore, instead of reusing their existing Jenkins pipelines in GitLab, we decided that rewriting all the pipelines as GitLab’s native pipeline definitions would be the best course of action.
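To give a feel for what such a rewrite involves, a typical Jenkins build/test pipeline maps onto GitLab’s native stage-and-job model roughly as follows. This is a sketch only; the job names and the Maven image tag are assumptions, not the customer’s real pipeline.

```yaml
# Illustrative .gitlab-ci.yml replacing a Jenkins build/test pipeline.
stages:
  - build
  - test

build:
  stage: build
  image: maven:3-eclipse-temurin-11   # assumption: a shared Java 11 + Maven image
  script:
    - mvn -B package -DskipTests
  artifacts:
    paths:
      - target/*.jar

test:
  stage: test
  image: maven:3-eclipse-temurin-11
  script:
    - mvn -B test
```

Because stages, jobs, artifacts, and the runner environment are all declared in one file in the repository, the pipeline definition is versioned alongside the code it builds.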
One of the key advantages of a microservice architecture is the ability to continuously and rapidly deploy changes. In monolithic systems, deploying updates often requires scheduled downtime for the entire system, impacting even the parts not being upgraded at that time. This rapid deployment approach is commonly known as Continuous Integration/Continuous Delivery (CI/CD), and any modern source control system must support it. GitLab excels in this area by providing all the necessary components within a single solution. A proper pipeline allows developers to make code changes in their preferred Integrated Development Environment (IDE) and create the appropriate merge request, after which the pipeline takes care of deploying the code to production. Naturally, this pipeline is a collaborative effort involving developers, DevOps engineers, and system administrators. The significance of a pipeline lies in ensuring that the code deployed to production is functional, secure, and maintainable. By setting up the pipeline correctly, all these requirements can be met, especially when incorporating features like Static Application Security Testing (SAST), container scanning, and functional testing.
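GitLab ships ready-made templates for several of the security features mentioned above, which can be pulled into a pipeline with a few lines of `include:`. The sketch below shows the idea; the two template paths are GitLab’s standard bundled templates, while the functional test job and its Maven image are hypothetical.

```yaml
# Illustrative sketch: enabling GitLab's bundled security scanners
# alongside a hypothetical functional test job.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml

stages:
  - test

functional-tests:
  stage: test
  image: maven:3-eclipse-temurin-11   # assumption: shared Java 11 + Maven image
  script:
    - mvn -B verify
```

The included templates add their own scanning jobs to the pipeline, so SAST and container scanning run on every merge request without each project having to define them from scratch.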
Your source control system serves as the foundation upon which all your production systems run. Choosing the right system and ensuring that all required functionality is available within it is crucial.