How to build a continuous integration and continuous deployment pipeline for your enterprise middleware platform

With the rise of microservice architecture (MSA), continuous integration (CI) and continuous deployment (CD) have become mainstream processes within enterprises. Those familiar with microservice architecture will no doubt have heard of greenfield and brownfield integrations: in around 80% of cases, users start their microservices journey either from scratch or from an existing enterprise architecture.

According to a recent survey from Lightstep, more and more organisations are moving ahead with microservices architecture, even though they accept that it is hard to maintain and monitor.

Moreover, the survey highlights that the advantages of MSA outweigh the disadvantages. The same goes for CI/CD, which is a concept tightly coupled with MSA and with adopting a DevOps culture.

Due to the dominance of MSA, CI/CD has become an essential part of every software development lifecycle within enterprises. With this shift towards MSA, DevOps, and CI/CD, the other parts of a brownfield integration cannot be left out of these waves. These include:

  • Enterprise Middleware (ESB/APIM, Message Broker, Business Process, IAM products)
  • Home grown software
  • Application Server (Tomcat, WebSphere)
  • ERP/CRM software (mainly COTS systems)

This said, it’s not always practical to implement CI/CD processes for every software component. Therefore, it’s important to look at alternative ways of leveraging the advantages of a CI/CD process within enterprise middleware components.

Leveraging CI/CD processes within enterprise middleware components

Let’s start with one of the most common enterprise middleware products: the Enterprise Service Bus (ESB), which provides the central point that interconnects heterogeneous systems within an enterprise and adds value to your enterprise data through enrichment, transformation, and many other functions. One of the main selling points of ESBs is that they are easy to configure through high-level Domain Specific Languages (DSLs) like Synapse, Camel, etc.

To integrate ESBs with a CI/CD process, two components need serious consideration: the ESB configurations that implement the integration logic, and the server configurations that install the runtime in a physical or virtualised environment.

Of the two components, the ESB configurations go through continuous development and change more frequently, so automating their development and deployment is far more critical. That’s because going through a develop, test, deploy lifecycle manually for every minor change takes a lot of time and results in many critical issues if you don’t automate it.

Another important aspect of automating the development process is the assumption that the underlying server configurations are not affected by these changes and remain the same. It is best practice to make this assumption, because having multiple variables makes it very hard to validate the implementations and complete the testing. The process automates the development, testing, and deployment of the integration components as follows (a sample pipeline sketch appears after the list):

  1. Developers use an IDE or an editor to develop the integration components. Once they are done with the development, they will commit the code to GitHub.
  2. Once this commit is reviewed and merged to the master branch, it will automatically trigger the next step.
  3. A continuous integration tool (e.g. Jenkins, Travis CI) will build the master branch, create a Docker image containing the ESB runtime and the built components, and deploy it to a staging environment. At the same time, the build artefacts are published to Nexus so that they can be reused when doing product upgrades.
  4. Once the containers are started, the CI tool will trigger a shell script to run the Postman scripts using Newman, which is installed on the test client.
  5. Tests will run against the deployed components.
  6. Once the tests have passed in the staging environment, Docker images will be created for production and deployed to the production environment.
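
To make this concrete, here is a rough sketch of what such a flow could look like as a Jenkins declarative pipeline. It is only an illustrative outline, not a vendor-specific implementation: the image name, registry, deployment scripts, and test collection path are hypothetical placeholders, and the build steps assume the ESB artefacts are built with Maven.

    pipeline {
        agent any
        environment {
            // Hypothetical registry and image name for the ESB integration image
            IMAGE = "registry.example.com/esb-integration"
        }
        stages {
            stage('Build integration artefacts') {
                steps {
                    // Build the ESB configuration artefacts (assumes a Maven-based build)
                    sh 'mvn clean package'
                }
            }
            stage('Publish artefacts to Nexus') {
                steps {
                    // Publish build artefacts so they can be reused during product upgrades
                    sh 'mvn deploy -DskipTests'
                }
            }
            stage('Build and push Docker image') {
                steps {
                    // Bundle the ESB runtime and the built components into a Docker image
                    sh 'docker build -t $IMAGE:$BUILD_NUMBER .'
                    sh 'docker push $IMAGE:$BUILD_NUMBER'
                }
            }
            stage('Deploy to staging') {
                steps {
                    // Placeholder deployment script for the staging environment
                    sh './scripts/deploy-staging.sh $IMAGE:$BUILD_NUMBER'
                }
            }
            stage('Run tests') {
                steps {
                    // Run the Postman collection with Newman against the staging deployment
                    sh 'newman run tests/esb-tests.postman_collection.json'
                }
            }
            stage('Promote to production') {
                steps {
                    // Deploy the tested image to production (placeholder script)
                    sh './scripts/deploy-production.sh $IMAGE:$BUILD_NUMBER'
                }
            }
        }
    }

In a real setup, the production stage would typically be gated on the staging tests passing or on a manual approval step.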

Automating the update of the server runtime component

Although the above process can be followed for the development of middleware components, the runtime versions themselves receive patches, updates, and upgrades quite frequently because of customer demands and the number of features these products carry. Therefore, automating the update of the server runtime component must also be given serious thought when applying this approach.

Vendors tend to provide updates, patches, and upgrades in three main ways:

  • Updates as patches which need to be installed, after which the running server is restarted
  • Updates as in-flight updates which will update (and restart) the running server itself
  • Updates as new binaries which need to replace the running server

Depending on how you receive the updates, you need to align your CI/CD process for server updates accordingly. This process will run less frequently than the development process described above.

CI/CD process flow for server updates

Outlined below is the process flow (a sketch of the corresponding pipeline follows the steps):

  1. One of the important aspects of automating the deployment is to extract the configuration files and turn them into templates that can be populated through an automated process.
  2. When a configuration change, update, or upgrade is required, it triggers a Jenkins job which takes the configurations from GitHub, and the product binaries (if required), product updates, and ESB components from a Nexus repository maintained within your organisation. From these files, a Docker image is created.
  3. This Docker image is deployed to the staging environment and the containers are started according to the required topology or deployment pattern.
  4. Once the containers are started, the test scripts (Postman) are deployed to the test client and the testing process starts automatically (using Newman).
  5. Once the tests have run and the results are clean, the process moves to the next step.
  6. Docker images are created for the production environment, the instances are deployed to that environment, and the Docker containers are started based on the production topology.
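
As a rough sketch, and under the same caveats as the earlier example, the server-update flow could be expressed as a Jenkins pipeline along these lines. The Nexus URL, artefact names, templating script, and deployment scripts are hypothetical placeholders; how the runtime and its updates are actually fetched and installed depends entirely on your vendor.

    pipeline {
        agent any
        parameters {
            // Runtime version to build; supplied when the job is triggered
            string(name: 'PRODUCT_VERSION', defaultValue: '1.0.0', description: 'ESB runtime version')
        }
        environment {
            IMAGE = "registry.example.com/esb-runtime"        // hypothetical image name
            NEXUS = "https://nexus.example.com/repository"    // hypothetical internal Nexus repository
        }
        stages {
            stage('Fetch runtime and updates from Nexus') {
                steps {
                    sh 'curl -fSL -o esb-runtime.zip "$NEXUS/esb/esb-runtime-$PRODUCT_VERSION.zip"'
                    sh 'curl -fSL -o esb-updates.zip "$NEXUS/esb/esb-updates-$PRODUCT_VERSION.zip"'
                }
            }
            stage('Render configuration templates') {
                steps {
                    // Configuration templates are checked out from GitHub as part of the job's SCM step;
                    // this placeholder script fills in the environment-specific values
                    sh './scripts/render-templates.sh conf-templates/ conf/'
                }
            }
            stage('Build and push Docker image') {
                steps {
                    // Bake the runtime, updates, and rendered configurations into the image
                    sh 'docker build -t $IMAGE:$PRODUCT_VERSION .'
                    sh 'docker push $IMAGE:$PRODUCT_VERSION'
                }
            }
            stage('Deploy to staging and test') {
                steps {
                    sh './scripts/deploy-staging.sh $IMAGE:$PRODUCT_VERSION'
                    sh 'newman run tests/esb-tests.postman_collection.json'
                }
            }
            stage('Promote to production') {
                steps {
                    sh './scripts/deploy-production.sh $IMAGE:$PRODUCT_VERSION'
                }
            }
        }
    }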

With the above process flows, you can implement a CI/CD process for your middleware layer. Even though you could merge these two flows into a single process with a condition that branches into two paths, keeping them as separate processes makes them easier to maintain. Finally, if you are going to implement this type of CI/CD process for your middleware ESB layer, make sure you are using an ESB runtime with the right characteristics: quick start-up time, a stateless and immutable runtime, and a small memory footprint.

Written by Chanaka Fernando, Associate Director at WSO2

 
