Continuous integration Archives - DevOps Online North America (31 Media Ltd.)
https://devopsnews.online/tag/continuous-integration/

Implementing continuous delivery in business (Tue, 11 May 2021)
https://devopsnews.online/implementing-continuous-delivery-in-business/
The true value of continuous delivery is that high-quality, working code is ready to deploy to production, on-demand, at all times through an automated delivery pipeline.

As code is merged up to the production branch, it is tested through automation, from development unit tests and TDD through BDD and business testing, in increasingly production-like environments. As this code progresses through the software development lifecycle (SDLC) and tests pass, it is ready to deploy whenever the business requests the change. Importantly, the risks associated with going live decrease as confidence builds that the software will work in production, and the necessary monitoring is already in place and tested prior to delivery.

Hence, continuous delivery makes teams more efficient and gives the business the flexibility to decide when functionality should be switched on.
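Decoupling deployment from release in this way is commonly done with feature flags: code ships to production dark, and the business flips the switch on demand. A minimal sketch in Python (the flag names and in-memory store are purely illustrative; real systems use a persistent, audited flag service):

```python
class FeatureFlags:
    """Toggle functionality on and off independently of deployment."""

    def __init__(self):
        self._flags = {}

    def register(self, name, enabled=False):
        # New features ship to production disabled by default.
        self._flags[name] = enabled

    def enable(self, name):
        # The business flips the switch when market conditions suit.
        self._flags[name] = True

    def is_enabled(self, name):
        # Unknown flags are treated as off, so dark code stays dark.
        return self._flags.get(name, False)


flags = FeatureFlags()
flags.register("new-payments-ui")        # deployed, but dark
assert not flags.is_enabled("new-payments-ui")
flags.enable("new-payments-ui")          # released on demand
assert flags.is_enabled("new-payments-ui")
```

The key property is that the deploy and the release are now two separate, independently reversible decisions.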

 

Business benefits 

The business benefits from realising return on investment consistently, continually, and safely. Businesses have the control and ability to decide when to turn on functionality to suit market conditions and customer requirements, transforming customer perception of business delivery. Internally, production costs are lowered through reduced development time and effort, increased output efficiency, stability and speed to market.

Other internal benefits include:

  • streamlining workflows,
  • lower staffing costs,
  • reduced attrition rates,
  • retention of domain knowledge,
  • improvement in operational confidence through enhanced, collaborative teamwork.

Through our strategic partnership with Google Cloud, Deutsche Bank has implemented objectives and key results (OKRs) to measure defined business benefits for our product releases. Are we building what the customer wants? Are we building it the right way, so it is both maintainable and performant? To retain and grow customer loyalty, and therefore revenue, companies need to ensure that their applications are performant, usable, and attractive to the end-user. Pre-production must include confirmation of resiliency, efficient disaster recovery, and security.

Continuous delivery is the ability of companies to release changes on demand, safely and sustainably, even during normal business hours. This includes, but is not limited to, updating services in a complex distributed system, upgrading mainframe software, making infrastructure configuration changes, making database schema changes, and updating firmware automatically.

Within Deutsche Bank, a huge amount of work has gone into our DevOps solution to facilitate continuous delivery. The Group Agile Accelerator Team spent much of 2020 helping teams work efficiently and effectively with both Agile and DevOps processes and tooling. For 2021, the focus has switched to business ROI via continuous delivery. This has been a fundamental shift for the bank: application teams now put forward a business case for funding with defined business value.

Advancing DevOps in particular was the key focus: the ability to continuously deliver is an achievable and crucial game-changer, and business funding now supports a book of work (BOW) that contains technological advancements in addition to new features. Starting with business value and funding, and understanding what is needed to attain them, this shift in team culture and focus empowers teams and further unites Business, Development, Infrastructure, and Operations.

This change in approach, backed by specific funding, has enabled the business to understand the true drivers and benefits of continuous delivery, and it builds on the successful DevOps and Agile transformations from which the business has already reaped benefits. Funding has also created a highly motivated and exploratory community that brings all sectors of the bank together to share and learn in a cohesive way of working, which in turn reduces costs through frequent, stable, high-quality releases by teams working in partnership with the business. Budgeting for 2021 therefore saw the business and the Executive Board strongly support the transformation across the bank.

 

The challenges

A common mistake organizations make, not only with continuous delivery but also with Agile and DevOps implementations, is to take someone else's solution, emulate it, and enforce it. This often stifles innovation and team empowerment, as teams are given no opportunity to develop and evolve their own way of working. Whilst I would encourage organizations to understand the failure points and challenges of companies that have implemented a successful continuous delivery strategy, they must fully understand their own issues and their own infrastructure and technological roadblocks. Value stream mapping is particularly beneficial and a great starting point, so long as it involves all necessary stakeholders.

One of the greatest challenges to continuous delivery is the existing architecture and infrastructure in use. At Deutsche Bank, applications built in the last few years have been containerized and decoupled, with accessible automation and exposed APIs for ease of testing and deployment. The challenge lies more with older monolithic applications, where it is often not possible to automate deployment and testing remains largely or partially manual.

Some organizations mistakenly believe that they can implement continuous delivery by doing their existing deployment process more often. However, implementing the technical capabilities that drive continuous delivery typically requires significant process and architectural changes. Increasing the frequency of deployments without improving processes and architecture is likely to lead to higher failure rates and burned-out teams. Using modern tooling without implementing the necessary technical practices and process change won’t produce the expected benefits.

Together with Google, we have implemented DevOps Research and Assessment (DORA) assessments for application teams. Through DORA research, Google provides interactive tooling which enables application teams to assess their maturity in comparison with other companies in the same field. It is this type of advance in processes such as DevOps that enables companies, on an ongoing basis, to ensure the pipeline facilitating continuous delivery continues to improve, thereby protecting the integrity and delivery capacity of production.

Finally, implementing an efficient continuous delivery process will ultimately fail if the right KPIs are not agreed upon, transparent, and continually reviewed and upgraded.

Metrics may include:

  • Defect arrival rate, number of failed tests, and at what point in the SDLC
  • Number of builds ready for deployment and number of failed builds
  • Number of daily deployments to production
  • Deep insights into CD pipeline execution
  • The health of Applications post-deployment
  • Lead time between code commit and production release
  • Mean time between build failures, defect resolution, and recovery (MTTR)
  • Production downtime and recovery during and after deployments
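Several of these metrics reduce to simple arithmetic over pipeline event timestamps. A hypothetical sketch, assuming deployments are recorded as (commit time, release time) pairs and incidents as (failure time, recovery time) pairs:

```python
from datetime import datetime
from statistics import mean


def lead_times_hours(deployments):
    """Lead time per change: commit timestamp to production release timestamp."""
    return [(released - committed).total_seconds() / 3600
            for committed, released in deployments]


def mttr_hours(incidents):
    """Mean time to recovery across (failure, recovery) timestamp pairs."""
    return mean((recovered - failed).total_seconds() / 3600
                for failed, recovered in incidents)


deployments = [
    (datetime(2021, 5, 1, 9), datetime(2021, 5, 1, 13)),   # 4 hours
    (datetime(2021, 5, 2, 9), datetime(2021, 5, 2, 15)),   # 6 hours
]
assert lead_times_hours(deployments) == [4.0, 6.0]

incidents = [(datetime(2021, 5, 3, 10), datetime(2021, 5, 3, 12))]  # 2 hours
assert mttr_hours(incidents) == 2.0
```

The hard part in practice is not the arithmetic but capturing the events consistently across every pipeline.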

 

 Implementing continuous delivery

Ideally, continuous delivery is implemented through an end-to-end automated deployment process, with small, iterative, high-quality releases deployed frequently. This involves a collaborative and transparent methodology, preferably Agile design and delivery, with a decoupled architecture using key tools. Quality assurance must be a focus through the entire SDLC, ideally utilizing TDD and collaborative discussion among the "3 Amigos": test professionals, developers, and business representatives. Company culture must shift to being business-benefit-driven, with iterative minimum-viable-product releases reducing risk. The collaboration should extend to Security, Operations, and Infrastructure to ensure seamless delivery to production; only if all these groups become part of the feature delivery team can a successful delivery be achieved.

Through our partnership with Google and our continuing transformation towards an engineering culture, Deutsche Bank has brought in a number of changes to measure DevOps maturity, making radical shifts in how DevOps is funded and how deployment changes are made.

Our focus on DevOps across the many thousands of applications we have is to introduce the DevOps Research and Assessment (DORA) maturity tool provided by Google, referenced above. There are a couple of options with this tool. First, there is a quick assessment, available online, which enables teams to get a high-level understanding of their weak areas and where they need to focus. It also provides a cross-analysis against other companies in the same sector.

A key reason to utilize DORA is that it provides us with an efficient, quantifiable methodology to evaluate retrospectively whether the goals of continuous delivery are being realized.

Before implementing a continuous delivery pipeline and to avoid failure there are a number of precursors that teams need to be able to demonstrate:

  • Is our software in a deployable state throughout its lifecycle?
  • Do we prioritize keeping the software deployable over working on new features?
  • Is fast feedback on the quality and deployability of the system we are working on available to everyone on the team?
  • When we get feedback that the system is not deployable (such as failing builds or tests), do we make fixing these issues our highest priority?
  • Can we deploy our system to production, or end-users, at any time, on-demand?

A final key implementation criterion is that the release lifecycle does not end with release to production but encompasses operating and monitoring in production. This means incorporating all user feedback, incidents, monitoring reports, usage statistics, and so on into the planning phase of the next release.

 

Who is in charge of defining and overseeing a successful continuous delivery strategy?

There is no one person who is, or can be, responsible for a continuous delivery strategy. Multiple disciplines must be involved in defining and governing the continuous delivery process: Business, Product Owners, and the team, including architects, developers, QA, Operations, Security, and Release Management.

In industries such as banking, it is often feared that a continuous delivery pipeline is not possible in such a highly regulated environment. In reality, this is not true. The pipeline, if built correctly, should have automated governance built in; indeed, I would argue more governance than other methodologies provide.

For example, requirements are discussed and understood at a far better level, both in terms of technical implementation and how they will be supported in production. Delivery is quick, typically bite-size implementations in two-week iterations. Using SAFe (the Scaled Agile Framework), for example, organizations can set out a collection of methods and rules that enable them to scale delivery at an enterprise level. Deployment of code to production happens regularly and on demand, and the release cycle has become a rigorous, highly tested, and rapid one, building for example on the concept of the Program Increment, which is the foundation of SAFe.

Quality gates are defined in very different ways. Stories hold thoroughly challenged and discussed acceptance criteria and cannot be merged into the production branch until all of these are met and it has been proved, through demos and passing automation, that they have been met. Quality thresholds around code quality, design, reuse, and avoidance of "code smells" are inherent in the delivery pipeline through code review, either by peers or through tools such as SONAR.
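A quality gate of this kind amounts to a conjunction of checks evaluated before a merge is allowed. An illustrative sketch; the metric names and thresholds here are assumptions for demonstration, not Deutsche Bank's actual gate:

```python
def merge_allowed(metrics, criteria_met, demoed):
    """Gate a merge to the production branch (thresholds are illustrative)."""
    checks = [
        criteria_met,                    # all acceptance criteria satisfied
        demoed,                          # proven via demo and passing automation
        metrics["coverage"] >= 80.0,     # minimum test coverage, percent
        metrics["code_smells"] == 0,     # no unresolved smells from analysis
        metrics["duplication"] <= 3.0,   # duplicated lines, percent
    ]
    return all(checks)


assert merge_allowed({"coverage": 92.0, "code_smells": 0, "duplication": 1.5},
                     criteria_met=True, demoed=True)
assert not merge_allowed({"coverage": 92.0, "code_smells": 4, "duplication": 1.5},
                         criteria_met=True, demoed=True)
```

In real pipelines the metrics dictionary would be populated by the static-analysis and test stages rather than supplied by hand.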

The decision to release (or not) has undergone a major transformation. The right process, methods, tools, and reports must form an integral part of the continuous delivery strategy. Within Deutsche Bank, we have transitioned to an Agile release template that governs releases. The process must include the right post-production measures to ensure the strategy is working. If something breaks in production, not only is fast failure key to quick resolution, but scrutinizing and adapting the process itself must be the highest priority. The right strategy empowers teams to make the release decision themselves, as releases become more predictable and failure rates and incidents become a seldom-seen problem that, where observed, can be resolved in a matter of hours.

A continuous delivery strategy must include transparency in measurement and monitoring of high service availability, delivery of critical fixes within hours, and deep analysis and monitoring of customer usage. You also need the right people in the right DevOps roles with the right skills, and at the top of this is having the right key soft skills and the willingness to collaborate. A learning culture must be fostered where it is safe to fail, fail fast, fix it and learn from the failure.

Release engineers typically replace the traditional and often bureaucratic role of the release manager. Again, this involves team members with a mixture of skills.  Within Deutsche Bank, Production Engineers (PEs) are evolving to fulfill this part of the process, focused on technical details, bringing together coordination of the release from development through to production and beyond.

A further key role in the continuous delivery pipeline is the architect. With a focus on ensuring high availability on production and pre-production systems, architects are key to the design and its implementation, for coding, testing, and deployment. Through our partnership with Google and our increasing transition towards cloud platforms, architects have performed a critical role in ensuring application design is built right the first time and lends itself to cloud deployment and all the benefits associated with it.

Of course, automation has to be central to any continuous delivery strategy, whether for building an automated testing framework or an automated deployment pipeline. Developers, testers, and Operations are no longer three distinct teams. Developers who embrace the testing mindset and work with QA members to build code that starts from the identified tests, through TDD, will ensure a substantial shift left, a massive reduction in late defect detection and its corresponding cost, and a continued collaborative review that the code is focused entirely on business criteria by meeting the Definition of Done.
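The TDD cycle described here means the failing test exists before the code it verifies. A toy example, with an entirely hypothetical business rule and thresholds:

```python
# Step 1 (red): the test encodes the acceptance criterion before any code exists.
def test_transfer_fee():
    assert transfer_fee(50.0) == 0.0      # fees waived below the threshold
    assert transfer_fee(1000.0) == 5.0    # 0.5% charged above the threshold


# Step 2 (green): the minimal implementation that makes the test pass.
def transfer_fee(amount, threshold=100.0, rate=0.005):
    return 0.0 if amount < threshold else amount * rate


test_transfer_fee()
```

Step 3 (refactor) then improves the code with the test as a safety net; the test, not the code, is the record of the acceptance criterion.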

Likewise, the next level of testing, BDD, builds on the unit test framework already developed in partnership and continues the notion of building tests that verify business objectives. BDD should never be about automating traditional black-box tests; it should be scenario-driven, with the right mix of negative and positive paths.
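BDD scenarios are conventionally written in a Given/When/Then shape, for example with Cucumber or Behave. A framework-free sketch of the same structure, using a hypothetical frozen-account scenario as the negative path:

```python
def attempt_payment(account, amount):
    """Toy domain logic for the scenario below (entirely hypothetical)."""
    if account["frozen"] or amount > account["balance"]:
        return "rejected"
    account["balance"] -= amount
    return "accepted"


def scenario_payment_rejected_when_account_frozen():
    # Given a customer account that has been frozen
    account = {"balance": 500.0, "frozen": True}
    # When the customer attempts a payment
    result = attempt_payment(account, amount=100.0)
    # Then the payment is rejected and the balance is unchanged
    assert result == "rejected"
    assert account["balance"] == 500.0


scenario_payment_rejected_when_account_frozen()
```

The scenario names the business rule being verified, which is what distinguishes it from an automated black-box test.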

Similarly, security engineers work with the team throughout the SDLC. Non-functional aspects are now identified within acceptance criteria and form part of the Definition of Done for a particular feature. Security, performance, maintainability, sustainability, and so on are now integral parts of the design rather than late considerations, when it is often too late to provide solutions, or, worse, overlooked until production incidents show that they were missed from the design.

Bringing Operations and infrastructure engineers in as part of the team is the other critical part of delivering a strategy that is entirely geared to delivering working software that meets business requirements safely and quickly. Through improved, increasingly production-like environments and a better understanding of the infrastructure, environment, scalability, and how the system operates, delivering or turning on a feature in production becomes far safer. Reliability, disaster recovery, security, usability at scale, and robustness of an application can be fully assessed and tweaked to be production-ready ahead of time.

Aside from the people focus, there are well-acknowledged and defined criteria for which a continuous delivery strategy must state process and policy:

  • Test automation
  • Deployment Automation
  • A solid code Merging policy held within a repository where branches and forks have a very short lifespan with frequent merges into the main branch
  • Integration of security into feature design and development
  • A loosely coupled architecture
  • Code maintainability
  • Empowerment of teams to choose the right tools for them and to define the right strategy for their Application
  • Continuous integration
  • Continuous Testing
  • Version control
  • Test data management
  • Monitoring and observability against predefined metrics and criteria including proactive notifications to teams
  • Database change management

 

You cannot be successfully Agile without Continuous Delivery!

You cannot be successfully Agile without Continuous Delivery, in terms of delivering business value quickly and safely to production and achieving business benefit. You can be the most efficient Agile team in the world but unless you have an automated delivery pipeline you just hit a new roadblock. At a most simplistic view, Agile, DevOps, and continuous delivery must be seen as the working processes that drive a means to achieve business delivery. Business requirements become the Agile stories that are developed and tested throughout the SDLC and verified against business acceptance criteria.

DevOps provides tooling and automated solutions to transition these deliverables to production through a collaborative and iterative way of working between all involved teams. Continuous delivery is the mechanism for turning these business requirements into working software that provides the right solution as already demoed to the business throughout this pipeline. Operations, Development, and Infrastructure must have transitioned through the cultural shift of working as one team with one combined book of work through all stages of this process and beyond release into production.

However, the two must in reality go hand in hand: CD needs small iterative releases so that releases become smooth, low-risk, efficient non-events. At its essence, one of the inherent guidelines of a successful Agile team is that the team is empowered. The traditional release management process is a prime example of an organization being anti-agile: it places zero trust in the team and its ability to release high-quality software to production.

Another key anti-agile feature is that those who "govern" releases are often outside the domain knowledge; their policing asks whether testing has been done, code has been reviewed, disaster recovery has been proved, and so on, yet there are no questions around OKRs or whether the release provides business value. The level of bureaucracy and the layers of approvals are oppressive to delivery teams and cripple the ability to release.

In addition to Agile working in partnership with CD, other factors such as automated testing and automated deployment are pivotal parts of the implementation. Without them, organizations continue to be hampered by exhaustive manual testing and error-prone, lengthy deployments, meaning business value cannot be delivered in a timely manner. Such organizations also miss the fast feedback loops, fast failure, and continuous learning inherent in an Agile DevOps methodology.

At Deutsche Bank, part of our Agile methodology has included the delivery of a QBR (Quarterly Business Review). This has been a massive and highly successful transition for the bank, implemented and taught by the Group Agile Accelerator. It symbolizes the huge shift towards consistently focusing on and testing business value. Gone are the hugely technical, lengthy specifications that the business could not understand. Agile teams are entirely focused on ensuring they can represent the actual business benefit they are delivering: what the usability is, who the target audience is, what the goals are, and how they will be measured. Teams have merged with the business; these are no longer polar opposites but one team that constantly challenges and supports itself in a transparent and entirely comprehensive manner.

In summary, companies that continue to focus purely on implementing Agile or on implementing continuous delivery pipelines will continue to fail to deliver business value. Organizations must view Agile and DevOps not as two distinct parts but as a collective partnership in transforming the culture and the method of development and delivery to production, or they will continue to stifle the benefits of both.

 

Should every business adopt continuous delivery?

It depends on the challenges. Companies should use a checklist to assist with prioritizing continuous delivery implementation: it's not just about building the right thing, it's about building it in the right way, at the right time. For example:

  • The frequency of a process or the number of times it is repeated
  • Elapsed time
  • People and resource dependencies and potential roadblocks
  • How much would the process benefit from automation, and is it error-prone?
  • What is the urgency of automating this process compared to other processes?

There are a number of factors that must be in place before an organization considers transitioning to CD as a way of delivering software speedily, safely, and sustainably. So whilst the vast majority of companies would benefit from the business value driven through CD, organizations need to assess whether the company in its entirety is committed.

At an organization level companies should grade themselves against the following considerations:

  • Is the company ready and open to adapt to a new culture?
  • Is interoperability between technologies an issue?
  • Does the company have complex monolithic applications?
  • Does the company work in an Agile way to deliver small incremental functionality (CI)?

Organizations who feel confident they are at the right place to move to CD should review Applications individually against criteria such as the following:

  • Does the Application have a high level of automated testing both pre-implementation and post-implementation?
  • Is environment provisioning automated and tested on production-like environments with obfuscated production data?
  • Is the Application DR compliant?
  • Are Production Change Management approval groups in place (including Technical Operations, Business, and QA Test Manager groups), and has the approval process been automated?
  • Is the Application already releasing into production frequently?
  • Does the Application have a good history of quality releases, with no critical or high defects relating to application releases?
  • Does the Application have evidence of Issue closure success in terms of Time to remediate (MTTR)?
  • Are Deployment (and Rollback) activities largely, if not fully, automated? There should be no resource dependencies on SAs/DBAs to perform manual implementation activities.
  • Is a central shared code repository set up for the Application, which is tested and validated continuously?
  • Are Developers integrating code to the Main Branch several times a day through a defined Continuous Integration process?
  • Has Infrastructure been built into the code repository?
  • Has Monitoring been built-in throughout the SDLC and tested that the right alerts are being triggered?
  • Is code quality reviewed, poor code, complexity, duplication, and other “code smells” prevented from being merged to the Production Branch?
  • Is deployment automated? This must not be confused with developers writing automated scripts for creating deployment pipelines that still require significant, frequent, and error-prone manual updates.
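A checklist like the one above lends itself to a simple per-application readiness score. An illustrative sketch; the criteria names abbreviate the list above, and the equal-weight scoring scheme is an assumption:

```python
READINESS_CRITERIA = [
    "automated_testing",
    "automated_environment_provisioning",
    "dr_compliant",
    "frequent_production_releases",
    "automated_deployment_and_rollback",
    "shared_code_repository",
    "continuous_integration",
    "monitoring_built_in",
]


def cd_readiness(application):
    """Fraction of readiness criteria an application currently satisfies."""
    met = sum(1 for c in READINESS_CRITERIA if application.get(c, False))
    return met / len(READINESS_CRITERIA)


app = {c: True for c in READINESS_CRITERIA}
app["automated_deployment_and_rollback"] = False
assert cd_readiness(app) == 7 / 8
```

A real assessment would weight criteria by risk rather than equally, but even a crude score makes gaps between applications comparable.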

 

Conclusion

Implementing continuous delivery as part of a company’s DevOps process is essential.

Agile and DevOps without continuous delivery mean that the key business value of having deployable, release-ready software at any time cannot be realized. As mentioned before, Deutsche Bank's investment in the "DevOps booster" saw a significant shift in organizational culture across technical and business teams: a commitment to investing in new tooling, a balance of infrastructure development with feature development, and a common belief that technical advances should be considered in equal measure to feature delivery.

 

Article written by Paula Cope, Director of Deutsche Bank

The importance of CI/CD in business (Tue, 09 Feb 2021)
https://devopsnews.online/the-importance-of-ci-cd-in-business/

The world of software development is constantly evolving, with companies getting more and faster enhancements and functionality to market. Yet, to remain ahead of the competition, development teams need to optimize their workflow for more efficiency, quality, and reliability.

To do so, development teams implement continuous integration (CI) and continuous delivery (CD) in order to accelerate and automate the software delivery lifecycle. With CI/CD, the workflow continuously integrates code, which improves processes to benefit the business as a whole.

Hence, we asked professionals in the industry to share their insights on how CI/CD is essential for business.

 

Why is CI/CD important?

Continuous Integration is a development practice that requires developers to integrate code into a shared repository several times a day. Everything is then checked by an automated build, allowing teams to detect problems early.

Continuous Delivery, on the other hand, is the ability to get changes of all types into production safely and quickly in a sustainable way. This is achieved by ensuring the code is in a deployable state, even when developers are making changes every day.
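The CI definition above, where every integration to the shared repository triggers automated checks that pass or fail fast, can be sketched as follows (the test suite and commit names are placeholders):

```python
def ci_build(commit, run_tests):
    """Run the automated build for one integration; report failures early."""
    results = run_tests(commit)
    failures = [name for name, passed in results.items() if not passed]
    return {"commit": commit, "passed": not failures, "failures": failures}


# Stand-in for the real suite: every integration runs the same checks.
def fake_test_suite(commit):
    return {"unit": True, "lint": commit != "bad-commit"}


assert ci_build("good-commit", fake_test_suite)["passed"]
assert ci_build("bad-commit", fake_test_suite)["failures"] == ["lint"]
```

Because the same checks run on every integration, a problem surfaces within one commit of being introduced rather than weeks later at merge time.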

According to Lee Gardiner, DevOps Lead (TE) at Collinson, reducing time to market in any way, shape, or form will always benefit the business. Shifting testing and feedback as close as possible to those making the changes results in the cheapest form of testing and feedback within a CI/CD pipeline.

Besides, he continues, continuous delivery also results in a more stable platform by ensuring a constant supply of safe-to-deploy code that has been run through rigorous testing practices, as well as ensuring code security.

Kalyan Nemalikanti, Senior DevOps & Site Reliability Engineer at Malayan Banking Berhad, highlights five ways that CI/CD are important:

  1. Both the development and the operations team (DevOps) can increasingly focus on their core competencies.
  2. CI/CD is the best way to promote the agile mindset within the development and operations team.
  3. CI/CD leads to better code quality.
  4. The main objective of CI/CD is to reduce the time to market that otherwise used to take years due to broken processes and minimal collaboration between development and operations.
  5. With continuous integration, delivery, and deployment — the code can be released to the end-users in a timely manner.

Hence, having CI/CD within your business operations is becoming more and more essential as it comes with various benefits for your enterprise in the long term.

 

Many benefits…

For Lee, the main benefits of CI/CD are fewer outages, more up-to-date code, better security, and most importantly a low time to market.

Kalyan points out that CI/CD can help deploy features into production without causing any disruption to other services, while quickly detecting and correcting incidents as and when they occur during the DevOps lifecycle.

Moreover, he adds, it boosts deployment frequency and presents more opportunities to re-evaluate the delivery process, through automation, effective testing, and monitoring procedures.

CI/CD also provides valuable data for continuous improvement around monitoring and metrics.

 

…. But also challenges

Yet, CI/CD still presents some risks and challenges for companies.

Indeed, Lee highlights that consistency and standardization will always be a challenge, as applications and infrastructure rarely fit into a nice, unified box. Besides, platforms are rarely of the same code type, meaning duplication of effort.

Kalyan points out the main risks that CI/CD entails:

  1. Initial developer hesitance
  2. Commit discipline
  3. End-user experience
  4. Being too focused on tools & technologies
  5. Thinking DevOps is anything more than process excellence

 

How to successfully implement CI/CD?

There are a few things to follow in order to successfully adopt CI/CD in a business.

According to Lee, there are two main steps:

  • Standardization: ensure first and foremost that your CI flow is universally agreed and accepted; this will allow your CD to follow suit.
  • Ownership: ensure the parties involved are improving checks, tests and constantly evolving for new challenges.

He continues by saying that doing less to do more is always a running theme here, especially with CI. Thus, any checks that you can run in parallel will of course shorten the feedback loop. After all, computing is cheaper than chairs.
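Running independent checks concurrently, as Lee suggests, shortens the feedback loop roughly by the number of parallel workers. A small sketch using Python's standard thread pool (the check names and timings are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def run_check(name):
    """Stand-in for one independent pipeline check (lint, tests, scans...)."""
    time.sleep(0.1)
    return name, "passed"


checks = ["lint", "unit-tests", "security-scan", "license-audit"]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # All four checks run concurrently instead of one after another.
    results = dict(pool.map(run_check, checks))
elapsed = time.perf_counter() - start

assert all(status == "passed" for status in results.values())
assert elapsed < 0.35   # well under the 0.4 s the checks would take serially
```

Only checks with no ordering dependency can be parallelized this way; a deploy step that needs the build artifact still has to wait for it.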

Moreover, he adds that in order to implement CI/CD successfully, everyone in the team should care for and oversee CI/CD. ‘This is a piece that requires a considerate amount of attention, but the rewards will always be worth it.’

Hence, by implementing continuous integration and continuous delivery, companies will be able to see the best benefits of a complete CI/CD pipeline that will drive enhanced business and IT performance.

 

Should every business adopt CI/CD?

Lee emphasized that he doesn't envy organizations that haven't yet adopted Agile, CI/CD, or DevOps. Indeed, according to him, they risk being short-lived, slow to adapt, and unable to keep up with large organizations moving at the pace of a startup.

Therefore, CI/CD plays a vital part in software building and deployment and is more necessary than ever if organizations want to move forward. It can provide various benefits to businesses as well as to stakeholders, including product owners, development teams, and end-users.

In the long term, having a CI/CD process will lead to big advantages, such as reducing costs and increasing return on investment. Moreover, it will also allow businesses to invest more time in building better mobile apps with faster time to market.

 

Special thanks to Lee Gardiner and Kalyan Nemalikanti for their useful insights!

The post The importance of CI/CD in business appeared first on DevOps Online North America.

Securing the CI/CD pipelines with DevSecOps https://devopsnews.online/securing-the-ci-cd-pipelines-with-devsecops/ Tue, 01 Dec 2020 09:33:10 +0000

The post Securing the CI/CD pipelines with DevSecOps appeared first on DevOps Online North America.

Continuous Integration and Continuous Delivery (CI/CD) can bring a seamless integration from end-to-end for the software development and deployment process. By doing this, CI/CD allows developers to dedicate more of their time developing code to improve software features instead of worrying about the deployment.

Yet, developers still face many security challenges. CI/CD might speed up the process, but not the security. With DevSecOps (Development, Security, and Operations), however, it is possible to accelerate the delivery of security within the software. DevSecOps engineers aim to operate most security controls as part of the software by introducing them as design constraints and then having them checked by the CI/CD pipelines without damaging the integrity of those controls.

 

Why DevSecOps?

As digital transformation increases, there is a vital need for safe and secure software; otherwise, everything, from build to delivery, can be at risk. Security breaches are now one of the biggest threats to companies and products.

DevSecOps promotes collaboration between the development and security teams, hence avoiding late handoffs to security professionals. By introducing security at the beginning of the process, the value and quality of the product can only be reinforced. Indeed, without DevSecOps, the software might be deemed unsafe at the last minute, causing multiple costly iterations. With DevSecOps, security standards are implemented directly into the pipelines, making products more secure from the beginning.

Overall, DevSecOps ensures credibility and agility in the market, as well as trust with consumers.

 

DevSecOps in CI/CD

There are many security vulnerabilities that can exist in open-source software. Hence, implementing DevSecOps practices within CI/CD will bring continuity to securing software deliveries.

Integrating automated security checks within the pipeline gives developers early warnings of vulnerabilities and lets them monitor any security defects. With an integrated continuous security approach, companies can expand while upgrading their security and development processes as they go.

Moreover, unit tests and static code analysis operate close to the source code and run checks without executing it. Hence, investing in security unit tests and static analyzers will only be beneficial, as it can speed up the lifecycle while quickly detecting vulnerabilities.
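As a rough illustration of a check that runs close to the source code without executing it, here is a toy static analyzer that flags likely hardcoded secrets. The pattern and the sample snippet are invented for illustration; real pipelines would rely on dedicated analysis tools.

```python
import re

# Toy static check: flag likely hardcoded secrets without executing the code.
# The pattern below is deliberately simplistic and for illustration only.
SECRET_PATTERN = re.compile(
    r"""(password|api[_-]?key|token)\s*=\s*['"][^'"]+['"]""", re.IGNORECASE
)

def scan_source(source: str) -> list:
    """Return the offending lines, so a CI job can fail fast with context."""
    return [line.strip() for line in source.splitlines() if SECRET_PATTERN.search(line)]

sample = '''
db_host = "localhost"
password = "hunter2"
API_KEY = 'abc123'
'''

findings = scan_source(sample)
print(findings)  # ['password = "hunter2"', "API_KEY = 'abc123'"]
```

A check like this costs milliseconds per file and never runs the code under inspection, which is what makes this class of check cheap enough to gate every commit.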

 

The future of DevSecOps and CI/CD pipelines

With the many challenges our world brings today, security is crucial in order to remain on top of the market. With DevSecOps, companies are able to speed up their CI/CD pipelines while keeping them secure from vulnerabilities. Collaboration and communication between development and security teams are therefore vital and shouldn’t be overlooked.

With the rise of DevSecOps, security has become an important part of the continuous delivery pipeline. Having continuity and security ensures the best software delivery.


The Benefits of Continuous Integration and Continuous Delivery https://devopsnews.online/the-benefits-of-continuous-integration-and-continuous-delivery/ Tue, 03 Nov 2020 10:04:47 +0000

The post The Benefits of Continuous Integration and Continuous Delivery appeared first on DevOps Online North America.

Many organizations are starting to adopt Continuous Integration, Continuous Delivery, and DevOps practices into their strategies as these practices grow in popularity. By doing so, these organizations are trying to reach a better level of development and innovation as well as satisfy both their teams and their customers.

 

What are Continuous Integration and Continuous Delivery?

Continuous Integration and Continuous Delivery (CI/CD) is a combination of coding practices, work culture, and technological innovation that bridges the gaps between development and operations activities and teams by enforcing automation in the building, testing, and deployment of applications. It enables developers to produce faster, more efficient software releases, ship better products, and increase productivity.

The goal of CI is to enable extensive testing to remove the uncertainty of changing code, whereas CD drives the deployment process and decreases the possibility of human error. By using CI/CD pipelines, the software delivery process is automated and the application is safely and efficiently deployed. CD ensures that the project is always working and up to date in order to give a great customer experience. CI/CD pipelines thus give more agility and speed to the development process.
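The gating behaviour of a pipeline, where each stage must pass before the next one runs and deployment only happens at the end, can be sketched as follows. The stage names are illustrative.

```python
# Minimal fail-fast pipeline runner: each stage must pass before the next
# runs, mirroring how a CI/CD pipeline gates deployment on earlier checks.
def run_pipeline(stages):
    completed = []
    for name, step in stages:
        if not step():
            return completed, f"failed at {name}"
        completed.append(name)
    return completed, "deployed"

stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: True),
]

print(run_pipeline(stages))  # (['build', 'test', 'deploy'], 'deployed')

# A failing test stage stops the pipeline before deploy ever runs:
stages[1] = ("test", lambda: False)
print(run_pipeline(stages))  # (['build'], 'failed at test')
```

The point of the sketch is the short-circuit: a human never has to remember to withhold a deployment, because a failed stage structurally prevents the later ones from running.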

 

What are the benefits of CI/CD?

More and more software companies are starting to adopt CI as it allows them to deliver as planned. Indeed, the CI/CD pipeline helps remove manual tasks, leading to fewer errors and more time to focus on features, hence increasing the productivity of the team.

Moreover, CI/CD speeds up the development cycle and encourages teams to iterate and re-evaluate progress more often in order to build better products that meet customers’ expectations. CI/CD pipelines are able to enforce quality checks when running tests, so the software meets all the specifications and avoids failures.

By making the development cycle shorter, CI/CD allows developers to experiment more and react to changing requirements faster, thus leading to more innovation. Besides, the pipelines are transparent and open, enabling everyone to make contributions and spot issues more quickly.

 

Conclusion

Implementing DevOps and CI/CD practices fosters a work culture and technological innovation based on transparency, ownership, and communication. This transformation is slowly being adopted by many businesses, as CI/CD pipelines lead to more agility, more innovation, and better-quality products, and give an enhanced customer experience.


The rising role of Containers in DevOps https://devopsnews.online/the-rising-role-of-containers-in-devops/ Tue, 27 Oct 2020 10:19:08 +0000

The post The rising role of Containers in DevOps appeared first on DevOps Online North America.

As the world keeps on speeding up, with faster business cycles and higher customer expectations, organizations need to innovate faster than ever. Thus, agility and digital transformation have never been more critical.

Developers are looking to solutions, such as containers, that can help them break applications down into more discrete services, as well as new approaches to collaboration, such as DevOps, to give them the flexibility, efficiency, feedback loops, and speed needed for more agile workflows.

 

What is a container?

A container is a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another. Hence, Containers allow developers to create an application on a laptop and deploy it on servers. That way, they are helping keep development agile and ensuring continuous delivery.

Despite some controversy about the efficiency of their use, containers are vital in DevOps processes as they speed up the deployment of applications. They also allow developers to focus on the application itself, since containers run the same way everywhere and their operation is handled by software. Containers save time and resources for IT teams.

Moreover, when using container technology, there are plenty of choices, such as Docker, Kubernetes, CoreOS, and Mesos, or even virtual machines.

 

What can Containers bring to DevOps?

Developers often find that containers provide them with a precise and controlled environment in which to build a continuous integration and continuous delivery pipeline. As containers are immutable, the software that is tested and verified will be the same as the software that is deployed. There are no discrepancies.

As we have mentioned previously, using containers within the agile methodology of DevOps will speed up the deployment process and enable the quick launch of new features and apps. Hence, containerization becomes convenient for iterating quickly during development and scaling stateless services in production.

Furthermore, developers are able to run production code on their local machines, as containers let them replicate a full development environment without having to deploy an application across the enterprise. There are also no more custom dependency issues, as the Docker container is already pre-configured.

Containers are also more manageable if a company is using microservices rather than large monolithic applications.

 

The risks of using Containers

However, each business needs to decide whether or not to use containers for its DevOps projects. For instance, if the company is already well served by virtual machines, containers may not be needed. Moreover, containers can sometimes become a ‘black box’, making it difficult for developers to know what’s in them and what was running in the past.

There is also always a risk when running other people’s code. Indeed, it’s vital to know what’s inside the containers so as to know where the vulnerabilities lie.

In order to prevent attacks, container lifetimes need to be kept short, and containers should be refreshed regularly to keep them working well.

 

Conclusion

Containers allow developers to deploy their applications faster and keep development agile, all the while ensuring continuous delivery. However, developers must also make sure that the containers can be adequately secured in order to have a safe working environment.


How to build a continuous integration and continuous deployment pipeline for your enterprise middleware platform https://devopsnews.online/how-to-build-a-continuous-integration-and-continuous-deployment-pipeline-for-your-enterprise-middleware-platform/ Thu, 03 Oct 2019 10:48:20 +0000

The post How to build a continuous integration and continuous deployment pipeline for your enterprise middleware platform appeared first on DevOps Online North America.

With the rise of microservice architecture (MSA), continuous integration (CI) and continuous deployment (CD) have become mainstream processes within enterprises. Those familiar with microservice architecture will no doubt have heard about greenfield and brownfield integrations; around 80% of the time, users start a microservices journey either from scratch or from an existing enterprise architecture.

According to a recent survey from Lightstep, more and more organisations are moving ahead with microservices architecture, even though they accept that it is hard to maintain and monitor.

Moreover, the survey highlights that the advantages of MSA outweigh the disadvantages. The same goes for CI/CD, which is tightly coupled with MSA and with adopting a DevOps culture.

Due to the dominance of MSA within enterprises, CI/CD has also become an essential part of every software development lifecycle. With this shift towards MSA, DevOps, and CI/CD, other parts of the brownfield integration cannot stay out of these waves. These include:

  • Enterprise Middleware (ESB/APIM, Message Broker, Business Process, IAM products)
  • Home grown software
  • Application Server (Tomcat, WebSphere)
  • ERP/CRM software (mainly COTS systems)

This said, it’s not always practical to implement CI/CD processes for every software component. Therefore, it’s important to look at alternatives for leveraging the advantages of the CI/CD process within enterprise middleware components.

Leveraging CI/CD processes within enterprise middleware components

Starting with one of the most common enterprise middleware products: an Enterprise Service Bus (ESB) provides the central point which interconnects heterogeneous systems within an enterprise. It adds value to your enterprise data through enrichment, transformation, and many other functionalities. One of the main selling points of ESBs is that they are easy to configure through high-level Domain Specific Languages (DSLs) like Synapse, Camel, etc.

To integrate ESBs with a CI/CD process, two components need to be seriously considered: the ESB configurations that implement the integration logic, and the server configurations that install the runtime in a physical or virtualised environment.

Of the two components, ESB configurations go through continuous development and change more frequently. Automating the development and deployment of these configurations is far more critical. That’s because going through a develop, test, deploy lifecycle for every minor change takes a lot of time and results in many critical issues if you don’t automate it.

Another important aspect when automating the development process is that you assume that the underlying server configurations are not affected by these changes and are kept the same. It is a best practice to make this assumption because having multiple variables makes it really hard to validate the implementations and complete the testing. The process will automate the development, test, and deployment of integration components as follows:

  1. Developers use an IDE or an editor to develop the integration components. Once they are done with the development, they will commit the code to GitHub.
  2. Once this commit is reviewed and merged to the master branch, it will automatically trigger the next step.
  3. A continuous integration tool (e.g. Jenkins, TravisCI) will build the master branch and create a Docker image along with the ESB runtime and the build components and deploy that to a staging environment. At the same time, the build artefacts are published to Nexus so that they can be reused when doing product upgrades.
  4. Once the containers are started, the CI tool will trigger a shell script to run the Postman scripts using Newman installed in the test client.
  5. Tests will run against the deployed components.
  6. Once the tests have passed in the staging environment, Docker images will be created for the production environment and deployed to the production environment.
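The six steps above can be sketched as a small orchestration function. The real tools named in the article (Git, Jenkins, Docker, Newman, Nexus) are represented here by stand-in callables, so only the control flow is on display; the image tag and log entries are invented.

```python
# Sketch of the six-step flow above as an orchestration function.
# The real work (git, CI server, Docker, Newman) is passed in as callables.
def deliver(build_image, publish_artefacts, deploy, run_newman_tests):
    image = build_image("master")            # step 3: CI tool builds the merged branch
    publish_artefacts(image)                 # step 3: artefacts pushed to the repository
    deploy(image, env="staging")             # step 3: staging deployment
    if not run_newman_tests(env="staging"):  # steps 4-5: Postman/Newman tests
        raise RuntimeError("staging tests failed; production deploy blocked")
    deploy(image, env="production")          # step 6: promote to production
    return image

# Wiring it up with trivial stand-ins that just record what happened:
log = []
image = deliver(
    build_image=lambda branch: f"esb:{branch}",
    publish_artefacts=lambda img: log.append(("nexus", img)),
    deploy=lambda img, env: log.append((env, img)),
    run_newman_tests=lambda env: True,
)
print(image)  # esb:master
print(log)    # [('nexus', 'esb:master'), ('staging', 'esb:master'), ('production', 'esb:master')]
```

Note how the conditional in the middle encodes the article's gate: production only ever sees an image that has already passed the staging test run.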

Automating the update of the server runtime component

Although the above process can be followed for the development of middleware components, these runtime versions will receive patches, updates, and upgrades fairly frequently because of customer demands and the number of features these products carry. Therefore, automating the update of the server runtime component must be given serious thought when applying this approach.

The three main methods in which different vendors provide updates, patches, and upgrades, tend to be:

  • Updates as patches, which need to be installed before restarting the running server
  • Updates as in-flight updates which will update (and restart) the running server itself
  • Updates as new binaries which need to replace the running server

Depending on the method by which you get the updates, you need to align your CI/CD process for server updates. This process will run less frequently than the one for development updates.

CI/CD process flow for server updates

Outlined below is the process flow:

  1. One of the important aspects of automating the deployment is to extract the configuration files and make them templates that can be configured through an automated process.
  2. When a new configuration change, update, or upgrade is required, it will trigger a Jenkins job which will take the configurations from GitHub and the product binaries (if required), product updates, and ESB components from a Nexus repository which will be maintained within your organisation. Using these files, a Docker image will be created.
  3. This Docker image will be deployed to the staging environment and start the containers, depending on the required topology or deployment pattern.
  4. Once the containers are started, the test scripts (Postman) are deployed to the test client and start the testing process automatically (Newman).
  5. Once the tests are executed and the results are clean, it will go to the next step.
  6. Docker images will be created for the production environment, the instances deployed, and the Docker containers started based on the production topology.
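Step 1 above, turning extracted configuration files into templates that the automated process can fill in, can be sketched with the standard library. The property names and hostnames below are hypothetical.

```python
from string import Template

# Step 1 of the flow above: an extracted server config becomes a template
# whose environment-specific values are filled in by the automated process.
# (Property names and hosts are invented for illustration.)
config_template = Template(
    "server.host=$host\n"
    "server.port=$port\n"
    "nexus.repo=$repo\n"
)

staging = config_template.substitute(
    host="staging.example.internal", port="8280", repo="esb-staging"
)
production = config_template.substitute(
    host="prod.example.internal", port="8243", repo="esb-releases"
)

print(staging)
```

Because the template is the single source of truth, a configuration change is made once and then stamped out per environment by the pipeline, rather than hand-edited on each server.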

With the above process flows, you can implement a CI/CD process for your middleware layer. Even though you can merge these two processes into a single process and put a condition to branch out into two different paths, having two separate processes will make it easier to maintain. And finally, if you are going to implement this type of CI/CD process for your middleware ESB layer, make sure that you are using the right ESB runtime with the following characteristics: quick start-up time, statelessness, an immutable runtime, and a small memory footprint.

Written by Chanaka Fernando, Associate Director at WSO2

 


SAIC secures AWS DevOps Competency https://devopsnews.online/16998-2-saic-secures-aws-devops-competency/ Mon, 29 Apr 2019 13:38:46 +0000

The post SAIC secures AWS DevOps Competency appeared first on DevOps Online North America.

Science Applications International Corporation (SAIC) has been awarded Amazon Web Services (AWS) DevOps Competency status for its DevOps expertise.

The company said in a press release last Friday (April 26) that the AWS DevOps Competency program recognises SAIC’s commitment to helping customers automate cloud infrastructure management functions and simplify continuous integration and delivery processes.

DevOps teams help transform software development life cycles to deliver secure applications, instill collaborative software and IT teams, and automate continuous integration and continuous delivery pipelines using AWS and AWS developer tools.

“We are excited to achieve AWS DevOps Competency status, which recognises our continued commitment to helping our clients adopt cloud technologies and DevOps practices with a combination of our process know-how and AWS reliable services,” said Coby Holloway, SAIC vice president of Enterprise IT.

“We help our customers achieve their digital transformation goals by providing some of the best cloud technology solutions using AWS cloud services.”

SAIC, a member of the AWS Public Sector Partner program, is an authorised government reseller and has previously secured AWS Government Competency status.


Embracing the opportunities of DevOps https://devopsnews.online/embracing-the-opportunities-of-devops/ Mon, 20 Aug 2018 10:59:44 +0000

The post Embracing the opportunities of DevOps appeared first on DevOps Online North America.

How can DevOps help us become champions and advocates for testing and quality activities across the lifecycle?

The new world of modern software delivery, ushered in by the impact of digital disruption, offers tremendous opportunities both to improve solution quality at the desired speed and to develop new capabilities. However, as testers, we need to acknowledge that in order to ensure that software quality remains at the forefront of the mindset of everyone involved throughout the development lifecycle, our role and approach will have to change.

Cultural change is required

There is a general recognition that the cultural changes required for an organisation to successfully move into the world of DevOps/CI/CD can prove a much bigger barrier than the technical challenges.

To survive and thrive in this new world we need to develop a continuous testing culture across our function, complementing and supporting a wider continuous delivery culture. We must become the champions and advocates for testing and quality activities across the lifecycle. This should bring what we have long sought: involvement right from the outset.

Rather than managing towards ‘hand-offs’ into segregated testing phases, we need to foster the approach of collaborative work and joint problem-solving.

The wider culture, typically adopted in this modern delivery world, of trying new approaches and tools and discarding those things that don’t work/add value, can help us in testing to also be more innovative. We can, and perhaps should, become more comfortable in experimenting with new techniques and ways of working in the quest to improve quality.

New friends and allies

The advent of DevOps also brings the opportunity to work more closely with our colleagues in operations. We share goals of protecting production and the customer from poor quality and resulting outages. Often those areas that the operations team are concerned about and want to see closely monitored are areas that we should be focussing our quality lens on too. On the flip side, the instrumentation and tools we utilise in testing may also deliver benefit to our operations colleagues.

Single view of the risk

In this world of modern software delivery and improved collaboration, we have an opportunity to develop a single, joint view of the level of risk and where this lies across the solution landscape. When we have achieved this single view, there should be much greater buy-in from our technical colleagues and senior stakeholders into our proposed approach to testing, which should be closely aligned to that same view of risk. In turn, this should avoid any suggestion that the scope of testing/QA is inappropriate and might even ‘kill the business case’.

Establishing a ‘ruthless automation’ mindset

The challenge in this new world for all disciplines, including testing/QA, is to think about how to automate all tasks as far as possible, almost as soon as they are identified. This has been referred to by Forrester as ‘ruthless automation’. Maintaining a constant eye on the challenge of frequent (perhaps daily or even more frequent) releases is an important shift in mindset.

Transformation from testers to quality engineers

It has been suggested that what is required to deliver solutions is actually ‘T-shaped technologists’: individuals with a dual focus. Firstly, they have their specialism, perhaps in development or, more likely in our case, test automation, exploratory testing, or test data management; secondly, they must have the overall end-to-end delivery focus required. This means we will have the opportunity to develop new skills, become more technical if we have the aptitude and desire, and certainly move closer to the other functions.

We need to add value by advocating and driving quality across all of the different delivery functions and activities. From the initial story definition, through to the method of deployment, we can champion quality. This must include ensuring that the final delivery we are working towards meets all the necessary quality criteria. We have the opportunity now to re-invent ourselves and become quality engineers rather than merely testers.

Avoiding the perils of burn-out

When we establish what constitutes a sustainable pace of delivery, we need to consider not just technical factors but also our engineers, and avoid them becoming stretched beyond reasonable work commitments.

As managers, we need to ensure our quality engineers have a work/life balance that is appropriate. The benefits of an improved delivery speed could be very short-lived if in the process our talented engineers suffer ‘burn-out’ and consequently become less productive.

Embracing the opportunities

The new era should ultimately (appreciating this won’t happen overnight!) bring with it a clearing of the technical debt that previously meant being stuck with troubleshooting problematic legacy systems. This, in turn, should mean greater opportunity for our teams to work on more exciting new work.

Professional testers and those working in QA share a genuine passion for delivering quality solutions. The new world of modern software development offers us opportunities to develop new skills, have a greater influence, and ultimately play a pivotal role in delivering better quality at the speed today’s more demanding and discerning customers expect.

Rather than seeing the new delivery methods of DevOps/CI/CD as a threat to our existence, we need to evolve our teams and individuals to meet the challenge. Ultimately, we need to transform ourselves to become fully fledged quality engineers and in the process, earn enhanced trust and respect from our colleagues.

Written by Richard Simms, Test Architect, ROQ


Facing digital disruptions? Here’s how to better maintain your software! https://devopsnews.online/facing-digital-disruptions-heres-how-to-better-maintain-your-software/ Tue, 07 Aug 2018 10:25:40 +0000

The post Facing digital disruptions? Here’s how to better maintain your software! appeared first on DevOps Online North America.

Companies find they are able to disrupt well-established markets and grow rapidly at the expense of others by better exploiting digital markets and channels.

Essentially, these organisations are not constrained by the architecture of legacy systems and related technical debt nor by outdated processes and siloed IT departments. Typically, they are embracing processes such as continuous integration (CI) and even continuous delivery (CD) that enable rapid solution delivery, establishing a presence quickly across multiple channels and then subsequently reacting to the market direction with frequent releases.

The rest of the business world initially looked on with a mixture of envy and trepidation and are only now working to try and adapt to make their IT capability more proactive. In many cases, the response has included making the cultural changes associated with DevOps, breaking down traditional silos between IT functions and implementing those same CI/CD processes.

As with all seismic changes in the IT industry the implications of this new world, for those of us focused on testing and software quality, are far-reaching. Just how do you adapt your testing processes to survive and indeed thrive in this new world?

Continuous testing

The answer to this challenge is to implement processes supporting continuous testing, ‘shifting left’ (testing earlier) whenever possible to make testing a full lifecycle activity.

Making testing a full lifecycle activity requires greater collaboration and a change of mindset. We need to move from the old concept of ‘segregated validation’, where solutions are delivered into separate testing phases beyond development, to tight feedback loops with our delivery and support colleagues working in parallel. Testing can no longer be pushed into subsequent sprints in a manner sometimes referred to as ‘agile-fall’, which is arguably just a ‘broken’ agile approach.

BDD ensuring traceability

More than ever we need to make sure our tests are clearly driven by and linked to requirements by embracing techniques such as Behaviour Driven Development (BDD). There should be no room for testing to be compromised by poor requirements in modern software delivery, and traceability should be a given. Using the structured language (‘Given…When…Then’) to specify detailed requirements is a powerful technique and a great aid to us in testing, giving us the sound test basis we crave.
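As a minimal sketch of how a Given/When/Then requirement can drive an automated check, here is an invented shopping-basket scenario expressed directly as a test; real BDD tooling (Cucumber-style frameworks, for example) would parse the structured language rather than rely on comments.

```python
# An invented scenario written in Given/When/Then form, then checked directly.
# Scenario: Given a basket with one item, when a second item is added,
#           then the basket total reflects both items.

def test_basket_total():
    # Given a basket with one item
    basket = {"book": 12.50}
    # When a second item is added
    basket["pen"] = 1.50
    # Then the basket total reflects both items
    assert sum(basket.values()) == 14.00

test_basket_total()
print("scenario passed")
```

The value of the structure is traceability: each clause of the requirement maps onto one step of the test, so when a test fails it is immediately clear which part of the behaviour was broken.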

Ruthless automation

A key requirement to enable speedy software delivery whilst maintaining quality is that each of the disciplines across the SDLC must automate whenever possible. Gartner refers to this approach as ‘ruthless automation’. The agile methodology emphasised the need for efficient test automation with the frequent regression required, but this is of paramount importance in making continuous delivery possible.

Our automated test packs now need to be executed in very tight windows to maintain the speed of delivery. This almost certainly requires a focus on low-level and integration-layer scripts, with fewer scripts and a lighter touch through the GUI. The reliability of the scripts also needs to be beyond question, as any ‘false failure’ almost inevitably means a delayed release.

The BDD approach described above should be inextricably linked, acting as an enabler and providing acceleration and transparency to our automation efforts. Our scripts must also be capable of execution against multiple platforms (browsers, laptops, tablets and mobile devices) to avoid duplicating effort.

Continuous performance testing

Performance testing equally needs to become a full lifecycle activity, with checks on individual components undertaken as soon as they are built. An up-front assessment of the risks and potential bottlenecks, to direct our testing efforts, takes on even greater significance. Our load tests must be automated to a greater degree than was previously expected so that they can be invoked whenever necessary.

Environment & data management

Providing the required mixture of persistent and temporary (those that can be routinely provisioned, configured, used to test and then torn down) realistic test environments will likely require a mix of cloud, on-premise, and hybrid solutions; perhaps making use of container technology.

To support continuous testing, our test data needs to be realistic, aligned to our test cases, deployed flexibly to many environments and, to maintain velocity, available in an instant. At the same time, we must comply with all regulations (including the imminent GDPR) by ensuring sensitive data is anonymised or obfuscated where necessary and that access and storage are strictly controlled. A new generation of Test Data Management (TDM) tools is emerging to support this activity, and we need to exploit it.
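The anonymisation step can be sketched simply: replace direct identifiers with deterministic pseudonyms (so referential integrity across data sets survives) while leaving non-sensitive fields intact. The field names and masking rules below are illustrative assumptions, not drawn from any particular TDM tool:

```python
import hashlib

def anonymise(record):
    """Mask direct identifiers; keep non-sensitive fields untouched."""
    masked = dict(record)
    # Deterministic pseudonym: the same real name always maps to the same
    # token, so joins between anonymised data sets still line up.
    masked["name"] = "user-" + hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    masked["email"] = masked["name"] + "@example.com"
    return masked

row = {"name": "Jane Doe", "email": "jane@corp.com", "order_total": 42.50}
safe = anonymise(row)
assert safe["name"] != "Jane Doe" and safe["email"].endswith("@example.com")
assert safe["order_total"] == 42.50  # non-sensitive fields untouched
```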

Opportunities in this world

This new world of modern software delivery, ushered in by the impact of digital disruption, presents the testing world with the challenges described above, but it also offers tremendous opportunities both to improve solution quality at the desired velocity and to develop new capabilities. These opportunities can be roughly broken down into the categories of people, process and technology, and each will be considered in more detail in subsequent ROQ articles, available on our website www.roq.co.uk. In our next instalment, we will discuss the ‘people’ aspect and consider whether we need to make the move from being mere Testers to full-blown Quality Engineers.

Written by Richard Simms, Test Architect, ROQ

The post Facing digital disruptions? Here’s how to better maintain your software! appeared first on DevOps Online North America.

Looking to accelerate mobile app delivery? Agile isn’t enough!
https://devopsnews.online/looking-to-accelerate-mobile-app-delivery-agile-isnt-enough/
Tue, 10 Jul 2018 12:28:15 +0000
Creator Project Lead at Ionic, Matt Kremer, focuses on how IT leaders need to embrace agile and DevOps as part of their mobile app strategy

In the earliest days of app development, most teams used what is called the “waterfall” method – a rigid sequencing of activities from requirements gathering to development, testing and production. The problem with waterfall is that it took too long to get a new product or feature in the hands of users. As someone once said, “waterfall is great for delivering the product you needed a year ago.”

Agile development emerged as a welcome alternative to waterfall. The agile method focused on breaking down requirements into much smaller chunks of work (perhaps a single feature) that could be developed iteratively over time, based on user input and changing market dynamics. It also got teams to work in very short intervals, or “sprints,” of typically one to two weeks. The agile approach is a better fit for the software-as-a-service (SaaS) era and has been widely adopted by app dev teams.

But there’s a catch. While agile has become widespread, the frequency of production releases hasn’t changed much since the waterfall days. Many teams are still bundling features and shipping major releases just a few times a year. A recent survey found that half of app dev teams released applications four times a year or less.

As one Forrester research article puts it: “Agile product management tools improve reporting and visibility, but they don’t improve other essential activities in the delivery pipeline, like building, testing and deploying software.” Effectively, the bottleneck has shifted further down the delivery chain – from planning and development to building, testing and deployment.

Why speed is important to mobile

Thanks to the high standards set by popular consumer apps like Twitter, Pinterest and Sworkit, mobile users have become conditioned to expect regular app updates, new features and rapid fixes. While infrequent updates might be okay for some enterprise use cases, they don’t work well in mobile. Instead, user expectations put pressure on enterprise teams building mobile apps – whether the app is for employees, partners or customers – to deliver new releases on a nearly continuous basis.

In a few cases, the business model requires teams to ship production releases in days or weeks. Acker Wines is a highly successful wine merchant that auctions off premium wines to discerning buyers, and their auction cycle runs every two weeks. Napa Group, a digital consultancy responsible for building the Acker Wines mobile app, has just two weeks to build, test and release new features before the next auction. Bundling features and shipping semi-annually, or even monthly, is not an option.

Shipping code faster

To fulfil the vision of agile development, we need to look beyond agile.

One avenue is DevOps. DevOps is a squishy and often overused term, but it’s an important concept for app dev leaders. Essentially, DevOps extends the different agile tools and methodologies beyond the development team to include the operations and QA teams that are responsible for putting production code in the hands of users. In the simplest terms, DevOps “ensures that the working software sitting in a developer’s laptop reaches the production phase easily and quickly.”

The dream of DevOps is to move at the speed of development. That means being able to build, test and release working code as soon as it’s ready.

Three ways DevOps accelerates app delivery

With that in mind, here are three concrete ways that DevOps tools and practices help development teams be more successful.

  1. Continuous integration: When a developer commits a code change to a source code repository, CI automatically builds and integrates the changed code and prepares a new version of the app for testing or release. CI accelerates the pace of delivery by streamlining the build process and by providing a shared environment where multiple developers can contribute code changes at the same time.
  2. Continuous feedback: Automated testing, real-time error monitoring and user testing tools provide continuous feedback for product teams to improve app quality and fix problems at their source. Continuous feedback helps to make sure teams are building the right product and increases the pace of updates and fixes to address UX issues.
  3. Real-time app updates: In the mobile space, app developers are often at the mercy of the app stores to review, approve and publish production releases. Even when teams are moving fast to fix a bug or push a key feature, they often have little control over when a user gets the update. How can you get around this? Some mobile DevOps tools let app publishers send changes directly to users in real-time, without going through the app stores. This puts control back in the hands of the publishers and can drastically reduce the time it takes to get a production release in the hands of users.
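The CI gate in point 1 can be sketched as a simple pipeline: every commit triggers a build and the test pack, and only a fully green run is promoted to a releasable artefact. The stage functions below are stubs standing in for real build and test tooling:

```python
def build(commit):
    # Stand-in for the real build step (compile, bundle, package).
    return {"commit": commit, "artifact": f"app-{commit}.pkg"}

def run_tests(artifact):
    # Stand-in for the automated test pack run against the built artifact.
    return True

def ci_pipeline(commit):
    """Build the commit, test it, and mark it releasable only if green."""
    artifact = build(commit)
    if not run_tests(artifact):
        return {"commit": commit, "releasable": False}
    return {"commit": commit, "releasable": True, "artifact": artifact["artifact"]}

result = ci_pipeline("abc123")
assert result["releasable"] and result["artifact"] == "app-abc123.pkg"
```

Because the same gate runs on every commit, multiple developers can merge continuously and always know whether the shared build is releasable.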

Agile and DevOps = harmony

App dev leaders need to look critically at their culture, tools and processes to see if they’re really fulfilling the vision of agile, or if they’ve just found a more efficient way to deliver the same user experience as waterfall. To keep pace with user expectations, IT leaders should look beyond agile and embrace DevOps practices to accelerate app delivery.

Of course, the best approach is to combine agile with DevOps. Putting them together creates a powerful combination that sets the standard for how mobile apps – and software in general – should be delivered in 2018 and beyond.

Written by Creator Project Lead at Ionic, Matt Kremer
