Working in DevOps
https://devopsnews.online/working-in-devops/ | 6 January 2022

As the demand for DevOps and cloud has been slowly increasing throughout the year, more and more organisations are looking for skilled DevOps teams. It is likely that DevOps will then become the approach adopted by many IT companies in order to offer reliable and faster solutions.

Hence, if you are interested in learning DevOps to advance your career, some experts in the industry have shared their advice and recommendations with us!

 

What is DevOps? 

According to Khuswant Singh, Lead DevOps Automation Engineer at Next, DevOps is a software development methodology that combines software development (Dev) with information technology operations (Ops) participating together in the entire application lifecycle from design through the development process to production support.

Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations, and develop a range of skills not limited to a single function. At its core, DevOps is a set of tools and practices that help organisations build, test, and deploy software more reliably and at a faster rate.

Moreover, Bolvin Fernandes, Azure DevOps specialist at Redkite, adds that DevOps is a set of practices and a cultural change that combines software development and operations, two traditionally siloed teams, in order to expedite an organisation’s ability to release software/applications as compared to traditional software development processes. DevOps entails the adaptation of tools and practices best suited to the unification and automation of processes within the software development lifecycle that will enable organisations to create, improve and ship out software products at a faster pace.

DevOps is enabling organisations to deliver their products more quickly than those with the traditional development and release cycle, Khuswant continues. True DevOps unites teams to support continuous integration and continuous delivery (CI/CD) pipelines through optimized processes and automation. So, continuous is a differentiated characteristic of a DevOps pipeline. A CI/CD approach enables efficiency in the building and deployment of applications, and automated application deployment allows for rapid release with minimal downtime.

When done properly, DevOps greatly reduces the time it takes to bring software from idea to implementation to end-user delivery. It also adds efficiency to the software delivery process in many ways. It allows different team members to work in parallel, for example, it also ensures that coding problems are found early in the delivery pipeline, when fixing them requires much less time and effort than it does once a bug has been pushed into production. With DevOps, the expectation is to develop faster, test regularly, and release more frequently, all while improving quality and cutting costs.
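
The fail-fast behaviour described above can be made concrete with a minimal sketch of a pipeline runner; the stage names and checks below are invented for illustration and not taken from any particular CI/CD product:

```python
# Minimal sketch of a fail-fast CI/CD pipeline runner (illustrative only:
# the stage names and checks below are invented, not a real product's API).

def run_pipeline(stages):
    """Run stages in order and stop at the first failure, so problems
    surface early in the pipeline instead of in production."""
    completed = []
    for name, check in stages:
        if not check():
            return completed, name  # report which stage broke the run
        completed.append(name)
    return completed, None  # every stage passed

stages = [
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("integration-tests", lambda: False),  # simulated early failure
    ("deploy", lambda: True),
]

done, failed = run_pipeline(stages)
# The broken integration tests stop the run before "deploy" ever executes.
```

Because a failing check halts the run, a coding problem caught at the test stage never consumes the time and effort of a production rollback.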

To help achieve this, DevOps monitoring tools provide automation and expanded measurement and visibility throughout the entire development lifecycle – from planning, development, integration and testing, deployment, and operations.

 

DevOps Culture

DevOps culture involves closer collaboration and shared responsibility between development and operations for the products they create and maintain, Khuswant underlines. This helps companies align their people, processes, and tools toward a more unified customer focus.

At the heart of DevOps culture is increased transparency, communication, and collaboration between teams that traditionally worked in siloes. DevOps is an organizational culture shift that emphasizes continuous learning and continuous improvement. An attitude of shared responsibility is an aspect of DevOps culture that encourages closer collaboration. It’s easy for a development team to become disinterested in the operation and maintenance of a system if it is handed over to another team to look after.

Bolvin notes that it is all about shared responsibility and accountability between developers and operations for the software they build and deliver. This includes increasing transparency, communication, and collaboration across multiple teams and also the business.

Hence, if a development team shares the responsibility of looking after a system over the course of its lifetime, Khuswant points out, they can share the operations staff’s pain and so identify ways to simplify deployment and maintenance (e.g., by automating deployments and improving logging). They may also gain additional observed requirements from monitoring the system in production. When operations staff share responsibility for a system’s business goals, they can work more closely with developers to better understand the operational needs of a system and help meet these. In practice, collaboration often begins with increased awareness from developers of operational concerns (such as deployment and monitoring) and the adoption of new automation tools and practices by operations staff.

It is helpful to adjust resourcing structures to allow operations staff to get involved with teams early. Having the developers and operations staff co-located will help them to work together. Handovers and signoffs discourage people from sharing responsibility and contribute to a culture of blame. Instead, developers and operations staff should both be responsible for the successes and failures of a system. DevOps culture blurs the line between the roles of developer and operations staff and may eventually eliminate the distinction.

 

The role & responsibilities of a DevOps engineer

According to Bolvin, here are some of the responsibilities of a DevOps engineer:

  • Understanding customer requirements and project KPIs
  • Implementing various development, testing, and automation tools, and IT infrastructure
  • Managing stakeholders and external interfaces
  • Defining and setting development, test, release, update, and support processes
  • Troubleshooting and, where possible, fixing code bugs
  • Monitoring processes during the entire lifecycle for adherence, and updating or creating new processes for improvement
  • Encouraging and building automated processes wherever possible
  • Incident management and root cause analysis
  • Coordination, communication, and collaboration within the team and with customers
  • Striving for continuous improvement and building continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipelines)
  • Mentoring and guiding the team members
  • Monitoring and measuring customer experience and KPIs
  • Managing periodic reporting on the progress to the management and the customer

Khuswant also highlights these ones:

  • Awareness of DevOps and Agile principles
  • Building and setting up new development tools and infrastructure
  • Working on ways to automate and improve development and release processes
  • Ensuring that systems are safe and secure against cybersecurity threats
  • Excellent organisational and time management skills, and the ability to work on multiple projects at the same time
  • Strong problem-solving skills
  • Good attention to detail
  • Working with software developers and software engineers to ensure that development follows established processes and works as intended
  • Planning out projects and being involved in project management decisions
  • Excellent teamwork and communication skills

 

The top skills to work in DevOps

To work in DevOps, Khuswant suggests having knowledge of a cloud platform (AWS, Azure, GCP) and a container orchestration tool (Kubernetes, OpenShift), as well as experience in developing Continuous Integration/Continuous Delivery (CI/CD) pipelines with tools such as Jenkins, Azure DevOps Services, etc.

Moreover, he recommends having good hands-on knowledge of Configuration Management and Infrastructure as Code tools like Puppet, Ansible, Chef, Terraform, etc., and proficient in scripting, and Git and Git workflows.
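
As a rough illustration of the idea behind configuration management tools like Ansible, Puppet, or Chef (declare the desired state, change the system only when it deviates), here is a hedged, self-contained sketch; the file name and setting are invented examples:

```python
# Sketch of the idempotence idea behind configuration management tools:
# declare the desired state, and only change the system when it deviates.
# The file name and setting below are invented examples.
import os
import tempfile

def ensure_line(path, line):
    """Ensure `line` is present in the file at `path`.
    Returns True if a change was applied, False if already compliant."""
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False  # desired state already holds: re-running is a no-op
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

cfg = os.path.join(tempfile.mkdtemp(), "sshd_config")
first = ensure_line(cfg, "PermitRootLogin no")   # True: change applied
second = ensure_line(cfg, "PermitRootLogin no")  # False: idempotent re-run
```

Running the same declaration twice changes nothing the second time, which is what lets these tools be applied repeatedly and safely across a fleet.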

Bolvin adds that in order to work in DevOps, you need to not only be attentive to details, have the enthusiasm and eagerness to continuously learn and evolve with technology, but also to have an eye for identifying bottlenecks and replacing them with automated processes.

He continues by saying that it is good to be able to collaborate with multiple teams (e.g. contributing and maintaining a wiki) and have the willingness and zeal to impart knowledge to fellow team members.

Finally, he suggests working with cloud technologies, having knowledge and application of a scripting language, as well as awareness of critical concepts in DevOps and Agile principles.

 

How to move into DevOps

DevOps engineering is a hot career with many rewards, Khuswant points out. A DevOps engineer gets an enormous opportunity to work on a wide variety of projects and DevOps tools, which is very satisfying. Begin by learning the fundamentals, practices, and methodologies of DevOps. Understand the “why” behind DevOps before jumping into the tools.

A DevOps engineer’s main goal is to increase speed and maintain or improve quality across the entire software development lifecycle (SDLC) to provide maximum business value, automating the development lifecycle with various DevOps tools.

Therefore, he recommends reading articles, watching YouTube videos, and going to local Meetup groups or conferences to become a part of the welcoming DevOps community, where you’ll learn from the mistakes and successes of those who came before you.

Bolvin also suggests understanding DevOps principles and methodologies alongside identifying gaps you can bridge in order to speed up the software build & release process. It’s crucial to understand the KPIs of DevOps and more so, align them to how you can contribute by upskilling yourself.

 

Advice for future DevOps engineers 

Khuswant says that DevOps engineers need to know a wide spectrum of technologies to do their jobs effectively. Whatever your background, here are some fundamental technologies you’ll need to use and understand as a DevOps engineer:

  • Operating system administration – Linux/Windows
  • Scripting
  • Cloud
  • Containers

Bolvin advises to:

  • Make yourself familiar with cloud technologies
  • Understand what DevOps entails
  • Work on your communication and collaboration skills as they will definitely be tested
  • Be open to trying new technologies and don’t be afraid to fail as that’s how you will learn
  • Be willing and eager to share your knowledge as that’s critical to propagate the DevOps culture

 

Special thanks to Khuswant Singh & Bolvin Fernandes for their insights on the topic!

The post Working in DevOps appeared first on DevOps Online North America.

Design thinking tools to boost your DevOps journey – part 2
https://devopsnews.online/design-thinking-tools-to-boost-your-devops-journey-part-2/ | 4 November 2021

Isaac Perez Moncho was at the National DevOps Conference 2021 at the British Museum a few weeks ago and gave a talk about how to use design thinking tools to boost your DevOps journey. Here is the second part of this talk.

In part 1, we followed William Blacksmith on his collaboration journey. In part 2, we will find out what these three techniques are called in the 21st century and understand how they can help you increase collaboration within your organisation.

 

Before we start, we should review the impact of the traditional lack of collaboration between platform teams and product teams.

The feared organisational silos are an example of a lack of collaboration. They had two negative consequences for platform teams:

  1. Teams had to support applications they knew little about.
  2. They created platform services that were suboptimal for product teams.

The first consequence is better addressed by following the “you build it, you run it model”, which is out of the scope of this article. The second consequence, creating suboptimal services for product teams, is addressed through better collaboration and using techniques like the ones Will used.

The organisational and business impact of suboptimal platform services can be substantial. Lower performance from product teams means a slower response to market changes and lower capability to deliver business plans. The negative impact is likely to compound in the current hiring environment, with no shortage of companies with excellent platform services like Netflix. Software engineers expect better from the services they use. Having outdated tools and services will frustrate them, making them more likely to change jobs.

In my previous role, the platform team created a “self-service” monitoring service, complete with a 20-page installation guide. It did not make product teams happy.

The goal of Will’s techniques is to increase collaboration between platform teams and product teams. Better collaboration results in better tools, better relationships between teams, and more satisfied engineers.

Now that we know the benefits of Will’s techniques, we can get into them in more detail.

 

User Surveys 

The first technique Will used was User Surveys.

What is it?

A low-effort technique to asynchronously gather feedback or requirements.

How is it done?

Ideally, user surveys are conducted using tools like SurveyMonkey, Google Forms, or other survey tools. They can also be done via Slack or Email.

My personal preference is for short surveys, three to five questions, using a survey tool. Short surveys increase the response rate by reducing mid-survey dropouts. A tool like Google Forms enables data gathering and a better analysis of the responses.

Three questions you can use to get started are:

  1. Would you recommend our services?
  2. What would you improve?
  3. What would you keep?

The questions can be geared towards one service or to all services managed by the platforms team.
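
As a toy illustration of analysing the responses, here is a short sketch that tallies invented answers to the three starter questions; the data and field names are assumptions, not real survey output:

```python
# Tallying invented answers to the three starter survey questions; the
# response data and field names are assumptions for illustration only.
from collections import Counter

responses = [
    {"recommend": "yes", "improve": "docs", "keep": "CI templates"},
    {"recommend": "yes", "improve": "onboarding", "keep": "CI templates"},
    {"recommend": "no", "improve": "docs", "keep": "dashboards"},
]

# Share of users who would recommend the platform services.
recommend_rate = sum(r["recommend"] == "yes" for r in responses) / len(responses)

# Most frequently requested improvement across all responses.
top_improvement = Counter(r["improve"] for r in responses).most_common(1)[0][0]
```

Even three questions yield a trackable number (the recommend rate) and a prioritised improvement backlog, which is why tools that export structured responses make the analysis easier than a Slack thread.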

Why is it used?

User surveys are low effort for both the creator and the respondent and can be run frequently. Surveys help create a continuous feedback loop and show users that you care about their input.

Considerations?

User surveys can feel impersonal and cold, which can garner more honest feedback and requirements. However, they will not create strong personal relationships between the teams.

 

Shadowing

The second technique Will used was Shadowing.

What is it?

Immersion in the users’ journey.

How is it done?

It involves engineers (DevOps) observing users (software engineers).

The DevOps engineers will observe the software engineers while they use the platforms’ tools. The observers take notes and ask questions when required while taking into account the observer effect.

Why is it used?

Shadowing makes the pain points of the tools and processes visible and fosters relationships between individuals.

Considerations?

Adds a personal dimension to the feedback gathering process.

It can take more effort to use than user surveys. However, it can also increase collaboration by establishing or improving personal relationships between teams and individuals. It requires coordination between individuals, and remote shadowing can lower the usefulness of the technique.

 

Idea Generation

The third technique Will used was Idea Generation.

What is it?

A brainstorming session with plenty of post-it notes that includes software and DevOps engineers. Engineers generate ideas to solve the problem at hand. The objective is to generate a solution that the platforms teams can implement to solve a problem experienced by the product teams.

Idea generation techniques can be used to structure and inspire the participants. Some of the techniques include mind-mapping, SWOT analysis, and Six Thinking Hats.

How is it done?

Invite representatives of all involved parties, have post-its ready, and select an idea generation technique.

After gathering enough ideas, spend time pruning and consolidating them. The organiser, or an assigned person, will own the actions and next steps to develop the ideas generated.

Why is it used?

There are several benefits: a broader set of solutions, ice-breakers between teams, deep user understanding, increased collaboration, and relationship building.

Considerations?

Idea generation can be the most powerful technique in terms of enabling collaboration, but it is also the riskiest and the one that requires the most effort. The session must be conducted with some structure and produce an actionable output that will be considered when creating the solution.

 

Now you have two techniques you can use straight away to start fostering collaboration between teams, User Surveys and Shadowing, and one that can bring it to the next level, Idea Generation.

To finish, I’d like to recommend the book This Is Service Design Thinking. It contains the above and many more techniques, case studies, and resources that you can use to think about user involvement when creating your platform services.

 

Article written by Isaac Perez Moncho, Head Of Infrastructure at Lending Works

The post Design thinking tools to boost your DevOps journey – part 2 appeared first on DevOps Online North America.

Design thinking tools to boost your DevOps journey – part 1
https://devopsnews.online/design-thinking-tools-to-boost-your-devops-journey-part-1/ | 26 October 2021

Isaac Perez Moncho was at the National DevOps Conference 2021 at the British Museum two weeks ago and gave a talk about how to use design thinking tools to boost your DevOps journey. Here is the first part of this talk:

 

The following is the story of William Blacksmith, a Middle Ages toolmaker.

Will works making tools for the shoemakers of the kingdom. You may think making tools for shoemakers is not worth writing about; however, in this kingdom, shoes are a sure way to gain influence in the Royal Court. The King loves his sneakers.

Being an enabler of a kingdom’s influential sector, Will wants to improve his tools to increase the impact his customers have. He is very curious and likes trying new things all the time. When he hears complaints from the shoemakers about how antiquated their tools are, Will starts thinking about how to create a new generation of tools for his customers.

 

What if I asked the shoemakers about the tools we make for them?

No one in the kingdom has ever asked any customer what they thought about the tools they used. Will is not completely sure the shoemakers will be very collaborative. He starts thinking about questions to ask the shoemakers, and after some time he comes up with the following three:

Would you recommend our tools to shoemakers in our neighboring kingdom?

What would you keep from our tools?

What would you change?

Will believes those three questions are a good start because they give him some feedback, while not taking much time from the shoemakers. This should result in a good response rate. He has a problem, though.

The dancing season is about to start, and he and the shoemakers are very busy. He cannot spend two days visiting each of his customers in person. What can he do? He is doubly lucky: not only has a printing press opened up next door, but he and all his customers know how to read and write! Not likely in those times, but very convenient for me and my story.

He decides to print the three questions, send his apprentice to deliver them to all his customers, and tell them he will be back one week later to collect the answers.

Will is amazed by the responses and the suggestions to improve the tools. Some are very simple, yet he had not thought of them before. Something unexpected also happened: some shoemakers left comments on the sides of the pages thanking him for giving them a way to express their issues.

Encouraged by the responses, he is eager to get better quality feedback and build a better relationship with the shoemakers.

 

What if I went in person and observed the shoemakers using their tools?

Will’s head is spinning trying to find better ways to improve his tools. With his experience, direct observation, and a dialogue with the users, the quality of feedback would increase substantially.

Now he has a problem: who is he going to visit? He has too many customers, and visiting all of them would take too much time. He decides to start with the shoemakers who left thank-you notes on the previous questionnaire and with whom he has a better relationship. Will settles on three customers, sending his apprentice to ask them for a good time to spend an hour observing some of their processes.

The visits are an eye-opener:

“Why do you use this tool like this? It’s supposed to be used this way.”

“We know, Will. But if you use it like it’s intended it takes too long to get the leather cut.”

“What? Did you set this up three hours ago?”

“Yes, Will, the tool and the sole need to settle for three hours before they can be put together.”

Will is mind-blown – if he knew about emojis, he would use one now. Many of his preconceived notions about how his tools were being used have been rendered useless. He heads back to his workshop, his head racing with ideas about how to improve his tools.

After the success of getting ideas from others, Will decides that before he starts tackling some of the biggest challenges he is going to work with his customers.

 

What if I invited some of the shoemakers here to my workshop and we discussed together how to tackle a challenge?

He prepares food, wine, and some small square pieces of parchment. When everyone arrives, Will presents them with a problem some of the shoemakers had and asks for ideas from everyone.

After several ideas are discarded, a few promising ones emerge. Will tells the shoemakers he will start making some prototypes of tools to tackle this challenge, and he will send them so they can try the prototypes. Everyone is so excited that they continue drinking, eating, and talking about tools and how to solve future problems.

We are not blacksmiths, we don’t create tools for shoemakers, and most of the time, we are not in the Middle Ages. However, we create tools and systems for expensive engineers, and we want to be proud of the tools we create for them.

In part 2, we will learn how and when to use Will’s techniques, as well as their modern names!

 

Article written by Isaac Perez Moncho, Head Of Infrastructure at Lending Works

The post Design thinking tools to boost your DevOps journey – part 1 appeared first on DevOps Online North America.

DevOps: A dungeon master’s guide
https://devopsnews.online/devops-a-dungeon-masters-guide/ | 20 December 2019

The post DevOps: A dungeon master’s guide appeared first on DevOps Online North America.

Long read

DevOps is the offspring of agile software development – born from the need to keep up with the increased software velocity and throughput agile methods have achieved. Advancements in agile culture and methods over the last decade exposed the need for a more holistic approach to the end-to-end software delivery lifecycle. 

What is Agile Software Development?

Agile Development is an umbrella term for several iterative and incremental software development methodologies. The most popular agile methodologies include Scrum, Kanban, Scaled Agile Framework (SAFe), Lean Development, and Extreme Programming (XP).

While each of the agile methodologies is unique in its specific approach, they all share a common vision and core values (see the Agile Manifesto). They all fundamentally incorporate iteration and the continuous feedback that it provides to successively refine and deliver a software system. They all involve continuous planning, continuous testing, continuous integration, and other forms of continuous evolution of both the project and the software.

What Are the Challenges DevOps Solves?

Prior to DevOps, application development teams were in charge of gathering business requirements for a software program and writing code. Then a separate QA team would test the program in an isolated development environment, check whether requirements were met, and release the code for operations to deploy. The deployment teams were further fragmented into siloed groups like networking and database. Each time a software program is “thrown over the wall” to an independent team, it adds bottlenecks. The problem with this paradigm is that when the teams work separately:

  • Dev is often unaware of QA and Ops roadblocks that prevent the program from working as anticipated.
  • QA and Ops are typically working across many features and have little context of the business purpose and value of the software.
  • Each group has opposing goals that can lead to inefficiency and finger-pointing when something goes wrong.

DevOps addresses these challenges by establishing collaborative cross-functional teams that share responsibility for maintaining the system that runs the software and for preparing the software to run on that system, with increased quality feedback and automation.

What Is the Goal of DevOps?

The goal is to improve collaboration between all stakeholders, from planning through delivery, and to automate the delivery process in order to:

  • Improve deployment frequency
  • Achieve faster time to market
  • Lower failure rate of new releases
  • Shorten lead time between fixes
  • Improve mean time to recovery
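
A rough sketch of how a few of these goal metrics could be computed from a deployment log; the records and the seven-day window are invented sample data, and real numbers would come from your CI/CD tooling:

```python
# Computing some of the goal metrics above from an invented deployment log
# over an assumed seven-day window (sample data, not real measurements).
from datetime import datetime

deploys = [
    {"at": datetime(2022, 1, 3), "failed": False},
    {"at": datetime(2022, 1, 5), "failed": True,
     "recovered": datetime(2022, 1, 5, 2)},  # recovered two hours later
    {"at": datetime(2022, 1, 7), "failed": False},
    {"at": datetime(2022, 1, 10), "failed": False},
]

window_days = 7
frequency = len(deploys) / window_days  # deployment frequency per day

failures = [d for d in deploys if d["failed"]]
failure_rate = len(failures) / len(deploys)  # failure rate of new releases

# Mean time to recovery, in hours, averaged over failed releases.
mttr = sum((d["recovered"] - d["at"]).total_seconds() / 3600
           for d in failures) / max(len(failures), 1)
```

Tracking these few numbers over time is enough to tell whether collaboration and automation changes are actually moving the goals above.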

A Common Pre-DevOps Scenario

The software team meets prior to starting a new software project. The team includes developers, testers, operations, and support professionals. This team plans how to create working software that is ready for deployment.

Where Are You on the DevOps Continuum?

The DevOps continuum is a helpful way to look at the different aspects of DevOps. The bottom horizontal axis represents what people perceive DevOps to fundamentally be focused on. Some people adamantly feel that DevOps should focus on culture more than tools, while others tend to value tools over culture.

The vertical axis depicts the three levels of the DevOps delivery chain: continuous integration, continuous delivery, and continuous deployment. The DevOps community refers to organizations in the top right of the DevOps continuum as pink unicorns because there are currently so few of them that you don’t see them in the wild very often. Popular examples of these unicorns are companies like Netflix, Etsy, Amazon, Pinterest, Flickr, IMVU, and Google. In a recent poll, participants indicated where their organizations fit on the DevOps continuum.

Thought leaders, coaches, and bloggers often portray a vision of DevOps in the upper-right corner, and they will often have a strong bias towards either DevOps culture or automation tools. While it is fine to have esoteric debates about whether DevOps culture or tools are more important, the reality is that you can’t have DevOps without tools, and all the tools in the world won’t help if you don’t have a strong supporting culture.

DevOps can be a blend of culture, tools and maturity that make sense for your organization and what makes sense will most likely evolve over time. The important thing is to continually strive to break down the walls and bottlenecks between the phases of software delivery by improving collaboration and automation. In the following sections, we dive deeper into each aspect of the DevOps continuum to help you better understand where you fit.

What Are the Phases of DevOps?

There are several phases to DevOps maturity; here are a few of the key phases you need to know.

Waterfall Development

Before continuous integration, development teams would write a bunch of code for three to four months. Then those teams would merge their code in order to release it. The different versions of code would be so different and have so many changes that the actual integration step could take months. This process was very unproductive.

Continuous Integration

Continuous integration is the practice of quickly integrating newly developed code with the main body of code that is to be released. Continuous integration saves a lot of time when the team is ready to release the code.

DevOps didn’t come up with this term. Continuous integration is an agile engineering practice originating from the Extreme Programming methodology. The term has been around for a while, but DevOps has adopted it because automation is required to successfully execute continuous integration. Continuous integration is often the first step down the path toward DevOps maturity.

The continuous integration process from a DevOps perspective involves checking your code in, compiling it into usable (often binary executable) code and running some basic validation testing.

Continuous Delivery

Continuous delivery is an extension of continuous integration. It sits on top of continuous integration. When executing continuous delivery, you add additional automation and testing so that you don’t just merge the code with the main code line frequently, but you get the code nearly ready to deploy with almost no human intervention. It’s the practice of having the code base continuously in a ready-to-deploy state.

Continuous Deployment

Continuous deployment, not to be confused with continuous delivery, is DevOps nirvana: the most advanced evolution of continuous delivery. It’s the practice of deploying all the way into production without any human intervention.
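
The distinction between the three “continuous” levels can be sketched as a simple classification; the build dictionary shape is an invented illustration, not any real tool’s API:

```python
# Sketch contrasting the three "continuous" levels; the build dictionary
# shape is an invented illustration, not a real tool's API.

def pipeline_level(build):
    """Classify how far along the CI -> delivery -> deployment chain a build gets."""
    if not build["integrated_and_tested"]:
        return "none"
    if not build["ready_to_deploy"]:
        return "continuous integration"   # merged and validated frequently
    if not build["auto_deployed"]:
        return "continuous delivery"      # deployable; a human pushes the button
    return "continuous deployment"        # reaches production with no human step

level = pipeline_level({
    "integrated_and_tested": True,
    "ready_to_deploy": True,
    "auto_deployed": False,
})
# level == "continuous delivery"
```

The only difference between the last two levels is who triggers the production deploy, which is why continuous deployment is described as the most advanced evolution of continuous delivery.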

Teams that practice continuous deployment don’t deploy untested code; instead, newly created code runs through automated testing before it gets pushed out to production. The code release typically only goes to a small percentage of users, and there’s an automated feedback loop that monitors quality and usage before the code is propagated further.
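
The small-percentage rollout described above is often implemented with deterministic hashing, so the same user consistently sees the same version. A hedged sketch, with invented user IDs:

```python
# Sketch of percentage-based rollout: hash each user ID into a stable
# bucket so a fixed slice of users sees the new release. User IDs invented.
import hashlib

def in_canary(user_id, percent):
    """Deterministically place roughly `percent`% of users in the canary."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in the range 0-99
    return bucket < percent

# Only users whose bucket falls under the threshold get the new code;
# automated monitoring then decides whether to widen the rollout.
canary_users = [u for u in ("alice", "bob", "carol", "dave") if in_canary(u, 5)]
```

Because the bucket depends only on the user ID, raising the percentage widens the canary without reshuffling users who already have the new release.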

Only a very small number of companies truly practice continuous deployment. Netflix, Etsy, Amazon, Pinterest, Flickr, IMVU and Google are popular examples.

DevOps nirvana is often not the end goal for most enterprises; instead, they focus on moving toward continuous delivery.

What Are the Values of DevOps?

DevOps focuses heavily on establishing a collaborative culture and improving efficiency through automation with DevOps tools. While some organizations and people tend to value one more than the other, the reality is it takes a combination of both culture and tools to be successful. Here’s what you need to know about these two DevOps values.

DevOps Culture

DevOps culture is characterized by increased collaboration, decreasing silos, shared responsibility, autonomous teams, improving quality, valuing feedback and increasing automation. Many of the DevOps values are agile values as DevOps is an extension of agile.

Agile methods are a more holistic way of delivering software. Agile development teams measure progress in terms of working software. Product owners, developers, testers and UX people work closely together with the same goals.

DevOps simply adds the operations mindset, and perhaps a team member with some of those responsibilities, to the agile team. Whereas before DevOps progress was measured in terms of working software, with DevOps it is measured in terms of working software in the customer’s hands.

To achieve this, Dev and Ops must break down the silos and collaborate with one another, share responsibility for maintaining the system that runs the software, and prepare the software to run on the system with increased quality feedback and delivery automation.

DevOps Tools

DevOps tools consist of configuration management, test and build systems, application deployment, version control and monitoring tools. Continuous integration, continuous delivery and continuous deployment require different tools. While all three practices can use the same tools, you will need more tools as you progress through the delivery chain.

What Tools Are Used in DevOps?

Earlier we briefly discussed some of the tools used in DevOps; here are some of the key tools and practices you need to know.

Source Code Repository

A source code repository is a place where developers check in and change code. The source code repository manages the various versions of code that are checked in, so developers don’t write over each other’s work.

Source control has probably been around for forty years, but it’s a major component of continuous integration. Popular source code repository tools are Git, Subversion, Cloudforce, Bitbucket and TFS.

Build Server

The build server is an automation tool that compiles the code in the source code repository into an executable code base. Popular tools are Jenkins, SonarQube and Artifactory.

Configuration Management

Configuration management defines the configuration of a server or an environment. Popular configuration management tools are Puppet and Chef.
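The "define the configuration" idea is declarative at heart: you state the desired state, and the tool converges the server toward it. A minimal sketch of that principle follows; this is not Puppet or Chef syntax, just an illustration in Python:

```python
# Declarative configuration in miniature: state the desired configuration,
# then converge the actual state toward it. Tools like Puppet and Chef work
# on this principle at far greater scale.

desired = {
    "packages": {"nginx", "openssl"},
    "services_running": {"nginx"},
}

def converge(actual, desired):
    """Return the actions needed to reach `desired`; running twice is a no-op."""
    actions = []
    for pkg in sorted(desired["packages"] - actual["packages"]):
        actions.append(f"install {pkg}")
        actual["packages"].add(pkg)
    for svc in sorted(desired["services_running"] - actual["services_running"]):
        actions.append(f"start {svc}")
        actual["services_running"].add(svc)
    return actions

server = {"packages": {"openssl"}, "services_running": set()}
print(converge(server, desired))  # ['install nginx', 'start nginx']
print(converge(server, desired))  # [] -- idempotent, already converged
```

Idempotence is the key property: applying the same configuration repeatedly leaves a correctly configured server unchanged.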

Virtual Infrastructure

Amazon Web Services and Microsoft Azure are examples of virtual infrastructures. Virtual infrastructures are provided by cloud vendors that sell infrastructure as a service (IaaS) or platform as a service (PaaS). These infrastructures have APIs that allow you to programmatically create new machines and configure them with configuration management tools such as Puppet and Chef.

There are also private clouds. For example, VMware has vCloud. Private virtual infrastructures enable you to run a cloud on top of the hardware in your data center.

Virtual infrastructures, combined with automation tools, give organizations practicing DevOps the ability to configure a server without any fingers on the keyboard. If you want to test your brand-new code, you can automatically send it to your cloud infrastructure, build the environment, and run all of the tests without human intervention.

Test Automation

Test automation has been around for a long time. DevOps testing focuses on automated testing within your build pipeline to ensure that by the time you have a deployable build, you are confident it is ready to be deployed. You can’t reach continuous delivery, where you’re confident your code is deployable without human intervention, unless you have an extensive automated testing strategy. Popular tools are Selenium and Watir.
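Here is the smallest possible version of such a gate, using Python's built-in unittest framework; the `discount` function is invented purely for illustration (Selenium and Watir drive a browser for UI tests, but the principle is the same):

```python
import unittest

# A build is "deployable" only when its automated tests pass; this is a
# minimal version of that gate, with an invented function under test.

def discount(price, percent):
    """Function under test: apply a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(discount(100.0, 20), 80.0)

    def test_zero_discount(self):
        self.assertEqual(discount(59.99, 0), 59.99)

suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("deployable:", result.wasSuccessful())
```

In a pipeline, `wasSuccessful()` is exactly the signal that decides whether the build moves forward or is returned to the developers.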

Pipeline Orchestration

A pipeline is like a manufacturing assembly line that happens from the time a developer says, “I think I’m done,” all the way to the time that the code gets deployed in the production or a late-stage pre-production environment.
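The assembly-line metaphor maps naturally onto code: a list of stages, run in order, stopping at the first failure. A toy sketch, with invented stage names and checks:

```python
# The assembly line as code: stages run in order, and the change stops at the
# first stage that fails.

def run_pipeline(change, stages):
    for name, stage in stages:
        if not stage(change):
            return f"{change}: stopped at '{name}'"
    return f"{change}: deployed"

stages = [
    ("build", lambda c: True),
    ("unit tests", lambda c: "bug" not in c),   # toy failure condition
    ("integration tests", lambda c: True),
    ("deploy to staging", lambda c: True),
    ("acceptance tests", lambda c: True),
]

print(run_pipeline("feature-42", stages))    # feature-42: deployed
print(run_pipeline("bugfix-bug-7", stages))  # bugfix-bug-7: stopped at 'unit tests'
```

Real orchestrators add parallelism, approvals, and notifications, but the core contract is the same: no change reaches production without passing every stage before it.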

Unifying Enterprise Software Development and Delivery

VersionOne VS unifies agile application lifecycle management and DevOps, providing a full picture of your entire software delivery pipeline in a single platform. VersionOne Continuum for DevOps is an enterprise continuous delivery solution for automating, orchestrating, and visualizing the flow of change throughout the software delivery cycle.

Best Practice

Understand the collaboration and shared tools strategy for the Dev, QA, and infrastructure automation teams

DevOps teams need to come up with a common tools strategy that lets them collaborate across development, testing, and deployment (see Figure 1). This does not mean that you should spend days arguing about tooling; it means you work on a common DevOps strategy that includes:

  • Processes
  • Communications and collaboration planning
  • Continuous development tools
  • Continuous integration tools
  • Continuous testing tools
  • Continuous deployment tools
  • Continuous operations and Cloud Ops tools

Coming up with a common tools strategy does not drive tool selection, at least not at this point. It means picking a common, shared strategy that all can agree upon and that reflects your business objectives for DevOps.

The tool selection process often drives miscommunication within teams. A common DevOps tools strategy must adhere to a common set of objectives while providing seamless collaboration and integration between tools. The objective is to automate everything: Developers should be able to send new and changed software to deployment and operations without humans getting in the way of the processes.

Use tools to identify the plans

No ad hoc work or changes should occur outside of the DevOps process, and DevOps tooling should capture every request for new or changed software. This is different from logging the progress of software as it moves through the processes. DevOps provides the ability to automate the acceptance of change requests that come in either from the business or from other parts of the DevOps teams.

Examples include changing software to accommodate a new tax model for the business, or changing the software to accommodate a request to improve performance of the database access module.

Use tools to log metrics on both manual and automated processes

Select tools that can help you understand the productivity of your DevOps processes, both automated and manual, and to determine if they are working in your favor. You need to do two things with these tools. First, define which metrics are relevant to the DevOps processes, such as speed to deployment versus testing errors found. Second, define automated processes to correct issues without human involvement. An example would be dealing with software scaling problems automatically on cloud-based platforms.
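Computing such metrics from logged records is straightforward; a sketch with invented data, measuring average speed to deployment and the share of deployments in which automated testing found errors:

```python
# Invented deployment records; in practice these would come from your pipeline
# tooling's logs.

deployments = [
    {"lead_time_hours": 2.0, "test_errors": 0},
    {"lead_time_hours": 5.0, "test_errors": 3},
    {"lead_time_hours": 1.0, "test_errors": 1},
]

def metrics(records):
    n = len(records)
    return {
        # Speed to deployment: average hours from commit to deploy
        "avg_lead_time_hours": sum(r["lead_time_hours"] for r in records) / n,
        # Share of deployments where testing found at least one error
        "error_rate": sum(1 for r in records if r["test_errors"]) / n,
    }

m = metrics(deployments)
print(m)
```

Once the numbers are computed automatically, they can also trigger automated responses, such as scaling actions on cloud platforms, without human involvement.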

Implement test automation and test data provisioning tooling

Test automation is more than just automated testing; it’s the ability to take code and data and run standard testing routines to ensure the quality of the code, the data, and the overall solution. With DevOps, testing must be continuous. The ability to toss code and data into the process means you need to place the code into a sandbox, assign test data to the application, and run hundreds — or thousands — of tests that, when completed, will automatically promote the code down the DevOps process, or return it back to the developers for rework.
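That sandbox, test, then promote-or-return cycle can be sketched as follows; the data and checks are invented for illustration:

```python
# Sketch of the sandbox step: provision known test data, run the checks
# against the code, then either promote the build or return it for rework.

def provision_test_data():
    return [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]

def run_suite(code_under_test, data):
    return [code_under_test(row) for row in data]

def sandbox(code_under_test):
    data = provision_test_data()  # fresh, known data on every run
    results = run_suite(code_under_test, data)
    return "promote" if all(results) else "return-for-rework"

good_code = lambda row: isinstance(row["id"], int)   # passes on all records
bad_code = lambda row: row["name"] == "alice"        # fails on the second

print(sandbox(good_code), "/", sandbox(bad_code))  # promote / return-for-rework
```

Provisioning the data inside the sandbox, rather than pointing tests at a shared database, is what lets hundreds or thousands of tests run without interfering with each other.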

Perform acceptance tests for each deployment tooling

Part of the testing process should define the acceptance tests that will be a part of each deployment, including levels of acceptance for the infrastructure, applications, data, and even the test suite that you’ll use. For the tool set selected, those charged with DevOps testing processes should spend time defining the acceptance tests and ensuring that the tests meet the selected acceptance criteria.

These tests may be changed at any time by development or operations. And as applications evolve over time, you’ll need to bake new requirements into the software, which in turn should be tested against these new requirements. For example, you might need to test changes to compliance issues associated with protecting certain types of data, or performance issues to ensure that the enterprise meets service-level agreements.

Ensure continuous feedback between the teams to spot gaps, issues, and inefficiencies

Finally, you’ll need feedback loops to automate communication between the tests that spot issues and the people who can resolve them, and that process needs to be supported by your chosen tool. The right tool must identify the issue using either manual or automated mechanisms, and tag the issue with the artifact so the developers or operators understand what occurred, why it occurred, and where it occurred.

The tool should also help to define a chain of communications with all automated and human players in the loop. This includes an approach to correct the problem in collaboration with everyone on the team, a consensus as to what type of resolution you should apply, and a list of any additional code or technology required. Then comes the push to production, where the tool should help you define tracking to report whether the resolution made it through automated testing, automated deployment, and automated operations.

Pick automation tools that execute the flow of work across the project

The major tool categories include: 

  • Version control: Tools that track software versions as they are released, whether manually or automatically. This means numbering versions, as well as tracking the configuration and any environmental dependencies that are present, such as the type, brand, and version of the database; the operating system details; and even the type of physical or virtual server that’s needed. This category is related to change management tools.
  • Build and deploy: Tools that automate the building and deployment of software throughout the DevOps process, including continuous development and continuous integration.
  • Functional and non-functional testing: Tools that provide automated testing, including best practices listed above. Testing tools should provide integrated unit, performance, and security testing services. The objective should be end-to-end automation.
  • Provisioning and change management: Tools to provision the platforms needed for deployment of the software, as well as monitor and log any changes occurring to the configuration, the data, or the software. These tools ensure that you can get the system back to a stable state, no matter what occurs.

Written by Senthilkumar Sivakumar, cloud architect and DevOps specialist

The post DevOps: A dungeon master’s guide appeared first on DevOps Online North America.

]]>
DevOps report finds online database development is a key technical practice https://devopsnews.online/devops-report-finds-online-database-development-is-a-key-technical-practice/ Tue, 04 Sep 2018 09:08:02 +0000 http://www.devopsonline.co.uk/?p=13896 The 2018 Accelerate State of DevOps Report finds database development is a key technical practice which can drive high performance in DevOps

The post DevOps report finds online database development is a key technical practice appeared first on DevOps Online North America.

]]>
The 2018 Accelerate State of DevOps Report finds database development is a key technical practice which can drive high performance in DevOps. This matches similar findings in research from Redgate Software, which sponsored the report and provided input.

The longest-running research of its kind, the Accelerate report from DevOps Research and Assessment (DORA) has consistently shown that higher software delivery performance delivers powerful business outcomes.

Interestingly, a new theme in this year’s report was to identify the technical practices that drive higher performance and unlock competitive advantage. They include monitoring and observability, continuous testing, ‘shifting left’ on security and, importantly, database change management.

Software delivery performance

In terms of DevOps itself, the report shows that the highest performing organisations which adopt DevOps release changes 46 times more frequently, have a change failure rate that is 7 times lower, and are able to recover from breaking changes 2,604 times faster.

Crucially, the lead time from committing changes to being able to deploy them is less than one hour in the highest performing organisations – and between one and six months in low performers. Between 46% and 60% of changes deployed by low performers also require some form of hotfix, rollback, or patch.

Database development has entered the picture because deploying changes to the database is often the bottleneck in software development and slows down releases. To address this, the report investigated which database-related practices help when implementing continuous delivery to improve software delivery performance and availability.

Continuous delivery

The results revealed that teams which do continuous delivery well use version control for database changes and manage them in the same way as changes to the application. It also showed that integrating database development into software delivery positively contributes to performance, with changes to the database no longer slowing processes down or causing problems during deployments.

The starting block is communication, cross-team collaboration and visibility, which echoes Redgate’s own 2018 State of Database DevOps Survey earlier in the year. This showed that 76% of developers are now responsible for both application and database development, and 58% reported their development teams and DBAs work on projects together.

“DevOps has become widely accepted in application development and it’s increasingly being recognised that database development can’t be left behind,” comments Stephanie Herr, Database DevOps Product Manager, Redgate.

“Tools are now available that integrate with and plug into the same version control, continuous integration and release management tools used for applications. More importantly, the business case for adopting DevOps for the database has now also been demonstrated.”

Written from press release by Leah Alger

The post DevOps report finds online database development is a key technical practice appeared first on DevOps Online North America.

]]>
Digital transformation in software testing https://devopsnews.online/digital-transformation-in-software-testing/ Mon, 20 Aug 2018 11:23:59 +0000 http://www.devopsonline.co.uk/?p=13809 Senior Manager for Enterprise Software Quality Assurance at Nedbank, Johan Steyn, shares the first chapter of his book ‘The Business of Software Testing’, a “must read for software quality and testing practitioners”. Will we find you swimming or sinking as the DevOps Tsunami hits?

The post Digital transformation in software testing appeared first on DevOps Online North America.

]]>
Senior Manager for Enterprise Software Quality Assurance at Nedbank, Johan Steyn, shares the first chapter of his book ‘The Business of Software Testing’, a “must read for software quality and testing practitioners”. Will we find you swimming or sinking as the DevOps Tsunami hits?

Software quality management and testing is an exciting career. Our peers in software development often see it as a secondary career choice, but the New World cliff that we are careening towards is forcing change at a pace that few appreciate.

Technological advances such as test automation, cognitive and artificial intelligence, DevOps, the digitalisation of industries that were previously slow to adopt new technologies, and the Internet of Things (IoT) are forcing changes that are breathing new life into the notion of testing as a career choice.

Many test professionals are not equipped with the technical know-how to embrace New World tools and frameworks, and few are ready to grow in a career where the lines between business, development, and testing are continually blurring.

Few test professionals have been trained or exposed to skills that are needed to navigate the map of the New World. We are required to be able to work with our peers in business, speak their language, and explain the process and benefits of testing with business acumen.

Fewer in number still are the test professionals who effectively plan their career paths and who are enabling themselves for the next step in their careers.

Over time, many land in leadership positions where they find themselves doing less testing, and dealing more with team issues, recruiting for and building their teams, budgeting, forecasting, working with external vendors, and navigating the various pitfalls of politics in corporate life.

Most test professionals have never learned how to “sell themselves”. How do you build your brand and promote yourself? Are you seen as a thought leader? Have you been able to establish a network among your peers in the industry?

Many testers dream about launching a test consultancy firm of their own. But many who do so fail within the first year, as they had no clue about the difficulties this choice would introduce. So, how do you start and manage a business? How do you secure funding and control your cash flow? How do you propose your firm’s offerings to new customers?

The Business of Software Testing is a book that introduces these concepts to test professionals. Whether you plan to start your own company, or whether you want to climb the corporate ladder, this book will enable you with the knowledge that is essential to prepare yourself for the next step.

We are racing toward the New World cliff. You can be ready to jump with confidence and to fly to new heights.

The business of software testing

There is a momentous shift taking place in the world of digital technology. Industries and careers that offered sanctuary to many professionals for decades are being disrupted in ways that we may never be able to grasp. Although the news media and industry forums have been shouting this news into our ears for a long time, many of us are oblivious to the dramatic impact and speed at which we are approaching the cliff of innovation.

We are entering a new technological world, a world where only the brave will survive. Who are those brave souls? They have the foresight to understand the massive impact of what is already happening to our world, and have taken the needed steps to survive the coming tsunami. Tsunami is the right word to use here. When a tsunami approaches, we cannot do much to stop the destruction about to hit our homes. But we can heed the warnings from scientists and prepare accordingly. A tsunami moves with great speed and is usually unexpected. As meteorological technology advances, we will have more time to organise when the warning bell sounds. But we will never have enough time. A tsunami wave moves faster than we can imagine.

The DevOps tsunami

Tsunami is the word I have been using for a long time to describe the changes in our digital world and technical careers. Some months back, I published an article on LinkedIn called The DevOps Tsunami, which caused quite a stir among my peers. The article was also picked up by an influential British software testing publication. As a result, many software quality professionals from across the globe contacted me to express their views.

My sincere belief was that my description of the tsunami would echo what many others in our industry already knew and experienced. But I was surprised by the amount of resistance and criticism that filled my inbox. Many who made contact expressed a belief that DevOps and the resultant impact on Software Quality Management were just a fad – another buzzword like agile or scrum – and that it would soon disappear like the sound of a jet plane passing by. They expressed a “been there – done that” view: they have seen the many changes hitting our technological world but have experienced little change in their daily lives as testing practitioners. There are always new tools at our disposal, new buzzwords and new trends. But many are still conducting software testing in a manual way, and they seem to be quite happy with that.

The status-quo

This comfort zone of the status quo was built on personality cults and empires that were carefully manufactured in our corporate environments over the years. These cult leaders may have been good testing professionals in their heyday. But over time they have climbed the corporate ladder and nestled into comfortable careers where change and innovation are the enemy, and where like-minded minions fill the ranks of the teams they manage.

They have managed to become the go-to software guys in their corporate divisions and are the holders of the keys to quality. But to justify their existence, they hold their stakeholders – especially those with the funding on which their kingdoms depend – to ransom. Concepts like automation innovation, cognitive technology and even the expertise of vendor partners are avoided at all costs. Innovation, the reuse of assets and the employment of disruptive thinkers are not welcomed. These things would make their houses, built on sand, crumble.

The testers of tomorrow (today)

The clarion call goes out to the Software Quality and Testing community. What we desperately need TODAY is an army of the “Testers of Tomorrow”. The call goes out to those testing professionals who embrace the coming tsunami with all the change and uncertainty it brings. Nothing would have prepared you for this.

What does the Tester of Tomorrow look like? First of all, it is a testing professional with good technical skills. This is not someone who is bound to a specific tool, framework or methodology. This adaptable tester has allowed himself to be exposed to a variety of the tools of his trade. Exploration, a hunger for growth and innovation are the name of his game.

The Tester of Tomorrow is a real leader. Where many in her trade like to work in the shadows, she operates in the trenches with her team. She drives by her example of commitment and dedication and she sees the strengths in her team not as threats, but as those essential elements that will make her successful, too. She is always keen to promote others and to give praise where it is due.

The Tester of Tomorrow is a commercially savvy leader. He understands that Software Quality Management and Testing is a means to an end. He first and foremost takes into account the business objectives of his customers and stakeholders. He spends time and effort with his team to ensure all are aligned with the business goals of their organisation, and aligns their testing approach and planning to these. He is measured, and measures his team, on the successful realisation of business goals through software quality management.

The Tester of Tomorrow is a shrewd political navigator. She knows that both her and her team’s success rely on her political capital within her organisation. She makes sure that she is connected to the relevant influencers and that she has their ear. She knows that gossip and second-hand information within the corridors of the workplace can scuttle her success. She knows how to promote herself with skilled manoeuvring, and she always ensures that the achievements of her team and the credit due to them are visible to her stakeholders. She recovers from failures gracefully, knowing how to dust herself off and tackle the failure with ownership to exceed expectations.

The Tester of Tomorrow is a reader and a learner. Learning never stops for this leader. He is on the cutting edge of technological advances and innovation because he attends conferences, participates in webinars and spends time reading. He is not a lazy information gatherer. He is also well connected with his peers in the world of Software Quality. He is a voice worth listening to, a thought leader.

The Tester of Tomorrow lives and breathes Software Quality Management. She is not merely a tester at the end of the cycle. She is not seen as the “stepchild of the SDLC”. Her voice and influence are heard from the very outset of a new project or feature being planned. Her peers welcome her opinion and shape their planning around her guidance. She embodies “shift left” as she skillfully practices her craft throughout the software development and release process.

The impossible dream?

What I have just described may seem like a far cry from the reality that most quality professionals experience. Like a plant growing in a pot, they are restricted by their environment. Most organisations – whether end-users of software services such as banks, or even the supposed experts like global vendors – are not aware of or prepared for the tsunami. Your career ambitions as a Tester of Tomorrow may not be realised where you currently work. Many organisations still see software testing as a necessary evil to be avoided at all costs, or at least as a grudge purchase like short-term insurance.

Traditionally, our peers in the software world looked at testers as second-class citizens. Testing was seen as a path for those who did not “make the cut” to become developers. One would never be able to entice a hard-core developer into a career in software testing. The tsunami will force a change here. As we wake up to the tsunami-hit world around us, and as the true role of software quality is recognised in a fast-moving world that introduces massive risk, the Tester of Tomorrow will find her real place.

I see a world where those hardcore, weirdo ponytail developers can be enticed to focus on a career in software quality management. In this world, their technical and development skills will make them the ideal candidates to test software.

Written by Senior Manager for Enterprise Software Quality Assurance at Nedbank, Johan Steyn

The post Digital transformation in software testing appeared first on DevOps Online North America.

]]>
Cloudera announces partnership with AI simulation firm https://devopsnews.online/cloudera-announces-partnership-with-ai-simulation-firm/ Wed, 25 Jul 2018 11:15:26 +0000 http://www.devopsonline.co.uk/?p=13563 Cloudera, the modern platform for machine learning and analytics, optimised for the cloud, announces its partnership with Simudyne Technology, an artificial intelligence (AI) simulation firm

The post Cloudera announces partnership with AI simulation firm appeared first on DevOps Online North America.

]]>

Cloudera, the modern platform for machine learning and analytics, optimised for the cloud, announced its partnership with Simudyne Technology, an artificial intelligence (AI) simulation firm.

The two will jointly bring the first computational simulation platform built for big data to the financial sector.

Now banks and financial companies can design and run any detailed simulation model on a massive scale. This will give CEOs and their senior business leaders the ability to make dramatically better decisions more quickly than ever, whether in the cloud, on premises, or in a hybrid environment.

“Financial firms can use our flexible machine learning platform to store more volumes of heterogeneous data than they could before and subject it to all sorts of different processing and analysis frameworks,” commented Tom Reilly, Chief Executive Officer at Cloudera.

Understanding arising risks

“Combined with Simudyne, this modern approach to gaining insights and understanding risk helps financial institutions make better predictions and business decisions as economic scenarios emerge.”

Simudyne’s core technology is the only simulation platform certified to run on Cloudera Enterprise. With Cloudera’s modern platform, financial firms can manage credit risk, enable scalable stress testing tools, drive better customer insight, and promote economic stability.

“Simudyne is groundbreaking technology currently being leveraged across Barclays, and enables us to model multiple scenarios on huge data sets so we can understand our risk, exposure and options,” added Jes Staley, Chief Executive Officer at Barclays Bank.


The power of machine learning

Computational simulation is a key set of analytic tools that provide methods for studying a wide variety of models of real-world systems. With detailed simulations, banks can incorporate the complexities required to model the real world.

Agent-Based Models (ABMs) are a critical new tool for researchers, risk managers, marketing experts, business executives, and policymakers because they provide a robust and holistic view of possible future outcomes.

“Using the power of Cloudera machine learning and Simudyne’s computational simulation software, business leaders can now easily run ‘what if’ scenarios on their existing infrastructure,” continued Justin Lyon, Chief Executive Officer at Simudyne.

“Together, we offer financial services the first simulation toolkit that accurately captures complex feedback and amplification effects that give rise to systemic risk.”

Written from press release by Leah Alger

The post Cloudera announces partnership with AI simulation firm appeared first on DevOps Online North America.

]]>
Testers knowledge towards automation skills https://devopsnews.online/testers-knowledge-towards-automation-skills/ Tue, 17 Jul 2018 13:26:14 +0000 http://www.devopsonline.co.uk/?p=13431 If you look around the testing landscape you will see that manual testing roles are disappearing, but not manual testing itself. So, do we still need the skill sets of a manual tester? Or is it all about automation testing?

The post Testers knowledge towards automation skills appeared first on DevOps Online North America.

]]>
If you look around the testing landscape you will see that manual testing roles are disappearing, but not manual testing itself.

Despite this, the skill set of a manual tester is still needed. According to Brijesh Deb, Agile Testing Evangelist and High Tech Test Manager at Sogeti, this is simply because it is impossible to have zero manual intervention. Every piece of software, be it a mobile app or a component of a NASA rocket, has to go through some kind of manual test at some level.

Nevertheless, it appears that not all testers have the skill set to carry out automation tests.

Deb commented: “Test automation is a far bigger animal with a much greater scope, where everything from the inception to the design to the coding is done through automation.

Testing evolution

“The quality parameters have changed with a lot of additional weight now being given to non-functional parameters such as performance and security.

“What this means is that the skill set of the testers has also had to evolve. With this changing outlook of the software testing industry and the evolution in testing, it is imminent that testers add automation to their repertoire as manual testing alone is not going to be enough.

“About a decade or so ago, there was a lot of impetus being given to UI tests and UI was the primary candidate for automation alongside regression tests. Despite this, manual testing can be ubiquitous as the tests and code written for automation are, typically, written manually.”

Furthermore, Anand Bagmar, Founder of Essence Testing, believes automation is not the only skill through which a tester can contribute and be effective.

”There are many other areas where they can add value – but they need to be able to learn, understand and show a willingness to get close to technology and code – that is non-negotiable from my perspective,” Bagmar added.

Skills & capabilities

In order to build good-quality software that gives value to its users and, in turn, to the creator of the product, Bagmar recommends that testers have the following skills and capabilities:

  • Have a testing-mindset
  • Understand and radiate risk
  • Be smart and effective in ways of working
  • Optimise where possible
  • Evolve in learning and understanding
  • Ability/willingness/freedom to experiment and learn from what works well, or not
  • Collaborate with all relevant roles for deeper and shared understanding.

Deb continued: “Most of the time, teams write code for tests that are executable manually and call it test automation. What confuses this even more is the testing vs checking debate.

“Be it the approach of testing the software manually or with automated tests, one common skill testers must possess is ‘Test Craftsmanship’: a combination of the right testing mindset with knowledge of various testing tools and techniques. For manual testing, a solid grounding in test craftsmanship (i.e., the right mindset plus knowledge of various test design techniques) might just be enough, as it is more procedural in nature.

“Automation testing, on the other hand, requires knowledge of additional tools and of the languages those tools work with. Depending on the context, both approaches serve different purposes and are equally important.”
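
The testing vs checking distinction Deb raises can be made concrete with a few lines of code. An automated *check* verifies a pre-scripted expectation; it cannot explore or exercise judgement. The function and values below are entirely hypothetical, a sketch rather than anything from the article:

```python
def apply_discount(price: float, percent: float) -> float:
    """Toy function under test (hypothetical)."""
    return round(price * (1 - percent / 100), 2)

def run_checks() -> None:
    # A "check": an expectation a machine can execute and pass/fail.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    # "Testing" goes further: probing questions no script was written
    # for, e.g. negative percentages or rounding of odd currencies --
    # the kind of exploration a human tester does.

run_checks()
```

The checks are valuable precisely because they are cheap to re-run on every build; the exploration remains human work.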

DevOps & continuous delivery

Over the last decade, the focus has moved quickly to DevOps. This means continuous integration (CI) and continuous delivery (CD) are impossible without continuous testing (CT). And the fastest (or only) way to achieve CT is through test automation.

“In the fast-moving delivery and release life-cycles, manual testing does not provide much value. We need to focus on a healthy combination of exploratory testing and test automation (of all applicable types) to be effective as a team to build a good quality product. Any test that is important to be re-executed over a period of time needs to be automated at an appropriate level in test automation,” added Bagmar.
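
The dependency described here, that CD is impossible without CT, reduces to a gate: nothing is promoted unless an automated suite passes. A minimal sketch, assuming nothing about any real team's stack (the "suite" is a placeholder self-test run in a child interpreter):

```python
import subprocess
import sys

def run_suite() -> bool:
    # Stand-in for a real test runner (e.g. invoking pytest); here we
    # just execute a trivial assertion in a child Python process and
    # inspect its exit code.
    result = subprocess.run([sys.executable, "-c", "assert 1 + 1 == 2"])
    return result.returncode == 0

def pipeline() -> str:
    # CD without CT is blind: the release step runs only after the
    # automated suite exits cleanly.
    if not run_suite():
        return "blocked: tests failed"
    return "released: tests passed"

print(pipeline())
```

In a real pipeline the gate is usually implicit: the CI job fails, and downstream deploy stages never run.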

“While there is so much focus on test automation, one of the factors hindering testers from taking up test automation as a career option is the ‘Fear of Code’.”

Since test automation involves writing code, which requires accurate knowledge of one or more programming languages, it scares a lot of engineers, according to Deb. There is a common misconception in the testing world that testers normally do not have access to the code and are more often than not involved in black-box testing, so there is no need to actually learn to program.

“What testers do not realise is that knowledge of code will help them investigate defects, debug errors and expand their avenues by helping them find the unknown. Testing, IMO, is more than just finding defects. It is about finding the unknown and helping make the software better,” continued Deb.

Technology involvement

The most important skill required of a tester is the ability to get hands-on with technology. According to Bagmar, this involvement can be at various levels:

  • Be able to read and understand code and make sense of its logic
  • Be able to read the existing automated tests to know which “intents” have been automated – this reduces waste by avoiding repeating the same intent-validations manually
  • Do effective gap analysis based on what has already been automated and what would add additional value if automated. Knowing what does not need to be automated frees the tester to focus on deeper learning, understanding and exploration using the human mind
  • Contribute to enhancing the automated test suite (unit/integration, API, UI/end-to-end, performance, security, etc.)
  • Contribute to building a more testable and functional architecture.
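
The gap-analysis point above is, at its simplest, a set difference: compare the intents already covered by automation against the intents that matter, and surface what remains. A toy sketch with made-up intent names:

```python
# Intents the product needs validated (hypothetical names).
required = {"login", "checkout", "refund", "search"}

# Intents already covered by the automated suite (hypothetical).
automated = {"login", "search"}

# Candidates for new automation: required but not yet automated.
gap = required - automated

# Work that stays with the human mind regardless of automation.
exploratory_focus = {"usability", "layout", "tone-of-error-messages"}

print(sorted(gap))  # what to automate next
```

Real gap analysis also weighs risk and cost per intent, but the mechanics start exactly here.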

Bagmar also noted that organisations are evolving as they see value from deeper involvement of testers in all phases of the software life cycle. “Testers need to evolve in this direction or else they will be in trouble.”

His blog on “Career Path of a Tester!” highlights more areas where a tester can contribute and grow in areas of building a good quality product – check it out!

Written by Leah Alger

TCoE’s existence in today’s DevOps world https://devopsnews.online/tcoes-existence-in-todays-devops-world/ Fri, 13 Jul 2018 14:20:53 +0000 http://www.devopsonline.co.uk/?p=13410 DevOps is not a methodology; it is a notion to help cut down the barriers between Dev and Ops, says Senior Project Lead QA at Value Labs, Narayana Maruvada

It is an accepted, if debatable, fact that perceptions of QA and testing needs (encompassing techniques, tools, practices, strategies, etc.) have undergone many changes in the recent past, most predominantly since organisations began to incline towards today’s agile and DevOps modes of project delivery, which completely changed the approach to delivering QA and testing services.

One major and notable change was organisations’ interest in promoting, establishing and investing in the TCoE (Testing Centre of Excellence): a dedicated, centralised testing team. It is evident that the agile and DevOps approach has been instrumental in expediting time to market, meeting shorter and more frequent release cycles and facilitating continuous delivery. But if organisations are equally looking to establish a highly standardised QA and testing practice (whether for a specific or a complex business need) and aim to build a centralised repository of reusable test assets that delivers quality at optimal cost through optimum utilisation of resources, then the TCoE is still the ‘go-to’ solution.

DevOps – the known and unknown

It is well understood that DevOps is not a methodology; rather, it is a notion intended to cut down the barriers between Dev and Ops, purely to meet and expedite the needs of shorter and more frequent release cycles and so promote the ‘agile’ mode of delivery with ease. This notion is achieved through seamless integration and effective collaboration between the Dev and Ops teams, which together function as a single entity to accomplish the following activities as continuous processes:

  • Continuous testing
  • Continuous integration
  • Continuous delivery
  • Continuous monitoring

Although application developers and system engineers bring the right things into the right environments, the actual crux lies with the QA teams, since they need to stay continuously focused on aligning their test deliverables and ensuring that every minute and/or potential code change works as intended, without breaking anything in the application and/or product, regardless of how frequently builds are requested.

QA teams are considered crucial in the DevOps ecosystem because they meticulously take care of the following key activities, continually and quickly, in the same sprint (or on demand) for the application under test:

  1. Identify the critical test scenarios
  2. Develop and automate test cases
  3. Organise test cases into respective test suites
  4. Outline the execution order of test cases
  5. Schedule the execution as part of CI
  6. Generate and/or share the test execution results to requisite stakeholders.
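
As a rough sketch (all case names and the runner below are hypothetical, not from the article), the six activities have this shape in code: critical scenarios are selected, organised into a suite, executed in order, and the results are reported, which is the loop a CI job automates on every build:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Case:
    name: str
    critical: bool
    run: Callable[[], bool]  # returns True on pass

@dataclass
class Suite:
    name: str
    cases: List[Case] = field(default_factory=list)

def build_suite(all_cases: List[Case]) -> Suite:
    # Steps 1-3: identify critical scenarios and organise them into a suite.
    return Suite("smoke", [c for c in all_cases if c.critical])

def execute(suite: Suite) -> Dict[str, bool]:
    # Steps 4-5: run the cases in order (the part CI schedules per build).
    results = {c.name: c.run() for c in suite.cases}
    # Step 6: share the outcome with stakeholders (here, just print it).
    for name, passed in results.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return results

cases = [
    Case("login", critical=True, run=lambda: True),
    Case("tooltip-colour", critical=False, run=lambda: True),
    Case("checkout", critical=True, run=lambda: True),
]
results = execute(build_suite(cases))
```

A real framework adds retries, environments and reporting integrations, but the orchestration logic is no more exotic than this.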

From the above, the test automation approach adopted by the QA team appears very systematic, with a cohesive test design, which together help cut down the challenges of test planning and test management. However, QA in DevOps is definitely not going to be straightforward and result-oriented if the applications/products are of an intricate nature.

For example, applications and products designed and developed for the BFSI, utility and healthcare (US demographics) domains are considered the most complex. The healthcare domain (US demographics) especially is very exhaustive, as it encompasses the most critical pieces of business function pertaining to:

  • Payroll/invoice generation
  • Benefits enrollment, disenrollment and their administration
  • Payments and remittances
  • Processing of claims

Besides, EDI (Electronic Data Interchange) forms the major crux of the healthcare domain (specifically for US demographics, as mandated by HIPAA), since it enables transaction processing through the exchange of sensitive data in an accepted, standardised file format between heterogeneous systems. And it is not just one file format: several formats are in use today, each intended to standardise the transactions pertaining to a specific business function.
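
To see why a standardised interchange layout matters, here is a toy parser for an X12-style message, where segments are terminated by `~` and elements separated by `*`. The fragment is fabricated and heavily simplified, not real claim data; every system exchanging such files must agree on exactly this kind of structure:

```python
# Fabricated, non-sensitive X12-style fragment: an interchange header
# (ISA), a transaction-set header (ST, 837 = healthcare claim) and a
# transaction-set trailer (SE).
raw = "ISA*00*SENDER*RECEIVER~ST*837*0001~SE*2*0001~"

def parse_segments(message: str) -> list:
    """Split an X12-style message into segments, then into elements."""
    segments = [s for s in message.split("~") if s]
    return [seg.split("*") for seg in segments]

for seg in parse_segments(raw):
    print(seg[0], seg[1:])  # segment ID, then its elements
```

Real X12 parsing reads the delimiters from the ISA segment itself and validates against an implementation guide; the point here is only that the format is rigidly positional, which is what makes cross-system validation (and testing) both possible and demanding.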

Building such a domain-centric application or product not only involves simulating a bundle of critical functionalities using a contemporary technology stack; it also requires extensive domain knowledge to validate. From a QA standpoint especially, a robust team encompassing exploratory, functional, automation and security testers is very much needed in order to assess such applications/products thoroughly. However, the availability of such QA teams (or the ability to build them on a need basis) is one of the major gaps with DevOps.

Contemporary challenges

Following are some of the key challenges that QA teams tend to experience as part of working in the DevOps ecosystem:

Effective Collaboration – apparently, it is the QA team that acts as a crucial interface between the product development team and operations. So, it is imperative that QA teams witness the entire project/product life cycle and its associated discussions, rather than being involved during the testing phase alone. But, given how DevOps operates, this collaboration is unlikely, and hence there will be potential gaps in understanding and aligning with the testing needs and expectations.

Test Enablement – it is very critical that QA teams understand the business (and the underlying critical functionalities) for which the application/product is built and verified. To achieve this, the QA team has to work closely with business experts/SMEs to understand how the system supports the business, based on which it can ascertain the testing needs and enable the QA services accordingly. But in DevOps, it is seemingly hard to find and/or facilitate this channel of association.

Test Coverage – in DevOps, one cannot guarantee 100% test coverage of the application/product for two fundamental reasons: the rush to deliver the software quickly to meet the release cycles, and the high chance of overlooking critical functionalities during validation due to dynamically changing requirements, both of which can contribute to defect leakage.

Facilitating Early Testing – one of the major objectives of DevOps is early detection of defects in the development cycle, and this is possible only when testing is planned to begin during the early stages. That, in turn, is achieved when there is requisite test documentation (and other supporting test templates) in place to outline, organise and prioritise the user stories that can be readily tested without any dependencies.

Testing Maturity – regardless of the approach to software delivery, the key differentiating factor from a QA standpoint is test maturity. It is an attribute that cannot always be quantified; rather, it is derived from certain key factors such as approach, experience, skill set, and the ability to orchestrate and automate. If QA teams fail to possess the requisite test maturity, then challenges and failures are inevitable while taking projects through to completion.

Centralised testing service (aka TCoE)

The idea behind having a TCoE is to establish a highly standardised QA and testing practice, besides building reusable test assets and repositories, that can deliver quality at optimal cost through optimum utilisation of resources. It helps bring people, process and technology together, and likewise promotes the requisite collaboration across teams to improve the effectiveness of testing.

It is a comprehensive structure in itself, driven by the independent and self-sustaining CoEs outlined below and backed by a strong core team (for governance, facilitation and coordination services). Together, these act as key accelerators and differentiators in meeting any challenging, complex and/or evolving business need (from a QA standpoint), regardless of time-to-market pressures or delivery patterns.

  • Functional test competency team
  • Domain competency team
  • Automation competency team
  • Non-functional test (performance, security etc.) team.

The proposition is not about building big centralised testing teams, or TCoEs, but about ascertaining the possibilities of integrating or embedding already existing centralised testing services into agile or DevOps and maintaining them, to maximise ROI and deliver high customer satisfaction and service standards.

Written by Senior Project Lead QA at Value Labs, Narayana Maruvada

Technical & cultural building blocks https://devopsnews.online/technical-and-cultural-building-blocks/ Thu, 05 Jul 2018 09:12:44 +0000 http://www.devopsonline.co.uk/?p=13291 Director of DevOps at ADP UK, Keith Watson, plots a course from a monolithic product design with the legacy build and deployment processes to a continuous delivery process into the cloud for API micro-services using DevOps tools and techniques

Unless you are a greenfield start-up, your company will have a number of IT products built on code developed and deployed using processes created in the days well before DevOps.

Starting with a clear vision and strategy for the specific outcomes we desire has allowed us to put in place the technical and cultural building blocks in our product for the permanent transformation of our product delivery pipeline. We have a simple but clear vision to:

  • Have a holistic approach that optimises the efficiency of the whole system, not just some parts
  • Re-engineer the product architecture and the release processes to improve deployment frequency and deployment lag
  • Demonstrate improved behaviour of the product in production to the operations teams
  • Conform to all appropriate company compliance standards
  • Ensure all stakeholders in the delivery pipeline are involved in creating the solution.

Our aim is to deliver this vision using the “configuration as code” DevOps model and standard software engineering principles. The outcome we desire is to reduce the time from code commit to the deployment of high-quality, highly secure, small functional artefacts into production.

Cultural changes

Tools are a necessary but not sufficient condition for DevOps to succeed. As well as new tools and methods, it is important to understand that business processes and cultural habits will also need to change. Changes will often meet resistance unless the benefits of DevOps can be demonstrated.

Because of previous deployment issues, most companies will have additional governance processes and separation of duties to reduce the risk of production issues. This will naturally create silos, which are anti-patterns to DevOps success. Breaking down these barriers must be done in a sensitive way so as not to imply lack of commitment or professionalism from either side. Building relationships and demonstrating the deployment competence of the development teams is often the only way to change opinions. This takes time and effort.

John Kotter’s 8 stage change process is a useful tool to manage business change (see diagram). Key to this is building a guiding coalition. Initially, it is important to spend time building relationships across the different disciplines and silos. This helps determine who the key decision makers are, what strategies and tools are already in use, and what degrees of freedom there are to make changes. Finding ways to demonstrate business value (short-term wins) is an excellent way to gain trust and respect across the various stakeholders and between silos.

Source: https://www.kotterinc.com/8-steps-process-for-leading-change/

Automated testing

It is also important to understand potential objections and address them in any DevOps strategy. For example, it is vitally important to prove that the new processes deliver higher-quality and more secure artefacts than previous deployment pipelines. This means implementing a “shift-left” testing policy, particularly by using automated testing earlier in the software development cycle. In many organisations, testing is performed late and usually includes manual or semi-automated steps. Using automated testing enables quality gates earlier in the continuous delivery pipeline, which ensures greater confidence in deployments into test environments and eventually into production. It also enables regression testing to be performed when modules are changed, reducing the risk of production deployments. This does, however, have implications for the new skills and behaviours required in both software development teams and testing teams.
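
One way such an early quality gate can look, sketched with entirely hypothetical thresholds (the 80% coverage floor below is an assumption for illustration, not a figure from ADP's pipeline):

```python
def quality_gate(tests_passed: int, tests_total: int,
                 coverage: float, min_coverage: float = 0.80) -> bool:
    """Fail the build early if tests fail or coverage drops too low."""
    if tests_passed < tests_total:
        print(f"gate FAILED: {tests_total - tests_passed} test(s) failing")
        return False
    if coverage < min_coverage:
        print(f"gate FAILED: coverage {coverage:.0%} below {min_coverage:.0%}")
        return False
    print("gate passed: artefact may proceed down the pipeline")
    return True

quality_gate(tests_passed=120, tests_total=120, coverage=0.86)
```

Running this check on every commit, rather than in a late manual phase, is the practical meaning of shifting testing left.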

Tools and recipes

Continuous delivery is such a different paradigm from the usual deployment methods that stakeholders, particularly budget holders, need to be shown why investment in tooling is important. Adopting some tools and choosing an appropriate project to demonstrate business benefit quickly is often a better approach than company-wide initiatives over a longer period. Which tools are chosen is usually less important than adopting DevOps policies and principles, such as using recipes, adopting version control for all code (not just source code) and using appropriate pipeline orchestration. As with any coding discipline, our knowledge of how to build pipelines grows as we gain experience writing more and more complex pipelines.
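
A minimal illustration of the "configuration as code" principle: the pipeline recipe is ordinary, reviewable data kept under version control, not clicks in a UI. Stage names and commands here are invented for the example:

```python
# Hypothetical pipeline recipe: because it is plain code/data in source
# control, every change to it is diffed, reviewed and versioned like
# any other code change.
PIPELINE = [
    {"stage": "build",  "cmd": "compile the artefact"},
    {"stage": "test",   "cmd": "run automated suites"},
    {"stage": "deploy", "cmd": "push artefact to environment"},
]

def orchestrate(pipeline: list) -> list:
    # Each stage runs in order; in a real orchestrator a failure here
    # would halt the run before later stages execute.
    for step in pipeline:
        print(f"[{step['stage']}] {step['cmd']}")
    return [step["stage"] for step in pipeline]

order = orchestrate(PIPELINE)
```

Tools such as Jenkins or GitLab CI express the same idea in their own declarative formats; the tool matters less than the principle that the recipe itself is versioned.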

Continuous delivery

While there is still work to be done, the early results we have seen in our product have demonstrated the business value to various stakeholders and allowed us to gain credibility within the development and operations teams. Each new delivery gives us more confidence and enables us to grow our knowledge in building relationships across teams and using DevOps techniques.

Written by Director of DevOps at ADP UK, Keith Watson
