culture Archives - DevOps Online North America
https://devopsnews.online/tag/culture/ (31 Media Ltd.)

How DevOps can help banks instil cultural changes that drive technological success
https://devopsnews.online/how-devops-can-help-banks-instil-cultural-changes-that-drive-technological-success/
Tue, 12 Jul 2022 11:30:27 +0000
Large financial institutions can often be resistant to change; they may be limited by legacy infrastructure and high levels of internal bureaucracy. However, one of the biggest obstacles they have is their mindset when it comes to technological innovation.

Financial services is now a business that revolves almost entirely around technology, and this digitalisation has happened at speed. Customers expect always-on access to banking services and can quickly take their business elsewhere if these expectations are not met.

Digital-first ‘challenger’ banks and fintechs are, by design, in the perfect position to address these demands. Many older, more established institutions have had to digitise their existing analogue services and have struggled to adapt. But by learning lessons from these new market entrants, incumbents can adapt their culture and approach to technology.

 

A more agile, flexible approach

A key tool to support this cultural change can be found in DevOps. This is a term used to describe a set of best practices concerning the combination of software development and IT operations. It sits at the intersection of the creation of new software-based services and the actual deployment of these services in a real-world environment. It’s an agile methodology used by many technology-focused companies.

Introducing new products and services involves change. And as far as many large financial institutions are concerned, change goes hand-in-hand with risk. Banks want to upgrade their products but can’t afford anything to go wrong. A new service that doesn’t work or creates unexpected problems will have a negative impact.

So, the ideal scenario for a financial services organisation is to develop customer-friendly products with features that have genuine consumer appeal in such a way that they can be smoothly launched with minimal risk and easily managed on an ongoing basis. By using the principles of DevOps, this is possible. But banks that stick to the old way of doing things will struggle to achieve these goals.

 

Why the old approach is outdated

In order to manage risk, conventional wisdom used to dictate that it was best to bundle up a number of changes — whether new features, security patches, compliance or regulatory updates, and so on — and apply them all in a bundle at fixed times throughout the year.

But in life, as in technology, the smaller you can make a change, the less risk is typically associated with it, and the easier it is to back out and revert if you need to. Change is a lot less risky when it’s done in small, bite-sized pieces. That’s the thinking behind the agile methodology associated with DevOps.

DevOps focuses on small incremental changes introduced on a rolling, regular basis. Typically, you would start with a minimum viable product in a live environment, then gradually introduce features and upgrades as they become ready. This minimises potential disruption — there’s very little system downtime and anything that doesn’t work can be quickly rolled back. Fixes can be applied before the feature is re-released, without compromising the organisation’s ultimate strategic goals.
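The roll-forward/roll-back pattern described above is often implemented with feature flags: a change ships dark, is exposed to a small slice of customers, and can be reverted in seconds without a redeploy. As a hedged illustration only (the flag names and in-memory store are invented, not any particular product’s API), a minimal sketch:

```python
# Minimal feature-flag sketch: ship features dark, enable them incrementally,
# and roll back instantly by flipping a flag rather than redeploying.
# Flag names and the in-memory store are illustrative, not a real product.

class FeatureFlags:
    def __init__(self):
        self._flags = {}  # flag name -> rollout percentage (0-100)

    def set_rollout(self, name, percentage):
        self._flags[name] = percentage

    def rollback(self, name):
        # Reverting a bad change is one line, not a downtime window.
        self._flags[name] = 0

    def is_enabled(self, name, customer_id):
        # Deterministic bucketing within a process: the same customer
        # lands in the same bucket, so a 10% rollout is stable per request.
        pct = self._flags.get(name, 0)
        return (hash(customer_id) % 100) < pct

flags = FeatureFlags()
flags.set_rollout("instant-payments", 10)   # expose to ~10% of customers
# ... a monitoring alert fires ...
flags.rollback("instant-payments")          # rolled back in seconds
```

Real systems persist flags in a shared store and use a stable hash, but the shape of the idea is the same: the blast radius of any one change stays small and reversible.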

Compare this to the old way of doing things where business-critical systems were taken offline for significant periods of time and rafts of changes introduced all at once. If anything went wrong, it was highly problematic to figure out what the problem was, whose responsibility it was, and how to roll it back. Taking a DevOps-style approach to technological development makes much more sense.

 

How banks can shift their cultural approach

The best way banks can set themselves up for future technology success is to focus on their cultural approach to technology development. When it comes to major projects such as overhauling and replacing legacy infrastructure, using DevOps principles will only lead to the outcomes you want if you can challenge the culture of change across significant parts of the organisation, capitalising on opportunities to change the way that technology is managed, delivered, and deployed within the wider business.

It’s important to ensure that key players within the organisation understand the benefits that DevOps principles can bring – and have them act as advocates for change. It’s vital to consider how your technological partnerships play into making this cultural shift too. Partners that share the same outlook and use agile methods that you want to adopt will be able to provide proof points that the DevOps approach works. It’s also advantageous to communicate the change in culture to customers in a highly evangelical way to ensure they buy into it. Staff, partners, and end customers should all agree that the changes being made are for the best.

Another consideration is how the organisational structure can be adapted to give ownership of, and responsibility for, the impact assessment of changes to the people who have actually created them. A traditional approach to governance would see those responsible for change governance and decision-making sitting several layers above the people originating the changes. This is perhaps understandable given the potential impact of something going wrong, but the entire approach is flawed. Leaner governance structures are critical to the success of a DevOps culture.

 

Measuring the success of DevOps culture

To ensure that this cultural shift is adopted in the long term, the progress and successes of the DevOps approach must be clearly communicated within the business. The big advantage of following this path is that you can measure outputs quickly and accurately because you are dealing with specific chunks of work on a regular basis. This allows you to change course along the way – you won’t end up doing two months of work only to realise that it’s not what you wanted, or that it’s not giving you the right results.

At its heart, embracing DevOps is about recognising the importance of deploying change as it’s ready. There is significant value in getting updates, whether security features or additional functions live and into a product as soon as possible. Updates aren’t hidden in bundles of change. This is fundamentally empowering for all involved and creates a virtuous circle of ownership and quality.

It’s also an approach that ensures high availability. In the world of payments in particular, being always-on is very important to the customer experience. Adding SecDevOps to these ways of working embeds this further, ensuring that security principles are ‘coded’ in at inception rather than treated as a secondary consideration later in the process.

 

In summary: Embracing change will lead to success

The pace of change in banking has arguably never been greater than it is at the moment. The pressure on long-established financial institutions is immense, with rivals old and new competing for their customers at every turn. Standing still is not an option, so incumbents must improve their existing services and develop new ones. If they don’t, they will get left behind. Times change, and so must they.

And it’s the approach that banks take to this change that really matters. It’s easy to get lost in the big picture, especially when thinking about what things will look like five or 10 years from now. In my view, taking a much more granular outlook to change is the best way forward for banks. By using the principles of DevOps and SecDevOps they can make change more manageable, empower their teams to deliver, and put themselves in the best position to compete in the long term.

 

Article written by Eimear O’Connor, Chief Operating Officer, Form3

DevOps Pillar: Communication
https://devopsnews.online/devops-pillar-communication/
Tue, 01 Mar 2022 14:38:16 +0000

Communication (from Latin communicare, meaning “to share” or “to be in relation with”)

There are tons of articles and studies out there about DevOps ways of working, working frameworks, Agile gurus, how to work in a team, shift left, no silos, and so on, and so on… but it is hard to find anything about DevOps basics and the big pillar of DevOps transformation, which is, in my opinion, Communication.

One misconception I encounter a lot is that people tend to focus on tools and automation, forgetting the Agile Manifesto, which states “Individuals and interactions over processes and tools” and “Customer collaboration over contract negotiation” – and forgetting that no amount of tools can substitute for good communication and empathy.

Isn’t DevOps a way to bring different teams together? If so, how can you get a group of different people working towards the same objective or goal without Communication? Remember this: collaboration and Communication are tightly connected and dependent on each other.

DevOps is all about teamwork, everyone together and no silos. But to achieve that, we need to take into account that cultural background plays a massive role, and Communication holds the key, or “secret”, that makes the difference between the success and, if that is the case, failure of any team.

What my experience over the years, and especially managing teams dispersed across the planet, has taught me is to level up the way I communicate with them. I am not talking only about words but also gestures (even if virtual); we don’t communicate only through sounds or writing, as there are many other ways of communicating and passing a message (verbal and non-verbal communication).

Be aware of the different cultural backgrounds and what they mean for the different forms of communication, and beware of the emotion around the moment and how it is affecting communication.

Invest time with your team: learn a bit of their main language (if your main languages are different), discuss their cultural roots and background with them, and share yours along the way. This exercise not only helped me but helped the team as well; it’s a boost to trust and to the quality of Communication, both of which are pivotal when you aim for a high-performing team.

Another relevant misconception, or what I usually call a ‘classical pitfall’, concerns how teams must be able to communicate with each other without a proxy or middleman! Yes: no me, no Scrum Master, no Project Manager, no Director, no CEO, no form of middle management. Unless they have the know-how for the solution, they are not needed there; it is a waste of their time and of the company’s money to have them on a call monopolising or controlling the flow of information, or even turning it into a “political arena”.

Remember, tech people don’t need a middleman; they speak the same language and have the delivery mindset in their DNA. They don’t need a monitor or moderator on their calls; all they need is to present a solution and people who are willing to contribute to it. They need communication support more than anything else, which means they need to communicate with each other without interference from authority or other external factors.

You are probably wondering: what about when there is friction between the teams? The classic confrontation that is all too often present in human relations, but is by all means avoidable.

Always be the defusing element. Never forget that a story always has two sides, and often more than two, which means that when dealing with many different teams in a proper DevOps environment, everyone needs to be heard, and decision-making rests with the team as a whole rather than being one person’s choice without any support behind it.

A message conveyed inappropriately will definitely bring disunion and unneeded stress, and can sometimes even signal a toxic environment, which is something that no one wants.

This doesn’t mean you need to be Mr. Nice or the next winner of a sympathy contest. Sometimes all you need is to agree to disagree with different points of view, without ever losing empathy, because at the end of the day your goal is success, like everyone else’s, I hope.

Often, when a discussion gets heated, or you are under stress and have just received an email that pressed the “wrong” button, rather than replying straight away it is better to switch off, and sometimes even to wait until the following morning before replying. Never, ever reply with emotion, especially if the channel of choice is email. Always prefer a cool head over emotions, and you will realise that you are part of the solution, not the problem. Contribute to defusing potential conflicts between the parties, and avoid, if possible, endless and pointless email threads, which in the end are a complete waste of time (yours), resources (colleagues’), and money (corporate).

Another classic pitfall, in my view, is the very long email (or thread), usually with the wrong focus; whoever has never received one (or several…) please raise your hand. Emails should be short, with a pragmatic approach when providing context. The focus should always be on solutions, or the search for them, and never on guilt.

My personal mantras for good communication:

  • Transparency
  • Always choose head over emotion when replying to anything
  • In virtual meetings, set the example: always have your video on and invite (not impose on) others to do the same
  • In in-person meetings, close or keep away any electronic device and use just pen and paper
  • Always listen to others, which means allowing everyone to finish their idea; don’t interrupt them
  • In case of conflict, be the defusing agent; don’t be aggressive, passive-aggressive, or try to be a joker
  • Being the one who defuses doesn’t mean you need to agree on something; you can always agree to disagree
  • Be pragmatic and solution-oriented when communicating with others; pointing out problems or playing the blame game is something anyone, of any skill level or age group, can do
  • Avoid “Ph.D. dissertations or theses” in both verbal and non-verbal communication – “Working software over comprehensive documentation”
  • Prefer wikis and instant messaging over emails

 

I am by no means an authority on communication; these are my learnings from my past and ongoing experience with DevOps teams under my management and from my career in DevOps transformation. In my view, the reason behind so many DevOps/Agile transformation failures is the lack of a culture of quality communication between teams and peers. If communication flows freely, without any kind of control or censorship, you have a great foundation for a high-performing DevOps team.

A good link for revisiting the Agile Manifesto (the key values and principles behind the Agile philosophy): https://agilemanifesto.org

 

Written by Ricardo Moreira, DevOps Manager at Vodafone Business

Data Fabric and the future of Data Management
https://devopsnews.online/data-fabric-and-the-future-of-data-management/
Tue, 15 Feb 2022 10:11:05 +0000

Data Management is as old as civilization and was born the day the first human decided to keep records of important information many thousands of years ago. As the value of data became better known and more data was gathered on many topics, the need for finding and establishing approaches in managing, warehousing, and value extraction from it became more prevalent.

Marking clay tablets, scratching wax boards, writing with charcoal, using ink on papyrus, and finally printing on paper laid out the slow and painful evolutionary path of data management, taking millennia to arrive at the age of computers in the 1950s. It would still take a few more decades before computer storage became cheap and spacious enough to be used by businesses as part of their data management systems.

During the early 1980s, we witnessed the start of a decades-long rapid rise in the capacity of computer storage and a continued drop in its cost. Fortunately for all of us, that trend is stronger today than ever before and shows no signs of stopping in the near future.

As computerized data management became the next evident step, many new approaches were put through trial and error, and as computing architecture design evolved for software solutions, data management also went through several major changes.

 

Enter Data Fabric

A Data Fabric is an enterprise-wide data integration architecture with a wide range of data services that democratize data engineering, analytics, and other data services across a choice of endpoints spanning hybrid multi-cloud environments. Data Fabric standardizes data management practices and usage across the cloud, on-premises, and edge devices.

Data Fabric is an Augmented Data Management architecture that can optimize access to distributed data and intelligently process and prepare it for self-service delivery to data users just in time, regardless of where the data is stored.

A Data Fabric architecture is designed to be agnostic to data environments, data processes, data use, and geo-location of data repositories while integrating it with the core and supplementary data management capabilities. It provides a connective layer between data endpoints, enabling the complete range of data management capabilities (such as cross-silo integration, static and active Metadata discovery, governance, processing, and orchestration).

IBM predicts that using Data Fabric can lead to up to a 158% increase in ROI and can reduce ETL requests by up to 65%. Moreover, it will enable self-serve data consumption and collaboration. Data Fabric’s Active Metadata will help automate governance, data protection, and security, and it will augment data integration and automate data engineering tasks.

Data Fabric, which according to Gartner is one of the key technology trends of this decade, is essentially a design concept that serves as an organization-wide integration layer for data, along with its required connectivity and processing, encompassing the Metadata of existing and upcoming data.

It virtually weaves a live and vibrant “Fabric” out of the threads of “Data”, enriched with the color of their Metadata, visualizing their patterns and symmetries.

Data Fabric is the most recent response to the challenges of data management: the high cost of maintenance alongside the low efficiency of data value extraction, the lack of continuous analytics or event-driven sharing, and connectivity issues with distributed storage. Data Fabric breaks down the inefficient siloed structure of data by identifying how existing use cases establish usage patterns that replicate the way the business is done. It saves the organization from the never-ending and ever-increasing cost of merging, emerging, and redeploying silos with new data.

Data Fabric taps into both human and machine capabilities in using data processes to read, capture, integrate and deliver data based on the user, context of usage, and prevalent and changing usage patterns. It continuously identifies and weaves data from disparate sources to discover unique, and relevant relationships – in the form of Metadata – between the available data points.
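To make the idea of discovering relationships between disparate data points concrete, here is a deliberately crude sketch (the dataset names and columns are invented, and real fabrics use far richer metadata than column names): candidate relationships can be proposed wherever two silos share column metadata.

```python
# Hedged sketch: propose candidate relationships between siloed datasets
# by comparing their column metadata. Dataset names/columns are invented;
# a real Data Fabric would also use types, profiles, and usage logs.

def discover_relationships(catalog):
    """catalog: dict of dataset name -> set of column names.
    Returns (dataset_a, dataset_b, shared_columns) triples, a crude
    stand-in for the continuous metadata-driven relationship discovery
    described above."""
    names = sorted(catalog)
    links = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = catalog[a] & catalog[b]
            if shared:
                links.append((a, b, sorted(shared)))
    return links

catalog = {
    "crm.customers":    {"customer_id", "email", "region"},
    "billing.invoices": {"invoice_id", "customer_id", "amount"},
    "web.sessions":     {"session_id", "customer_id", "region"},
}
for a, b, cols in discover_relationships(catalog):
    print(f"{a} <-> {b} via {cols}")
```

Each discovered link is itself new Metadata: it can be stored, scored against observed query patterns, and surfaced to data consumers as a suggested join.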

Data Fabric establishes Augmented Data Management using a system based on Metadata recognition and analysis. One of its core competencies is to reduce manual effort by raising the efficiency with which it consumes the Metadata that the organization’s lines of business generate continuously as the organization moves forward in the market.

It is estimated that by 2024, Data Fabric deployments will quadruple efficiency in data utilization while cutting human-driven data management tasks in half.

 

Metadata as the Bloodline for the Data Fabric

Data Fabric dynamically uses both the available, existing Metadata and the Metadata gathered through the discovery of changes in the data, its usage, and even its usage patterns. The Metadata that explains these changes is never readily available and has to be discovered from logs, exports, query processes, use of views, and even security access logs.

We cannot implement Metadata management by limiting the scope to collecting and analyzing it in a static manner. Data Fabric thrives through an Active Metadata practice, receiving and tracking live updates, which means it would be insufficient to gather Metadata only from existing data storage and platforms.

A Data Catalog or Data Marketplace can serve as an anchor point for gathering Metadata. Many organizations already have implemented Data Catalogs to establish Self-Service – aka Democratized – Data Discovery and Analytics.

Data Fabric can begin with mere observation and processing of Metadata without having to impact, change or deviate from the existing processes. The improvements and optimizations can be planned and implemented later as incremental and evolutionary changes.

Metadata Activation is done by analyzing the existing Metadata in combination with the discovered Metadata and running pattern analysis, with the intention of identifying the areas in processes and practices where improvements can be implemented.

 

“Smartifying” Data Fabric   

Machine Learning (ML) and Graph Analysis (GA) form the smart core of Metadata discovery, running deep analysis on existing and newly arriving data. They can profile the new data, extract value and structure, run comparisons with the available information to look for similarities, and then map users and their usage patterns to that data.

Using ML enhances Data Fabric with learning models that train by observing the work of data engineers, enabling it to prescribe the best next actions and use the outcomes to calibrate future decisions in a continuous improvement practice. This serves as the backbone of an Augmented Data Management solution that can free up human data workers to focus their attention on innovation and enhancing value delivery to customers. This solution automates repetitive tasks such as discovering and profiling data, ETL and schema alignment, and data integration.
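As a rough illustration of the “discovering and profiling data” task mentioned above (the sample values and type rules are invented, and production profilers are far more sophisticated), automated profiling can start as simply as inferring a column’s type and basic statistics so a human does not have to eyeball every new feed:

```python
# Hedged sketch of automated data profiling: infer a column's type and
# basic statistics from a sample. The records and rules are illustrative.

def profile_column(values):
    non_null = [v for v in values if v is not None]
    profile = {
        "count": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
    }
    if non_null and all(isinstance(v, (int, float)) for v in non_null):
        profile["type"] = "numeric"
        profile["min"] = min(non_null)
        profile["max"] = max(non_null)
    else:
        profile["type"] = "text"
    return profile

amounts = [12.5, 99.0, None, 7.25]
print(profile_column(amounts))
# e.g. {'count': 4, 'nulls': 1, 'distinct': 3, 'type': 'numeric',
#       'min': 7.25, 'max': 99.0}
```

Profiles like this become part of the Active Metadata: deviations between the expected profile and a newly arriving batch are exactly the pattern-based anomalies the text describes.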

Data Fabric’s use of ML in analyzing content and design alignment of the Metadata moves it from static to active state which can then deliver pattern-based outcomes. This also supports an ongoing increase in the number of data assets and data service users in a wide spectrum (from the reuse of data assets to the rapid ingestion of data from public sources and business partners).

Data Fabric’s use of ML also helps with transparency, traceability, and provability of Metadata analytics, which supports the human data workers in their continuous exploratory work in creating more customer values. Its ability to mark up the differences between the expected data patterns versus the actual patterns that are discovered (aka pattern-based deviation detection), allows for its automatic data failure response protocols to step in and take care of most data problems without the need for human intervention.

Data Fabric being “smart” means that it also gets better over time as it learns from humans how to fix the problems it could not tackle before. This makes the Data Fabric more efficient and resourceful as it grows its understanding of the data (and its Metadata) through continued exposure to the patterns through time. The direct outcome would be carving away the worries of human data workers and freeing up more of their time for innovative work in value delivery as Data Fabric continues to learn and practice more.

Human data workers will adjust Data Fabric Augmented Data Management’s understanding in the areas that are open for automation, and the areas that would still need human supervision (for security compliance or other reasons).

 

Where to begin?

Like any new solution design, a good approach would be to build on a strong technology base, understand the Current Mode of Operation (CMO) of data management in and across the data siloes, and identify the existing gaps and pain points and the expected target states, all within a phased, evolutionary change roadmap.

Creating a deep understanding of where we are and what we are trying to achieve will help us better envision what needs to be achieved at the completion of each step towards having a fully functional, enterprise-wide Data Fabric solution.

Since Data Fabric has a deep dependency on contextual information, we need an integrated pool of Metadata to help the newly established solution to identify, connect, and analyze all kinds of Metadata such as Business, Operational, Technical and more.

Creating Graph Models by continuously analyzing the Metadata would allow for continuous data sharing and the democratization of data processing and provisioning functions. Graph Models provide the needed visualization and enrichment by adding Semantics for easier sharing and understanding of data across the enterprise.

The Semantic enrichment of the Metadata makes it more insightful and helps with its interpretation into practical knowledge. It enhances the value delivery through data usage and helps the ML algorithms train on relevant, up-to-date, and rich data for more relevancy and accuracy.

Frictionless technology integration and seamless data portability are two other vital factors in the successful implementation of a Data Fabric solution. They will ensure an easy flow of usable data across siloes and in and out of the enterprise as needed. Technical compatibility with the common data curation, processing, transformation, aggregation, streaming, replication, and virtualization is extremely important and can make or break the entire transformation efforts.

Using Metadata analytics to map out existing data usage patterns, measuring their alignment (or misalignment) with the initial intent of the lines of business using them, and identifying the rate at which new data assets are introduced to the organization can also help us better plan the needed transformation roadmap and select the heated areas that would most benefit from the pressure relief Data Fabric can bring.

From the bottom up, the Data Fabric solution stack has several layers, which would include – but not be limited to – the following:

  • Data Sources: the data repositories or streams of data.
  • ML-Enhanced Metadata Management, Orchestration, and Data Cataloguing: where the Meta information about the data from the data sources is identified, enriched, and gathered.
  • Master and Reference Data Management: the storage and distribution point of best-fit data for the variety of uses by consumers.
  • Data Integration: where data curation, processing, normalization, transformation, virtualization, and analytical services can be done.
  • Data Governance and Standards: enforces policies and access control.
  • Data Delivery: where several data connectivity and transport protocols and interfaces can work in parallel to deliver the data to consumers (which in turn can be self-service analytics, business intelligence solutions, data science pipelines, and all types of dashboards).
Starting with quick wins through Pilot/POC projects allows teams to gain experience with the transformation process, learn about the existing hurdles, and fine-tune their approach before scaling up to larger implementation work. Starting with one data asset or a domain-oriented prototype that tests our ability to integrate currently disparate data from a small number of sources (preferably across a few areas of business operations) is a great learning experience for the transformation teams, and it lowers the risk of disrupting the existing data management system during the transformation.

As the Data Fabric’s Augmented Data Management comes into pragmatic use and starts its discovery work across the existing siloes, leadership will run into many unused data assets that were buried under layers of integration issues and data format incompatibilities. They should take this opportunity to learn more about the hidden layers of their own business by digging through this data, seeking valuable insights about the existing flow of data and its value delivery impact across the pipelines.

 

Caveats

Since Data Fabric brings a fundamental change to an organization’s data management, to the methodology behind its orchestration, and to the integration of the tools that support it, care must be taken to implement it through an incremental, evolutionary shift.

A Systems Thinking approach is required to ensure we are not creating segregated patches of Data Fabric islands that are unable to establish data flow and the democratization of data services across the enterprise. At the same time, leadership should avoid pushing for hasty large-scale changes without proper impact analysis, especially when the organization already has a patchwork of legacy and modern tools and repositories that is maintained through painful manual work.

Data Fabric implementation demands replacing tools and platforms that are unable to share their Metadata with the rest of the data management system. This is a delicate move for well-rooted organizations that have participated in the data management evolution of the past several decades and have ended up with a number of old-tech legacy data siloes, which need to be reverse-engineered for data migration before they can be placed on a phase-out roadmap and replaced with newer solutions.

One key success factor of a Data Fabric implementation would be its ability to adapt to existing data management platforms and master data and data governance practices without having to go for a one-shot, all-or-nothing flip-over.

 

Conclusion

According to Gartner, by 2024, 75% of organizations will have established a centralized data and analytics (D&A) center of excellence to support federated D&A initiatives and prevent enterprise failure. Meanwhile, Gartner predicts that through 2025, 80% of organizations seeking to scale digital business will fail because they do not take a modern approach to data and analytics governance.

Gartner also believes that by 2024, organizations that utilize active Metadata to enrich and deliver a dynamic Data Fabric will reduce time to integrated data delivery by 50% and improve the productivity of data teams by 20%.

Data Fabric's Augmented Data Management, now enhanced by the expanding adoption of public cloud services, has emerged as the best answer to the looming data management trouble spreading across all market sectors. It also brings a decisive technological advantage to the organizations that succeed in its proper, well-structured implementation.

Data Fabric allows an organization to maximize the value it recognizes from the data it already has, raise the ROI on the data it receives from the market, and leverage both into stronger, differentiating insights in support of better business agility and customer value delivery.

 

Article written by Arman Kamran, CTO of Prima Recon, Professor of Transformative Technologies, Advisory Board Member to The Harvard Business Review, and Enterprise Transition Expert in Scaled Agile Digital Transformation

The post Data Fabric and the future of Data Management appeared first on DevOps Online North America.

Working in DevOps https://devopsnews.online/working-in-devops/ Thu, 06 Jan 2022 10:17:53 +0000

As the demand for DevOps and cloud skills has been steadily increasing throughout the year, more and more organisations are looking for skilled DevOps teams. DevOps is thus likely to become the approach adopted by many IT companies seeking to deliver faster, more reliable solutions.

Hence, if you are interested in learning DevOps to advance your career, some experts in the industry have shared their advice and recommendations with us!

 

What is DevOps? 

According to Khuswant Singh, Lead DevOps Automation Engineer at Next, DevOps is a software development methodology that combines software development (Dev) with information technology operations (Ops), with both participating in the entire application lifecycle, from design through the development process to production support.

Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations, and develop a range of skills not limited to a single function. At its core, DevOps is a set of tools and practices that help organisations build, test, and deploy software more reliably and at a faster rate.

Moreover, Bolvin Fernandes, Azure DevOps specialist at Redkite, adds that DevOps is a set of practices and a cultural change that combines software development and operations, two traditionally siloed teams, in order to expedite an organisation's ability to release software and applications compared to traditional software development processes. DevOps entails the adoption of tools and practices best suited to unifying and automating processes within the software development lifecycle, enabling organisations to create, improve, and ship software products at a faster pace.

DevOps is enabling organisations to deliver their products more quickly than those with a traditional development and release cycle, Khuswant continues. True DevOps unites teams to support continuous integration and continuous delivery (CI/CD) pipelines through optimized processes and automation; "continuous" is a defining characteristic of a DevOps pipeline. A CI/CD approach enables efficiency in the building and deployment of applications, and automated application deployment allows for rapid releases with minimal downtime.

When done properly, DevOps greatly reduces the time it takes to bring software from idea to implementation to end-user delivery. It also adds efficiency to the software delivery process in many ways: it allows different team members to work in parallel, for example, and it ensures that coding problems are found early in the delivery pipeline, when fixing them requires far less time and effort than once a bug has been pushed into production. With DevOps, the expectation is to develop faster, test regularly, and release more frequently, all while improving quality and cutting costs.
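The "find problems early" behaviour of a delivery pipeline can be sketched as a series of ordered stages that stops at the first failure. This is an illustrative toy, not a real CI tool; the stage names and their pass/fail logic are invented for the example.

```python
# Minimal sketch of a CI/CD pipeline as an ordered series of stages.
# Stage names and checks are illustrative, not tied to any real tool.

def run_pipeline(stages):
    """Run stages in order, stopping at the first failure.

    Failing fast means a bad commit is caught at 'build' or 'unit_tests',
    long before it could ever reach 'deploy'.
    """
    for name, stage in stages:
        if not stage():
            return f"failed at: {name}"
    return "released"

# Toy stages: each callable returns True on success.
stages = [
    ("build", lambda: True),
    ("unit_tests", lambda: False),    # a coding problem surfaces here...
    ("integration_tests", lambda: True),
    ("deploy", lambda: True),         # ...so deployment never runs
]

print(run_pipeline(stages))  # failed at: unit_tests
```

Because the broken commit never progresses past its first failing check, the cost of the fix stays with the developer who just made the change, rather than with whoever operates production.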

To help achieve this, DevOps monitoring tools provide automation and expanded measurement and visibility throughout the entire development lifecycle – from planning, development, integration and testing, deployment, and operations.

 

DevOps Culture

DevOps culture involves closer collaboration and shared responsibility between development and operations for the products they create and maintain, Khuswant underlines. This helps companies align their people, processes, and tools toward a more unified customer focus.

At the heart of DevOps culture is increased transparency, communication, and collaboration between teams that traditionally worked in siloes. DevOps is an organizational culture shift that emphasizes continuous learning and continuous improvement. An attitude of shared responsibility is an aspect of DevOps culture that encourages closer collaboration. It’s easy for a development team to become disinterested in the operation and maintenance of a system if it is handed over to another team to look after.

Bolvin notes that it is all about shared responsibility and accountability between developers and operations for the software they build and deliver. This includes increasing transparency, communication, and collaboration across multiple teams and with the wider business.

Hence, if a development team shares the responsibility of looking after a system over the course of its lifetime, Khuswant points out, they can share the operations staff's pain and so identify ways to simplify deployment and maintenance (e.g., by automating deployments and improving logging). They may also discover additional requirements from monitoring the system in production. When operations staff share responsibility for a system's business goals, they can work more closely with developers to better understand a system's operational needs and help meet them. In practice, collaboration often begins with developers becoming more aware of operational concerns (such as deployment and monitoring) and with operations staff adopting new automation tools and practices.

It is helpful to adjust resourcing structures to allow operations staff to get involved with teams early. Having the developers and operations staff co-located will help them to work together. Handovers and signoffs discourage people from sharing responsibility and contribute to a culture of blame. Instead, developers and operations staff should both be responsible for the successes and failures of a system. DevOps culture blurs the line between the roles of developer and operations staff and may eventually eliminate the distinction.

 

The role & responsibilities of a DevOps engineer

According to Bolvin, here are some of the responsibilities of a DevOps engineer:

  • Understanding customer requirements and project KPIs
  • Implementing various development, testing, and automation tools, and IT infrastructure
  • Managing stakeholders and external interfaces
  • Defining and setting development, test, release, update, and support processes
  • Troubleshooting and, where possible, fixing code bugs
  • Monitoring processes during the entire lifecycle for adherence, and updating or creating new processes for improvement
  • Encouraging and building automated processes wherever possible
  • Incident management and root cause analysis
  • Coordination, communication, and collaboration within the team and with customers
  • Striving for continuous improvement and building continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipelines)
  • Mentoring and guiding team members
  • Monitoring and measuring customer experience and KPIs
  • Managing periodic reporting on progress to management and the customer

Khuswant also highlights these ones:

  • Awareness of DevOps and Agile principles
  • Building and setting up new development tools and infrastructure
  • Working on ways to automate and improve development and release processes
  • Ensuring that systems are safe and secure against cybersecurity threats
  • Working with software developers and software engineers to ensure that development follows established processes and works as intended
  • Planning out projects and being involved in project management decisions
  • Excellent organisational and time management skills, and the ability to work on multiple projects at the same time
  • Strong problem-solving skills
  • Good attention to detail
  • Excellent teamwork and communication skills

 

The top skills to work in DevOps

To work in DevOps, Khuswant suggests having knowledge of at least one cloud platform (AWS, Azure, GCP) and one container orchestration tool (Kubernetes, OpenShift), as well as experience in developing Continuous Integration/Continuous Delivery (CI/CD) pipelines with tools such as Jenkins, Azure DevOps Services, etc.

Moreover, he recommends good hands-on knowledge of Configuration Management and Infrastructure as Code tools like Puppet, Ansible, Chef, and Terraform, along with proficiency in scripting and in Git and Git workflows.
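The Infrastructure as Code tools named above (Terraform, Puppet, Ansible, Chef) share one core idea: describe the desired state declaratively and let the tool compute and apply only the difference. A minimal Python sketch of that plan/apply model, with invented resource names, might look like this:

```python
# Toy sketch of the declarative, idempotent model behind IaC tools such as
# Terraform or Ansible. Resource names and attributes are invented.

def plan(current, desired):
    """Return the actions needed to move 'current' state to 'desired' state."""
    actions = []
    for name, cfg in desired.items():
        if name not in current:
            actions.append(("create", name))        # missing resource
        elif current[name] != cfg:
            actions.append(("update", name))        # drifted resource
    for name in current:
        if name not in desired:
            actions.append(("delete", name))        # no longer declared
    return actions

current = {"web-server": {"size": "small"}}
desired = {"web-server": {"size": "large"}, "database": {"size": "medium"}}

print(plan(current, desired))  # [('update', 'web-server'), ('create', 'database')]
print(plan(desired, desired))  # [] -- applying twice changes nothing (idempotent)
```

The empty plan on the second call is the point: running the same declaration repeatedly is safe, which is what makes consistent, repeatable deployments across environments possible.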

Bolvin adds that in order to work in DevOps, you need not only to be attentive to detail and to have the enthusiasm and eagerness to continuously learn and evolve with technology, but also to have an eye for identifying bottlenecks and replacing them with automated processes.

He continues by saying that it is good to be able to collaborate with multiple teams (e.g. contributing and maintaining a wiki) and have the willingness and zeal to impart knowledge to fellow team members.

Finally, he suggests working with cloud technologies, having knowledge and application of a scripting language, as well as awareness of critical concepts in DevOps and Agile principles.

 

How to move into DevOps

DevOps engineering is a hot career with many rewards, Khuswant points out. A DevOps engineer gets enormous opportunities to work on a wide variety of projects and DevOps tools, which is very satisfying. Begin by learning the fundamentals, practices, and methodologies of DevOps, and understand the "why" behind DevOps before jumping into the tools.

A DevOps engineer's main goal is to increase speed and maintain or improve quality across the entire software development lifecycle (SDLC), providing maximum business value by automating the development lifecycle with various DevOps tools.

Therefore, he recommends reading articles, watching YouTube videos, and going to local Meetup groups or conferences to become a part of the welcoming DevOps community, where you’ll learn from the mistakes and successes of those who came before you.

Bolvin also suggests understanding DevOps principles and methodologies, and identifying gaps you can bridge in order to speed up the software build and release process. It is crucial to understand the KPIs of DevOps and, more importantly, to align them with how you can contribute by upskilling yourself.

 

Advice for future DevOps engineers 

Khuswant says that DevOps engineers need to know a wide spectrum of technologies to do their jobs effectively. Whatever your background, here are some fundamental technologies you’ll need to use and understand as a DevOps engineer:

  • Operating system administration (Linux/Windows)
  • Scripting
  • Cloud
  • Containers

Bolvin advises to:

  • Make yourself familiar with cloud technologies
  • Understand what DevOps entails
  • Work on your communication and collaboration skills as they will definitely be tested
  • Be open to trying new technologies and don’t be afraid to fail as that’s how you will learn
  • Be willing and eager to share your knowledge as that’s critical to propagate the DevOps culture

 

Special thanks to Khuswant Singh & Bolvin Fernandes for their insights on the topic!

Design thinking tools to boost your DevOps journey – part 2 https://devopsnews.online/design-thinking-tools-to-boost-your-devops-journey-part-2/ Thu, 04 Nov 2021 11:49:46 +0000

Isaac Perez Moncho was at the National DevOps Conference 2021 at the British Museum a few weeks ago and gave a talk about how to use design thinking tools to boost your DevOps journey. Here is the second part of this talk.

In part 1, we followed William Blacksmith on his collaboration journey. In part 2, we will find out what these three techniques are called in the 21st century and understand how they can help you increase collaboration within your organisation.

 

Before we start, we should review the impact of the traditional lack of collaboration between platform teams and product teams.

The feared organisational silos are an example of a lack of collaboration. They had two negative consequences for platform teams:

  1. Teams had to support applications they knew little about.
  2. They created platform services that were suboptimal for product teams.

The first consequence is better addressed by following the "you build it, you run it" model, which is out of the scope of this article. The second consequence, creating suboptimal services for product teams, is addressed through better collaboration, using techniques like the ones Will used.

The organisational and business impact of suboptimal platform services can be substantial. Lower performance from product teams means a slower response to market changes and lower capability to deliver business plans. The negative impact is likely to compound in the current hiring environment, with no shortage of companies with excellent platform services like Netflix. Software engineers expect better from the services they use. Having outdated tools and services will frustrate them, making them more likely to change jobs.

In my previous role, the platform team created a “self-service” monitoring service, complete with a 20-page installation guide. It did not make product teams happy.

The goal of Will’s techniques is to increase collaboration between platform teams and product teams. Better collaboration results in better tools, better relationships between teams, and more satisfied engineers.

Now that we know the benefits of Will’s techniques, we can get into them in more detail.

 

User Surveys 

The first technique Will used was User Surveys.

What is it?

A low-effort technique to asynchronously gather feedback or requirements.

How is it done?

Ideally, user surveys are conducted using tools like SurveyMonkey, Google Forms, or other survey tools. They can also be done via Slack or Email.

My personal preference is for short surveys, three to five questions, using a survey tool. Short surveys increase the response rate by reducing mid-survey dropouts. A tool like Google Forms enables data gathering and a better analysis of the responses.

Three questions you can use to get started are:

  1. Would you recommend our services?
  2. What would you improve?
  3. What would you keep?

The questions can be geared towards one service or to all services managed by the platforms team.
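Once responses come back, a short script can tally them. As a hedged illustration (the sample data and the 0-10 scale are assumptions, not part of the article), the first question, "Would you recommend our services?", can be scored NPS-style:

```python
# Illustrative sketch: scoring the "Would you recommend our services?"
# question on a 0-10 scale, Net-Promoter-Score style. The responses
# below are made up, e.g. as exported from a survey tool.

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

responses = [10, 9, 8, 7, 6, 9, 3, 10]
print(nps(responses))  # 25
```

Tracking a number like this across repeated surveys is what turns the technique into a continuous feedback loop rather than a one-off exercise.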

Why is it used?

User surveys are low effort for both the creator and the respondent and can be run frequently. Surveys help create a continuous feedback loop and show users that you care about their input.

Considerations?

User surveys can feel impersonal and cold, which can garner more honest feedback and requirements. However, they will not create strong personal relationships between the teams.

 

Shadowing

The second technique Will used was Shadowing.

What is it?

Immersion in the users’ journey.

How is it done?

Involves engineers (DevOps) observing users (Software Engineers).

The DevOps engineers will observe the software engineers while they use the platforms’ tools. The observers take notes and ask questions when required while taking into account the observer effect.

Why is it used?

Shadowing makes the pain points of the tools and processes visible and fosters relationships between individuals.

Considerations?

Adds a personal dimension to the feedback gathering process.

It can take more effort to use than user surveys. However, it can also increase collaboration by establishing or improving personal relationships between teams and individuals. It requires coordination between individuals, and remote shadowing can lower the usefulness of the technique.

 

Idea Generation

The third technique Will used was Idea Generation.

What is it?

A brainstorming session with plenty of post-it notes that includes software and DevOps engineers. Engineers generate ideas to solve the problem at hand. The objective is to generate a solution that the platforms teams can implement to solve a problem experienced by the product teams.

Idea generation techniques can be used to structure the session and inspire the participants. Some of these techniques include mind-mapping, SWOT analysis, and Six Thinking Hats.

How is it done?

Invite representatives of all involved parties, have post-its ready, and select an idea generation technique.

After gathering enough ideas, spend time pruning and consolidating them. The organiser, or an assigned person, then captures the actions and next steps needed to develop the ideas generated.

Why is it used?

There are several benefits: a broader set of solutions, ice-breakers between teams, deep user understanding, increased collaboration, and relationship building.

Considerations?

Idea generation can be the most powerful of the three techniques in terms of enabling collaboration, but it is also the riskiest and the one that requires the most effort. The session must be well structured and produce actionable output that will be considered when creating the solution.

 

Now you have two techniques you can use straight away to start fostering collaboration between teams, User Surveys and Shadowing, and one that can bring it to the next level, Idea Generation.

To finish, I'd like to recommend the following book: This Is Service Design Thinking. It contains the techniques above and many more, along with case studies and resources you can use to think about user involvement when creating your platform services.

 

Article written by Isaac Perez Moncho, Head Of Infrastructure at Lending Works

Design thinking tools to boost your DevOps journey – part 1 https://devopsnews.online/design-thinking-tools-to-boost-your-devops-journey-part-1/ Tue, 26 Oct 2021 10:50:59 +0000

Isaac Perez Moncho was at the National DevOps Conference 2021 at the British Museum two weeks ago and gave a talk about how to use design thinking tools to boost your DevOps journey. Here is the first part of this talk:

 

The following is the story of William Blacksmith, a Middle Ages toolmaker.

Will works making tools for the shoemakers of the kingdom. You may think making tools for shoemakers is not worth writing about; however, in this kingdom, shoes are a sure way to gain influence in the Royal Court. The King loves his sneakers.

Being an enabler of a kingdom’s influential sector, Will wants to improve his tools to increase the impact his customers have. He is very curious and likes trying new things all the time. When he hears complaints from the shoemakers about how antiquated their tools are, Will starts thinking about how to create a new generation of tools for his customers.

 

What if I asked the shoemakers about the tools we make for them?

No one in the kingdom has ever asked any customer what they thought about the tools they used. Will is not completely sure the shoemakers will be very collaborative. He starts thinking about questions to ask the shoemakers, and after some time he comes up with the following three:

Would you recommend our tools to shoemakers in our neighboring kingdom?

What would you keep from our tools?

What would you change?

Will believes those three questions are a good start because they give him some feedback, while not taking much time from the shoemakers. This should result in a good response rate. He has a problem, though.

The dancing season is about to start, and the shoemakers, and Will himself, are very busy. He cannot spend two days visiting each of his customers in person. What can he do? He is doubly lucky: not only has a printing press opened up next door, but all his customers, and he himself, know how to read and write! Not likely in those times, but very convenient for me and my story.

He decides to print the three questions, send his apprentice to deliver them to all his customers, and tell them he'll be back one week later to collect the answers. Will is amazed by the responses and the suggestions to improve the tools. Some are very simple, yet he had not thought of them before. Something unexpected also happened: some shoemakers left comments on the sides of the pages thanking him for giving them a way to express their issues.

Encouraged by the responses, he is eager to get better quality feedback and build a better relationship with the shoemakers.

 

What if I went in person and observed the shoemakers using their tools?

Will’s head is spinning trying to find better ways to improve his tools. With his experience, direct observation, and a dialogue with the users, the quality of feedback would increase substantially.

Now he has a problem: who is he going to visit? He has too many customers, and visiting all of them would take too much time. He decides to start with the shoemakers who left thank-you notes on the earlier questions and with whom he has a better relationship. Will settles on three customers, sending his apprentice to ask them for a good time to spend an hour observing some of their processes.

The visits are an eye-opener:

“Why do you use this tool like this? It’s supposed to be used this way.”

“We know, Will. But if you use it like it’s intended it takes too long to get the leather cut.”

“What? Did you set this up three hours ago?”

“Yes, Will, the tool and the sole need to settle for three hours before they can be put together.”

Will is mindblown – if he knew about emojis, he would use one now. Many of his preconceived notions about how his tools were being used have been shattered. He heads back to his workshop, racing with ideas about how to improve his tools.

After the success of getting ideas from others, Will decides that before tackling some of the biggest challenges, he is going to work with his customers.

 

What if I invited some of the shoemakers here to my workshop and we discussed together how to tackle a challenge?

He prepares food, wine, and some small square pieces of parchment. When everyone arrives, Will presents them with a problem some of the shoemakers had and asks for ideas from everyone.

After several ideas are discarded, a few promising ones emerge. Will tells the shoemakers he will start making some prototypes of tools to tackle this challenge, and he will send them so they can try the prototypes. Everyone is so excited that they continue drinking, eating, and talking about tools and how to solve future problems.

We are not blacksmiths, we don’t create tools for shoemakers, and most of the time, we are not in the Middle Ages. However, we create tools and systems for expensive engineers, and we want to be proud of the tools we create for them.

In part 2, we will learn how and when to use Will’s techniques, as well as their modern names!

 

Article written by Isaac Perez Moncho, Head Of Infrastructure at Lending Works

Moving away from traditional waterfall methods https://devopsnews.online/moving-away-from-traditional-waterfall-methods/ Wed, 04 Jul 2018 10:14:24 +0000 How do tax and advisory services keep up to date with the rapid pace of innovation within technology? DevOps Manager at KPMG, Adnan Rashid, reveals.

KPMG is a global network of independent member firms in 154 countries and is most commonly known for providing audit, tax and advisory services. In recent years, many IT departments have found themselves under increased pressure to keep up with the rapid pace of innovation and change within technology along with increased demand from customers for new features and services.

In order to address these challenges, KPMG in the UK has been going through a steady transformation, moving away from traditional waterfall models to a collaborative, customer-focused DevOps approach that encompasses agile methodologies and cloud technologies such as AWS, Azure and GCP. Due to the nature of the services KPMG provides, there are considerable regulatory and governance requirements to comply with in order to protect our clients' data and ourselves. With this in mind, in 2015 KPMG launched a pilot project with the vision of delivering services efficiently, reliably and securely to our customers.

We started off by creating a small business-centric team to deliver an application in the cloud, with the vision of becoming the benchmark for future projects and providing evidence to the business that it is possible to do DevOps in the enterprise whilst meeting the necessary security and governance requirements. The application was to be hosted in AWS and utilised infrastructure as code, allowing deployments into multiple environments in a consistent, repeatable manner. It also allowed deployments without any concern about configuration drift, with the added benefit of rapid deployment speeds.

The infrastructure stack

We also began to develop a pipeline that would allow us to release software code from development through to production consistently, with controls and checks in between. The infrastructure stack consisted of Atlassian Stash, now known as Bitbucket, as our secure, private source code repository; TeamCity for testing and building the application package; and finally Octopus Deploy, which manages the deployments into a number of environments. The project was an immediate success, as the application team could now quickly accommodate customer requests and ensure the stability of the environment without having to rebuild any component manually.
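The gated release flow described above, an artifact moving from build through successive environments with checks in between, can be sketched roughly as follows. The environment names and gate rules here are illustrative assumptions, not KPMG's actual configuration:

```python
# Sketch of promoting a build artifact through environments, with a
# check gate before each step. Environments and gates are invented.

ENVIRONMENTS = ["dev", "test", "staging", "production"]

def promote(artifact, gates):
    """Deploy an artifact through each environment in turn.

    'gates' maps an environment name to a callable returning True when
    the artifact may proceed; promotion stops at the first failed gate.
    Environments without an explicit gate are open by default.
    """
    reached = []
    for env in ENVIRONMENTS:
        if not gates.get(env, lambda a: True)(artifact):
            break
        reached.append(env)
    return reached

gates = {
    "staging": lambda a: a["tests_passed"],
    "production": lambda a: a["change_approved"],
}

artifact = {"version": "1.4.2", "tests_passed": True, "change_approved": False}
print(promote(artifact, gates))  # ['dev', 'test', 'staging'] -- prod gate blocks
```

Encoding the approval as a gate rather than a manual handover is what lets the same controlled process run consistently for every release.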

It is a common misconception that large enterprises cannot adopt DevOps and Agile practices; it is often believed they are suited only to smaller start-ups, given the number of processes within a large organisation. We had started our automation journey; however, we now needed to address stringent security requirements, service management considerations, and the ongoing maintenance of the application. Instead of attempting to change an entire department's processes and methods of working, we started having discussions and listening to the various concerns, while keeping at the forefront cloud best practices such as utilising disposable resources, infrastructure as code, automation, loosely coupled components and asynchronous integration.
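Of the cloud best practices listed above, "loosely coupled components and asynchronous integration" can be illustrated with a tiny queue-based sketch: the producer never calls the consumer directly, so either side can be changed, scaled, or replaced independently. This toy uses an in-process queue standing in for a real message broker:

```python
# Minimal sketch of asynchronous integration via a queue. The order IDs
# are invented; in practice the queue would be a managed broker
# (e.g. SQS or a similar service), not an in-process object.
import queue

events = queue.Queue()

def producer():
    for order_id in (101, 102, 103):       # e.g. incoming client requests
        events.put({"order": order_id})    # hand off to the queue; no direct call

def consumer():
    processed = []
    while not events.empty():              # drain whatever has arrived
        processed.append(events.get()["order"])
    return processed

producer()
print(consumer())  # [101, 102, 103]
```

Because the only contract between the two sides is the message shape, the producer keeps working even if the consumer is down, slow, or mid-redeployment, which is exactly the resilience the asynchronous-integration practice is after.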

Each team had built up their own set of considerations and best practices, therefore when attempting to adopt a new method of working and to deliver value, it was extremely important to try to find a middle ground and compromise. Once there was a clear understanding of the various controls, due to our agile way of working, it was easy for us to change the architecture design whilst minimising both downtime and development impact. We ran two-week sprints and began to work through the deliverables in order to present back to the relevant teams and put in place the necessary documentation.

Utilising services

As the business saw how quickly we were able to deliver value, word spread around the organisation, and the team found themselves going from a single project to almost 100 within the space of a year. To accommodate such growth, the team rapidly increased its number of engineers and began working more closely with service management, continuously improving to ensure incidents, changes and problems were handled efficiently and in a timely manner. We saw improvements in change management: because we developed all infrastructure as code, it was easy for us to track, review and manage changes. At a high level, everything was committed into source control, reviewed by senior engineers, and evaluated by the product owner to validate that there were no business risks.

Today we have matured our offering in a variety of ways, from our pipeline, utilising services like Jenkins for orchestration, Terraform for infrastructure, Vault for security, and Trend Micro for anti-malware and anti-virus, to our cloud transformation professional services, which help some of the biggest companies in the world adopt DevOps and move towards working in a more agile manner to ultimately improve customer experience.

Adopting DevOps and cloud initiatives involves changes from all parts of the organisation and at all levels including culture, process and technology. Organisational change is a journey which takes time, understanding and collaboration to succeed.

Written by DevOps Manager at KPMG, Adnan Rashid

Adopt DevOps & Kubernetes to stay competitive! https://devopsnews.online/adopt-devops-kubernetes-to-stay-competitive/ Mon, 02 Jul 2018 08:54:08 +0000 http://www.devopsonline.co.uk/?p=13129 Adopting DevOps helps enterprises expand their digital experiences and brings down costs. Here are some of the key reasons to adopt DevOps, if you haven't already! 

The post Adopt DevOps & Kubernetes to stay competitive! appeared first on DevOps Online North America.

Over the years, the adoption of DevOps has increased across a number of different sectors and has come a long way in helping companies stay ahead of the game quickly and efficiently.

As we all know, when it comes to faster time-to-market there is no room for error, as mistakes can lower business value and damage brands. Because of this, enterprises are encouraging testing and development teams to use contemporary methodologies, frameworks and tools such as Kubernetes to accelerate the deployment of applications and software.

This is where DevOps comes in, bringing development and IT operations together to enhance collaboration across an array of functions.

Andrew Hardie, Chairman of the BCS DevSecOps Expert Group, commented: “DevOps is about speed. Speed through automation, reliability and consistency. Automation deals with commit, build, package, test, promote and deploy, with multiple levels of test and deploy for each environment (dev, sit, nft, pre-prod, prod).”

Speed, quality & innovation

Here are some of the key reasons to adopt DevOps, if you haven’t already!

  • Continuous testing, continuous integration and continuous deployment are part of the DevOps methodology, enabling shorter development cycles, accelerating deployment frequencies and making the software release process more dependable
  • DevOps ensures that the quality of the software release is maintained, helping enterprises meet their set business objectives consistently
  • It facilitates continuous feedback and improvement, even in production, enabling automated release testing, continuous integration testing and continuous planning
  • DevOps supports automated tests for front-end, middle-tier and backend validations, meaning quality check-gates are created and maintained at every stage of the software testing cycle
  • Agility and flexibility in development processes ensure speed, quality and innovation.
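The check-gates idea from the list above can be sketched as a set of per-stage validations, where a release only proceeds while every gate stays green. Stage names and checks here are illustrative assumptions, not a prescribed toolset.

```python
# Minimal check-gate sketch: each stage of the release runs its own
# automated validations; run_gates reports any stage that failed.
CHECK_GATES = {
    "front-end":   [lambda: True],   # e.g. UI smoke tests
    "middle-tier": [lambda: True],   # e.g. API contract tests
    "backend":     [lambda: True],   # e.g. data-layer validations
}

def run_gates(gates: dict) -> list:
    """Run every stage's checks; return the names of failed stages."""
    return [stage for stage, checks in gates.items()
            if not all(check() for check in checks)]

failed = run_gates(CHECK_GATES)
assert failed == []        # all gates green: the release can proceed
```

The value of structuring checks this way is that a failure is reported against a named stage, so feedback from production or pre-production flows straight back to the responsible team.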

DevOps & Kubernetes

DevOps teams are committed to open-source tools such as Kubernetes, with most DevOps engineers adopting it in place of closed-source alternatives.

Hardie agreed: “When implementing DevOps, you should also add monitoring, event logging, metrics and tracing, as well as GitOps for declarative environment promotion. Kubernetes is also evolving very fast.

“Keeping up is a struggle, both for vendors and DevOps professionals. A new ecosystem tool appears every week, or sooner. Deployment is no longer procedural but declarative. Build and package tools (e.g. Maven, Docker) disappear behind the curtain of automation.”
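Hardie's point that deployment is "no longer procedural but declarative" can be illustrated with a toy reconciliation loop in the Kubernetes style: you declare the desired state, and a controller compares it with the actual state and works out the actions needed to close the gap. Everything below is a deliberate simplification, not Kubernetes' real controller logic.

```python
# Toy declarative reconciliation: compute the actions that bring the
# actual state in line with the declared desired state.
desired = {"web": 3, "worker": 2}        # declared replica counts
actual  = {"web": 1}                     # what is currently running

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to make actual match desired."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(f"start {want - have} x {name}")
        elif have > want:
            actions.append(f"stop {have - want} x {name}")
    return actions

assert reconcile(desired, actual) == ["start 2 x web", "start 2 x worker"]
```

The declarative style is what lets build and package tools "disappear behind the curtain of automation": operators state the end result, and the platform converges on it continuously.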

More and more firms appear to be turning to Kubernetes and microservices platforms to add agility and, in many cases, computing power to their IT operations, while serverless deployments are mainly seen as a way to save resources by delegating server management to third parties.

“For the developer, everything between code commit and code deploy becomes automated and invisible. They see only test failure report or successful deployment. The DevOps team are the ones doing all the hard work to make that happen, keep it happening, keep it up-to-date and keep it secure,” Hardie added.

Hardie has been “online” since 1982 and automating IT systems since 1991. In that time, he has worked for seven parliaments, eight UK Government departments, three Local Authorities, Tony Blair’s private office, the Royal Household, the Council of Europe in Strasbourg, two of the “big five” UK banks, two hi-tech startups and three international NGOs. Andrew is a well-known figure on the London DevOps scene and a regular speaker at conferences and Meetups. He is also Chairman of the BCS DevSecOps Expert Group.

Written by Leah Alger

Forrester report: Six trends that shape DevOps adoption https://devopsnews.online/forrester-report-six-trends-shape-devops-adoption/ Fri, 01 Dec 2017 08:00:04 +0000 http://www.devopsonline.co.uk/?p=11079 Forrester’s report 'Six Trends That Will Shape DevOps Adoption In 2017 And Beyond' highlights six major trends benchmarked from Forrester's 'Q1 2017 Global DevOps Benchmark'

The post Forrester report: Six trends that shape DevOps adoption appeared first on DevOps Online North America.

Forrester’s report Six Trends That Will Shape DevOps Adoption In 2017 And Beyond highlights six major trends benchmarked from Forrester’s Q1 2017 Global DevOps Benchmark online survey, in which professionals determined which DevOps methods they want to adjust, enhance or continue to pursue.

According to the report, to fully adopt DevOps practices, changes must be made regarding culture, automation, lean, measurement, sharing, and sourcing methods.

The report found that I&O professionals are “heading in the right direction” across multiple teams and functions, which is critical to breaking down organisational silos.

The report also found that in order to offer a better customer experience, organisations must speed up release cycles of applications and services.

Forrester’s Q1 2017 Global DevOps Benchmark online survey found that I&O professionals are demonstrating great results with the DevOps competency model: 63% of respondents indicated they have implemented or are implementing and expanding DevOps, while 27% are planning to implement it.

The survey also showed 61% of I&O professionals agree that team collaboration is encouraged and rewarded.

Forrester’s Q1 2017 Global DevOps Benchmark online survey was fielded by 623 individuals who work in technology management.

Written by Leah Alger

Micro Focus helps DevOps transitions https://devopsnews.online/micro-focus-helps-devops-transitions/ Thu, 02 Nov 2017 12:16:41 +0000 http://www.devopsonline.co.uk/?p=10837 Micro Focus helps enterprises ease DevOps adoption to deliver higher quality software, quicker

The post Micro Focus helps DevOps transitions appeared first on DevOps Online North America.

Micro Focus helps enterprises ease DevOps adoption to deliver higher quality software, quicker.

Ashish Kuthiala, senior director at Micro Focus, said to SD Times: “It’s easier to build an effective DevOps practice when you’re starting with a blank slate.

“It’s harder for enterprises to change the way they operate so they can implement DevOps efficiently. To do that, they have to choose the right team members and the right toolsets, and stitch those toolsets together.”

Large enterprises have spent a lot of money and time on tools for specific purposes.

“It’s difficult for enterprises to pivot when they have a legacy culture and considerable complexity built into their existing toolchains and processes. It’s an iterative journey that takes time to understand and implement,” added Kuthiala.

To enable code to integrate automatically with infrastructure and run tests automatically, Micro Focus built a set of automated gates.

Kuthiala continued to SD Times: “It’s really important to develop a culture that allows people to experiment. You have to allow people to fail fast, learn from it and keep moving forward.”

“There’s a lot of leadership change and encouragement that’s needed to make this work at an enterprise scale.

“Your first pipeline serves as a proof point and then you have the best practices in place to build successive pipelines. That’s how we’re scaling this up for our customers and ourselves.”

Written by Leah Alger
