Expert POV Archives - DevOps Online North America https://devopsnews.online/category/expert-pov/ by 31 Media Ltd.

Leaders in Tech: Aimee Bechtle https://devopsnews.online/leaders-in-tech-aimee-bechtle/ Wed, 23 Sep 2020 09:54:15 +0000

The post Leaders in Tech: Aimee Bechtle appeared first on DevOps Online North America.


We are proud to announce the launch of our Leaders in Tech editorial series. Speaking to leaders in the industry to capture their stories, career highs and lows, their trials and successes, their current company and their role, most recent projects, advice to others, and the individuals who they most look up to in the industry.

Aimee Bechtle is the Head of DevOps and Cloud Platform Engineering at S&P Global. Aimee specializes in helping large organizations deliver value to the business faster and more safely, by working backward from the customer to solve their problems and then unleashing innovation through Agile, DevOps, and the Cloud.

So, we sat down with Aimee to find out more about why she joined the tech industry, what her role entails, the challenges she faces as a tech leader, and her advice to aspiring engineers and developers.

 

What are your current role and responsibilities?

I am the Head of DevOps and Cloud Platform Engineering for S&P Global’s Market Intelligence Division. I am responsible for the system that enables application teams to deliver high quality working software at a high frequency into a scalable, highly available cloud infrastructure.

You work at S&P Global, what was your journey like? How did you get where you are now?

My journey has been, and still is, an unpredictable and exciting one. I am a late bloomer. In 1998, I was planning to be a stay-at-home mom and raise a family when I got an opportunity to work part-time in a corporate IT department at a federally funded research company. I was there for 16 years of my 27-year career in IT. The job afforded me the work-life balance I needed to raise my kids while my husband traveled, and it allowed me to stay current with technology. I got my first leadership position in 2004, leading a small automated test team. I planned to stay at that company until I retired, suppressing or denying my ambition.

But in 2013, I led the implementation of a continuous delivery pipeline and got introduced to DevOps. I fell in love, and this unleashed my ambition. When the opportunity to practice DevOps, learn the Cloud, and grow as a leader became available to me at a different and much larger company, I took it. The uncertainty that comes with change was scary, and I struggled to adapt at first; I questioned and doubted my decision. But I had strong leaders who gave me time and space to learn, who coached me and championed me. I was at that company for four years, and I absorbed everything I could and took opportunities, and risks, to lead and learn how transformation at scale works. I learned what I was capable of as a leader and in driving change. When I got the opportunity to do it at S&P as a more senior leader, I took it.

The answer to how I got to where I am is having leaders who invested in me. I think they invested in me because of my grit, my ability to learn and be curious, my ability to collaborate and to engage and align others to achieve goals, and my willingness to take risks and become comfortable with uncertainty.

What inspired you to go on this journey? What drew you to the tech industry?

It isn't so much an inspiration as a calling. It is a natural inclination of mine to expand my scope, step in, and lead when there's an opportunity to do so, and to take personal risks to foster growth and development. I am drawn to the tech industry because I am a maker at heart, and you can make some pretty incredible solutions and change people's lives with technology. I also love technology because it is rapidly changing. I love change and am comfortable with new environments, challenges, and circumstances. I get to help others adopt technology and get through the change that comes with that adoption.

Who do you look up to for inspiration or mentorship?

I look to my existing leadership for inspiration and mentorship. If I can't, then I am working for the wrong person. Every leader I have had has shaped me as a leader and left me with a theme, lesson, or point of view that sticks with me. There's a cliché that people don't leave a job, they leave leaders. That has not been true for me. I have left jobs because I outgrew them, and I outgrew them because I had great leaders.

What do you think are the most important qualities of successful tech leaders today?

The most important qualities are empathy, agility, authenticity, and grit.

How do you keep your team motivated despite conflicts and obstacles?

Always know and communicate the "why" and where you are going. Be really clear on the mission and vision and do not waver. Acknowledge that change is challenging, validate feelings, and set an example. This is why I mention empathy and authenticity as important leadership qualities. Your people need to know you are human, that you understand change is hard, and that you know what they are going through. Sharing your personal stories of perseverance and persistence resonates and helps them see themselves in you.

What is expected of you? What are your expectations for your team?

I am expected to establish goals and deliver results that benefit the business and create value, and to leave my team and environment better off than when I started. I am expected to drive high-performing teams and bring out the best in my people. My team is expected to deliver on the goals and outcomes I establish and to solve problems.

What are your current goals? What projects are you currently working on?

My goals are to implement a system that accelerates the delivery of value to our customers and create an environment where our technology talent does the best work of their lives and innovates to create a competitive advantage.

I am working on a cloud-native architecture leveraging containers, microservices, and a continuous delivery pipeline.
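A pipeline like the one Aimee describes is, at its core, an ordered series of automated stages (build, test, package, deploy) that halts on the first failure. Purely as an illustrative sketch in Python — the stage names and stub commands below are hypothetical, not a description of S&P Global's actual setup:

```python
# Toy continuous-delivery pipeline runner: runs each stage in order
# and halts on the first failure. Stage names/commands are illustrative.
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> bool:
    """Execute stages sequentially; stop at the first failing stage."""
    for name, stage in stages:
        print(f"--> {name}")
        if not stage():
            print(f"Stage '{name}' failed; halting pipeline.")
            return False
    return True

# Example stages (stubs standing in for real build/test/deploy steps)
stages = [
    ("build",        lambda: True),   # e.g. compile, build a container image
    ("unit-tests",   lambda: True),   # e.g. run the test suite
    ("deploy",       lambda: True),   # e.g. push the image, apply manifests
]

if __name__ == "__main__":
    print("pipeline ok" if run_pipeline(stages) else "pipeline failed")
```

In practice each stage would shell out to real tooling (a compiler, a test runner, a container registry); the point is the fail-fast ordering that continuous delivery depends on.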

What are you the proudest of in your career so far?

Seeing some really talented people I led grow, move into leadership positions, and lead other people.

What is your favorite part of your job?

Working with really smart and talented people who make it fun to come to work, and with cutting-edge technology that challenges us to learn.

What has been your greatest challenge from working as a tech leader?

Keeping up with technology and skills. It is challenging to find the time to learn new technologies and understand them enough to be respected by engineers and be able to ask the right questions. I’ve been dying to learn Kubernetes and get my hands on the keyboard but it is hard to find the time.

What’s the most important risk you took in your career and why?

Leaving a job I had been comfortable in for 16 years to join a Fortune 100 company. It was a risk because I left a quasi-government, non-competitive environment and went into a competitive commercial environment full of top talent. I had to quickly learn and adapt, and especially to learn the cloud. I was petrified I would fail and return to my previous employer, where I would be told "I told you so" about the commercial world.

How do you continue to grow and develop as a tech leader?

I am never done learning and I surround myself with people who are smarter than me and go to them for answers.

How do you align your team and company with your vision and mission?

By being clear about the mission and vision, making it visible and actionable, and then executing on it and showing the results backed up by data. I look for a story to tell that showcases the mission and vision and I tell it over and over again to key stakeholders.

What have you learned from your experience so far?

The journey is never finished, and you can always grow and learn. Challenge yourself and know what you are capable of.

Do you have a memorable story or an anecdote from your experience you’d like to tell?

I’d have to write a book to capture them all. One that stands out is I worked with a Product Manager who wanted to deliver features faster and was willing to slow down development for a while to focus on continuous delivery and upskilling his organization. He said to me “you have to slow down to go fast”. I have it on a t-shirt and use it with Product Managers I’m trying to influence.

Finally, do you have any advice to aspiring engineers and developers who want to grow in the tech industry?

Yes, don’t be an expert. Experts stop asking questions and have all the answers. Be a learner, ask questions, and know that the journey is never finished. Listen, learn then lead.

How to manage the overhead of application support while streamlining your business operations https://devopsnews.online/how-to-manage-the-overhead-of-application-support-while-streamlining-your-business-operations/ Fri, 28 Feb 2020 10:43:25 +0000

The post How to manage the overhead of application support while streamlining your business operations appeared first on DevOps Online North America.

Knowing how and when to keep development teams ticking over to serve growing business needs is a full-time job, and with the advances in modern practices like Agile and DevOps, it can be complex with a high technical burden.

In this helpful guide, Matt Muschol, Chief Technology Officer at Clearvision discusses how to manage the overhead of application support while streamlining your business operations.

Lifecycle management tooling helps, and can offer great results in improving the flow of work and automating delivery pipelines, but this comes at a cost. These tools need to be looked after and maintained to a high standard. Businesses soon find that application support can quickly become costly and difficult to manage, and that it diverts talent away from serving the core business.

Development teams need to be focused on developing the applications that support customers and grow the business – they don’t want to be bogged down by ongoing incident break-fix issues, upgrades, patching, plugin support, escalations and the like. Yet, these support activities are crucial to the day-to-day operations of the team and also require specialist skills. If the tool stack is down or not running optimally, your business will suffer.

Many businesses simply don’t have the internal skillset or resource to handle support effectively. Additional IT staff can be recruited, but this is expensive and comes with all the management overheads. Support needs also often fluctuate over time. You can hire contractors, which would provide you with flexibility, but with IR35 coming in for the private sector in April, the burden of responsibility to determine the status of contractors for tax purposes moves from the contractor to the employer. For many businesses, this will negate the benefits to the business of hiring contractors in the first place.

Increasingly businesses are turning to managed service providers to help them improve their application support services and cut costs. MSPs can remotely manage a customer’s IT infrastructure and/or end-user systems, typically on a proactive basis and under a subscription model. They are specialists in particular applications, services or functions, and provide a range of services tailored to the needs of the organisation.

They can fit seamlessly with in-house functions by providing technical expertise to second- and third-line support teams who may not have the skills themselves. Alternatively, an MSP can take on the full support service and run it for you in their cloud data centre.

Moving support outside the organisation can seem counterintuitive – can an external company manage internal applications as effectively? 

A key advantage of MSPs is that they have greater levels of expertise than in-house teams, as their core business is focused on supporting the applications they manage. They will always be up to date with the latest versions of the product set, adding to their wealth of experience with earlier product versions. MSPs will also have the resources to assemble a team when needed to implement new products or projects quickly, something that would be much more difficult to achieve internally.

Another benefit is economy of scale. MSPs serve hundreds or even thousands of customers across an application stack, and therefore benefit from the collective experience of supporting an array of customers with the same product set, as well as the payback necessary to invest in state-of-the-art monitoring and operational solutions. Whether it's creating intricate workflows and schemas, troubleshooting network errors, discovering bugs and limitations, or becoming an expert on a new marketplace plug-in within the hour, application support engineers not only keep your applications running, they can add value to your business.

Today’s technology lends itself to engagement from any location and time zone, so when support is needed, it’s there. Remote support teams utilise service desks, screen-shares, video calls, and chat to troubleshoot issues in the same way as if they were on site. Remote support comes with the advantage of 24/7 coverage with teams generally spread across multiple regions and time zones. This is particularly beneficial when it comes to upgrades and troubleshooting, where preventing, minimising and recovering from downtime is critical. With support teams spread across the globe, they can coordinate events and ensure customers are not impacted in a negative way.

So, what about cost?

Outsourcing your support converts fixed IT costs into variable costs, which actually makes budgeting easier and reduces the cost of administering product environments. As an example, here at Clearvision we offer flexible solutions that scale with you. We deliver support solutions to suit the needs of your business, with flexible options and add-ons. We also provide many services beyond enterprise support including implementation, consultancy, hosting, migrations and more.

Agile and DevOps should be a joy, not a burden, something that drives your business forward, not tangles you in the thorns of yet more complex technology. Managed service providers can take the load off your shoulders and help you turn this vision into a reality.

Written by Matt Muschol, Chief Technology Officer, Clearvision

What developers really want from their jobs https://devopsnews.online/what-developers-really-want-from-their-jobs/ Tue, 28 Jan 2020 12:35:34 +0000

The post What developers really want from their jobs appeared first on DevOps Online North America.

For a happier team, give robotic tasks to robots

Gartner has estimated that by 2021, demand for application development will grow five times faster than tech teams can deliver, and the digital skills shortage is projected to result in 4 million vacant roles by 2030. With fierce competition for the best talent, it’s crucial for businesses to be able to attract and retain top technical talent to build a happy workforce with low turnover.

Our recent developer survey gave developers a chance to tell us the factors that impact their job satisfaction. Having realistic targets (87% agree) and the right tools (92% agree) came out on top. Contrary to popular belief, developers are also eager to increase their job satisfaction by introducing automation for the repetitive tasks they don't enjoy, such as unit testing. In fact, 66% of those surveyed believe unit test setup is mundane.

Achievable targets lead to satisfied developers

Survey findings also suggested that manager expectations tend to be slightly higher than developers can deliver. In the companies our respondents work at, for example, the average target for code coverage by unit tests is 63%, but almost half (48%) of surveyed developers reported that they have found it difficult to achieve these coverage targets. 42% of the developers we surveyed agreed that they have skipped writing unit tests in order to speed up new feature development.

As the developers we surveyed agreed that having realistic targets is important to their job satisfaction, ensuring they have the right tools to reach these targets is critical to maintaining their job satisfaction. So if management’s expectations are going to stay high—as they should—what supporting tools do developers need in order to achieve their targets? According to the research, the answer appears to be increased automation, with 86% of respondents agreeing that the availability of automation for repetitive tasks is a factor in their job satisfaction.

Which tasks should be automated?

Automation is frequently discussed as a key tenet of DevOps, but how and where to effectively introduce automation tools isn’t always clear in practice. Some development tasks are a better fit for automation than others. Does a task follow a set of rules? Does it need to be done often? Is it important to the quality or security of your software product? If the answer to all of these questions is “Yes,” then it’s probably worth automating.
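Those questions amount to a small checklist. As a hedged illustration, the scoring function below is my own invention, not something from the article or the survey:

```python
def automation_score(rule_based: bool, frequent: bool,
                     quality_critical: bool, disliked: bool) -> int:
    """Count how many of the article's criteria a task meets.
    Three 'yes' answers to the fit questions make a task a strong
    automation candidate; developer dislike adds further weight."""
    return sum([rule_based, frequent, quality_critical, disliked])

# Unit testing, as characterised by the survey: rule-based, frequent,
# quality-critical, and widely disliked.
print(automation_score(True, True, True, True))    # -> 4, strongest candidate
# A rare, ad hoc task scores lower and is a weaker candidate.
print(automation_score(True, False, True, False))  # -> 2
```

The point is not the arithmetic but the habit: score tasks against explicit criteria before investing in tooling.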

Also important, however, is how teams feel about doing a task manually: do developers dislike it? Would they do it if they didn't have to? When developers don't want to find time to work on a task, there is an even greater reason to find a way to take it off their plates. In this way, automation humanises people by removing robotic work.

The stages of the software delivery lifecycle that are well suited for automation include deploying new releases, finding bugs, and testing—both creating tests and running them. Writing unit tests, for example, takes up 20% of a developer’s time, according to our survey. Unit tests are often repetitive and uninteresting to write, but they are an important part of catching unintentional changes in code that can lead to bugs. Still, 39% of developers wish they didn’t have to write unit tests at all. This is one area that can be automated for the benefit of everyone.
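One way to blunt the repetitiveness without removing the safety net is table-driven (parameterised) tests, where many cases share a single test body. A generic sketch using Python's built-in unittest — the slugify function under test is a made-up example, not from the survey:

```python
import unittest

def slugify(title: str) -> str:
    """Hypothetical function under test: lowercase a title, hyphenate words."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Table of (input, expected) pairs: adding a case is one line,
    # not a whole new hand-written test method.
    CASES = [
        ("Hello World", "hello-world"),
        ("DevOps Online", "devops-online"),
        ("  spaced  out  ", "spaced-out"),
    ]

    def test_cases(self):
        for title, expected in self.CASES:
            with self.subTest(title=title):
                self.assertEqual(slugify(title), expected)
```

Run with `python -m unittest`. Tools that generate such case tables automatically — the survey's automation angle — take this one step further.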

Automation benefits for code quality

Not only does introducing automation address a core cause of workplace dissatisfaction, but it also improves the speed of software development and the quality of the final result. 40% of the developers surveyed chose manual processes as a top factor contributing to poor software quality, likely because people are prone to making mistakes on repetitive tasks that they aren’t intrinsically motivated to do.

The organisations that attract and retain top talent by providing the support and tooling their teams require will also be the ones that produce quality software for their customers. Take care of your people and success will follow.

Written by Mathew Lodge, CEO of Diffblue

 

Sumo Logic talks data, cloud and why they decided to make a free service https://devopsnews.online/22213-2/ Tue, 28 Jan 2020 11:32:40 +0000

The post Sumo Logic talks data, cloud and why they decided to make a free service appeared first on DevOps Online North America.

Cloud is everywhere. It is at the heart of data storage and computing power, and without it, IT would not have been able to develop half as quickly as it has. However, despite its all-round worthiness, it can be costly and competitive. With such fierce rivalry in place, Sumo Logic, a continuous intelligence platform, is working on a way to reduce the costs of running real-time analytics and archiving data. Following the organisation's announcement of its On-Demand and Archiving services, Mark Pidgeon, Vice President of Technical Services at the company, talks to us about what the plans will mean for the cloud, and at what point they realised that something needed to change.

What will the announcement of on-demand and archiving services mean for tech firms?

More companies today rely on data from their applications for operational insight and business intelligence – for example, seeing how well their applications are performing, how their cloud infrastructure is operating, and where they can improve their services. However, today's data analytics pricing and licensing models are broken and simply don't reflect the rapidly changing ways companies are using data. A new approach is needed to help harness the full value of all this data. Reducing the cost and changing the economics around cloud is essential if we are to help more companies take advantage of this in their DevOps projects.

With our new on-demand and archiving services, companies no longer have to make a trade-off as their machine data grows. They no longer have to choose between either paying runaway licence costs or simply not using this data.

Organisations will now be able to dynamically segment their data and tailor the analytics accordingly for real-time insights, frequent or infrequent interactive searching, or troubleshooting and full data archiving. These capabilities enable customers to maximise the value, choice, and flexibility required for operating and securing their digital businesses.

Why did you decide to make the Archiving service free of cost?

Sometimes, you have data that you either have to keep for compliance or you want to keep it for analysis, but you don’t necessarily want to look at that data immediately. Take a security event – you want to be able to review a lot of data quickly, but you don’t want to be hosting all that data on standby and incurring a cost on the off chance. Instead, you should be able to store that data somewhere cheap and then look at it quickly when you need to.

Our approach with our Archiving Intelligence Service is to work with your existing storage, bring the data in and analyse it quickly to give you the result you need. This approach helps companies look at their data when they need to, or be more selective with what they import in and why. This helps companies change their thinking about this kind of activity, and when they might want to keep that data.

At what point did you realise that something needed to change in analytic capabilities?

If you look at what DevOps teams are involved in today, they are building the applications that companies use to compete in the market. New companies formed today get started in the cloud and use a 'cloud-native' approach to IT around those applications. More established companies are looking at how to transition over to that model.

Looking at the data from those applications involves not just the DevOps team – this data can provide direct indicators of the impact that business decisions can have over time. If you redesign your app or update your mobile experience, you can see the effect on how people use your applications or how they buy. If you make a mistake, you can rectify it; if not, then you can see the results and the improvement immediately in the data. This business role for DevOps data is something that is evolving rapidly.

In terms of data analysis, where are the biggest areas where people tend to ‘shoot themselves in the foot’?

The biggest problem is that many teams don't have the right ways to measure their activities, or they are using older metrics that are no longer relevant. How we work with clients – and how we run our own IT processes internally – is to look for a key goal that supports customers, and work back from there. For example, you may want to work on customer experience. Traditionally, you might have looked at qualitative data like Net Promoter Scores in post-sale surveys, and they still have value. However, when you have real-time data on site performance and shopping cart abandonment, you can see if there are any particular situations that lead to more people leaving the site.

You can then turn these into measures that you can track. In our own DevOps team, we refer to these as service level objectives and service level indicators. If you don’t put these kinds of metrics in place, then you can be optimising for the wrong kinds of activity.
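As a hedged sketch of how a service level indicator feeds a service level objective check — the data, threshold, and helper names below are invented for illustration, not Sumo Logic's implementation:

```python
def availability_sli(status_codes):
    """SLI: the fraction of requests that succeeded (HTTP status < 500)."""
    ok = sum(1 for status in status_codes if status < 500)
    return ok / len(status_codes)

def meets_slo(sli: float, objective: float = 0.999) -> bool:
    """SLO check: does the measured indicator meet the stated objective?"""
    return sli >= objective

# Invented sample: 9,999 successful requests and one server error.
statuses = [200] * 9999 + [500]
sli = availability_sli(statuses)
print(f"availability: {sli:.4%}, meets 99.9% objective: {meets_slo(sli)}")
```

The design choice is the one the interview describes: derive the indicator from raw event data, then compare it to an explicit objective rather than a gut feeling.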

What can you tell us about ‘blind spots’ in monitoring and analytics?

Due to legacy volume-based, one-size-fits-all pricing models, many companies are being forced to make a trade-off when it comes to data for use cases such as code troubleshooting and preventive site or service maintenance. Since this data is used on an ad hoc basis, many companies will choose not to use it, as ingesting and analysing it is simply not cost-effective. By not using this data for cost reasons, developer, ops, and SecOps teams will not have the full visibility needed to operate and secure their digital business.

What big things are you seeing happen in Continuous Intelligence at the moment?

Continuous Intelligence is a natural next step for companies from continuous integration and continuous deployment. These pipelines are how forward-thinking companies run their application delivery and get new features into production. Continuous Intelligence builds on this to show companies how those applications are performing, how secure they are, and how they are supporting the business.

More companies are adopting this kind of approach – it relies on getting data out into peoples’ hands in the right format so they can make use of it, and help them do more of their work using data to make smarter decisions.

It seems as though many companies aren’t putting enough practices in place when it comes to data security. Do you think this is true and if so, why do you think this is happening?

Cloud SIEM is a big trend that will develop more in 2020. According to a survey conducted by Dimensional Research, 93 percent of security professionals think traditional SIEM solutions are ineffective for the cloud, and two-thirds identified the need to consolidate and rethink traditional tools.

Security Information and Event Management, or SIEM, is where companies consolidate their data from multiple sources and understand what is taking place around them. Companies running in the cloud need that data on what is taking place all the time, so they can understand and run their infrastructures securely. The traditional approaches to SIEM can't scale up to cope with the sheer amount of data coming in, so new approaches are needed.

The traditional SIEM vendors are investing more into cloud, while cloud-native companies have built out their products to support the huge volumes of data that cloud applications create. There will be a lot that happens in the market over the next year, as more companies face problems around security and cloud.

What do you think people can do to ensure they are treating data security seriously?

The most important thing is to look at how to automate your approach to security responses and analysis. For many Security Operations Centres, finding the right staff and skills is a massive challenge – according to ISACA, almost 60 percent of companies had open roles for security professionals, and 32 percent found it took at least six months to fill a role. That is a long time for your existing team to be short-handed, so automating processes and making your existing staff more productive will certainly help.

Alongside this, it’s important to build up an understanding of how security functions for DevOps teams and supports all the great innovation work that is taking place. The impact of GDPR in the UK and the forthcoming data privacy regulations in California demonstrate that compliance is a big issue for business. This can be challenging, particularly when security is seen as a cost centre, but it’s essential that the business knows that this work is necessary and delivering value.

What do you think the big trends in 2020 will be?

According to the ISC2 report on cybersecurity for 2019, 75 percent of those surveyed were very or extremely concerned about cloud security. The biggest challenges were struggling with compliance (34 percent) and lack of visibility into infrastructure security (33 percent).

Dealing with these issues will be difficult, but getting more clarity on what is taking place across applications and cloud instances will help. At the same time, more people will get the necessary skills to manage cloud services at scale successfully, so this should be something that will become less of an issue in the future.

Is there anything that you will be glad to leave behind in 2019?

Default deployments of services that are not secure. The number of issues that have come down to poor application components not hardened with steps like access control or encryption is shocking. I hope this is something that will be left behind as people apply best practices around DevOps. However, that is not guaranteed. Using data to spot poor deployments or missed security steps should help.

Where do you see the future of intelligence services heading?

Getting data used across businesses is something we have been talking about for the past few years, but it has not been adopted as widely as many people assume. Getting data to be pervasive within an organisation – where every team has access to data as part of their workflows and can use it productively – is the end goal. We already see companies like Samsung SmartThings or Tacton taking Sumo Logic beyond the DevOps team, giving other departments access to that data and making it useful for them.

I think that kind of progression around data will take place over the next year or two, as more people across companies start to get curious around how other teams in their companies work around data. As one team gets more successful, others want to tap into that success and use it in their own ways. It’s an organic process, and there is no one size fits all approach that works for every company, but it can be encouraged. The customers that we work with really get enthusiastic, and they want to spread the word about the value that they see to other teams.

 

The post Sumo Logic talks data, cloud and why they decided to make a free service appeared first on DevOps Online North America.

]]>
Do buzzwords have more meaning than we give them credit for? https://devopsnews.online/do-buzzwords-have-more-meaning-than-we-give-them-credit-for/ Fri, 17 Jan 2020 16:17:21 +0000 https://www.devopsonline.co.uk/?p=22158 DevSecOps, IDaaS, Kubernetes, IoT, Digital Transformation, Waterfall… The list of trending words that rise and fall in popularity is ever-changing in DevOps. Considering how quickly tech evolves in the modern age, it’s inevitable that this is going to be the case. Even DevOps itself came from the amalgamation of what was ‘popular’ at the time....

The post Do buzzwords have more meaning than we give them credit for? appeared first on DevOps Online North America.

]]>
DevSecOps, IDaaS, Kubernetes, IoT, Digital Transformation, Waterfall… The list of trending words that rise and fall in popularity is ever-changing in DevOps. Considering how quickly tech evolves in the modern age, it’s inevitable that this is going to be the case. Even DevOps itself came from the amalgamation of what was ‘popular’ at the time. But look at the beautiful love child that Development and Operations created!

The popularity of these ‘buzzwords’ has always divided professionals. Whilst some people see them as trending topics, others see them as having much more potential. But it’s not just the words themselves that can bring confusion. Often, putting these words into practice can be just as problematic.

Understanding context

Daniel Ives, Head of Learning (Cloud) at QA Ltd, suggests that whilst people are being told to implement certain “buzzwords”, there isn’t always a full understanding of why these things are popular in the first place, and people aren’t realising the advantages these trends can bring.

“I’ve been talking to people about DevOps for 20 years or so, but it’s only recently that it’s been given a buzzword.”

“I talk to a lot of people who are being told they need to ‘be Agile’, that they need to ‘do DevOps’ and/or ‘use the Cloud’. A lot of them are often confused about the ‘why’ and the ‘what’,” says Ives.

He continues: “I often draw a series of overlapping circles as a Venn diagram. The circles are labelled “Agile”, “DevOps” and “Cloud”. Organisations can be “doing” zero or more of these things, but when they’re doing all three I call it the sweet spot. Agile is what we should be doing. DevOps is how we should be doing it and the Cloud is where we should be doing it.”

“Smorgasbord of buzzwords”

Ives also gives the advice: “If you’re “being Agile”, how are you verifying that the latest sprint is actually going to add value to the business? If you’re “doing DevOps”, how are you gating your latest commit’s promotion to production? If you’re “using the Cloud”, how do you know you’re not about to waste money on on-demand resources?”

“The truly successful “Industry 4.0” organisation is one that has invested the time and energy to understand the smorgasbord of buzzwords and technologies.”

With all this in mind, perhaps it’s important to consider how to use the trend, rather than if it will stay.

 

The post Do buzzwords have more meaning than we give them credit for? appeared first on DevOps Online North America.

]]>
2020: Change is afoot in everything DevOps https://devopsnews.online/2020-change-is-afoot-in-everything-devops/ Fri, 03 Jan 2020 12:18:01 +0000 https://www.devopsonline.co.uk/?p=21953 It is officially 2020 and with the new year comes the inevitability of change, fresh trends and original ways of looking at things. DevOps is one environment that is no stranger to rapid changes and more often than not,  these can lead to varying opinions and that sets a path towards a wide range of...

The post 2020: Change is afoot in everything DevOps appeared first on DevOps Online North America.

]]>
It is officially 2020 and with the new year comes the inevitability of change, fresh trends and original ways of looking at things. DevOps is one environment that is no stranger to rapid change, and more often than not this leads to varying opinions, setting a path towards a wide range of industry predictions.

With a plethora of experience, Tej Redkar, Chief Product Officer at LogicMonitor, has seen trends come and go. Speaking recently to DevOps Online, he gave his opinion on what he believes is trending right now and what the future looks like for DevOps.

Mixing tech

His first point is that the integration of technology will lead the way for change.

“When it comes to DevOps, companies are looking for more than just tools. They are looking for platforms that play nice with their current tool offerings while also adding value with artificial intelligence (AI) and analytics. Between 2019’s public cloud adoption rate of 94% and private cloud adoption rate of 72%, the next trend I see is a strong shift toward flexible platforms that can be adapted to the unique needs of the company rather than tools that offer out-of-the-box solutions.”

Automation

Redkar continues that automation is going to play a bigger role in the development of firms and that right now, companies are just seeing the tip of what DevOps really means and the potential it has.

“As continuous integration (CI) and continuous delivery (CD) deployments become more commonplace, and infrastructure stays elastic based on demand, another trend I predict is companies demanding more and more automation. Today we are barely scratching the surface with DevOps automation, and in the future, more companies in the DevOps space will likely develop features that enable workflows.”

The problem with “hybrid environments”

The CPO’s final point on the trends he notices in DevOps is that it can often be made into an unnecessarily complex thing. Redkar believes that mixed environments are becoming a more normal way of working and suggests the impact this will have.

“Hybrid environments have become more commonplace, either due to business needs or simply due to a lack of long-term digital strategy. Developers now have jump-through environments for end-to-end testing and deployment. To reduce the frustration created by complex workflows, troubleshooting and time to delivery, and to ensure security, ephemeral applications that enable short-lived resource existence and secure connections will gain traction. This will be especially true when fast troubleshooting is essential or additional security is needed.”

It seems clear that there is a definite change happening in DevOps right now, whether it’s an alteration that needs to come from workers or from the general industry, it’s exciting to see how things are about to change.

 

The post 2020: Change is afoot in everything DevOps appeared first on DevOps Online North America.

]]>
Understanding control, security and risk when moving to cloud https://devopsnews.online/understanding-control-security-and-risk-when-moving-to-cloud/ Wed, 11 Dec 2019 12:33:08 +0000 https://www.devopsonline.co.uk/?p=21907 Fears that cloud environments are less secure than on-premise solutions have long been a barrier to organisations making the move to the cloud.  While some companies remain hesitant, cloud security is no longer the obstacle that it once was. According to a report conducted by Nominet, 61% of security professionals now believe that the risk...

The post Understanding control, security and risk when moving to cloud appeared first on DevOps Online North America.

]]>
Fears that cloud environments are less secure than on-premise solutions have long been a barrier to organisations making the move to the cloud.  While some companies remain hesitant, cloud security is no longer the obstacle that it once was.

According to a report conducted by Nominet, 61% of security professionals now believe that the risk associated with a security breach in a cloud environment is the same as or less than that of software installed on-premise.

This view is backed by Gartner who are predicting that over the course of 2020 public cloud Infrastructure-as-a-Service (IaaS) workloads will suffer at least 60% fewer security incidents than those in traditional data centres.

Attitudes have changed, and for most technology leaders the benefits of public cloud now outweigh the concerns. Cloud providers are now viewed as the experts in security – their business depends on it. Cloud providers hire the best industry talent to protect the infrastructure and invest heavily in the latest security innovation for cloud-based solutions. If cloud adoption is to be increased, security needs to be a cornerstone of any cloud-based business.

Amazon Web Services (AWS) is currently the largest provider dominating the public cloud market and has been the leader in the Gartner Magic Quadrant for nine straight years. According to Gartner, AWS accounts for 47.8% of the 2018 IaaS public cloud services market share. That is more than three times the market share of Microsoft Azure, which has 15.5% of the market.

Operational challenges of public cloud

When it comes to migrations, while cloud infrastructure and services are now considered secure, firms need to be aware that moving workloads to the cloud may result in vulnerabilities in the applications becoming easier to exploit.

Virtual cloud infrastructure and services provide you with maximum flexibility and great security options. However, to keep your workloads secure, security patching, application security and pen testing processes need to be in place. This means that a balance needs to be struck between utilising the flexibility of the cloud to react to ever-changing business needs on the one hand, and the compliance and security requirements of the business on the other.

An element adding additional complexity is the more recent trend for multi-cloud strategies. Whilst it is beneficial for a business to not make itself reliant on a single vendor, each vendor has different pricing models, features and terminology along with different compliance models. These need to be navigated and processes put in place to achieve a common base level of security and compliance across all vendors.

The effort, expertise and level of skill required are significant, and many businesses struggle to manage this effectively. Furthermore, none of these things moves your organisation’s core business forward; they are a distraction from its core mission that still ties up essential resources with non-core tasks.

Retaining business focus with managed cloud services

The alternative to doing this in-house is to work with a trusted partner who specialises in cloud-hosted managed services to deliver the solution on your behalf. It is a classic case of making the day-to-day operation of the platform someone else’s problem, leaving you to focus on moving your business forward with core activities without having to give up on the technology that so far has served you well. It also allows you to benefit from the flexibility and seemingly limitless resources available in the cloud without needing to build an in-house team of cloud experts.

According to Gartner: “The cloud managed service landscape is becoming increasingly sophisticated and competitive. In fact, by 2022, up to 60% of organisations will use an external service provider’s cloud managed service offering, which is double the percentage of organisations from 2018.”

Realising the benefit of cloud

For many enterprises, this is a win-win. Here at Clearvision, our ClearHost offering is an example of a managed service designed and operated by our in-house team of cloud experts. The solution is aimed at helping firms of all sizes with the management of their mission-critical applications. It is powered by AWS, which is ISO 9001, ISO 27001 and SOC 2 certified, but is managed entirely by Clearvision.

With the world of business moving to the cloud, no matter where you are on your cloud journey, understanding how to manage complexity, cost and security is a top priority. Many corporations, however, are finding that the vast number of options available to them can result in too many blind alleys being followed, and they therefore struggle to realise the real benefits of cloud.

With many options available it ultimately comes down to what is best for your business. It is important to take a realistic view of the challenges that cloud can introduce and the resources needed to build and manage cloud environments so that you can make the right choice for your business – be that IaaS (Infrastructure-as-a-Service), SaaS (Software-as-a-Service), or a hosted environment built on the secure foundation of public cloud and managed by a trusted partner.

Written by Matt Muschol, Chief Technology Officer, Clearvision 

 

The post Understanding control, security and risk when moving to cloud appeared first on DevOps Online North America.

]]>
GraphQL: what front-end e-commerce software developers and architects need to know https://devopsnews.online/graphql-what-front-end-e-commerce-software-developers-and-architects-need-to-know/ Thu, 05 Dec 2019 10:14:58 +0000 https://www.devopsonline.co.uk/?p=21874 GraphQL is becoming a popular standard for e-commerce due to the reduction in time and amount of code needed for front-end developers. But what is GraphQL and what do users need to know before diving in? What is GraphQL? GraphQL is a layer that sits on top of REST APIs, any application, or datastore, that...

The post GraphQL: what front-end e-commerce software developers and architects need to know appeared first on DevOps Online North America.

]]>
GraphQL is becoming a popular standard for e-commerce due to the reduction in time and amount of code needed for front-end developers. But what is GraphQL and what do users need to know before diving in?

What is GraphQL?

GraphQL is a layer that sits on top of REST APIs, any application, or datastore, that makes it easy for front-end developers to retrieve exactly the data they want, regardless of where it is. Using a SQL-like syntax, front-end developers can effortlessly retrieve all the data needed to render a web page or app screen with a single query. GraphQL has been used to power Facebook since 2014, and Twitter adopted it shortly after. Every Facebook post or Tweet you read is powered by GraphQL behind the scenes. GraphQL is now being used by AWS, Airbnb, PayPal, and a who’s who of other large tech and consumer companies. GraphQL is also being used by online commerce vendors because of the many problems it solves in that domain.

With GraphQL, all a client-side developer has to do is describe what data they want in one query and submit it to GraphQL. For example, a developer could be rendering an order history page and need data from the order, customer, and product catalogue REST APIs. Rather than access each one of those APIs individually, the GraphQL layer will do that for you. Developers then get back a single JSON object with exactly the data that was requested. It doesn’t matter which programming language is used, as it is a specification rather than an implementation.
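A minimal sketch of that flow (the query, field names and response shape here are all invented for illustration, and Python stands in for whichever client language is in use):

```python
import json

# A hypothetical GraphQL query for an order history page. One request asks
# for fields that would otherwise come from three separate REST APIs
# (orders, customers, product catalogue). All names are invented.
ORDER_HISTORY_QUERY = """
{
  customer(id: "42") {
    name
    orders {
      id
      total
      items { productName price }
    }
  }
}
"""

# The GraphQL layer fans out to the backing services and returns one JSON
# object shaped exactly like the query; this literal stands in for it.
response = json.loads("""
{
  "data": {
    "customer": {
      "name": "Ada",
      "orders": [
        {"id": "o-1", "total": 99.5,
         "items": [{"productName": "Watch strap", "price": 19.5}]}
      ]
    }
  }
}
""")

# The client reads one object instead of stitching three API responses.
customer = response["data"]["customer"]
print(customer["name"])                # Ada
print(customer["orders"][0]["total"])  # 99.5
```

The key point is that the response mirrors the query: the client never sees the three backing APIs.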

What problems does GraphQL solve?

Lack of discoverability

When calling REST APIs, it’s often unclear to front-end developers which ones hold what data and how fresh that data is. For example, should a front-end developer retrieve a product’s inventory from the Order Management System (OMS), the Enterprise Resource Planning system (ERP), the Warehouse Management System (WMS), or the e-commerce platform? It’s never very clear where data is or how fresh it is.

Data underfetching

Underfetching data is a common problem with traditional REST APIs. Sometimes a REST API won’t return enough data, or sometimes you have to retrieve data from one REST API before you can call another REST API. This particularly affects devices like mobile phones which have limited processing power and are connected to high latency cellular networks. As a consequence, when making lots of HTTP requests, the time it takes a browser to view a product page can go up from milliseconds to seconds. It may not seem like much of a difference, but in a world where the consumer expects a web page or app to load instantaneously, slow performance could dramatically reduce customer satisfaction and lead to loss of customers.

Data overfetching

Overfetching is also a serious problem. Let’s say a developer is building a product detail screen for an Apple Watch client. You’d only need the product’s name, an image and its price to render that screen. But retrieving an entire product object from a REST API could result in dozens or hundreds of fields for that product being fetched and then ignored. Again, this leads to serious performance issues because of the sheer amount of data being retrieved over what is often a high latency, low bandwidth connection back to the server.
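A rough illustration of the saving, with a made-up product object and a helper that mimics a GraphQL selection set:

```python
import json

# A full product object as a catalogue REST API might return it (field
# names and values are invented): far more than the watch screen needs.
full_product = {
    "id": "p-1", "name": "Trail Shoe", "price": 89.0, "image": "shoe.png",
    "description": "x" * 500, "specs": {"weight": "280g", "drop": "8mm"},
    "reviews": [{"stars": 5, "text": "y" * 200}] * 20,
}

def select(obj, fields):
    """Mimic a GraphQL selection set: keep only the requested fields."""
    return {f: obj[f] for f in fields}

# The watch client only asks for name, image and price.
needed = select(full_product, ["name", "image", "price"])

full_bytes = len(json.dumps(full_product))
lean_bytes = len(json.dumps(needed))
print(lean_bytes < full_bytes / 10)  # True — a fraction of the full payload
```

On a high-latency mobile link, that payload difference is the gap between an instant render and a noticeable stall.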

What are the benefits of GraphQL?

Easy front-end development

As GraphQL is the layer that decouples back-ends from front-ends, it allows front-end developers to make changes quickly and easily. Front-end developers don’t need to spend hours looking for the right APIs to call or for the most up-to-date versions of data, as that is handled by GraphQL. This enables e-tailers to more regularly innovate by testing and rolling-out new shopping features, differentiating their offer and staying ahead of the competition.

Better performance for end-customer

Instead of developers having to make HTTP requests over mobile networks – which are typically constrained by bandwidth and processing power – GraphQL makes all the requests within a data centre where bandwidth and computing power are essentially unlimited and latency is basically zero. This allows faster loading of content and applications for the end-user, creating a much more pleasant shopping experience and driving higher consumer satisfaction.

Less code maintenance

With GraphQL there is far less code to maintain as front-end developers don’t need to deal with the logic needed to call and authenticate with the various REST APIs or other backend systems. Instead, the clients just need one connection to the GraphQL layer. Everything else is managed from there. Moreover, GraphQL gets rid of the need for back-ends for front-ends, as developers can just make requests to GraphQL for the data they need, slashing the amount of code that must be written, tested, maintained and run. As a result, front-end developers can spend more time being creative and less time in mundane work, which in turn boosts job satisfaction.

What are the disadvantages?

It is important to remember GraphQL is not a silver bullet. While it eliminates the need for what could be hundreds of back-ends for front-ends, it is ultimately another layer that must be maintained with its own architecture, development, operations and more. It also leaves the responsibility of security up to the user.

Another drawback is that it can be difficult to combine multiple GraphQL endpoints and schemas. While one front-end developer may want one endpoint and schema, other teams and vendors will have their own, so front-end developers will have to access multiple endpoints. This can be solved, but it does require exposing a single GraphQL endpoint and schema via schema stitching or schema federation.
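A loose sketch of the stitching idea (real tools such as graphql-tools or Apollo Federation also merge types and resolvers; here a “schema” is reduced to a mapping of root fields to owning services, with all names invented):

```python
# Two teams each expose their own schema; a gateway merges them into one
# endpoint, refusing silently-ambiguous field collisions.
catalogue_schema = {"product": "catalogue-svc", "category": "catalogue-svc"}
orders_schema = {"order": "orders-svc", "invoice": "orders-svc"}

def stitch(*schemas):
    merged = {}
    for schema in schemas:
        for field, service in schema.items():
            if field in merged:
                raise ValueError(f"field collision: {field}")
            merged[field] = service
    return merged

gateway = stitch(catalogue_schema, orders_schema)
print(sorted(gateway))  # ['category', 'invoice', 'order', 'product']
```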

From the reduction in development times to the huge decrease in code to manage overall, it is no surprise GraphQL is making waves. But in order to fully capitalise on the technology, front-end e-commerce developers should ensure they educate themselves fully before making the leap.

Written by Kelly Goetsch, Chief Product Officer at commercetools, and author of GraphQL for Modern Commerce which will be out in January 2020 (published by O’Reilly).

 

The post GraphQL: what front-end e-commerce software developers and architects need to know appeared first on DevOps Online North America.

]]>
The top 10 DevOps trends of 2019- the professional’s prediction https://devopsnews.online/the-top-10-devops-trends-of-2019-the-professionals-prediction/ Wed, 04 Dec 2019 11:45:24 +0000 https://www.devopsonline.co.uk/?p=21861 Long read Daniel Berman is the Product Marketing Manager at Logz.io, an Israeli and US-based organisation that focuses on combining cloud services with machine learning to provide visualisation of data for platforms and apps. The firm relies heavily on working in a DevOps environment and so, Berman has provided his insight into next year’s DevOps...

The post The top 10 DevOps trends of 2019- the professional’s prediction appeared first on DevOps Online North America.

]]>
Long read

Daniel Berman is the Product Marketing Manager at Logz.io, an Israeli and US-based organisation that focuses on combining cloud services with machine learning to provide visualisation of data for platforms and apps. The firm relies heavily on working in a DevOps environment and so, Berman has provided his insight into next year’s DevOps predictions because as he says, “it’s important, as [DevOps] changes almost every single day!”

Pipeline Automation

The tendency to automate tasks where possible and practical is a consistent trend throughout DevOps. The concept of automated pipelines for software has become ubiquitous. For example, the number of continuous integration and continuous delivery (CI/CD) tools has continued to grow since GitHub introduced GitHub Actions.
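The fail-fast behaviour at the heart of such pipelines can be sketched in a few lines (stage names are illustrative; a real pipeline delegates each stage to a CI system):

```python
def run_pipeline(stages):
    """Run (name, check) stages in order; stop at the first failure."""
    for name, check in stages:
        if not check():
            return False, name
    return True, None

stages = [
    ("lint", lambda: True),
    ("unit-tests", lambda: True),
    ("integration-tests", lambda: False),  # simulate a failing stage
    ("deploy", lambda: True),              # never reached
]

ok, failed_at = run_pipeline(stages)
print(ok, failed_at)  # False integration-tests
```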

Hand-in-hand with the popularity of automation comes the continuing rise of “infrastructure as code” tooling. Tools such as Terraform, AWS Cloud Formation, Azure Resource Manager, and GCP’s Deployment Manager allow environments to be spun up and down at will as part of the development process, in CI pipelines, or even in delivery and production. These tools are continuing to mature.
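Conceptually, these tools all converge real infrastructure toward a declared desired state. A toy reconciler shows the idea behind a command like `terraform plan` (resource names and instance sizes are invented):

```python
# Desired state as declared in code vs. what actually exists.
desired = {"web-1": "t3.small", "web-2": "t3.small", "db-1": "t3.large"}
current = {"web-1": "t3.small", "db-1": "t3.medium", "old-1": "t2.micro"}

def plan(desired, current):
    """Diff desired against current state and emit a create/change/destroy plan."""
    create = sorted(set(desired) - set(current))
    destroy = sorted(set(current) - set(desired))
    change = sorted(k for k in desired if k in current and desired[k] != current[k])
    return {"create": create, "change": change, "destroy": destroy}

print(plan(desired, current))
# {'create': ['web-2'], 'change': ['db-1'], 'destroy': ['old-1']}
```

Because the plan is computed, not hand-written, the same definition can spin an environment up and down repeatedly with identical results.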

Kubernetes

It feels like Kubernetes was everywhere in 2019. From its inception in 2015, this immensely popular container orchestrator has had the most mindshare in the DevOps community, despite competition from products like Mesos and Docker’s Swarm. Major software vendors like Red Hat and VMware are fully committed to supporting Kubernetes. An increasing number of software vendors are also delivering their applications by default on Kubernetes.

Kubernetes adoption is still growing. While the platform has yet to prove itself for all classes of workloads, the momentum behind it seems to be strong enough to carry it through for a good while.

Service Meshes

Conversations about implementing Kubernetes increasingly go hand-in-hand with conversations about service meshes. “Service mesh” is a loose term that covers any software that handles service-to-service communication within a platform.

Service meshes can take care of a number of standard application tasks that application teams have traditionally had to solve in their own code and setups such as load balancing, encryption, authentication, authorisation, and proxying. Making these features configurable and part of the application platform frees up development teams to work on improvements to their code rather than standard patterns of service management in a distributed application environment.
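Round-robin load balancing is the simplest of those standard tasks, and sketching it shows the kind of logic a mesh sidecar lifts out of application code (addresses are invented; real proxies such as Envoy add retries, TLS and auth on top):

```python
import itertools

class RoundRobinBalancer:
    """Hand each request to the next instance in turn."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def pick(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080"])
print([lb.pick() for _ in range(4)])
# ['10.0.0.1:8080', '10.0.0.2:8080', '10.0.0.1:8080', '10.0.0.2:8080']
```

With a mesh, no service owns this class: the proxy applies it uniformly, and teams configure it rather than reimplement it.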

Observability

Another trend in DevOps is to talk about observability in applications. Observability is often confused with monitoring, but they are two distinct concepts. A good way to understand the difference is to think of monitoring as an activity and observability as an attribute of a system. Observability is a concept that comes from real-world engineering and control theory. A system is said to possess observability when its internal state can be easily inferred from its outputs. What this means in practice is that it should be easy to infer from an application’s representation of its internal state what is going on at any given time. As applications get more distributed in nature, determining why parts of it are failing (and therefore affecting the system as a whole) becomes more difficult.

This is where the associated concept of cardinality, which refers to the number of discrete items of time-series data a system stores, comes in. As a rule, the higher the cardinality, the more likely a system is to be observable, since you have more pieces of data to look over when trying to troubleshoot it. Of course, the data gathered still needs to be pertinent to the system’s potential points of failure, and a mental map is also still required to effectively troubleshoot.
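Cardinality can be made concrete with a toy example: each distinct (metric name, label set) pair is one time series, so repeated samples of the same pair add no cardinality, while each new label value does (metric and label names are invented):

```python
# Samples as (metric, labels) pairs; labels are tuples so they are hashable.
samples = [
    ("http_requests_total", (("method", "GET"), ("status", "200"))),
    ("http_requests_total", (("method", "GET"), ("status", "500"))),
    ("http_requests_total", (("method", "POST"), ("status", "200"))),
    ("http_requests_total", (("method", "GET"), ("status", "200"))),  # repeat
]

# Cardinality is the count of distinct series, not the count of samples.
cardinality = len(set(samples))
print(cardinality)  # 3 distinct series despite 4 samples
```

This is also why a label like a raw user ID is dangerous: every new value mints a new series.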

DevSecOps

While the DevOps portmanteau has been a standard part of IT discussions for some time, other neologisms are coming to the fore. DevSecOps is one of these. This concept is gaining traction as teams aim to get security “baked in” to their pipelines from the outset rather than trying to bolt it on after development is complete. Thus security increasingly becomes a responsibility of DevOps, SRE, and development teams; consequently tools are springing up to help them with that.

“Compliance as code” tools like InSpec have gotten popular as automated continuous security becomes a priority for organisations buckling under the weight of the numerous applications, servers, and environments they track simultaneously.
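The compliance-as-code idea boils down to policies as executable assertions run against live configuration. A Python analogue (not InSpec itself, which uses a Ruby DSL; the config and policies here are invented):

```python
# Configuration pulled from a server, reduced to a dict for the sketch.
server_config = {
    "tls_enabled": True,
    "min_tls_version": "1.2",
    "root_login": False,
}

# Each policy is a named, automatable check rather than a line in a
# spreadsheet someone audits by hand once a year.
policies = [
    ("TLS must be enabled", lambda c: c["tls_enabled"]),
    ("TLS version at least 1.2", lambda c: c["min_tls_version"] >= "1.2"),
    ("Root login disabled", lambda c: not c["root_login"]),
]

failures = [name for name, check in policies if not check(server_config)]
print(failures)  # [] — this config passes every policy
```

Run in a pipeline, a non-empty failure list fails the build, which is how security gets “baked in” rather than bolted on.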

Automated scanning of container images and other artefacts is also becoming the norm as applications proliferate. Products like Aqua and SysDig are fighting for market share in the continuous security space.

You may also hear DevSecNetQAGovOps mentioned as more and more pieces of the application lifecycle seek to make themselves part of automated pipelines. However, DevSecOps is still the most common iteration of the by-now somewhat-classic DevOps pairing.

The Rise of SRE

Site Reliability Engineering is an engineering discipline that originated in 2003 at Google (before the word DevOps was even coined!), described at length in the company’s eponymous book, Site Reliability Engineering. Eschewing traditional approaches to the support and maintenance of running applications, Google elevated operations staff to a level considered equivalent to their engineering function. Within this paradigm, SRE engineers are tasked with ensuring that live issues are monitored and fixed, sometimes by writing fresh software to aid reliability. In addition, their feedback on architecture and rework pertaining to reliability and stability is taken on by the development team.

SRE works at the scale of Google’s operations, where a division between development and operations (normally an anti-pattern for DevOps) is arguably required because of the infrastructure’s size. Having a team responsible for an entire application from development to production (a more traditional DevOps approach) is difficult to achieve when the platform is large and standardised across hundreds of data centres.

DevOps companies more frequently advertised for “SRE Engineers” than “DevOps Engineers” in 2019. This may be in recognition of SRE’s specific engineering focus, as opposed to DevOps’ company-wide one.

Artificial Intelligence

There is increasing speculation about the role artificial intelligence (and, specifically, machine learning) can play in aiding or augmenting DevOps practices. Products such as Science Logic’s S1 are starting to trickle into the market and gain traction, although they are still in the early stages of adoption. These products use machine learning to detect anomalous behaviours in applications based on previously observed or normative behaviours.

In addition to traditional monitoring activities, AI can be used to optimise test cases, determining which to run and not run on each build. This can reduce the length of time it takes to get an application into production without taking unnecessary risks with the stability of the system.

On the more theoretical side, Google has published information about their use of machine learning algorithms to predict hardware failures before they occur. As machine learning becomes more mainstream, expect more products like these to arrive in the DevOps space.

Serverless

Serverless has been a buzzword since AWS introduced AWS Lambda in 2014. Things have been heating up since then, as other providers and products have been getting in on the act.

The term “serverless computing” can be confusing—in part because servers still have to be involved at some level. Essentially, it describes a situation where the deployer of the application need not be concerned with where the code runs. It’s “serverless” in the sense that providing the servers is not something the developer needs to deal with. Typically, serverless applications are tightly coupled with their underlying computing platforms, so you need to be sure that you’re comfortable with that level of lock-in.
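A minimal handler shows what “not dealing with servers” means in practice. This follows the (event, context) signature AWS Lambda uses for Python handlers; the event shape and field names are a made-up example:

```python
import json

def handler(event, context):
    """The only code the deployer writes; the platform provisions everything else."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"hello {name}"}),
    }

# Locally we can invoke it directly; in production the platform does this
# per request, scaling instances up and down (including to zero) for us.
resp = handler({"name": "devops"}, None)
print(resp["statusCode"])        # 200
print(json.loads(resp["body"]))  # {'greeting': 'hello devops'}
```

The lock-in mentioned above lives exactly here: the event shape, the return contract and the runtime limits all belong to the provider.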

“Shifting Left and Right” in CI/CD

The concepts of “shifting left” and, to a lesser extent, “shifting right” in CI/CD gained visibility this year. As release cycles get smaller and smaller, “shifting left” means making efficiency improvements by failing builds earlier in the release cycle—not just with standard application testing, but also with code linting, QA/security checks, and any other checks that can alert the developer to issues with their code as early in the process as possible.

“Shift-right” testing takes place in production (or production-like) environments. It is intended to bring problems to the surface in production before monitoring or user issues are raised.
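A common shift-right technique is a canary check: compare the new version’s production error rate against the stable version before promoting it fully. A toy version, with an arbitrary tolerance threshold:

```python
def canary_ok(stable_errors, canary_errors, requests, tolerance=0.01):
    """Promote the canary only if its error rate stays within tolerance of stable."""
    stable_rate = stable_errors / requests
    canary_rate = canary_errors / requests
    return canary_rate <= stable_rate + tolerance

# Slightly worse than stable but within tolerance: promote.
print(canary_ok(stable_errors=5, canary_errors=6, requests=1000))   # True
# An order of magnitude worse: roll back before most users notice.
print(canary_ok(stable_errors=5, canary_errors=60, requests=1000))  # False
```

Real canary analysis weighs more signals (latency, saturation) and uses statistical tests, but the gate is the same shape.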

Summing Up

These are just some of the more noteworthy trends we’ve been watching amidst the maelstrom of activity in the world of DevOps in 2019. The acronym “CALMS” (Culture, Automation, Lean, Measurement, Sharing) is a helpful way to structure thinking about DevOps tools and techniques and, going from 2019 to 2020, the 10 DevOps trends in this article certainly exemplify these principles!

 

The post The top 10 DevOps trends of 2019- the professional’s prediction appeared first on DevOps Online North America.

]]>