“The Internet is now effectively the enterprise backbone”: Ian Waters reveals why visual data is critical for business

ThousandEyes is a network intelligence company that produces performance monitoring tools to help businesses improve the quality of their online experiences. They achieve this by utilising innovative data visualisations that enable users to better understand the inner workings of their online infrastructure.

We spoke to Ian Waters, ThousandEyes’ Director of Solutions Marketing, to find out how the business was created and how it enhances clients’ online performance, and to discuss the continual growth of the internet, the future of CX, and more.

Where did you see the gap in the market that created the need for a firm like yours?

While at UCLA, our co-founders, Mohit Lad and Ricardo Oliveira, had an idea for technology that would create a Google Maps-style view of a company’s entire IT estate. During their studies, they learned how the Internet had grown into a hybrid, interconnected web of networks and services. Mohit and Ricardo saw that there were no tools that could visualise the complex mechanics of the Internet and give businesses a comprehensive understanding of it as a whole, which is what you need in order to understand connectivity problems.

They wanted to solve this problem and empower people to understand and improve their own networks. As such, ThousandEyes was born. The idea won them a research grant for seed funding from the United States’ National Science Foundation, a very unusual route to start a company.

On your website, it says, “Digital experience is only as good as the quality of the internet”. What specifically do you mean by that?

Today’s users have high expectations of their digital experience. They expect web and app responses within two seconds or less. After three seconds, most users will simply abandon the interaction and move on, possibly to a competitor. Delivering a great user experience therefore depends on Internet performance and on third-party systems and networks. The need for visibility into application performance has never been greater and, as a result, performance monitoring tools have become critical to everyday operations for many businesses.

How do you ‘visualise’ connection across the internet?

We have hundreds of Cloud Agents installed around the world collecting performance data from local transit providers right through to 4G LTE vantage points, broadband ISPs and cloud providers. Using these Cloud Agents, we’re actively monitoring and testing the network traffic paths across internal, external, carrier and Internet networks to give deep network performance and availability insights that are enriched with routing and device data for a multi-dimensional view of digital experience. Today, ThousandEyes measures more than 8 billion service paths per day, and more than 33 million network traces are collected per hour. This data is pulled into a customised dashboard that can meet an individual business’ needs. Widget configuration allows enterprises to map enterprise agents and their status, test applications and create reports.
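
To make that concrete, here is a minimal sketch, not ThousandEyes’ implementation, of the kind of measurement an agent might take from its vantage point: TCP connect latency to a set of endpoints, printed for a dashboard to aggregate. The target list is a placeholder invented for the example (Python):

    # Hypothetical agent probe: measure TCP connect latency to endpoints.
    import socket
    import time

    TARGETS = [("example.com", 443), ("example.org", 443)]  # placeholder endpoints

    def probe(host, port, timeout=3.0):
        """Return TCP connect latency in milliseconds, or None on failure."""
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.perf_counter() - start) * 1000
        except OSError:
            return None

    for host, port in TARGETS:
        latency = probe(host, port)
        print(f"{host}:{port} -> " + (f"{latency:.1f} ms" if latency is not None else "unreachable"))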

What barriers have you faced so far and how did you get over them?

Generally, our customers have a set of traditional network monitoring tools that are fine for equipment or applications they own. However, as they transition to delivering a digital experience for their employees and customers across the Internet and third-party or cloud-provider networks, they often don’t realise they need to completely reassess that monitoring stack so it’s appropriate for modern networks. Demonstrating to customers that there are solutions that let them troubleshoot the complete end-to-end service delivery for their applications is often a genuinely eye-opening experience for them.

What can you tell us in terms of cloud performance across the globe and why location matters?

In 2018 we introduced the industry’s first and only comparative look at performance metrics of three public cloud service providers: Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. In the 2019 report, we added IBM Cloud and Alibaba Cloud into the cloud application mix while also examining North American broadband ISP performance, connectivity to and from China as well as AWS’ Global Accelerator — raising the number of measurements to more than 320 million data points.

The main takeaways from our study were that some cloud providers rely heavily on the public Internet to transport traffic instead of their own backbones, which can impact performance predictability. The report found that AWS’ Global Accelerator is not a one-size-fits-all solution and doesn’t always outperform the Internet. With regards to location and cloud performance, Latin America and Asia have the highest latency and variability across all clouds, whereas in North America cloud performance is generally comparable. Ultimately, we found that choices need to be made on a case-by-case basis: what’s right for one company or one part of the world may not be right for all, and only with data and visibility can you make informed choices.
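
As an illustration of the latency-and-variability comparison, here is a tiny sketch that computes a median and a spread per region; the sample figures below are fabricated for the example and are not from the report:

    # Illustrative only: the latency samples (ms) below are made up.
    import statistics

    samples = {
        "north-america": [38, 41, 39, 40, 42],
        "latin-america": [120, 180, 95, 210, 150],
        "asia":          [140, 90, 230, 160, 110],
    }

    for region, ms in samples.items():
        print(f"{region:14s} median={statistics.median(ms):6.1f} ms "
              f"stdev={statistics.pstdev(ms):6.1f} ms")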

What can you tell us about the growth of the internet and why we are experiencing outages?

It’s important to understand that the Internet has grown into a network of over 60,000 Autonomous Systems, all connected on a basis of trust. The Internet has to be an open network in order to give us the dynamic, global connectivity we all want, but that openness leads to vulnerabilities, and also to variability, as not all of those networks perform to the same standards.
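
That “network of networks” structure is visible in BGP routing data: every route advertisement carries an AS_PATH, the chain of autonomous systems a prefix has crossed. A toy sketch, with fabricated paths, of counting the distinct networks involved:

    # The AS paths below are fabricated (ASNs 64496-64511 are reserved
    # for documentation); real routing tables contain hundreds of
    # thousands of routes spanning 60,000+ autonomous systems.
    as_paths = [
        "64500 64496 64511 64502",
        "64500 64499 64503",
        "64500 64496 64497 64510",
    ]

    asns = {asn for path in as_paths for asn in path.split()}
    print(f"{len(asns)} distinct autonomous systems across {len(as_paths)} routes")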

At the same time, enterprises are increasingly relying on Internet transport to connect their sites and reach business-critical applications and services. Gone are the days in which applications are solely hosted in private data centres.

What can you tell us about how the Internet is acting as an unregulated collection of independent networks and providers?

The Internet is now effectively the enterprise backbone, made up of a large and complex ecosystem of Internet-facing services such as CDN, DNS, DDoS mitigation, and public cloud. These services work together to provide great digital experiences, but they are also vulnerable to outages, which can be extremely disruptive to the business by preventing users from reaching your applications and services. Delivering a seemingly simple customer or employee digital experience depends on every link in the chain performing, and with the complex, distributed nature of modern applications, that chain keeps lengthening.
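
A hedged sketch of “every link in the chain”: checking that each external dependency of a service both resolves in DNS and answers over HTTPS. The dependency list here is a placeholder standing in for a real CDN, API, or SaaS endpoint:

    # Placeholder dependencies; substitute the services your app relies on.
    import socket
    import urllib.request

    DEPENDENCIES = ["example.com", "example.org"]

    for host in DEPENDENCIES:
        try:
            addr = socket.gethostbyname(host)                       # DNS link
            urllib.request.urlopen(f"https://{host}/", timeout=5)   # HTTP link
            print(f"{host}: OK via {addr}")
        except OSError as exc:
            print(f"{host}: broken link -> {exc}")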

How does this affect the control that businesses have over it?

By relying so heavily on third-party service providers, from IaaS to SaaS to ISPs to CDNs to deliver a cloud digital service, companies are relinquishing much of the control they’ve previously had over their IT systems and infrastructure. When organisations controlled the infrastructure and the application they were delivering, they could potentially find and fix any issues. In today’s world, it’s very likely that you don’t own either the infrastructure or the application, so operating models have to change. Given that an estimated 96% of businesses use the cloud in some manner, nearly every company is now operating on networks outside of their ownership.

On the opposite end, connection is going from strength to strength. What do you see happening with the future of connection?

It’s interesting that you say “connection is going from strength to strength”.

Yes, it is. With the rise of 5G and edge computing, there’s currently a dramatic acceleration in connectivity underway. However, we’re also experiencing a “Splinternet”, where some regions of the world lack connectivity as the Internet becomes increasingly fragmented. In 2019, Russia passed its ‘Sovereign Internet’ law to block off its Internet from the rest of the world, and Iran implemented a near-total Internet shutdown. In the years to come, this “Splinternet” trend of a fragmented Internet will accelerate, as more countries attempt to restrict their Internet through government control over flows of traffic and Internet-based services. The most likely candidates to extend these restrictions? Turkey, Turkmenistan, and Saudi Arabia.

What are the biggest things you are seeing happening with the digital experience right now?

What we hear from our customers is that they want to move away from supplier and service-level management, where they measure individual components of service delivery, and focus instead on the end-to-end user experience. Ultimately, it doesn’t matter that all your lights are green if your customer is not happy. The fact is that the modern digital user experience relies on partners to help you deliver it, and large companies will own and deliver less and less of the underlying technology going forward. Operational mindsets therefore need to change: companies need end-to-end visibility informed by data, which gives them control without necessarily owning every part of the journey.

Completing the OODA Loop of DevOps

Adam Bowen, World Wide Innovation Lead, Delphix, explains how DevOps speeds up OODA Loops.

Good strategy on the battlefield can often be applied as a winning strategy in the marketplace. As the book Scrum: The Art of Doing Twice the Work in Half the Time by Jeff Sutherland points out, the ‘OODA Loop’ is a methodology with many parallels in the technology sector.

Back in the 1950s, United States Air Force Colonel John Boyd identified four stages of combat strategy: Observe, Orient, Decide, and Act. Completing those four stages returns participants to the Observe phase, where the process begins again. Boyd maintained that the way to defeat an enemy was to complete this process, known as the OODA Loop, faster than your enemy can complete theirs.

The DevOps methodology

As Jeff Sutherland highlights, the idea of outpacing competitors through faster decision-making isn’t confined to warfare; it appears throughout government activity as well. Nowhere is it more crucial than in software development. The catchphrases that power DevOps (for example, ‘continuous feedback’, ‘fail fast’, ‘agile development’, and ‘Scrum’) all point back to the model proposed in Colonel Boyd’s OODA Loop.

And indeed, DevOps has proven itself invaluable in expediting an organisation’s OODA Loop. Thanks to DevOps tools and methods, companies like Amazon push a distinct code change to production once every 11.6 seconds. That’s over 7,000 times a day! They are able to observe market trends and user feedback in real time, make decisions based on that information, release new features in response, and then observe the effect of those changes. This loop is completed many thousands of times each day, allowing them to fly past their competition.
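
The arithmetic behind that figure is straightforward:

    # One deploy every 11.6 seconds, over a 24-hour day.
    seconds_per_day = 24 * 60 * 60          # 86,400
    print(round(seconds_per_day / 11.6))    # ~7,448 deploys per day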

Allowing organisations to treat infrastructure as code

DevOps speeds up OODA Loops by automating the numerous touchpoints required in software delivery: support desk, infrastructure, Ops, DBA, storage, security, project management, and so on. Tools like Puppet, Jenkins, Chef, and Ansible have automated the codified process flow and allowed companies to trim environment requests from weeks or months down to days or even hours. In addition to the speed gains, the continuous feedback made possible by DevOps has allowed organisations to treat infrastructure as code and leverage version control to raise the overall quality of products and projects.
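
The core of the infrastructure-as-code idea fits in a few lines: the desired environment lives in version control as data, and a reconciler drives reality toward it. The service names and states below are invented; tools like Puppet or Ansible do this declaratively at scale:

    # Invented desired/actual states; a stand-in for a declarative tool.
    desired = {"web": "running", "db": "running", "cache": "stopped"}
    actual  = {"web": "running", "db": "stopped", "cache": "running"}

    for service, want in desired.items():
        have = actual.get(service, "absent")
        if have != want:
            print(f"{service}: {have} -> {want} (would apply change)")
        else:
            print(f"{service}: already in desired state")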

Companies can now build applications as fast as they can imagine them – with one major caveat. Application projects still require a great deal of waiting due to the antiquated approach the industry currently takes to data delivery. You can have the world’s fastest car, but having the world’s slowest pit crew negatively affects your ability to cross the finish line.

In addition to application development projects, there are many activities that experience this high-speed, high-drag effect. Development and modernisation projects, data centre and cloud migration projects, disaster recovery failover exercises, data masking and auditing, and BI reporting all require an unnecessarily long waiting period (hours, days, or weeks) during database and application resets and refreshes, as terabytes of data are restored and copied across the network. That drag can be pinpointed to the refresh/reset process. A 10-minute destructive or failover test of data often requires a reset process that takes between ten and a hundred times longer than the actual test. And that’s if the IT team even bothers to attempt these tests; often they’re dissuaded by the degree of time and effort required.

Eliminating bottlenecks through data virtualisation

OODA Loops can only move as quickly as their slowest bottleneck allows, and poor data mobility has hampered the DevOps OODA Loop for far too long. Whoever can puzzle through data delivery in a new and more efficient way has a great deal to gain. If seconds matter, then what about the days, weeks, and even months it takes to provision data sets? What is that costing our mission?

New technologies like data virtualisation have been reducing the strain on the OODA Loop by cutting the time needed for reset/refresh activities to minutes and a few mouse clicks. That means feedback cycles can happen more than twice as quickly. When application or database environments have fresh data near-instantly and on demand, IT teams aren’t stuck waiting when they should be acting. Instead, they’re making use of the valuable hours, days, and weeks they’ve gained back from more efficient delivery of data: Observing, Orienting, Deciding, and Acting. Suddenly, data virtualisation morphs DevOps operations from high-speed, high-drag to a coveted high-speed, low-drag scenario.
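
A back-of-the-envelope illustration of that claim, using assumed numbers rather than vendor figures: a 10-minute test preceded by a 60-minute restore, versus the same test with a 5-minute virtualised refresh:

    # All durations (minutes) are assumptions for illustration.
    test, restore, virtual_refresh = 10, 60, 5
    before = test + restore            # 70-minute feedback cycle
    after  = test + virtual_refresh    # 15-minute feedback cycle
    print(f"cycles/day: {24*60 // before} before, {24*60 // after} after "
          f"({before / after:.1f}x faster)")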

By delivering data faster through virtualisation, organisations are able to tighten their OODA Loops and get ahead of the competition. The resultant real-time updates can make all the difference for customers, who are beholden to OODA Loops of their own.


Edited from press release by Cecilia Rehn.
