ML Archives – DevOps Online North America
https://devopsnews.online/tag/ml/ – by 31 Media Ltd.

Google to collaborate with AI Singapore to build AI talents
https://devopsnews.online/google-to-collaborate-with-ai-singapore-to-build-ai-talents/ – Thu, 08 Jul 2021

It was recently announced that Google Cloud will be collaborating with Singapore’s national artificial intelligence (AI) program in order to get more people interested in machine learning (ML) and AI.

Indeed, with this partnership, aspiring AI engineers will have the opportunity to undergo two months of intensive training and gain hands-on experience on real-world projects, which include solving business problems through AI and supporting the growth of local AI start-ups.

Moreover, this collaboration will allow companies to use Google’s AI and machine learning services to develop products, with half of the cost co-funded by AI Singapore. The partnership with Google is reportedly not exclusive, as some projects in the 100 Experiments initiative run on Amazon Web Services and Microsoft Azure.

This Google partnership is seen as a “natural extension” of the work that AI Singapore is doing with cloud providers, and should attract more companies to the 100 Experiments program as well as to the Google Cloud Platform.

 

The role of AI in cloud computing
https://devopsnews.online/the-role-of-ai-in-cloud-computing/ – Tue, 29 Jun 2021

As Artificial Intelligence (AI) is gaining in popularity, it is now clear that its evolution also complements the growth of cloud computing. Using AI within the cloud can then enhance the performance and efficiency of the cloud as well as drive the digital transformation of organizations.

AI capabilities within the cloud computing environment are a strategic key to making businesses more efficient, strategic, and insight-driven, all while giving them more flexibility, agility, and cost savings by hosting data and applications in the cloud.

Hence, we have asked experts in the industry to explore the ever-growing role of AI in cloud computing.

 

What is AI Cloud Computing?

AI Cloud computing essentially means combining artificial intelligence (AI) with cloud computing.

Indeed, according to Dipanshu Shekhar and Swathi Sreekant, Advisor Digital Strategy and Advisor Automation Strategy at DXC Technology respectively, it means that AI tools and software are synced with the power of cloud computing. This combination adds value to the existing cloud computing environment and makes enterprises more efficient, strategic, and insight-driven.

Cloud computing helps enterprises become more agile and flexible, and provides cost benefits by hosting data and applications in the cloud. The new AI layer, which generates insights from that data, adds intelligence to existing capabilities and delivers an exceptional customer experience. Together, they form a powerful combination that enterprises can use to their benefit.

Christopher Patten, Cloud &amp; DevOps leader at Centrica, adds that the cloud is a simulation, like a video game. As such, it is inherently observable and, like a Tesla electric car, your cloud kicks out a tremendous amount of operational data and telemetry. AI cloud computing is therefore essentially AIOps: using algorithms to make sense of all this data and determine the optimal course of action rather than leaving it to people.
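In that spirit, the simplest AIOps building block is spotting telemetry that drifts far from its rolling baseline. The sketch below is a toy rolling z-score detector using only the Python standard library; the window size, threshold, and CPU-load numbers are illustrative, and production AIOps platforms use far richer models.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag samples that deviate more than `threshold` standard
    deviations from the rolling mean of the last `window` points."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Steady CPU load around 40-42% with a single spike to 95%.
telemetry = [40 + (i % 3) for i in range(30)] + [95] + [40, 41, 42]
print(detect_anomalies(telemetry))
```

Only the spike is flagged; the AIOps part is wiring such signals to automated remediation rather than to a human pager.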

In a post-COVID world, Dipanshu and Swathi continue, cloud computing has accelerated, with spending increasing by 37% to $29 billion in the first quarter of 2020 compared to the first quarter of 2019 (source: Gartner). Therefore, synergizing AI and cloud computing solutions will bring organizations closer to their end customers and improve their operational effectiveness.

 

The role of AI in cloud computing

Cloud computing environments and solutions enable enterprises to become more agile, flexible, and cost-effective by substantially reducing infrastructure management costs, Dipanshu and Swathi state. Artificial Intelligence (AI) adds further flexibility, helping them manage large data repositories, streamline data, optimize workflows, and produce real-time insights that transform day-to-day operations and re-imagine the end-customer experience.

Christopher underlines that AI operations allow shifting the operational burden from process and people to engineering and data.

Hence, Dipanshu and Swathi add, AI is improving cloud computing in myriad ways. AI in the cloud is now being utilized effectively via the SaaS route. Many SaaS providers are adding the AI layer to their products which offer exceptional functionality to end-users and customers. This is especially true for CRM software where customer data is being utilized to make personalized actionable insights.

Additionally, AI-as-a-service is one of the ways enterprises are using AI to improve their current cloud setup. AI makes things agile and ensures process efficiencies that help minimize errors and improve productivity.

 

The benefits of AI in cloud computing

According to Dipanshu and Swathi, there are several benefits of using AI in the cloud:

  • Enhanced data management: We live in a data-driven world awash with data, and simply managing it is a huge challenge for enterprises. AI tools and applications that run in the cloud help manage data effectively by identifying, updating, and cataloging it, and by offering real-time data insights to customers. AI tools also help detect fraudulent activity by spotting patterns in the system that look out of place; financial institutions and banks are heavy users of this technology, as it allows them to stay relevant and secure in very risky environments.
  • Automation: This combined technology of AI and cloud removes the barriers to intelligent automation and enables enterprise-wide rollout across the organization. AI brings predictiveness, as algorithmic models provide real-time insights based on data patterns, history, and more. Leveraging AI and cloud computing solutions can generate forces of hyper-automation for enterprises: it brings cognitive automation to semi-structured and unstructured documents and pushes the boundaries of effective infrastructure management, thereby ensuring minimum disruption. This leads to cost transformation for enterprises and a transformative end-customer experience.
  • Cost savings: The adoption of the cloud lets enterprises pay only for what they use, a huge saving over the traditional infrastructure costs of setting up and managing huge data centers. The money saved can fund the more strategic development of AI tools and accelerators, which can in turn generate greater revenue and reduce fundamental costs for the enterprise.

According to Christopher, AI cloud computing will lead to higher operational quality and lower operational costs.

 

The challenges

The challenge is that a technical DevOps/SRE mindset and skillset are required, Christopher states. Indeed, this is hugely disruptive to existing operating models built around ITIL, with its emphasis on fragmentation by function.

Moreover, Dipanshu and Swathi point out some challenges that can crop up with the integration of these two technologies:

  • Integration: Whenever two disparate technologies come together, there is always a challenge in beginning the integration smoothly. However, this integration is fundamentally dependent on enterprises first moving their applications and technologies to the cloud completely, which itself is a huge task for many enterprises. Only after such a transformational change can enterprises think about adding the AI layer to the cloud. The technology sync hence is too dependent on enterprises working on a concrete digital transformation of their infrastructure.
  • Inadequate data: AI tools work best with large sets of good data. Enterprises need to ensure that data is accessible and clean so that AI can deliver value. This is a huge challenge as we see that many times data is very unstructured, siloed, or incomplete. The quality of data is extremely important for the solution to deliver value.
  • Security and privacy issues: Enterprises handle a lot of sensitive and financial information that can be targeted by hackers, so one needs to be vigilant in ensuring that privacy breaches do not happen.

 

AI Cloud Computing strategies & businesses

Dipanshu and Swathi think that businesses should implement AI cloud computing strategies.

However, they continue, there is no one-size-fits-all approach in this regard. Every enterprise must keep its end goal in mind and then slowly and steadily move its tech stack to the cloud, after which integration with AI becomes feasible.

Businesses and enterprises must incorporate an AI cloud strategy as part of their overall technology roadmap, in line with their strategic and operational goals, to stay relevant and well ahead of their competition. It is imperative to also note that this synchronization of AI and cloud requires significant expertise, resources, and cost to effectively translate into something of value for the enterprise.

But once the integration of cloud systems and AI comes through successfully, it will give enterprises access to potent machine learning capabilities such as image recognition tools, natural language processing, and recommendation engines; these toolsets are integral and disruptive in nature. This will set a precedent for other enterprises to follow.

Christopher also adds that as Cloud marches ever onwards, businesses need an operational AI Cloud to look after it, so it is vital that they adopt AI cloud computing strategies.

 

The future of AI Cloud Computing

Christopher believes that the future of AI Ops is ultimately to be the de facto model for operating the cloud.

Dipanshu and Swathi think similarly. Indeed, they believe AI will add to the already powerful capabilities of the cloud and make it an even more potent technology. Analysis and management of data will completely change with this combination. In this world where we are inundated with massive amounts of data, AI + cloud combination is a game-changer and will provide unmatched value to end-users.

Cloud computing and AI are disrupting sectors of all shapes and sizes in the post-COVID world, and their wider availability is democratizing the technology. We are seeing a world where this technology has moved from an operational to a strategic priority.

In the pre-COVID scenario, a 2019 Gartner report expected the AI market to grow at a CAGR of 33.2% from 2019 to 2027. Growth has accelerated further as more sectors have awakened to the realities of a post-COVID world.

Most organizations have already doubled their focus on moving to a cloud-enabled world. With the inclusion of AI, organizations expect to solve more visible and new problems and create a new world for their prospective customers.

 

Special thanks to Dipanshu Shekhar, Swathi Sreekant, and Christopher Patten for their insights on the topic.

New report highlights the importance of data science, machine learning, and AI in business
https://devopsnews.online/new-report-highlights-the-importance-of-data-science-machine-learning-and-ai-in-business/ – Wed, 16 Jun 2021

A recent report from Forrester revealed that data science, machine learning, and artificial intelligence (AI) are vital for businesses yet only a few executives know how to use them efficiently.

Indeed, it was stated that a lack of understanding among leading executives was preventing businesses from successfully deploying data science, machine learning, and AI projects. As a result, businesses end up with poor outcomes, wasted resources, and delayed innovation.

Hence, the report recommends that organizations have technical experts who understand the technologies, and that they start with projects that are feasible and bring value to the business.

Success Factors for Enterprise Machine Learning Projects
https://devopsnews.online/success-factors-for-enterprise-machine-learning-projects/ – Tue, 08 Jun 2021

Over the past ten years, the aggregate threat of the newly arrived digital-native market disruptors has proven to be serious enough to push some of the big incumbents off the cliff and cause severe revenue losses to many others.

The large players have been facing a “do or die” mandate that grows more urgent by the day: replicate the secret sauce behind these small foes’ success, namely agility and AI-enhanced digital technology.

Artificial Intelligence (AI) and its prodigy, Machine Learning (ML), have become a deciding factor for competitiveness and relevancy in many market sectors, on a massively expanding scale. With governments accelerating spending on ML projects to establish and promote smart public services, we now witness ML projects defined and implemented in the public sector at an accelerated rate.

Statistics show that nearly 70% of organizations already use Machine Learning at some level, and 97% plan to use ML within the next three years. Yet more than 85% of Machine Learning projects fail to complete or to deliver value, which translates into tens of billions of dollars in financial losses and millions of wasted expert hours each year, and this trend is expected to continue through 2022.

There are many predictable – and avoidable – mistakes that lead to the failure of ML projects. They are easy mistakes with expensive price tags, which not only cause financial and morale damage to the teams but also hamper their efforts to maintain their market position in an extremely competitive landscape heavily bombarded by digital-native disruptors.

Based on my experience and observation of ML projects over the past few years, I would like to share the following tips on succeeding in ML projects at enterprise levels:

 

  1. Data is the bloodline and should be treated as such

There is a famous quote, attributed to Napoleon that says, “An army marches on its stomach”. I would like to bring that quote into the 21st century by saying “A Machine Learning project marches on its Data pipeline!”

Your ML project – and later your ML model servicing in production – should have access to available, abundant, relevant, and good quality data and you need to have a properly structured Data Strategy to keep it flowing.

As a body can only be as healthy as the blood that is pumped through its veins, a Machine Learning project, or product in service, is reflective of the quality of the data it is trained with.

Data can have an array of issues when it arrives at your doorstep. If the data quality is poor, it will take away precious time and budget from your ML teams to bring it up to the needed quality through cleaning up, enriching and formatting.

Data problems can also include a variety of biases arising from collection and sampling issues, which not only cause inaccurate outcomes but may also make the data illegal to use.
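A minimal sketch of the kind of clean-up described above, using only the Python standard library. The column names, sample rows, and cleaning rules are invented for illustration; real pipelines would add schema validation, enrichment, and bias checks on top of this.

```python
import csv
import io

# Hypothetical raw extract: stray whitespace, a missing amount,
# an exact duplicate, a malformed amount, inconsistent country codes.
RAW = """customer_id,amount,country
101, 250.00 ,us
102,,US
101, 250.00 ,us
103,abc,GB
104,75.5,gb
"""

def clean_rows(raw_csv):
    """Drop unparsable amounts, deduplicate records, and normalize
    country codes -- typical prep before data reaches an ML pipeline."""
    seen, cleaned = set(), []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        try:
            amount = float(row["amount"].strip())
        except ValueError:
            continue  # missing or malformed amount: drop the record
        key = (row["customer_id"], amount)
        if key in seen:
            continue  # exact duplicate of an earlier record
        seen.add(key)
        cleaned.append({"customer_id": row["customer_id"],
                        "amount": amount,
                        "country": row["country"].strip().upper()})
    return cleaned

print(clean_rows(RAW))
```

Of the five raw rows, only two survive: the rest are dropped as missing, malformed, or duplicated, which is exactly the “precious time and budget” cost the text warns about.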

To maximize the value that is received from Data, it should be democratized and provisioned across the organization, to all groups and teams, based on the needed regulatory and compliance requirements of the enterprise and local and international governing bodies.

When it comes to establishing the data pipelines, the organization must focus on multiple horizons: the data that is readily available now (immediate), what can be generated or acquired within a short and reasonable timeframe (short term), and what data will be needed within the next few months (long term).

I would not recommend upfront planning for longer windows, as the changes in the market would cause too much drift in your entire data setup over time and would render today’s efforts less relevant and useful at that point.

 

  2. Use Machine Learning to address real market problems

As attractive and techie as it may sound to have a Machine Learning pipeline (or even your own corporate MLOps), the cost and ongoing effort of keeping your ML models at market service level and maintaining them mount up fast. Unless you are solving real problems for your customers (or improving their experience), it will soon run out of economic justification.

To properly justify having your ML models in the production environment, and to achieve a reasonable ROI, you need your business experts to work closely with your data scientists to define the business problems to be addressed. Remember that not all problems need Machine Learning answers.

As per the tip above, Data will be needed to feed and drive your Machine Learning solution, so the availability of data, its quality, interval, and stability is a great deciding factor when choosing a Machine Learning approach.

Once we have established the ROI and have the Data supply figured out, to establish a functional and efficient ML team, we need to look beyond just pairing the business experts and data scientists.

 

  3. Form up your Machine Learning ARTs

The Scaled Agile Framework (SAFe®), the most adopted Agile scaling framework among Fortune 500 enterprises (over 70% adoption rate), uses a highly effective approach when forming the large teams that serve the Development Value Streams in an organization.

A Development Value Stream is the group and sequence of people (from all teams and segments), processes, and tools that work together to create the products and services sold to customers, or the tools that enable the sales team to sell them.

SAFe® calls these large teams Agile Release Trains (ARTs) and positions them at the heart of every program in an enterprise. ARTs are designed to group together experts – who may come from several silos – into a solution-creating and solution-delivering machine that can almost independently design, develop, test, deploy, and even release its work to production.

As I mentioned at the beginning of this article, Enterprise Agility is a vital factor for organizations in their efforts to survive and thrive in the ever-changing market landscape.

Organizations cannot stay competitive unless they scale their agility all the way up to the enterprise portfolio level, which is what SAFe specializes in: it allows the aggregate power of all programs and their ARTs to combine upward toward the realization of strategic plans, and lets those plans cascade down to all programs and their ARTs for agile market response and re-positioning.

 

  4. Onboard the leadership to champion the change

Machine Learning brings not only a major shift in the way an organization thinks about technology and pictures the final product, but also a strong demand for a cultural paradigm shift across the entire fabric of the enterprise, as it touches the people (business and technology), processes, and tools used by everyone involved in its creation, service provisioning, and maintenance.

Leadership needs to learn the “what and why” of Machine Learning to a level that enables educated strategic decisions about it, and then commit to championing the “how” through all the involved groups and silos. They cannot be expected to answer every question, but to lead the charge on the orchestration required to find the answers.

Machine Learning requires an investment of time and money, and the only way to weather the painstaking initial period of finding what resonates best with the market, and how to provide it to customers, is through a culture of continuous exploration, learning, and training.

 

  5. Invest in your people before investing in Machine Learning

There is an ongoing shortage of skilled ML experts in the market. In fact, it’s quite hard and expensive to find and hire people skilled in all areas of data science, especially the MLOps pipeline. A smart way to approach this is to invest internally and train and specialize the existing team into new skills they require to serve as part of the Machine Learning ART.

Since an ART is made up of people from all the groups engaged in it, you need to train everyone, based on their future role in the ML pipeline, from business (subject-matter/domain) experts to technologists.

Some big players, like Facebook, Google, Amazon, and Alibaba, have their own internal training academies and programs, which excel in training existing staff and new hires.

When training the organization, it is recommended to start with the leadership, and then to cascade down the management org chart. This way, you will have ML-Savvy executives running the cultural and technical shift, with the required authority and budgetary levels to facilitate the work.

 

  6. Migrate to Cloud or start from there

All the major cloud service providers have extensive, time-tested tools for every segment of the Machine Learning pipeline, from Data extraction all the way to ML service provisioning and Monitoring.

Their high scalability, elasticity, and availability make them the best choice in maximizing the reach of your ML budgets by adjusting the resourcing needed for running your MLOps in a dynamic and transparent way.

The agility gained through this service structure, combined with the available cloud-based tools, reduces the effort and time your experts must invest in running Machine Learning projects and serving ML products. The time, energy, and money saved allow your organization to invest more in exploration and market testing, which in turn nurtures your teams with live, timely feedback from the market, feedback that can be used to fine-tune your solution approach and re-train your ML models as the parameters and data change with the market over time.

The low cost of forming temporary pipelines for experimentation boosts your teams’ creativity and avoids unnecessary capital spending on infrastructure.

Your teams can always lock in the core capacity needed to run production ML models and their data pipelines, benefiting from significant discounts on reserved resources while staying safe with active, elastic capacity that automatically rises to meet any spikes in demand your service may encounter.

The global reach of cloud services also enables your organization to reduce service latency to unprecedentedly low levels by replicating your services in the locations nearest to your customers.
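The reserved-plus-elastic costing described above can be illustrated with a toy model. All rates, instance counts, and hours below are made up for illustration and correspond to no provider’s actual price list; the point is only the shape of the trade-off.

```python
def monthly_cost(base_units, spike_unit_hours,
                 reserved_rate, on_demand_rate, hours=730):
    """Baseline capacity billed at a discounted reserved rate;
    demand spikes billed pay-as-you-go."""
    return (base_units * hours * reserved_rate
            + spike_unit_hours * on_demand_rate)

# Hypothetical workload: 4 always-on instances plus 300 instance-hours
# of spikes per month, with an invented 40% reserved discount.
reserved_mix = monthly_cost(4, 300, reserved_rate=0.06, on_demand_rate=0.10)
all_on_demand = (4 * 730 + 300) * 0.10
print(round(reserved_mix, 2), round(all_on_demand, 2))
```

Under these invented numbers the mixed approach costs roughly two-thirds of running everything on-demand, while autoscaling still absorbs the spikes.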

 

  7. MLOps and DevSecOps should collaborate and co-exist

DevSecOps is the infinite dual loop that integrates and pipelines software product lifecycle management with security considerations embedded and implemented in all its steps.

DevSecOps uses Continuous Integration (CI) and Continuous Delivery (CD), which allow your pipeline to maintain a flow of developed and tested code joining the existing codebase and being pushed to production, raising your organizational agility in responding to changes in market trends and customer interests.

Recent enhancements to DevSecOps tools have brought in the power of ML in many aspects of its planning, development, testing and deployment stages. Both DevSecOps and MLOps can benefit from Automation in their pipeline.

MLOps is the set of techniques and processes responsible for managing your ML model’s lifecycle, including the needed data provisioning.

DevSecOps principles apply to MLOps as far as the software creation is involved, but there are some key differences between the two that would not allow MLOps to simply merge into DevSecOps:

  • Continuous Experimentation: ML models require many iterations of execution and fine-tuning before their accuracy and performance are acceptable for the production environment. This differs from DevSecOps, where much less retesting and parameter tweaking is required. As a result, it is not easy to estimate how long a new ML model will need to reach the service performance level required for production.

DevSecOps is designed for agility in market response by the rapid creation of software solutions (or rapid incremental updates to existing ones), while we cannot force the experimentation in ML Models to go faster beyond what computational power and automation can provide. A big portion of the time required for experimentation stays with the human (manual) work on the models.

  • Continuous Monitoring: As ML models are trained on a dataset prepared at a certain point in the past, their accuracy and performance rely on the data patterns and value ranges staying consistent over time. For some applications the data may hold steady for long periods, but in most cases, ongoing market changes and shifting customer demands alter the data, degrading the model’s accuracy and performance. To catch and fix this, ML models need to be monitored continuously, with triggers set to retrain the model on a fresh batch of data from the market.

DevSecOps has production performance monitoring in place but this is just to make sure the SLAs that have been set are still valid and the services maintain the needed response time. It is not usually related to concerns about accuracy degradation.

  • Continuous Training: The continuous cycle of ML Model monitoring flows into the loop of re-training the ML Model whenever new data shows enough drift that would justify the processing cost of re-executing the training process for the ML Model. DevSecOps does not have such a cycle as part of its standard pipeline.
  • Team Structure: MLOps would require a different set of specializations than what you have in your DevSecOps teams. We need Data Scientists, Data Engineers, Model Risk specialists, Machine Learning Architects, and Engineers. The common expertise between the two Ops would be the need for Software Developers and Release Engineers.
  • Testing Approach: DevSecOps would use the Software Testing disciplines (including Unit Testing, Functional Testing, Integration / Regression Testing …) while ML Models (in addition to all of those) would need Data Validation, Model Evaluation and Validation in the pipeline.
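The drift-triggered retraining loop behind Continuous Monitoring and Continuous Training can be sketched in a few lines. This is a deliberately naive mean-shift check using only the Python standard library; the threshold and sample values are illustrative, and real pipelines use proper statistical tests (e.g. Kolmogorov–Smirnov or Population Stability Index).

```python
from statistics import mean, stdev

def should_retrain(train_sample, live_sample, threshold=2.0):
    """Trigger retraining when the live feature mean drifts more than
    `threshold` training-set standard deviations from the training mean."""
    mu, sigma = mean(train_sample), stdev(train_sample)
    drift = abs(mean(live_sample) - mu) / sigma if sigma else float("inf")
    return drift > threshold

# Hypothetical feature values seen during training vs. in production.
train = [10.0, 11.0, 9.0, 10.5, 9.5, 10.2, 10.8, 9.8]
print(should_retrain(train, [10.1, 9.9, 10.4]))   # distribution is stable
print(should_retrain(train, [14.0, 15.2, 14.8]))  # mean has shifted
```

A monitoring job would run such a check on a schedule and, on a positive result, kick off the retraining stage of the MLOps pipeline, which is the cycle the text notes has no counterpart in a standard DevSecOps pipeline.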

Value Streams (as mentioned earlier) are the collaboration environments where DevSecOps and MLOps together create aggregate value: by supporting each other, they amplify the power of Machine Learning with agile support and prototyping of the surrounding systems required to keep it running and performing as expected.

DevSecOps and MLOps both adhere to CI and CD disciplines, and that is where they collaborate most. While DevSecOps teams may run their code through CI/CD as often as multiple times in minutes (as is the case with tech unicorns like Amazon and Facebook), MLOps sends ML models for integration into the staging, pre-production, and eventually production pipeline only occasionally, whenever a new model is validated and it is time to train it on production-like data and environments for final tuning and validation. Once the ML model is in production, the orchestration of its supporting software needs to operate smoothly and within the defined SLAs.

 

  8. MLOps needs special metrics at Model and Pipeline levels

The ML code is only a small part (less than 5%) of the entire ML pipeline. The rest is composed of configuration, data collection, feature extraction, data verification, analysis, process management, machine resource management, serving infrastructure, and monitoring.

This means that when selecting Metrics and trying to establish KPIs, we need to factor in all these functions as part of our measurement and tracking structure.

ML models also require their own metrics for continuous monitoring of the accuracy and performance required in the production environment. This is in addition to the evaluation metrics used during the experimentation stage and later at the pre-production and production validation levels.

Fortunately, there are several time-tested, existing solutions in the market that we can choose from, customize, enhance, and properly fit into our pipeline. Many of these tools are already provided by cloud platforms and serve a multi-layered coverage that goes from the cloud service layer all the way and into each of your ML Model’s live performances.

DevSecOps shares some of the metrics related to pipeline performance and production SLAs. In both cases, we need to ensure metrics are tied to action items that trigger once a certain threshold is crossed, so that the required staff are alerted to a rising problem and relevant automated mitigations are launched as needed.
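The metric-to-action wiring can be sketched as a small rules table. The metric names, thresholds, and action labels below are invented for illustration and do not come from any specific monitoring tool; real systems would also handle alert de-duplication, escalation, and paging integrations.

```python
def evaluate_metrics(metrics, rules):
    """Return the actions whose thresholds are breached by the
    current metric readings; missing metrics are skipped."""
    triggered = []
    for name, (limit, direction, action) in rules.items():
        value = metrics.get(name)
        if value is None:
            continue
        breached = value > limit if direction == "above" else value < limit
        if breached:
            triggered.append(action)
    return triggered

# Hypothetical rules spanning both pipeline SLAs and model quality.
rules = {
    "p95_latency_ms":      (500,  "above", "page-on-call"),
    "model_accuracy":      (0.90, "below", "schedule-retraining"),
    "pipeline_error_rate": (0.01, "above", "halt-deployments"),
}
print(evaluate_metrics({"p95_latency_ms": 620, "model_accuracy": 0.93}, rules))
```

Note how one rules table covers both kinds of metric: the latency rule is a classic DevSecOps SLA check, while the accuracy rule triggers the MLOps retraining cycle.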

 

Conclusion

Machine Learning is now a great competitive advantage for many market competitors and their rising contenders, and we can expect the market to continue growing at a 44% CAGR through 2025, surpassing $100 billion.

As per McKinsey, Artificial Intelligence could add around $13 trillion to world economic output by 2030, and we can expect Machine Learning to claim the lion’s share of that number.

Today, organizations' need for machine learning resembles their need for software, and later a web presence, during the 1990s and 2000s. We are at the onset of an era in which businesses without machine learning models will struggle to stay in the market, and customers will not buy from providers that do not offer ML-enhanced services. Yet most machine learning projects still fail due to a lack of understanding of the pitfalls and best practices.

I hope that following these tips and recommendations will help you implement your ML models successfully and serve them to your customers.

 

Article written by Arman Kamran, CTO of Prima Recon and Enterprise Transition Expert in Scaled Agile Digital Transformation

The post Success Factors for Enterprise Machine Learning Projects appeared first on DevOps Online North America.

Reinforcing organisations' cybersecurity with AI
https://devopsnews.online/reinforcing-organisations-cybersecurity-with-ai/ — Mon, 07 Jun 2021

It was recently reported by the Capgemini Research Institute that organizations should reinforce their cybersecurity defenses with artificial intelligence (AI).

Indeed, the growth of AI in recent years has shown that it can help deal with the complexity of ever-increasing cyberattacks. AI systems are able to detect malware, recognize patterns, and detect new threats and behaviors that could indicate malware or ransomware attacks.

Moreover, AI can also protect endpoints by establishing a baseline of behavior for endpoints and flagging for immediate action anything that is out of the ordinary. Besides, AI and machine learning (ML) can build a comprehensive understanding of web traffic and reinforce developers’ ability to differentiate between threats.

As cybercrimes are evolving quickly, using AI and ML to help protect businesses and people is more than necessary. By automating threat detection and multiple defenses, organizations will be able to stop the malware before it can start to cause issues.

The post Reinforcing organisations’ cybersecurity with AI appeared first on DevOps Online North America.

AI jobs to continue to grow in the coming years
https://devopsnews.online/ai-jobs-to-continue-to-grow-in-the-coming-years/ — Thu, 20 May 2021

A recent study by Indeed revealed that demand for AI jobs continues to grow in the United States.

It was reported that, despite fears of being replaced by a robot doing the work for you, jobs in artificial intelligence (AI) are more sought-after than ever. The study showed that developing sophisticated AI has actually encouraged job opportunities and offered new, higher-level roles for employees.

Indeed, it was noted that this advanced technology will help create new jobs, such as 'Robot/AI trainer' and 'Chief of Artificial Intelligence', as well as reduce the need for low-level tasks.

Moreover, the report highlighted the most sought-after AI jobs of 2021: data scientist, senior software engineer, machine learning engineer, data engineer, software engineer, software developer, software architect, senior data scientist, full-stack developer, and finally, principal software engineer.

Hence, this shows that the rise in AI technology is providing a lot of opportunities that should be seen as positives rather than negatives.

The post AI jobs to continue to grow in the coming years appeared first on DevOps Online North America.

The growing role of AIOps
https://devopsnews.online/the-growing-role-of-aiops/ — Tue, 16 Mar 2021

In order to monitor and manage more dynamic and modern IT environments, it is becoming essential to use artificial intelligence (AI) within IT operations. This process is then called AIOps.

AIOps empowers IT Ops and DevOps teams to work smarter and faster, so they can detect issues earlier and resolve them quickly. With AIOps, Ops teams are able to manage the large quantities of data generated by modern IT environments. Adoption is expected to keep growing, reaching around 30% of large enterprises by 2023, according to Gartner.

Hence, we have talked to experts in the industry to shed light on this topic and see what the future holds for AIOps.

 

What is AIOps?

First of all, we asked them to explain what AIOps is and what it can do.

Hitesh Khodani, QA Leader, defines AIOps as applying Artificial Intelligence to improve IT operations.

Indeed, AIOps leverages machine learning, big data, and analytics capabilities to:

  • Ingest the load of operations data from multiple IT infrastructure components, applications, and performance-monitoring tools (e.g., Splunk)
  • Intelligently remove the noise from the collected data and identify patterns and events concerning system issues and performance
  • Identify issues and report them to IT for rapid action and response
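The collect, de-noise, and report loop in the steps above might be sketched as follows; the event fields and the repeat-count cutoff are illustrative assumptions rather than any real tool's API:

```python
# Hypothetical sketch of the collect -> de-noise -> report flow: collapse
# repeated events, drop one-off noise, and surface recurring issues for IT.
from collections import Counter

def deduplicate(events):
    """Collapse repeated (source, message) pairs into one event with a count."""
    counts = Counter((e["source"], e["message"]) for e in events)
    return [{"source": s, "message": m, "count": c} for (s, m), c in counts.items()]

def filter_noise(events, min_count=3):
    """Treat events seen fewer than min_count times as noise and drop them."""
    return [e for e in events if e["count"] >= min_count]

raw = (
    [{"source": "db01", "message": "connection timeout"}] * 5
    + [{"source": "web02", "message": "slow response"}]
)
actionable = filter_noise(deduplicate(raw))
print(actionable)  # only the recurring db01 timeout survives for IT to action
```

A real AIOps platform would correlate across many more signals (logs, traces, topology), but the noise-reduction principle is the same.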

Wayne Ariola, DevOps thought leader, reinforces this point by highlighting that, as the name "AIOps" suggests, the focus of the data analysis is to improve operations. However, it is vital to know that the concept of AIOps has far-reaching value across all value streams in an organization. For instance, Curiosity Baseline applies these AIOps techniques to the vast array of software quality data that resides throughout the software development lifecycle.

George Ukkuru, Head of Quality Engineering at UST, also says that AIOps is all about applying machine learning and analytics to augment IT operations. According to him, AIOps can be used for tasks ranging from finding the best engineer to fix an issue to performing auto-healing.

He continues by saying that you can use AIOps for various things, such as identifying anomalies and making predictions by analyzing patterns in log files, correlating events to identify root causes, and providing automatic resolutions to problems based on continuous learning.

Hitesh also emphasizes that legacy operations monitoring relied on multiple IT operations tools to detect and alert on issues, but all of this was done manually and involved coordinating with multiple teams. With AIOps, big data is used to aggregate siloed operations data in one single place. The data collected can range from system logs, past performance and event data, and network data to past incident data, resolution notes, knowledge articles, and more.

On the collected data, he continues, AIOps applies machine learning and analytics to filter the noise, identify critical events and raise alerts, and identify root causes and propose solutions based on past and current information. It also automates responses and recommends solutions based on past resolution data. Machine learning allows the system to predict events well before they happen and propose solutions in advance.

Therefore, AIOps can help businesses and teams grow and consolidate, as well as improve their performance and productivity.

 

Why do we need AIOps?

Wayne points out that we need AIOps so we can prevent issues instead of having to fix them. We also need to realize that computing power, data access, and AI techniques have allowed humans to take another incremental step forward with automation.

Hitesh adds that most organizations and businesses are now migrating to new infrastructure capabilities such as cloud and hybrid environments, leveraging virtualized services that can scale to handle demand instantly. Applications across these platforms generate a huge amount of data, which existing manual IT operations processes cannot cope with.

AIOps, with its machine learning, big data, and analytics capabilities, on the other hand, can consume volumes of data across all infrastructure, logically analyze the data to report significant events pertaining to performance degradation and outages, and trigger alerts automatically for the IT operations staff to action.

Finally, George highlights 3 main reasons why AIOps is essential:

  • The dependency on infrastructure is very high: critical applications are accessed from the cloud, and 'five nines' (99.999%) availability is the new norm
  • The application environment has become very complex due to an increase in scale and elasticity
  • Engineering leaders are looking at deriving insights from the vast amount of data that is available in disparate formats

 

Implementing AIOps

Now that we have seen why enterprises should use AIOps, let's explore how to implement it.

According to George, the first step to implement AIOps is to identify the pain points in operations and convert them to a use case.

Indeed, to do this, you have to carry out a cost-benefit analysis to see whether solving the problem will yield sufficient returns. You also need to evaluate whether you need to leverage AI, ML, or big data to solve it. The next step is to look at the data you can use to find a solution, which may exist in various formats such as log files, resolution data, and device data. Then, review the quality of the data and eliminate bad data.

You may need to define, train, and refine a machine learning model. To do this, start with simple models and then increase the complexity, selecting the best algorithm and building an interface for visualization and interaction.
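The "start simple, then add complexity" advice above can be sketched with two toy candidate models compared on held-out data; the data and models are purely illustrative:

```python
# Sketch: fit a trivial baseline (predict the mean) and a slightly richer
# model (least-squares line), then keep whichever validates better.

def fit_mean(xs, ys):
    """Simplest possible model: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error of a model on held-out data."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
val_x, val_y = [5, 6], [10.1, 11.9]

candidates = {"mean": fit_mean(train_x, train_y), "linear": fit_linear(train_x, train_y)}
best = min(candidates, key=lambda name: mse(candidates[name], val_x, val_y))
print(best)  # the linear model generalises better on this data
```

The same select-by-validation loop scales to real libraries and algorithms; only the candidate set and the metric change.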

Once the system is ready, George points out, it is time to test it and observe its behavior. You may need to fine-tune the data models and algorithms to improve accuracy before putting the solution into production use. But be careful to do enough change management to weed out concerns among employees about bots replacing humans.

For Hitesh, these are the main steps to follow in order to implement AIOps:

  • Identify the pain points in current IT operations and come up with use cases for implementation
  • Map current tooling and infrastructure
  • Socialize the AIOps implementation plan with the teams involved
  • Identify data requirements
  • Configure the solution around existing tools and infrastructure
  • Set up and monitor
  • Review and refine based on learnings

Hence, Hitesh highlights that an AIOps solution integrates with an organization's existing tools and processes. IT teams use multiple monitoring tools for various purposes; AIOps ties them all together and delivers seamless, shared visibility across all tools, teams, and domains.

Wayne emphasizes that, for most organizations, AIOps will be implemented with the assistance of a platform that provides an interface to simplify the three steps:  data access, data analysis, and event management.

Besides, he continues, there are software vendors who leverage known patterns within specific value streams of an organization, which will help teams reach valuable business outcomes faster than trying to build AI algorithms by themselves.

Wayne also recommends that organizations invest in training critical employees in AI and AIOps: the deeper an organization's knowledge, the more realistic the outcomes will be.

‘Just like any new technology, AIOps represents a culture change that cannot be ignored. Teams will need to play nice in the sandbox with other teams. Traditional organizational silos will be challenged, and the “answers” provided by predictive systems will precipitate some uncomfortable discussions.’

 

The benefits…

Implementing AIOps also comes with many advantages…

According to Wayne, the entire goal of AIOps is to manage complexity.

Indeed, our interconnected world provides a roadmap for the simplification of both mundane and complex tasks. We need a way to leverage data to expose patterns and risks that would go unnoticed using manual techniques; patterns that might take years or decades to surface could be highlighted in days.

‘AIOps is another step in the evolutionary journey that started with smoke signals and has evolved to AI.’

For Hitesh, AIOps can help modernize IT operations and operations teams, enable predictive management, and speed up MTTR (mean time to resolution).

Indeed, AIOps can modernize IT operations by bringing intelligence to the alerting system, reporting only issues worth reporting, complete with diagnostic details and the best possible solution. It also keeps learning with each alert raised, making future diagnosis easier and helping keep the lights on. Besides, AIOps tools perform continuous monitoring without the need to rest or sleep, which lets the IT operations team focus on serious, complex issues and initiatives that increase business stability and performance.

Moreover, AIOps enables predictive management: current operations processes are mainly reactive, with action taken after the fact, whereas AIOps makes the whole process predictive, identifying problems before they become major outages.

Finally, Hitesh points out that AIOps offers faster MTTR, as it can identify root causes and propose solutions faster and more accurately than manual processes. This enables organizations to set and achieve previously unthinkable MTTR goals.
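MTTR itself is straightforward to compute from incident open/close timestamps. A minimal sketch (the incident data is illustrative):

```python
# Sketch: MTTR (mean time to resolution) is the average time between an
# incident being opened and being resolved, here expressed in hours.
from datetime import datetime

def mttr_hours(incidents):
    """Average (resolved - opened) across (opened, resolved) pairs, in hours."""
    total = sum((resolved - opened).total_seconds() for opened, resolved in incidents)
    return total / len(incidents) / 3600

incidents = [
    (datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 1, 13, 0)),  # 4 hours
    (datetime(2021, 3, 2, 22, 0), datetime(2021, 3, 3, 0, 0)),  # 2 hours
]
print(mttr_hours(incidents))  # 3.0
```

Tracking this number before and after an AIOps rollout is one concrete way to measure the benefit Hitesh describes.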

George highlights four key benefits regarding AIOps:

  • Enable faster decision making and provide insights
  • Improve the availability and reliability of applications
  • Reduce cost by proactively fixing issues
  • Decrease the MTTR

 

… And the challenges

But what are the challenges that come with it?

For George, the major drawback is that you may need to spend a fair amount of effort customizing existing data models or creating a new data model for the use case you are trying to implement. It may take time to reach an accuracy level at which your solution is reliable. Besides, he adds that you should always look at the ROI (return on investment) and the benefit that can be achieved from each use case before investing.

Wayne also adds that, as with all transformative technologies, you need to take the human component into account. Humans must learn that the predictive analysis AIOps delivers is the new normal; there will be a shift in how we consume information and act upon it.

According to Hitesh, there are a few challenges to consider regarding AIOps including:

  • AIOps is only as good as the data it is fed, and hence has limitations.
  • There is a steep learning and implementation curve, as the initial setup and maintenance require significant effort.
  • It depends heavily on diverse data sources, as well as on data retention, protection, and storage.
  • AIOps cannot handle every type of software monitoring or management task in real time; there will always be situations where manual intervention is required, which can delay resolution.
  • It cannot be relied on for more complex, trickier issues that involve many critical systems.

 

The future of AIOps

Hitesh believes that AIOps will evolve continuously and will be used extensively in large organizations that are currently undergoing digital transformation and platform modernization.

‘With improved algorithms, patterns, and large datasets, AIOps will continue to see large adoption across the industry.’ Hitesh says.

He adds that AIOps is a powerful solution, but we need to be aware that it cannot solve all problems. AIOps will likely be used more for resolving simple and routine issues, freeing up significant human time to focus on more critical and complex ones.

George also thinks that AIOps will grow more powerful in the years to come. Preventing outages and improving the satisfaction of digital customers will be the number one priority of every CIO, he points out, so innovation and investment in this area should increase going forward.

Finally, Wayne believes that AIOps is an evolutionary step in data analysis that can make all organizations “data-driven.”

Organizations that are not investigating or are slow to adopt these techniques will fall behind in several ways. According to him, they will first be disconnected from the actual customer experience compared with competitors using AIOps. They will also be culturally behind in understanding how to leverage data across a broader swath of the organization's value stream. Finally, lacking data insights, their innovation will be relatively riskier than that of their peer group, which will be a distinct disadvantage.

It then seems that AIOps is here to stay and will only grow more powerful in the coming years…

 

Special thanks to Hitesh Khodani, Wayne Ariola, and George Ukkuru for their insights on the topic!

The post The growing role of AIOps appeared first on DevOps Online North America.

Toyota VC to invest in AI startups for better processes
https://devopsnews.online/toyota-vc-to-invest-in-ai-startups-for-better-processes/ — Mon, 08 Mar 2021

It was recently reported that Toyota Motor Corp's first venture capital fund has started investing in AI startups to help refine everyday processes, with better supply-chain management and robotics added to the factory floor.

Indeed, Toyota AI Ventures fund has invested in 36 early-stage startups, such as self-driving car software firm Nauto, factory video analytics company Drishti, and air mobility firm Joby Aviation, to add artificial intelligence to its cars as the industry is turning towards self-driving cars.

Hence, this investment aims to put Toyota at the top of the market by incorporating the latest technology available.

The post Toyota VC to invest in AI startups for better processes appeared first on DevOps Online North America.

PGA Tour to partner with AWS for an enhanced experience
https://devopsnews.online/pga-tour-to-partner-with-aws-for-an-enhanced-experience/ — Thu, 04 Mar 2021

It was recently announced that the PGA Tour has chosen Amazon Web Services (AWS) to be its official cloud provider, and will start using artificial intelligence (AI), deep learning, and machine learning (ML).

Indeed, following this partnership, the PGA Tour will look to migrate its archives to the cloud and annotate them. Amazon S3 will then be used to develop a data lake, where insights can be found, and a stream will be built for live footage from future tournaments.

Moreover, the PGA Tour will use AWS media services to deliver video content for televised event coverage faster and in better quality, as well as OTT streaming for online viewers. It will also introduce a new feature letting fans change their viewing perspective through alternate camera angles and navigation.

Therefore, this partnership aims to give a unique experience to fans, by giving them real-time access to events.

 

The post PGA Tour to partner with AWS for an enhanced experience appeared first on DevOps Online North America.

US to invest massively in AI to counter China and Russia
https://devopsnews.online/us-to-invest-massively-in-ai-to-counter-china-and-russia/ — Wed, 03 Mar 2021

On Monday, a congressional report was released calling for billions of dollars of U.S. investment towards artificial intelligence (AI) as an effort to counter China and Russia threats.

Indeed, the report aims to get the U.S. Department of Defense 'AI-ready' by 2025 in order to stay ahead of China in the development of AI-enabled technologies and capabilities. It is intended as a wake-up call for the US to be prepared.

Hence, the Pentagon is expected to increase its artificial intelligence research and development spending to $8 billion a year by 2025, with the DoD spending about 3% of its budget on science and technology. Besides, non-defense AI R&D spending will likely rise to $32 billion by 2026.

Moreover, it was stated that Pentagon leaders have realized the importance of AI for the future, and department leadership will have broad visibility into AI projects and measurements from now on.

It also reported that future warfare will depend on AI-enabled autonomous weapons systems, which raises concerns about compliance with international law. It has therefore become essential for the U.S. to engage with China and Russia on autonomous weapons.

This shows that AI is now regarded as a key strategic capability in the competition between great powers.

The post US to invest massively in AI to counter China and Russia appeared first on DevOps Online North America.
