hardware Archives - DevOps Online North America
https://devopsnews.online/tag/hardware/

Meta to be resolving its silent data corruption issue
https://devopsnews.online/meta-to-be-resolving-its-silent-data-corruption-issue/ | 18 March 2022

Meta has recently published its approach to resolving silent data corruption (SDC) issues.

SDCs are data errors that leave no trace in system logs but can affect memory, storage, and networking, causing data loss and corruption. Meta started testing three years ago, after finding SDCs difficult to detect within its data center fleets.

Meta now uses both out-of-production and ripple testing to detect the hardware issue, and recommends that other large organizations adopt both approaches in order to detect data corruption at scale as quickly as possible.
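
To make the idea concrete, here is a minimal sketch of the principle behind out-of-production testing: run a deterministic, arithmetic-heavy workload repeatedly and flag any run whose result silently deviates from a known-good digest. This is an illustration of the technique only, not Meta's actual tooling; the workload and run count are arbitrary assumptions.

```python
# Minimal sketch of out-of-production SDC screening: re-run a
# deterministic computation and flag any silent deviation.
# Illustrative only -- not Meta's actual tooling.
import hashlib

def workload(seed: int) -> str:
    # Arithmetic-heavy and fully deterministic: any silent bit flip
    # in the data path changes the final digest.
    acc = seed
    for i in range(1, 100_000):
        acc = (acc * 31 + i) % 1_000_000_007
    return hashlib.sha256(str(acc).encode()).hexdigest()

def scan(runs: int = 100) -> list[int]:
    expected = workload(42)  # known-good reference digest
    # A healthy machine reproduces the digest on every run; mismatches
    # are candidate silent data corruptions worth investigating.
    return [r for r in range(runs) if workload(42) != expected]

if __name__ == "__main__":
    suspects = scan()
    print("suspect runs:", suspects if suspects else "none")
```

Ripple testing applies the same compare-against-expected idea, but interleaved with live workloads rather than on machines drained from production.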

Meta also announced that it will provide five grants of around $50,000 for academic researchers to develop proposals in this field.

Database breakdown: 5 tips for avoiding data disasters
https://devopsnews.online/database-breakdown-5-tips-for-avoiding-data-disasters/ | 15 June 2018

It’s every system administrator’s worst nightmare: an attempt to restore a database results in empty files, and there is no way to get the data back, ever.

Despite the fear and panic created by data loss, more often than not it stems from simple things that are under our control and can be prevented. Studies have shown that the single largest cause of data outages is human error. No matter how careful you are, mistakes will still happen, and you have to account for them in the way database changes are managed.

Here are five simple tips for keeping things running smoothly and minimising risk.

Define roles and responsibilities

Safeguards need to be put in place to ensure that only authorised people have access to the production database. The level of access shouldn’t be determined only by an employee’s position but also by their level of seniority. A famous story made the rounds last year when a developer shared that, while following instructions in a new-employee manual, he accidentally deleted the production database. To make things worse, the backup was six hours old and took all too long to locate. You might be shaking your head in disapproval over how the company could have been so irresponsible as to let this happen, but it turns out it’s really not uncommon. To prevent unauthorised changes in the database that can result in utter disaster, it is essential to define, assign, and enforce distinct roles for all employees. If you need to, set roles and permissions per project to avoid any accidental spillover.
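
As a minimal sketch of what “distinct roles, enforced” can look like in practice, the snippet below creates a read-only developer role and a separate release role in PostgreSQL via psycopg2. The connection string, role names, and schema are illustrative assumptions, not a prescribed setup.

```python
# Hedged sketch: enforce least-privilege access with database roles.
# The DSN, role names, and schema are placeholders.
import psycopg2

DSN = "dbname=appdb user=admin"  # placeholder connection string

STATEMENTS = [
    # Developers can read production data but never modify it.
    "CREATE ROLE developer_ro NOLOGIN",
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO developer_ro",
    # Only the release role, used by deploy tooling, may change data.
    "CREATE ROLE release_rw NOLOGIN",
    "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO release_rw",
]

def apply_roles() -> None:
    with psycopg2.connect(DSN) as conn:  # commits on success
        with conn.cursor() as cur:
            for stmt in STATEMENTS:
                cur.execute(stmt)

if __name__ == "__main__":
    apply_roles()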

Confirm backup procedures

You need a well-planned backup strategy to protect databases against data loss caused by hardware, software, and human errors. You’d be surprised how often backups simply aren’t happening. In one case, a sysadmin complained that bringing hard drives with backed-up data home was inconvenient, so the company invested in an expensive remote system; the same sysadmin never got around to creating the new procedure, so the latest version of the backed-up data was three months old. Another employee discovered at his new job that there hadn’t been a single backup for the past three years. Knowing the backups are happening isn’t enough; you also need to check that they are usable and include all the data that’s needed. It’s worth restoring a backup and then checking that the restored database is an exact match for the production data. A Nagios check such as “is the most recent backup size within x bytes of the previous one?” is a simple way to flag backups that have silently changed.
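
A check like that can be a few lines of script. Below is one possible sketch of the size-delta idea as a Nagios-style plugin; the backup path and tolerance are assumptions you would tune to your own environment.

```python
#!/usr/bin/env python3
# Sketch of the Nagios-style check described above: compare the two
# most recent backups and alert on a suspicious size change.
import glob
import os
import sys

BACKUP_GLOB = "/var/backups/db/*.dump"  # assumed backup location
MAX_DELTA_BYTES = 50 * 1024 * 1024      # assumed tolerance: 50 MB

def check() -> int:
    files = sorted(glob.glob(BACKUP_GLOB), key=os.path.getmtime)
    if len(files) < 2:
        print("CRITICAL: fewer than two backups found")
        return 2                         # Nagios CRITICAL
    prev, latest = (os.path.getsize(f) for f in files[-2:])
    if latest == 0:
        print("CRITICAL: most recent backup is empty")
        return 2
    delta = abs(latest - prev)
    if delta > MAX_DELTA_BYTES:
        print(f"WARNING: backup size changed by {delta} bytes")
        return 1                         # Nagios WARNING
    print(f"OK: backup size delta is {delta} bytes")
    return 0

if __name__ == "__main__":
    sys.exit(check())
```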

Adopt version control best practices

Version control practices have long been adopted in other code development environments, ensuring the integrity of code, as only one person can work on a segment at any given time. Version control provides the ability to identify which changes have been made, when, and by whom. It protects the integrity of the database by labelling each piece of code, so a history of changes is kept and developers can revert to a previous version. Bringing these practices into the database is crucial for preventing data loss, especially in today’s fast-paced environment of ever-shorter release cycles. By tracking database changes across all development groups, you facilitate seamless collaboration while enabling DevOps teams to build and ship better products faster.
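
One lightweight way to picture database version control is a numbered migration log plus a history table recording what has been applied, so every change is traceable and repeatable. The sketch below uses SQLite purely for illustration, and the migrations themselves are made-up examples.

```python
# Toy migration runner: numbered schema changes plus a history table.
import sqlite3

MIGRATIONS = {
    1: "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customers ADD COLUMN email TEXT",
}

def migrate(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version "
        "(version INTEGER PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    applied = {v for (v,) in conn.execute("SELECT version FROM schema_version")}
    for version in sorted(MIGRATIONS):
        if version not in applied:       # each change runs exactly once
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
            conn.commit()

if __name__ == "__main__":
    with sqlite3.connect("app.db") as conn:
        migrate(conn)
```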

Implement change policies

Databases, like code repositories, need the same safeguards when changes are made. It’s crucial to have clear policies on which changes are allowed and how they are administered and tracked. Is dropping an index in a database allowed? How about a table? Do you prohibit production database deployments during daytime hours? All of these policies should not only be practised by participating teams but enforced at the database level, too. Keep track of all the changes, and attempted changes, that are made: a detailed audit trail can help detect problems and potential security issues.
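
Enforcement at the database level can start as simply as a gate that every submitted change passes through, with every attempt audited. The sketch below encodes two of the example policies from this section one possible way; the specific rules and hours are illustrative assumptions.

```python
# Hedged sketch of policy enforcement: screen and audit every change.
import datetime
import re

FORBIDDEN = re.compile(r"^\s*DROP\s+(TABLE|INDEX)\b", re.IGNORECASE)

def allowed(sql: str, now: datetime.datetime | None = None) -> bool:
    now = now or datetime.datetime.now()
    if FORBIDDEN.match(sql):
        return False                 # example policy: no DROP TABLE/INDEX
    if 8 <= now.hour < 18:
        return False                 # example policy: no daytime deploys
    return True

def submit(sql: str, user: str) -> bool:
    # Audit every attempt, allowed or not, for later review.
    verdict = allowed(sql)
    status = "ALLOWED" if verdict else "REJECTED"
    print(f"{datetime.datetime.now().isoformat()} {user} {status}: {sql}")
    return verdict

if __name__ == "__main__":
    submit("DROP TABLE customers", "alice")                     # rejected, logged
    submit("ALTER TABLE customers ADD COLUMN phone TEXT", "bob")
```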

Automate releases

By taking advantage of comprehensive automated tools, DBAs and developers can move versions effortlessly from one environment to the next. Database development solutions allow DBAs to implement consistent, repeatable processes while becoming more agile to keep pace with fast-changing business environments. Automation also enables DBAs to focus instead on the broader activities that require human input and can deliver value to the business, such as database design, capacity planning, performance monitoring and problem resolution.
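
In outline, such automation promotes one versioned change set through every environment in the same order, gating each step. A deliberately bare sketch follows, with the environment names and apply step as assumptions rather than any particular product’s workflow.

```python
# Bare-bones sketch of promoting one change set through environments.
ENVIRONMENTS = ["dev", "staging", "production"]

def apply_changes(env: str, changes: list[str]) -> None:
    # Stand-in for running the migration tool against `env`.
    for change in changes:
        print(f"[{env}] applying: {change}")

def release(changes: list[str]) -> None:
    for env in ENVIRONMENTS:
        apply_changes(env, changes)
        # A real pipeline would gate promotion on tests passing here.

if __name__ == "__main__":
    release(["001_create_customers.sql", "002_add_email_column.sql"])
```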

Databases often form the backbone of an organisation: a priceless container for the transactions, customer records, employee information, and financial data of both the company and its customers. All this information needs to be protected by clear procedures for managing database changes. Reducing the likelihood of data loss due to human error helps everyone sleep better at night.

Written by Yaniv Yehuda, CTO and Co-founder at DBmaestro

Microsoft protects Azure with ‘confidential computing’
https://devopsnews.online/microsoft-protects-azure-with-confidential-computing/ | 18 September 2017

Microsoft has opened an access programme called ‘confidential computing’, a set of Azure security features that protect data while it is in use.

By encrypting data while it is in use, the new service offers greater assurance to customers who have avoided putting personal data in a public cloud. It is aimed at organisations in finance and health that need to share highly sensitive data.

Confidential computing focuses on hardware-based protection: data that must be processed in the clear is decrypted and handled only inside a trusted execution environment, so it is never exposed to the rest of the system.

It also protects data against threats from malicious insiders with access to the hardware, external attacks that exploit bugs in the OS, application, or hypervisor, and unauthorised third-party access.
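
Conceptually, the model looks like the toy below: everything outside a trusted boundary only ever handles ciphertext, and decryption happens solely inside a function standing in for the hardware enclave. This is a Python illustration of the idea using the `cryptography` package, not Azure’s actual API.

```python
# Conceptual toy of the confidential-computing model -- NOT Azure's API.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in a real TEE the key never leaves hardware
fernet = Fernet(key)

def untrusted_storage(ciphertext: bytes) -> bytes:
    # The host OS, hypervisor, and admins only ever see ciphertext.
    return ciphertext

def enclave_process(ciphertext: bytes) -> bytes:
    # Decryption and computation happen only inside the trusted boundary.
    record = fernet.decrypt(ciphertext).decode()
    result = record.upper()              # stand-in for sensitive processing
    return fernet.encrypt(result.encode())

sealed = fernet.encrypt(b"patient-123: glucose 5.4")
sealed = untrusted_storage(sealed)       # safe: still encrypted
processed = enclave_process(sealed)      # clear text exists only in-enclave
```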

Mark Russinovich, Microsoft Azure CTO, wrote in a company blog post: “Despite advanced cyber security controls and mitigations, some customers are reluctant to move their most sensitive data to the cloud for fear of attacks against their data when it is in-use.

“With confidential computing, they can move the data to Azure knowing that it is safe not only at rest, but also in use from [various] threats.”

Written by Leah Alger

Tech Pro says companies are embracing hybrid cloud
https://devopsnews.online/tech-pro-says-companies-embracing-hybrid-cloud/ | 7 August 2017

Survey participants in recent Tech Pro research said that they would rather have a hybrid cloud model because of the high cost of on-premises solutions.

According to a report on Tech Pro’s sister site, TechRepublic, companies are “increasingly embracing” hybrid cloud as a strategy in its own right, because it combines the “reliability and stability” of a private cloud with the “on-demand capabilities” of the public cloud.

The majority of Tech Pro’s survey participants said they were familiar with the hybrid cloud concept: 36% said their company currently has a hybrid cloud solution, 32% said their company is evaluating one, and 32% said their company is not considering one.

The 2017 survey results show “little difference” from the 2016 survey, with the top two reasons for choosing a hybrid cloud model staying the same: avoiding hardware costs and its usefulness in disaster recovery.

Java programmer at Tech Pro, James Sander, said: “For industries with seasonal or variable workloads, assembling a private cloud to handle normal workloads while relying on public cloud providers to handle burst workloads can be a budget-friendly IT strategy.”
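
As a toy illustration of the burst strategy Sander describes, the placement logic below keeps steady-state work on a fixed-capacity private cloud and pays for public capacity only for the overflow; the capacity figure is an arbitrary assumption.

```python
# Toy sketch of hybrid-cloud bursting: overflow goes to the public cloud.
PRIVATE_CAPACITY = 100  # assumed units of workload the private cloud handles

def place_workloads(demand: int) -> dict[str, int]:
    private = min(demand, PRIVATE_CAPACITY)
    public_burst = max(0, demand - PRIVATE_CAPACITY)
    return {"private": private, "public": public_burst}

# Seasonal spike: only the overflow is paid for on demand.
print(place_workloads(80))   # {'private': 80, 'public': 0}
print(place_workloads(140))  # {'private': 100, 'public': 40}
```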

Respondents also noted that, when it comes to choosing a public cloud vendor, companies look for three factors: features and services, cost, and familiarity.

Written by Leah Alger
