git Archives - DevOps Online North America https://devopsnews.online/tag/git/

Datree receives US$3million to build DevOps engine in GitHub https://devopsnews.online/datree-receives-us3million-to-build-devops-engine-in-github/ Mon, 03 Sep 2018 11:58:43 +0000

Datree announced it has raised US$3million in seed funding from TLV Partners.

The first Git-centric operations management platform fixes the challenges of DevOps scaling by cataloguing an organisation’s entire development stack, automating Git Operations tasks and preventing dangerous changes to applications and infrastructure.

Agile and DevOps broke software companies down into small autonomous teams using different programming languages, multiple repositories and microservices. This makes software development faster, more efficient and more creative.

However, autonomy comes with a price. Software teams often have no idea what code components other teams are using or who’s working on what; company standards become optional because there’s no way to enforce them without excessive control; and CIOs, Software Architects and DevOps Managers have lost visibility into the company’s stack because it’s so distributed.

Security compromises

Over time, these issues snowball, leading to inconsistent quality, security compromises, and even outages. For many companies, Git has become the single source of truth for applications and infrastructure code. With ‘infrastructure as code’, even servers are software and are defined in Git repositories. Every developer can make code commits or pull requests that affect both applications and infrastructure stability and quality in production.
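The review gate implied here can be sketched in a few lines. The path conventions below are illustrative assumptions, not any particular company's layout:

```python
# Classify the files changed in a commit as application code or
# infrastructure-as-code, so that infrastructure changes can be
# routed for extra review. The markers are hypothetical conventions.
INFRA_MARKERS = ("terraform/", "ansible/", "k8s/", "Dockerfile", ".tf")

def touches_infrastructure(changed_paths):
    """Return the subset of changed paths that look like infrastructure code."""
    return [p for p in changed_paths
            if any(marker in p for marker in INFRA_MARKERS)]

commit = ["src/app/main.py", "terraform/vpc.tf", "README.md"]
flagged = touches_infrastructure(commit)
# flagged -> ["terraform/vpc.tf"]
```

With 'infrastructure as code', a one-line application commit and a one-line server definition look identical in a diff, which is why this kind of classification has to be automated.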

Companies can have hundreds or even thousands of Git repositories, linked by a complicated network of dependencies, with automated processes building and deploying them into applications in ways that nobody fully understands.

Datree solves this problem. First, it scans and monitors all of a company’s public and private code repositories to build a 360-degree catalogue of the entire company’s ecosystem: all the code components (open source and internal packages, infrastructure as code, cloud services and more), as well as people and repositories.
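As a rough illustration of the cataloguing idea, hypothetical data rather than Datree's actual scanner, a catalogue can be built by inverting each repository's declared dependencies:

```python
from collections import defaultdict

# Build a component catalogue mapping each dependency to the set of
# repositories that use it. A real scanner would clone repositories
# and parse manifests such as package.json or requirements.txt.
def build_catalogue(repo_manifests):
    """Map each dependency name to the repositories that declare it."""
    catalogue = defaultdict(set)
    for repo, deps in repo_manifests.items():
        for dep in deps:
            catalogue[dep].add(repo)
    return catalogue

manifests = {
    "billing-service": ["requests", "redis"],
    "auth-service": ["requests", "pyjwt"],
}
catalogue = build_catalogue(manifests)
# catalogue["requests"] -> {"billing-service", "auth-service"}
```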

Best-practice guidelines

Datree’s real-time catalogue knows which people and teams work on which code components, and who are the in-house experts on particular tools and packages. All of this is accessed via interactive dashboards.

Next, Datree’s smart policy engine helps developers work within an organisation’s best-practice guidelines. Developers get real-time information and insights pushed to their existing work environment (e.g. the command line and Slack) to help them make informed decisions. Datree checks every pull request against user-defined policies and standards and blocks risky code components, unstable versions, and unauthorised changes to ‘infrastructure as code’ and Git configurations.
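A policy check of the kind described above might look something like this minimal sketch; the policy format, package names and thresholds are invented for illustration and are not Datree's actual API:

```python
# Hypothetical pull-request policy gate: block banned packages and
# pre-1.0 versions. Real engines would load policies from config.
BLOCKED_PACKAGES = {"left-pad"}   # components the organisation bans
MIN_STABLE_MAJOR = 1              # versions below 1.0 count as unstable

def check_pull_request(added_dependencies):
    """Return a list of policy violations for dependencies a PR adds."""
    violations = []
    for name, version in added_dependencies:
        if name in BLOCKED_PACKAGES:
            violations.append(f"{name}: package is blocked by policy")
        elif int(version.split(".")[0]) < MIN_STABLE_MAJOR:
            violations.append(f"{name} {version}: unstable pre-1.0 version")
    return violations

pr = [("left-pad", "1.3.0"), ("requests", "0.9.1"), ("redis", "4.5.0")]
# check_pull_request(pr) -> two violations; an empty list means the PR may merge
```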

“Even among the software giants, a typo in a configuration file can cause huge infrastructure outages. One company told us that a minor change to a template file on Git removed their whole application’s security systems by mistake due to automated deployment,” says Datree CEO Arthur Schmunk.

“Datree would prevent a change like that from being merged in the first place”.

Written from press release by Leah Alger

The post Datree receives US$3million to build DevOps engine in GitHub appeared first on DevOps Online North America.

How to keep on top of testing techniques & tools https://devopsnews.online/how-to-keep-on-top-of-testing-techniques-tools/ Thu, 05 Jul 2018 09:40:33 +0000

Test Magazine Journalist Leah Alger interviews senior software testing experts to find out how they keep on top of testing techniques & tools

Tell me about yourself and your job role:

Sudeep Chatterjee: I am a Senior Technology Manager with more than 19 years’ experience at top-tier investment banks, fintechs and consulting firms, managing testing globally for enterprise-wide change programmes. I am currently consulting as Head of Testing at Bank of America Merrill Lynch within the FICC – Global FX Technology group. Prior to Bank of America Merrill Lynch, I worked as Head of Testing with Lombard Risk, Barclays, UBS, GE and Accenture, primarily focusing on building high-performing multi-disciplinary testing teams and delivering testing for complex technology-driven business transformation initiatives.

Niranjalee Rajaratne: I am an IT professional with over 12 years’ experience who has worked in various verticals such as telecom, financial services, publishing and e-commerce. Currently, I am the head of quality assurance at Third Bridge. Third Bridge is a leading independent financial research company that provides private equity firms, hedge funds and strategy consultants with the information they need to make an informed decision about investment opportunities.

What is the most complex system architecture you have managed?

Sudeep Chatterjee: I have managed testing of many complex system architectures, starting with PoS systems for one of the UK’s largest retail companies, complex integrated voice recognition systems for one of the UK’s largest telecommunications companies, complex banking systems built over quants and big data solutions, and most recently a high-frequency, low-latency electronic trading platform for FX.

Niranjalee Rajaratne: One that was developed on unstable ground, where the system was built and managed in an ad-hoc manner. There was a poor team structure in place and the teams lacked sufficient skills and knowledge. Also, the business knowledge was not built into the system properly. This created uncertainty when it came to software testing and delivery, and made it difficult to achieve quality and to take proactive measures for risk management.

Are your change management processes modernised and agile friendly?

Sudeep Chatterjee: Our change management process is agile and focuses on ensuring the SDLC is built over robust processes, which allows for faster delivery with high quality.

Niranjalee Rajaratne: Yes, to some extent. We keep evolving. We try to iterate the change management process so that it enables us to achieve business needs faster, better and within budget. To that end, we engage business units early, keep communication consistent and get buy-in from senior management as early as possible. The team is given autonomy and time to try new things and adapt to change as new and brilliant ideas come to light.

How do you stay ahead of the competition when it comes to mobile systems, cloud computing and APIs?

Sudeep Chatterjee: Adopting a modern digital transformation strategy is part of the technology roadmap, which includes best-in-class delivery for mobile platforms and cloud solutions for end users.

Niranjalee Rajaratne: Research and assessments help us to continuously refine and adopt the right level of technology to achieve business objectives. We assess our current workflows and software to identify gaps in efficiency that technology can bridge.

Can you give me an example of a CI/CD failure you’ve experienced, and what you learned from it?

Sudeep Chatterjee: In my experience, CI/CD has mostly gone wrong when dev teams are used to implementing manual plans and runbooks for deployments in non-production and production environments. When such teams start using CI/CD tools without proper training, issues come up, particularly around configuration setups.

Niranjalee Rajaratne: I can give many: the wrong selection of CI servers, inefficient build infrastructure, an uncoordinated and unstructured pipeline, inadequate skill sets and a lack of management support. These resulted in attempting to deliver business value to the end user on broken builds. Continuous delivery cannot be performed, and an organisation cannot be productive, with an inconsistent CI. To make CI/CD work, it was important that business and technology management understood the value of an efficient build pipeline and supported it without seeing the effort as purely technical work.
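The lesson here, that nothing should ship on a broken build, can be expressed as a simple deployment gate. The stage names and result format below are illustrative assumptions, not a specific CI server's API:

```python
# Guard against deploying on broken builds: deployment is only allowed
# when every upstream CI stage has passed.
def may_deploy(stage_results):
    """Allow deployment only if all CI stages succeeded."""
    failed = [name for name, ok in stage_results.items() if not ok]
    if failed:
        return False, "deploy blocked, failing stages: " + ", ".join(sorted(failed))
    return True, "deploy allowed"

run = {"compile": True, "unit-tests": False, "lint": True}
allowed, reason = may_deploy(run)
# allowed -> False; nothing ships on a broken build
```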

How has DevOps changed your way of working?

Sudeep Chatterjee: DevOps has helped organisations to reduce the time to deliver software to production with continuous testing embedded in the process. It has also strengthened testing frameworks like behaviour driven development.

Niranjalee Rajaratne: DevOps improved infrastructure and created efficient build pipelines to help get new products and services to users quickly. It created a shift in testing and quality assurance by enabling automation of repetitive tasks such as regression and smoke testing. The development teams were empowered to follow test-first development practices such as TDD and BDD, helping them towards an efficient build pipeline and fast feedback.
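A minimal sketch of the kind of automated smoke suite described, with placeholder checks standing in for the real health-endpoint and service calls a pipeline would make:

```python
# Run a set of named smoke checks after a deployment, collecting
# failures instead of stopping at the first one.
def smoke_suite(checks):
    """Run each (name, check) pair; return the names of failed checks."""
    failures = []
    for name, check in checks:
        try:
            if not check():
                failures.append(name)
        except Exception:
            failures.append(name)  # a crashing check counts as a failure
    return failures

checks = [
    ("homepage-responds", lambda: True),  # stand-in for an HTTP probe
    ("login-works", lambda: True),        # stand-in for an auth probe
]
# smoke_suite(checks) -> []; an empty list means the smoke stage passed
```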

What DevOps tools do you use?

Sudeep Chatterjee: Jenkins, Git, BitBucket, Maven and Ansible.

Niranjalee Rajaratne: GitHub, Ansible, Docker, Jenkins, Elasticsearch and Monit are a few.

Have you faced any bad experiences when implementing DevOps?

Sudeep Chatterjee: Implementing DevOps without organisational culture change can cause conflicts between teams. For DevOps teams to succeed, it is important that the change management process complements the ethos, and all team members, from BAs and developers to architects, environment management, QA and application support, must understand their role in the DevOps world.

Niranjalee Rajaratne: We certainly did have challenges and we still have some to this day, but I would not say they were bad experiences. Rather they were learning opportunities for the team to perfect the delivery pipeline.

How will DevOps continue to improve software delivery?

Sudeep Chatterjee: DevOps will continue to improve and will become ‘the norm’ for all software delivery. Dev and QA teams will learn to work with DevOps tools as continuous integration and continuous delivery become the de facto standard.

Niranjalee Rajaratne: It will further support teams’ autonomy to be self-organised by removing dependencies and further breaking down silos. It will continue to improve inter-team collaboration, empowering people to learn new skills and share knowledge. It will bring users and technology teams closer together in a symbiotic relationship. If iteratively improved, this will enable organisations to make a paradigm shift in how they deliver software.

Do you believe manual testing will come to a close because of automation and DevOps?

Sudeep Chatterjee: There will always be a requirement for manual exploratory testing by domain experts, though this may not just be a QA professional but anyone in the team with strong domain knowledge, such as a BA, product owner or user.

Niranjalee Rajaratne: The purpose of manual testing and the value it brings cannot be fully replaced by automated testing. Whilst automation testing is an integral part of DevOps, manual testing should continue to function in order to help achieve optimum levels of quality in software.

Anything else you would like to add?

Sudeep Chatterjee: One of the challenges in the agile and DevOps world will be how organisations measure product quality and compensate their employees through performance management. HR processes must mature to be able to reward the entire team; otherwise you will still have to rely on individual feedback and performance benchmarking between team members.

The post How to keep on top of testing techniques & tools appeared first on DevOps Online North America.

SANS institute announces DEV540 course https://devopsnews.online/10235-2/ Tue, 26 Sep 2017 13:36:58 +0000

The global leader in information security training, SANS Institute, today announced a new course aimed at helping developers and security professionals.

DEV540 Secure DevOps and Cloud Application Security is a five-day hands-on course, providing security, IT and risk professionals with new ways to help their organisations utilise DevOps and cloud-based technology safely.

Taught by DevOps security experts, the DEV540 Secure DevOps and Cloud Application Security course offers detailed insight into how developers and security professionals can build and deliver secure software using DevOps and cloud services, specifically Amazon Web Services.

Students will learn how code is automatically built, tested and deployed using popular open source tools such as Git, Puppet, Jenkins and Docker, and will build a secure DevOps CI/CD toolchain.
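That build, test, deploy flow can be modelled in a few lines; the stage functions below are placeholders rather than a real Jenkins or Puppet configuration:

```python
# Model of a CI/CD pipeline: stages run in order, and each stage runs
# only if the previous one succeeded.
def run_pipeline(stages):
    """Execute (name, stage) pairs in order, stopping at the first failure.

    Returns (completed_stage_names, failed_stage_name_or_None)."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name
        completed.append(name)
    return completed, None

stages = [
    ("build", lambda: True),   # stand-in for compiling and packaging
    ("test", lambda: True),    # stand-in for the automated test suite
    ("deploy", lambda: True),  # stand-in for pushing to an environment
]
# run_pipeline(stages) -> (["build", "test", "deploy"], None)
```

Security checks slot into this model as just another stage that can fail the run, which is the core idea behind a secure CI/CD toolchain.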

The final three days of the course cover how developers and security professionals can utilise AWS services and features for encryption, autoscaling, serverless computing and more to build secure software in the cloud.

The course co-author and instructor, Frank Kim, said: “DevOps and cloud are radically changing the way that organisations design, build, deploy and operate online systems. Because traditional approaches to security can’t come close to keeping up with this accelerated rate of change, security must be reinvented. If your company is adopting DevOps and/or moving to the cloud (with AWS), you will want to take the Dev540 course.”

Written by Leah Alger

The post SANS institute announces DEV540 course appeared first on DevOps Online North America.

Gamesys road to DevOps https://devopsnews.online/gamesys-road-devops/ Tue, 06 Jun 2017 11:46:28 +0000

Senior Software Engineer at Gamesys, Zsolt Szilard Sztupak, presented the company’s road to DevOps at this year’s National DevOps Conference.

Even with a helping hand from development, testing, deployment, monitoring, logging, configuration management and collaboration platforms, the online gaming company’s road to DevOps wasn’t straightforward. “Our road to DevOps consisted of monthly releases, downtime during release, costly meetings between teams, communication issues and late integration issues,” said Sztupak.

For Gamesys to change its path, another route needed to be found. “We needed to find a way to prevent these issues: we needed to split up the monolith, create a platform that allows people to create microservices easily, deploy those microservices, automate processes and automate what’s on the dev side,” he added.

Before the change, backend and frontend teams were separated and effectively moving at different speeds; to address this, frontend and backend teams were merged into verticals.

“A platform team needed to be built up from members of different teams. We set out to drive the move to microservices, gave ourselves more leeway in assessing new technologies, built a common platform on Dropwizard and found a way of specifying our APIs,” revealed Sztupak.

After five months and three weeks, they finally found solutions. Legacy-in-a-box removed the convoluted build and deployment process and made it easy to make changes to the legacy monolith.

“To ensure this was achieved, teams started to create microservices even though there was no way of deploying them yet, containers were introduced, GoCD was used as a framework, and everything was built automatically from Git,” he added, concluding that Ansible, Docker and GoCD are the backbone of Gamesys’ DevOps technologies.

Written by Leah Alger

The post Gamesys road to DevOps appeared first on DevOps Online North America.
