
The 13 Types of Ops


Operations was long unloved. In the world of software application development and enterprise technology management, no system runs reliably without a solid engineering support function and service layer – in most cases, we call that entity the operations team.

Ops is the second half of DevOps, the portmanteau joining developers (Dev) and operations (Ops) teams in one unified working methodology, designed to promote workflow harmony and a more equitable, unified approach to running enterprise technologies – so that we all get apps that work. Yet the operations function has previously been looked down upon by professional developers (hence much of the reason for DevOps in the first place), who sometimes viewed the IT ‘service mechanics’ working in this division as people who couldn’t cut it at the coalface of hard coding and programming logic.

Ops evolution

But Ops has evolved. In an era when it’s chic to be geek (thanks Sheldon and the Big Bang gang), the core (some would say key) workers that make up the operations team span everyone from Database Administrators (DBAs), System Administrators (sysadmins), Network Administrators (no acronym normally used) and System Integrators (SIs) to Security Project Managers aka Penetration Testers (pen testers), Site Reliability Engineers (SREs) and a host of other support technicians who may fall into more generic but no less essential roles.

Operations has enjoyed a vibrant renaissance throughout the post-millennial era of cloud computing, probably because this model of computing is so inherently based upon service provision. The applications are still important, more important than ever in fact, but the need to underpin them with a services structure that is ever-present, always-on and resilient in the face of user demands for scale, enhancements and change is now paramount. Let’s tour the 13 types of Ops with a contextual definition for each.

1. DevOps

“DevOps is a software development methodology emphasizing collaboration, communication and integration between development (Dev) and IT operations (Ops) teams throughout the entire software lifecycle. It aims to streamline the development, deployment and management processes, fostering a culture of continuous delivery and improvement,” said Prashanth Nanjundappa, VP of product management at Progress Chef.

Nanjundappa adds to his clean and precise core definition by saying that at its core, DevOps seeks to break down the traditional silos between developers and operations professionals, encouraging cross-functional teams to work together seamlessly with enhanced agility. This agility breeds quicker feedback loops, enabling organizations to respond swiftly to change.

“Another DevOps advantage lies in increased collaboration and communication between teams. By aligning goals and integrating workflows, developers and operations personnel can better understand each other’s needs, minimize conflicts and collectively address challenges. Additionally, DevOps encourages a more stable and reliable software environment. Automated testing and continuous monitoring ensure that issues are detected early, reducing downtime and enhancing overall system reliability,” clarified Progress’ Nanjundappa.

2. DevSecOps

“DevSecOps really does create an opportunity for security to reduce friction and embed cyber risk mitigations into the processes and tooling used by the developer and IT operations teams,” said James Blake, field CISO of EMEA at data security and information management company Cohesity.

With the increased cadence of releases in Continuous Integration & Continuous Deployment (CI/CD) environments typified by cloud computing, the chief information security officer (CISO) must, of course, embrace the opportunities that increased tool integration, orchestration and automation provide.

“The unfortunate state of affairs [that exists in the real world] is the silos that exist between many DevOps and security teams,” notes Cohesity’s Blake, with realism. “This means that DevOps teams simply see DevSecOps as an opportunity to circumvent time-consuming security activities; meanwhile, the security team resists embracing empowering concepts and methodologies – like Agile development and Infrastructure-as-Code – within the security environment. As the ex-CISO of a DevSecOps shop that made dozens of releases a week, I see it as an opportunity to fix problems early in the SDLC, improve consistency through automation and vastly increase telemetry.”
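Blake’s point about fixing problems early in the SDLC through automation can be sketched as a simple CI gate. Everything below is invented for illustration – the advisory data, package names and function are a minimal sketch of the idea, not any real scanner’s API:

```python
# Hypothetical sketch: a dependency gate that a CI/CD pipeline could run
# on every commit, failing the build early when a known-vulnerable
# package version is detected. Advisory data and names are invented.

# Toy advisory database: package name -> set of vulnerable versions.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
    "demoframework": {"2.3.0"},
}

def vulnerable_pins(requirements: list[str]) -> list[str]:
    """Return the 'name==version' pins that match a known advisory."""
    findings = []
    for pin in requirements:
        name, _, version = pin.partition("==")
        if version in ADVISORIES.get(name, set()):
            findings.append(pin)
    return findings

if __name__ == "__main__":
    reqs = ["examplelib==1.0.1", "demoframework==2.4.0"]
    bad = vulnerable_pins(reqs)
    # A real pipeline would exit non-zero here to block the release.
    print("blocked:" if bad else "clean", bad)
```

The gate runs in seconds on every commit, which is exactly the early-in-the-lifecycle feedback Blake describes – far cheaper than finding the same issue in production.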

3. MLOps

“For businesses adopting Machine Learning (ML) in their daily operations, a streamlined approach to deploying ML workloads leads to faster time to market. This methodology, known as MLOps, is achieved through the integration of data science and data engineering techniques with existing DevOps approaches towards software development and operations. By using MLOps, businesses deploying ML-focused workloads can create repeatable, reliable and auditable projects, helping improve delivery time, reduce defects, and make data science more productive,” explained Steve Lawson-Turner, senior manager, solutions architecture, AWS.

Like DevOps, Lawson-Turner says that MLOps relies on a collaborative and streamlined approach to the machine learning development lifecycle where the intersection of people, process and technology optimizes the end-to-end activities required to develop, build, and operate machine learning workloads.

“Using MLOps reduces the reliance on data scientists and data engineers to manually pre-process data for training. It also creates an environment for experimentation, allowing users to test multiple models, and once deployed, monitor them for accuracy over time. By reducing the burden these important operational tasks place on users, it creates more opportunity for developers and data scientists to experiment and innovate, and ultimately leads to the creation of greater business value whilst reducing costs,” clarified AWS’s UK-based Lawson-Turner.
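The “monitor deployed models for accuracy over time” idea Lawson-Turner mentions can be sketched in a few lines. This is a hypothetical rolling-window monitor, not an AWS or SageMaker API:

```python
from collections import deque

class AccuracyMonitor:
    """Track the rolling accuracy of a deployed model and flag
    degradation, so retraining becomes a routine MLOps event rather
    than a surprise. Illustrative sketch only."""

    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.window = deque(maxlen=window)  # recent hit/miss outcomes
        self.threshold = threshold

    def record(self, prediction, actual) -> None:
        """Log whether one live prediction matched the ground truth."""
        self.window.append(prediction == actual)

    @property
    def accuracy(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 1.0

    def needs_retraining(self) -> bool:
        # Only alert once a full window of observations has accrued.
        return (len(self.window) == self.window.maxlen
                and self.accuracy < self.threshold)
```

In practice such a monitor would feed an alerting or automated-retraining pipeline; the point is that the check is continuous and automatic rather than a manual data-science chore.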

4. AIOps

“Originally coined by Gartner, AIOps refers to the use of Artificial Intelligence (AI) to automate IT operations,” offered Roman Spitzbart, VP EMEA solutions engineering at unified observability and security company Dynatrace. “In general, AIOps solutions involve automating the collection and interpretation of data from every layer of an organization’s tech stack. By doing this, IT teams can then rapidly identify the cause of issues to resolve them, or even implement processes to automate remediation outright.”

What differentiates AIOps from simple data collection and dashboarding is the use of AI to ingest, sort and analyze data to present in an easily actionable format for IT teams. Spitzbart clarifies this point and further explains that AIOps has flourished as a solution over the past several years as the result of ever-greater complexity witnessed in applications and technology stacks as a whole, which in itself is no small part as a result of the rise of extremely distributed environments built atop containers and cloud-native development architectures.

“It has become impossible to manage these environments with traditional manual approaches,” warned Spitzbart. “Early AIOps solutions were based on probabilistic Machine Learning models, which relied on spotting correlations between events in an environment to suggest the potential cause of issues. More advanced AIOps solutions can provide greater precision by using causal AI-based approaches, which perform a step-by-step fault tree analysis across every component of the IT environment, to deduce the exact root cause of issues without any ambiguity.”
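A toy illustration of the fault-tree idea: given a dependency graph and the set of currently unhealthy components, an unhealthy component whose own dependencies are all healthy is the likely root cause; everything downstream of it is presumed collateral. This is a deliberately simplified sketch of causal root-cause analysis, not Dynatrace’s actual algorithm:

```python
def root_causes(depends_on: dict[str, list[str]],
                unhealthy: set[str]) -> set[str]:
    """Return the unhealthy components none of whose dependencies are
    themselves unhealthy - the deepest failing layer in the stack."""
    return {
        component for component in unhealthy
        if not any(dep in unhealthy for dep in depends_on.get(component, []))
    }

# Example topology: frontend -> api -> db. If all three are unhealthy,
# the database is the root cause; frontend and api are collateral.
deps = {"frontend": ["api"], "api": ["db"], "db": []}
```

Real AIOps platforms work over far richer telemetry (topology, traces, logs, metrics), but the step-by-step walk down the dependency tree is the same core idea.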

5. NoOps

The notion of NoOps (short for No Operations, clearly) is a statement intended to explain the ability to work in the absence of what we might consider the traditional operational aspects that would underpin a technology service.

“From load balancers to databases, there is no shortage of services in today’s cloud,” said Matt Butcher, co-founder and CEO of Fermyon, the serverless WebAssembly company. “Operational platforms like Kubernetes expose as many configuration options as possible, giving platform engineers tremendous flexibility. But developers often feel overwhelmed by the myriad options available, about which they lack either interest or expertise. Developers don’t want to spend the day tuning a database instance or optimizing the load balancer – they want to spend their time writing code.”

Because those aspects of operations are considered to be distractions, the concept of NoOps satisfies the developer’s desire to focus on code if we can build it and ensure that the infrastructure layer is operated or automated on the developer’s behalf. Butcher illustrates the point and asks us to consider a database. In a NoOps environment, developers do not install the database (even in their development environment), nor do they create credentials, manage access controls, configure security or even work with a connection string.

“All of this is done at a lower level. The developer merely declares the intention to use a database (perhaps in an application configuration) and then begins working with the database (creating tables, inserting data and querying). NoOps is about keeping the application developer’s focus on the application code, not the environment in which it runs,” explained Fermyon’s Butcher.
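Butcher’s database example can be sketched as follows. The platform layer is simulated here with an in-memory SQLite connection, and all names (`APP_CONFIG`, `provision`) are illustrative, not any real NoOps product’s API:

```python
import sqlite3

# The developer's entire "ops" surface: a declaration of intent.
APP_CONFIG = {"needs": ["database"]}

def provision(config: dict) -> dict:
    """Stand-in for the platform layer: satisfy each declared need.
    A real platform would also create credentials, access controls,
    backups and so on - invisibly to the developer."""
    services = {}
    if "database" in config["needs"]:
        services["database"] = sqlite3.connect(":memory:")
    return services

# Application code: no install, no connection string, no credentials.
services = provision(APP_CONFIG)
db = services["database"]
db.execute("CREATE TABLE notes (body TEXT)")
db.execute("INSERT INTO notes VALUES ('hello')")
rows = db.execute("SELECT body FROM notes").fetchall()
```

The developer touches only the declaration and the queries; everything between the two is the platform’s problem, which is the whole NoOps proposition.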

6. FinOps

“While DevOps encourages development and operations teams to collaborate, FinOps is a way of operating where DevOps, software engineering and finance teams work together to manage data-driven spend, take ownership of the costs generated by cloud usage and strive to achieve cost excellence,” said Nick Durkin, field CTO at software developer platform specialist Harness.

Durkin advises that effective FinOps will see these cross-functional teams work together on quantifiable indicators related to cost visibility, optimization and governance with an emphasis on real-time operations.

“These FinOps procedures and practices are designed to ensure spend doesn’t spiral out of control, while at the same time enabling the team to implement policies that make it easier for software developers to do the right things. The most important principle of FinOps is to integrate cost management into daily processes and acknowledge that optimizing cloud costs is not a one-off event,” specified Harness’ Durkin. “FinOps is becoming essential, as cloud costs continue to increase due to ‘cloudflation’ and the compute demands of IT infrastructure grow,” he added.
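One concrete guardrail of the kind Durkin describes is a budget policy check that runs as part of daily processes rather than as a one-off audit. A minimal sketch, with invented team names, numbers and tolerance:

```python
def over_budget(spend: dict[str, float],
                budgets: dict[str, float],
                tolerance: float = 0.10) -> dict[str, float]:
    """Return teams whose cloud spend exceeds budget by more than
    `tolerance` (a fraction), mapped to the overage amount. Teams
    without a declared budget are treated as having a budget of zero."""
    flagged = {}
    for team, cost in spend.items():
        budget = budgets.get(team, 0.0)
        if cost > budget * (1 + tolerance):
            flagged[team] = round(cost - budget, 2)
    return flagged
```

Run daily against billing-export data, a check like this turns cost governance into an automated policy that flags overspend while it is still small – the “doesn’t spiral out of control” property Durkin highlights.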

7. GreenOps

“GreenOps is one of the most recent Ops buzzwords to have emerged over the past couple of years and, not unlike many buzzwords in IT, it can mean different things depending on who you ask,” said Benjamin Brial, founder of Cycloid, a sustainable platform engineering provider. “At its heart, GreenOps is a framework for organizations to start understanding and quantifying the environmental impacts of their IT strategies whilst promoting a culture of environmental sobriety which flows through a workforce. Closely linked to more established terms like FinOps – another framework for managing operational expenditure across an organization – GreenOps is about generating greater cost transparency whilst simultaneously promoting environmental responsibility.”

Brial advises that, in practice, by placing sustainability at the orchestration layer and empowering users to consume less infrastructure to achieve desired results and business value, organizations can reduce software delivery/cloud costs and related carbon emissions.

“Combined with a shift in mindset which sees IT Managers, as well as CIO’s and other IT leaders become part of a much broader conversation about how to bake sustainability into the overall IT strategy from the start, GreenOps offers a path to bring development, finance, ESR and business teams together to ensure greater financial and environmental accountability,” he added.

8. APIOps

In the world of Application Programming Interfaces (APIs), APIOps combines DevOps best practices and tooling with the singular focus and CI/CD pipelines taken from GitOps. Under this approach, the engineering team creates and manages a central infrastructure so developers can build and deploy APIs with ease.

“It’s all about scaling the same learnings and benefits from generations of Ops to get better APIs out, faster,” said Elina Meister, developer relations lead at cloud-based web and mobile application platform company, Pipedrive. “APIs are the essential connectivity that allow the interoperability and functionality of modern IT – a collaboration between engineers and developers that allow service partnerships greater than the sum of their parts. APIOps is about deploying better quality code, more speedily and ever easier.”

With the right practices in place, all stakeholders, from developer to implementer, benefit. Standardization and automation promote consistent and continuous delivery that scales. In fact, partnering and building APIs between different firms’ products boosts all parties with greater visibility and capabilities, making it a great way to scale the business, too.
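Standardization of this kind is typically enforced by linting API specifications in the CI/CD pipeline before anything ships. A minimal, hypothetical sketch – the required fields and spec shape below are assumptions for illustration, not a particular standard such as full OpenAPI:

```python
def lint_spec(spec: dict) -> list[str]:
    """Flag endpoints that are missing fields the team's API standard
    requires. Illustrative: real linters (e.g. for OpenAPI documents)
    enforce far richer rulesets."""
    problems = []
    for path, meta in spec.get("paths", {}).items():
        for field in ("summary", "responses"):
            if field not in meta:
                problems.append(f"{path}: missing '{field}'")
    return problems

# A toy spec with one compliant and one non-compliant endpoint.
spec = {"paths": {
    "/deals": {"summary": "List deals", "responses": {"200": {}}},
    "/users": {"responses": {"200": {}}},
}}
```

Wired into the pipeline, a failing lint blocks the merge, so every published API meets the same bar – automation doing the standardizing rather than review meetings.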

Meister clarifies further and explains that, “Skilled tech talent has historically been in massively high demand. A great APIOps approach boosts the ability to reach new audiences, markets and highs through API offerings. It does so whilst addressing security concerns earlier in the development cycle and enhancing interoperability and longevity.”

But she adds, as part of any effective APIOps program, IT teams need to ensure a link between APIs and the active developer community, via developer relations activities. Allowing users to understand, integrate and build with APIs means listening to the community and helping them succeed.

9. CloudOps

“CloudOps, or cloud operations, is a discipline that should work to support the reality of hybrid multi-cloud deployments that are truly functional,” said Rob Tribe, VP of systems engineering at Nutanix. “In line with DevOps (which should drive an efficient software development pipeline to enable continuous integration), CloudOps is focused on continuous operations in order to offer users high-availability cloud services underpinned by a unified platform that works across all environments.”

Looking at the recent and short history of CloudOps, Tribe notes that this approach may have been born in the public cloud, but that its application must now be agile enough to extend and be optimized seamlessly across on-premises environments and edge computing estates in the Internet of Things. Essentially, he describes CloudOps as a means of codifying optimal procedures for efficient cloud provisioning with a focus on cost-optimization, policy control and abstraction onwards into development, deployment and delivery.


“Because enterprises need the cloud to be scalable, efficient and cost-effective, CloudOps can help provide a window into the underlying infrastructure. Twinned with a higher-level platform view, CloudOps helps illuminate aspects of system architecture such as resource allocation, configuration (and indeed misconfiguration) management, regulatory and policy compliance and performance in relation to Service Level Agreements (SLAs) and more,” illustrated Nutanix’s Tribe.

10. DataOps

DataOps describes the processes that teams adopt to manage data more effectively, so that they can retain it efficiently and make it useful to the business, generally through analytics.

“In the realm of DataOps from a data management perspective, there are a lot of tasks that used to be the sole responsibility of database administrators like running backups, setting up high availability systems, clustering and security. Many of these tasks can be automated today, improving reliability and performance, but there are still lots of situations where you need the right skills and understanding too,” said Percona chief technology officer Vadim Tkachenko.

According to Tkachenko, these DataOps data management skills are now needed across more of the software supply chain. “Teams like DevOps and Site Reliability Engineering now have more responsibility for running databases and supporting analytics pipelines, so DataOps helps those teams deliver what the business needs,” he said. “However, automation can take you far, but there are instances where a little bit of understanding can make your systems perform much better. Areas like query and schema design [how databases organise fields of data for storing, sorting and analysis] represent huge opportunities to make systems more efficient.”
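Tkachenko’s point about query and schema design is easy to demonstrate: adding a well-chosen index turns a full table scan into an index search. A self-contained sketch using Python’s built-in sqlite3 module (table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, f"cust{i % 50}", i * 1.5) for i in range(1000)])

def plan(sql: str) -> str:
    """Ask SQLite how it would execute the query (detail column)."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer = 'cust7'"
before = plan(query)   # reports a full table SCAN
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
after = plan(query)    # now reports a SEARCH USING INDEX
```

The same query goes from touching every row to touching only the matching ones, which is exactly the kind of efficiency win that “a little bit of understanding” beyond pure automation delivers.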

11. ModelOps

“To manage generative AI, you have to combine your data and models as part of delivering a service to users. In practice, managing this – ModelOps – will require ongoing infrastructure, data and model tuning so that you can keep getting the most out of your data,” said Peter Greiff, data architect leader for EMEA region at real-time AI company DataStax.

Getting started with ModelOps involves a large data preparation phase. Greiff details this process and says that this involves encoding data to create vectors, which then makes it easier to search and retrieve data for generative AI services, then sorting and organizing data so that it is based on semantic meaning and similarity rather than specific keywords.
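The vector idea Greiff describes can be sketched with plain cosine similarity: data is retrieved by semantic closeness rather than keyword match. The toy three-dimensional vectors below stand in for real embeddings, which would have hundreds or thousands of dimensions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query: list[float], store: dict[str, list[float]]) -> str:
    """Return the stored key whose vector is most similar to `query`."""
    return max(store, key=lambda key: cosine(query, store[key]))

# Toy "vector store": items keyed by label, valued by embedding.
store = {"dog": [1.0, 0.9, 0.0], "car": [0.0, 0.1, 1.0]}
```

A production vector database adds indexing for approximate search at scale, but the retrieval principle – rank by similarity, not by keyword – is the same.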

“At this point, we also have to implement ‘feature stores’ [zones for frequently used Machine Learning data to reside in] to manage data for AI, as this simplifies data processing, versioning and management over time, and reduces the complexity in building and maintaining data pipelines,” clarified Greiff. “After this, we have to look at how models and generative AI services perform. We have to make decisions about how to implement our data in the right formats, how to get data into our systems… and then monitor how the results match up against our – and customers’ – expectations of a gen AI service.”

Ideally, this whole ModelOps approach makes the whole process as easy as possible to get going and keep running, so that developers can iterate and innovate with AI. This can be achieved by isolating data from the application layer so that systems can be protected from model changes and vice versa.

12. AppOps

“AppOps spans the operational processes and approaches that deliver applications through the various stages of the software and data lifecycle before launch – and it is focused on operationally managing updates and monitoring once an application is in production. The main facet these days is that application development happens in a mobile-first context in most, but not all, instances (there are still some fat client projects and non-web platform work ongoing on desktop platforms),” said Phil Hoyer, field CTO for the EMEA region at identity management platform company Okta.

Why should anyone care about this level of detail? Because, Hoyer argues, the functionality and experience of an application have to be consistent and bug-free across every device and operating system. He insists that, really, there is no context in which a different experience of the same app is a good thing. Consistency demands coordinated processes behind the scenes.

“For example, in terms of the code that pertains to security – specifically the identification of a user – there are many different sources for that: open source, proprietary, development, outsourced work etc. The idea of AppOps is to keep these in sync and integrated, while not forming a weak link in the security – or user experience – chain. Of course, like any other process, identification changes – be it in the field vs. remote, or customer vs. employee. Therefore, AppOps also means monitoring the applications in use via analytics, to ensure the app remains both bug free, relevant in its features and consistent,” explained Okta’s Hoyer.

13. XOps & AnyOps

Weaving together many of these Ops practices and methodologies is GitOps, which we have explained in full here and further clarified with reference to specialists in this space, most notably GitLab.

As detailed in the link above, GitOps enables developers to automate infrastructure and manage it alongside their codebase, using the Git open source version control system to manage infrastructure and configure applications.
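At the heart of GitOps is a reconciliation loop: compare the desired state declared in Git with the live state of the environment and compute the actions that close the gap. A minimal sketch with invented service names, modelling state as a service-to-replica-count mapping:

```python
def reconcile(desired: dict[str, int], live: dict[str, int]) -> list[str]:
    """Compute the actions needed to make the live environment match
    the state declared in version control. Illustrative sketch of the
    pattern tools like Argo CD and Flux implement for real clusters."""
    actions = []
    for svc, replicas in desired.items():
        if svc not in live:
            actions.append(f"create {svc} x{replicas}")
        elif live[svc] != replicas:
            actions.append(f"scale {svc} {live[svc]}->{replicas}")
    for svc in live:
        if svc not in desired:
            actions.append(f"delete {svc}")
    return actions
```

Because the loop is driven entirely by the declared state, rolling back an environment is just reverting a commit – Git history becomes the audit trail for infrastructure.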

Although we have covered 13 types of Ops here, there are arguably more (PeopleOps for HR and HCM is a reasonable possibility) and we can imagine a time when we see Ops-washing, with every IT and workplace function in the enterprise tagged with an Ops prefix as we enter the realm of XOps or AnyOps. If all of this Ops-centricity gets the poor operations team more respect, then that can’t be a bad thing. Let’s build future Ops without any oops.

