When someone says "AI," they could mean any number of fundamentally different technologies. Before a ResearchOps lead can evaluate, govern, or integrate AI into a research pipeline, they need a shared vocabulary for what they're actually talking about.
Most published classification frameworks fall into one of two camps.
Academic classifications organise AI by theoretical capability: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). These describe where the field might go. They are useful for understanding the long arc of AI research, but they do not help a ResearchOps lead decide what to buy, build, or govern today. No one in a research organisation is choosing between "narrow" and "general" AI — every tool on the market is narrow AI.
A second academic lens classifies AI by how it processes information: reactive machines, limited memory, theory of mind, and self-aware AI. Again, this is useful context but not operational. Every commercial AI product a researcher touches today is a limited memory system.
Operational classifications organise AI by what it does — the practical function it performs in a workflow. This is the classification that matters when you are scoping a pilot, writing a governance policy, negotiating with a vendor, or training your team. The five types below are defined from the perspective of a ResearchOps lead: what will you encounter, what does each type actually do, and where does each one belong (or not belong) in a research pipeline.
Single-turn content generation. You type a prompt, the model returns text, code, or media. This is what most researchers mean when they say "AI." It summarises, drafts, rephrases, and translates. It does not analyse — it produces output that mimics the vocabulary and structure of analysis. This is where automation bias lives: well-formatted output that looks like rigorous work but has not been tested against any analytical standard.
Multi-step workflows that plan, reason, select tools, and iterate toward a goal within boundaries you define. Unlike generative AI, which responds to a single prompt, an agentic workflow executes multiple steps: one agent extracts, another critiques, a third resolves. The model decides what to do next, but only within the rules you have written. This is where governance matters most, and where ResearchOps has the clearest opportunity to lead rather than follow.
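The extract-critique-resolve pattern above can be sketched in plain code. This is a minimal illustration, not a real agent framework: the three "agents" are hypothetical stand-in functions, the quality rule is a toy, and `MAX_ROUNDS` shows what a boundary you define looks like in practice.

```python
MAX_ROUNDS = 3  # a boundary you define: the workflow cannot loop forever

def extract(document: str) -> list[str]:
    # Stand-in for an extraction agent: pull candidate findings.
    return [line.strip() for line in document.splitlines() if line.strip()]

def critique(findings: list[str]) -> list[str]:
    # Stand-in for a critic agent: flag findings that fail a rule.
    # Toy rule: anything shorter than 10 characters is too thin to keep.
    return [f for f in findings if len(f) < 10]

def resolve(findings: list[str], flagged: list[str]) -> list[str]:
    # Stand-in for a resolver agent: drop the flagged items.
    return [f for f in findings if f not in flagged]

def run_pipeline(document: str) -> list[str]:
    findings = extract(document)
    for _ in range(MAX_ROUNDS):        # the workflow decides what to do next,
        flagged = critique(findings)   # but only within rules you have written
        if not flagged:
            break
        findings = resolve(findings, flagged)
    return findings

notes = "Users abandoned checkout at step 3\nok\nSupport tickets doubled after the redesign"
print(run_pipeline(notes))
```

In a real agentic system each function would be a model call, and the governance question is exactly what this sketch makes visible: who wrote the critique rule, and what stops the loop.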
Pattern recognition and forecasting from structured historical data. Predictive AI identifies correlations in existing datasets and estimates the probability of future outcomes. Researchers encounter this in analytics dashboards, recommendation engines, survey scoring tools, and churn models. It does not generate new content — it classifies, clusters, or forecasts based on what has already happened.
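The core move of predictive AI, estimating the probability of a future outcome from the frequency of past ones, can be shown without any machine learning library. The records and field names (`plan`, `churned`) below are invented for illustration; a production churn model would use far more features and a trained classifier, but the logic is the same: classify based on what has already happened.

```python
from collections import defaultdict

# Invented historical records for illustration only.
history = [
    {"plan": "free", "churned": True},
    {"plan": "free", "churned": True},
    {"plan": "free", "churned": False},
    {"plan": "paid", "churned": False},
    {"plan": "paid", "churned": False},
    {"plan": "paid", "churned": True},
]

def churn_rate_by(records: list[dict], feature: str) -> dict:
    # Group past records by a feature and compute the observed churn rate.
    totals, churns = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[feature]] += 1
        churns[r[feature]] += r["churned"]
    return {value: churns[value] / totals[value] for value in totals}

rates = churn_rate_by(history, "plan")
print(rates)  # free-plan users churned more often in this toy history
```

Note that nothing here is generated: the output is a forecast derived entirely from existing data, which is why predictive tools fail quietly when the future stops resembling the past.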
Older text processing that predates the current generation of large language models. This includes keyword extraction, sentiment scoring, entity recognition, and taxonomy matching. NLP powered most "AI-assisted coding" features in qualitative tools like Atlas.ti before 2023. It is still useful for surface-level tagging at scale, but it operates on statistical frequency — it does not understand meaning the way an LLM does.
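Frequency-based keyword extraction, the kind of statistical NLP behind many pre-2023 tagging features, is simple enough to sketch directly. The stopword list and transcript below are tiny illustrative stand-ins; real tools use larger lexicons and weighting schemes, but the principle is the same: count, do not comprehend.

```python
import re
from collections import Counter

# A tiny illustrative stopword subset, not a real lexicon.
STOPWORDS = {"the", "a", "an", "and", "to", "of", "it", "was", "i"}

def keywords(text: str, top_n: int = 3) -> list[str]:
    # Tokenise, drop stopwords, and rank by raw frequency.
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

transcript = ("The checkout was confusing. I abandoned the checkout twice. "
              "Checkout errors kept appearing.")
print(keywords(transcript))  # "checkout" ranks first in this toy transcript
```

The extractor surfaces "checkout" because it appears most often, not because it understands that the participant is describing a broken purchase flow. That gap is exactly why frequency-based tagging works for surface-level coding at scale and fails at interpretation.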
Interpreting or creating visual content. Computer vision extracts information from images and video (object detection, OCR, facial recognition). Image generation creates new visuals from text prompts. Visual AI is less central to ResearchOps today than the other four types, but it is worth tracking because vendors increasingly bundle visual AI features under the same "AI-powered" label as everything else.