Agentic AI / Agentic Workflows Unlike standard generative AI, where a user must issue isolated, manual prompts, an agentic workflow gives the machine a goal, an integrated toolkit, and permission to execute several steps semi-independently to achieve that goal, governed by human review checkpoints.
AIGP (AI Governance Professional) A professional certification covering the frameworks, enterprise risk models, and organisational accountability structures required to deploy AI systems responsibly, rather than focusing purely on technical coding.
Automated Oracle (AI Oracle) The dangerously misguided perception of AI as an infallible, all-knowing answer engine. Treating an AI as an oracle leads researchers to accept machine outputs blindly rather than challenging them and calibrating them against their own analytical rigour.
Automation Bias The human psychological tendency to over-trust and over-rely on machine-generated outputs simply because they appear formally structured, visually clean, or mathematically confident.
Confidence Calibration The gap between how certain a system appears to be and how certain it actually is. A well-calibrated system expresses less certainty when its evidence is weak. Most LLMs are poorly calibrated, delivering every output with equal authority regardless of reliability.
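The calibration gap described above can be sketched numerically. This is a minimal illustration with made-up numbers, not a production metric: it simply compares a system's average stated confidence with its actual accuracy.

```python
# A minimal sketch of confidence calibration: compare a system's stated
# confidence with its actual hit rate. All data here is illustrative.

def calibration_gap(predictions):
    """predictions: list of (stated_confidence, was_correct) pairs.
    Returns average stated confidence minus actual accuracy:
    positive -> overconfident, negative -> underconfident."""
    avg_confidence = sum(c for c, _ in predictions) / len(predictions)
    accuracy = sum(1 for _, correct in predictions if correct) / len(predictions)
    return avg_confidence - accuracy

# An overconfident system: claims ~90% certainty but is right half the time.
sample = [(0.9, True), (0.9, False), (0.92, True), (0.88, False)]
print(round(calibration_gap(sample), 2))  # 0.4 -> badly overconfident
```

A well-calibrated system would score near zero; the large positive gap here is the "equal authority regardless of reliability" failure mode the entry describes.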
Dark Data A graveyard of legacy research and data locked in highly sensitive, air-gapped internal enterprise servers (such as strictly controlled SharePoint sites) that is functionally inaccessible to third-party cloud vendors but can be mined by secure, in-house agentic workflows.
Deterministic Logic The rules-based programming paradigm (standard in Symbolic AI and traditional software) where a human must explicitly write the code instructing the system exactly how to behave in every scenario (e.g., "If X, then Y").
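The "If X, then Y" paradigm above can be shown in a toy routing function (the categories and queue names are hypothetical): every behaviour is an explicit, human-written rule, so identical inputs always produce identical outputs.

```python
# A toy illustration of deterministic logic: every behaviour is an explicit,
# human-authored rule, so the same input always yields the same output.

def route_ticket(category):
    # "If X, then Y" rules spelled out by a programmer in advance.
    if category == "billing":
        return "finance_queue"
    elif category == "outage":
        return "oncall_queue"
    else:
        return "general_queue"  # the human must anticipate every case

print(route_ticket("billing"))  # finance_queue
```

Contrast this with machine learning, where the mapping from input to output is learned from data rather than enumerated by hand.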
Error Profile The characteristic way a specific AI system tends to fail. Different architectures break differently: an LLM hallucinates plausible patterns; a classification model misses categories absent from its training data.
Generative AI Systems built primarily to generate text, images, or media from statistical probability models (such as LLMs). Often confused with agentic AI, generative AI relies on isolated human prompting rather than taking semi-autonomous action.
Harness The configuration layer wrapped around an AI model that controls how it behaves in practice: system prompts, skill files, input constraints, evaluation rules, and guardrails. The model is the engine; the harness is the steering, brakes, and seatbelt.
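The harness idea can be sketched in a few lines. This is an illustrative skeleton, not a real framework: `model` is a stand-in for any LLM call, and the system prompt, size limit, and banned-phrase guardrail are all hypothetical examples of harness-level controls.

```python
# A minimal harness sketch, assuming a hypothetical `model` callable.
# The harness, not the model, enforces the system prompt, input
# constraints, and output guardrails.

SYSTEM_PROMPT = "Answer only from the provided research notes."
MAX_INPUT_CHARS = 2000
FLAGGED_PHRASES = ("guaranteed", "definitely proves")

def harnessed_call(model, user_input):
    # Input constraint: reject oversized or empty prompts before the
    # model ever sees them.
    if not user_input or len(user_input) > MAX_INPUT_CHARS:
        raise ValueError("input rejected by harness")
    answer = model(f"{SYSTEM_PROMPT}\n\n{user_input}")
    # Output guardrail: route overconfident language to human review.
    if any(p in answer.lower() for p in FLAGGED_PHRASES):
        return "[flagged for human review] " + answer
    return answer

# Stub standing in for a real LLM call.
fake_model = lambda prompt: "This definitely proves the hypothesis."
print(harnessed_call(fake_model, "Summarise the interview themes."))
```

Note that swapping the model while keeping the harness preserves the behavioural controls, which is exactly why the entry separates engine from steering.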
Integration Model The structural and methodological framework defining how an AI capability is embedded into an existing human workflow. Unlike a superficial 'technical deployment' (simply turning the software on), an integration model addresses contextual governance, data-privacy parameters, and the precise cognitive hand-offs between human researchers and machine agents.
Large Language Model (LLM) A probabilistic AI model trained on massive amounts of text data to recognise, translate, predict, and generate human-like language based purely on statistical likelihood.
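The "statistical likelihood" idea can be made concrete with a toy bigram model: pick the next word purely from how often it followed the previous word in the training text. Real LLMs use neural networks over subword tokens, but the underlying principle of choosing continuations by frequency-derived probability is the same. The corpus here is invented for illustration.

```python
# A toy bigram language model: generation by statistical likelihood alone.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the model predicts the answer".split()

# Count word -> next-word frequencies from the "training" text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Return the statistically most frequent continuation.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))    # model
print(most_likely_next("model"))  # predicts
```

Chaining such predictions produces fluent-looking text with no notion of truth, which is why calibration and human challenge matter.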
Machine Learning The lineage of AI (originating with the Perceptron) where a machine is not explicitly programmed with rigid deterministic rules but instead learns to recognise patterns by adjusting its own internal weights through repeated exposure to data.
Mark I Perceptron Built in the late 1950s by Frank Rosenblatt, this was the first machine-learning device. Equipped with a grid of photocell "eyes", it demonstrated that a machine could learn to identify visual patterns by adjusting its own connection weights from examples rather than following explicitly coded logic.
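The weight-adjustment idea the Mark I implemented in hardware can be sketched in software. This minimal example uses the classic perceptron learning rule to learn the logical AND of two inputs; the learning rate and epoch count are illustrative choices, not historical parameters.

```python
# A minimal sketch of the perceptron learning rule: weights are adjusted
# from examples rather than programmed by hand. Here it learns logical AND.

def train_perceptron(examples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - predicted
            # Learning rule: nudge each weight toward the correct answer.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias = train_perceptron(examples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
print([predict(*x) for x, _ in examples])  # [0, 0, 0, 1]
```

No human wrote a rule for AND; the correct behaviour emerged from repeated exposure to labelled examples, which is the defining contrast with deterministic logic.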
Meta-Research (Researching the Researcher) The practice of running deep qualitative studies or listening sessions on internal research teams themselves to catalogue their cognitive workflows, pain points, and tool adoption behaviours.
Ontology / Taxonomy (in ResearchOps) The structured classification logic and vocabulary an organisation uses to label, store, and retrieve its knowledge assets. Strong ontological maturity is required before AI can successfully sift through internal repositories.
Pilot Trap A common enterprise phenomenon where organisations launch dozens of isolated AI tool tests (pilots) based on hype, generating high noise but suffering massive failure or cancellation rates due to a lack of governance, contextual integration, or clear ROI.
PWDRs (People Who Do Research) A broader organisational grouping that encompasses formal qualitative/quantitative researchers as well as adjacent professionals (Product Managers, Designers, Data Scientists) who actively conduct research as part of their regular workflows.
Sparring Partner (AI) A conceptual approach to agentic AI in which the researcher refuses to treat the tool as an "oracle", instead purposefully using the machine's synthetic analysis, and even its miscalculations, as a foil to test and elevate their own analytical standards.
Symbolic AI The deterministic approach to Artificial Intelligence (pioneered by John McCarthy) that relies on explicit rules and top-down logic written directly by human programmers.