AI glossary for business leaders.

Clear, jargon-free definitions of the AI terms that matter for your business. Written for decision-makers, not data scientists.

528 AI terms defined across 16 categories, browsable A-Z.

AI TERMINOLOGY

Browse the glossary.

A

A/B Testing
Data & Analytics

A/B Testing is a controlled experimental method that compares two versions of a product, feature, or experience by randomly assigning users to each version and measuring which performs better against a defined metric. It replaces opinion-based decisions with statistically validated evidence.
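
For readers who want to see the mechanics, the sketch below compares two conversion rates with a standard two-proportion z-test. The visitor and conversion counts are invented purely for illustration.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results: visitors and conversions for each version.
a_users, a_conversions = 10_000, 520   # version A converts at 5.2%
b_users, b_conversions = 10_000, 585   # version B converts at 5.85%

p_a = a_conversions / a_users
p_b = b_conversions / b_users

# Pooled proportion and standard error for a two-proportion z-test.
p_pool = (a_conversions + b_conversions) / (a_users + b_users)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_users + 1 / b_users))
z = (p_b - p_a) / se

# Two-sided p-value: how likely a gap this large is under pure chance.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"Lift: {p_b - p_a:.2%}, z = {z:.2f}, p-value = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) is the statistical evidence that version B genuinely outperforms version A rather than benefiting from luck.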

AI A/B Testing
AI Operations

AI A/B Testing is the practice of simultaneously running two or more versions of an AI model in production, each serving a portion of users or requests, to measure which version performs better against defined business and technical metrics. It provides data-driven evidence for choosing between model versions rather than relying on offline testing results or intuition.

AI Abuse Prevention
AI Safety & Security

AI Abuse Prevention is the set of technical measures, policies, and operational practices designed to detect, deter, and stop the intentional misuse of AI systems for harmful purposes such as fraud, harassment, disinformation, manipulation, and other malicious activities.

AI Accelerator
AI Infrastructure

An AI Accelerator is a specialised hardware chip designed to speed up artificial intelligence computations, including training and inference, delivering significantly higher performance and energy efficiency for AI workloads compared to general-purpose processors.

AI Access Control
AI Safety & Security

AI Access Control is the framework of policies, technologies, and processes that govern who can use, modify, retrain, deploy, and decommission AI systems within an organisation, ensuring that only authorised individuals and systems interact with AI assets at appropriate levels of privilege.

AI Adoption
AI Strategy

AI Adoption is the organizational process of integrating artificial intelligence technologies into business operations, encompassing the technical implementation, employee training, workflow redesign, and cultural change required to move AI from experimentation to everyday business practice.

AI Adoption Metrics
AI Operations

AI Adoption Metrics are the key performance indicators used to measure how effectively an organisation is integrating AI into its operations, workflows, and decision-making processes. They go beyond simple usage statistics to assess whether AI deployments are delivering real business value and being embraced by the workforce.

AI Agent
Generative AI

An AI agent is an autonomous software system powered by large language models that can plan, reason, and execute multi-step tasks with minimal human intervention. AI agents go beyond simple chatbots by taking actions, using tools, and making decisions to achieve defined goals on behalf of users.

AI Alignment
AI Safety & Security

AI Alignment is the field of research and practice focused on ensuring that artificial intelligence systems reliably act in accordance with human intentions, values, and goals. It addresses the challenge of building AI that does what we actually want, even as systems become more capable and autonomous.

AI Audit
AI Governance & Ethics

AI Audit is the systematic examination and evaluation of an artificial intelligence system to assess its compliance with regulations, adherence to ethical principles, technical performance, data handling practices, and alignment with organisational policies. It provides independent assurance that AI systems are operating as intended and meeting governance standards.

AI Benchmark
Generative AI

An AI Benchmark is a standardized test or evaluation framework used to measure and compare the performance of AI models across specific capabilities such as reasoning, coding, math, and general knowledge. Benchmarks like MMLU, HumanEval, and GPQA provide objective scores that help business leaders evaluate which AI models best suit their needs.

AI Benchmarking
AI Strategy

AI Benchmarking is the systematic process of measuring and comparing an organization's AI capabilities, performance, and maturity against industry standards, best practices, and competitors to identify gaps and prioritize improvement opportunities.

AI Bias
AI Governance & Ethics

AI Bias is the systematic and unfair discrimination in AI system outputs that arises from prejudiced assumptions in training data, algorithm design, or deployment context. It can lead to inequitable treatment of individuals or groups based on characteristics like race, gender, age, or socioeconomic status, creating legal, ethical, and business risks.

AI Bill of Rights
AI Governance & Ethics

An AI Bill of Rights is a framework that defines fundamental protections for individuals affected by artificial intelligence systems, typically including rights to safe systems, protection from discrimination, data privacy, notice that AI is being used, and the ability to opt out in favour of human alternatives.

AI Build vs Buy
AI Strategy

AI Build vs Buy is the strategic decision-making process where organizations evaluate whether to develop custom AI solutions internally using their own engineering resources or purchase ready-made AI products and services from external vendors, weighing factors like cost, speed, differentiation, and long-term maintainability.

AI Business Case
AI Strategy

AI Business Case is a formal document or analysis that justifies an organization's investment in an artificial intelligence initiative by outlining the expected costs, benefits, risks, and timeline required to deliver measurable business value.

AI Canary Deployment
AI Operations

AI Canary Deployment is a release strategy where a new or updated AI model is rolled out to a small subset of users or traffic before being deployed to everyone. This allows teams to monitor the new model's performance in real production conditions, detect issues early, and roll back quickly if problems emerge, all without exposing the entire user base to potential risks.
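
As a rough illustration of the routing idea (not any particular platform's API), a canary release can be as simple as hashing each user into a bucket and sending a small share of buckets to the new model. The model names and traffic share below are placeholders.

```python
import hashlib

CANARY_TRAFFIC_SHARE = 0.05  # send roughly 5% of users to the new model

def pick_model_version(user_id: str) -> str:
    """Deterministically assign each user to a bucket so they always see the same version."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "model-v2-canary" if bucket < CANARY_TRAFFIC_SHARE * 100 else "model-v1-stable"

# Simulate 10,000 users to confirm roughly 5% land on the canary.
routed = [pick_model_version(f"user-{i}") for i in range(10_000)]
print(routed.count("model-v2-canary"), "of 10,000 users hit the canary")
```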

AI Center of Excellence
AI Strategy

An AI Center of Excellence is a dedicated cross-functional team or organizational unit that centralizes AI expertise, establishes best practices, governs AI initiatives, and supports business units across the company in identifying, developing, and deploying AI solutions effectively.

AI Center of Gravity
AI Operations

An AI Center of Gravity is the organisational unit, team, or function that serves as the primary driving force for AI adoption and coordination across a company. It concentrates AI expertise, sets standards, manages shared resources, and ensures that AI initiatives align with business strategy rather than emerging in uncoordinated silos.

AI Certification
Business Applications

An AI certification is a formal credential that validates a person's knowledge and skills in artificial intelligence. Corporate AI certifications focus on practical business applications and responsible AI use, while technical certifications cover machine learning, data science, and AI engineering.

AI Champion
AI Operations

An AI Champion is a designated individual within an organisation who advocates for AI adoption, bridges the gap between technical teams and business users, and drives enthusiasm and practical understanding of AI across departments. AI Champions accelerate adoption by providing peer-level support, gathering feedback, and demonstrating AI value through hands-on examples.

AI Change Management
AI Operations

AI Change Management is the structured process of preparing, equipping, and supporting people across an organisation to adopt AI-driven tools and workflows. It addresses the human side of AI transformation, including communication, training, resistance management, and cultural shifts needed for successful AI implementation.

AI Coding Agent
Agentic AI

AI Coding Agent is an autonomous software development tool powered by artificial intelligence that can write, edit, debug, and refactor code based on natural language instructions, dramatically accelerating how businesses build and maintain software products and internal tools.

AI Competitive Advantage
AI Strategy

AI Competitive Advantage is the strategic use of artificial intelligence to create capabilities, efficiencies, or customer experiences that rivals cannot easily replicate, enabling an organization to outperform competitors in its market over the long term.

AI Compliance
AI Governance & Ethics

AI Compliance is the process of ensuring that an organisation's artificial intelligence systems meet all applicable legal requirements, regulatory standards, industry guidelines, and internal policies. It involves systematic assessment, documentation, monitoring, and reporting to demonstrate that AI systems operate within established rules and frameworks.

AI Compliance Monitoring
Business Applications

AI Compliance Monitoring is the use of artificial intelligence to automatically track, detect, and report regulatory compliance violations and risks across an organisation. It continuously analyses business activities, communications, transactions, and data against regulatory requirements, reducing the manual effort of compliance management while improving detection accuracy and speed.

AI Content Generation
Generative AI

AI Content Generation is the use of artificial intelligence to create text, images, audio, video, and other media for business purposes, enabling companies to produce marketing materials, documentation, social media posts, and other content at significantly greater speed and lower cost than traditional methods.

AI Continuous Improvement
AI Operations

AI Continuous Improvement is the ongoing, systematic process of monitoring, evaluating, and enhancing AI system performance after deployment. It applies the principles of continuous improvement methodologies like Kaizen and Six Sigma to AI operations, ensuring that AI systems become more accurate, efficient, and valuable over time rather than degrading.

AI Copilot
Generative AI

AI Copilot is an AI assistant embedded directly into software tools and workflows that works alongside employees to boost productivity by suggesting actions, drafting content, automating repetitive tasks, and surfacing relevant information in real time.

AI Cost Management
AI Operations

AI Cost Management is the practice of tracking, analysing, and optimising the total cost of operating AI systems across their full lifecycle. It covers infrastructure expenses, data costs, talent costs, licensing fees, and ongoing maintenance, ensuring that AI investments deliver positive returns and that spending remains aligned with business value.

AI Cost Optimization
AI Infrastructure

AI Cost Optimization is the systematic practice of reducing the compute, storage, and operational expenses associated with developing, training, deploying, and running AI systems while maintaining acceptable performance and quality levels, ensuring that AI investments deliver maximum business value per dollar spent.

AI Course
Business Applications

An AI course is a structured educational programme that teaches participants how to understand, use, or implement artificial intelligence tools and concepts. Corporate AI courses focus on practical business applications rather than academic theory, and typically range from 1-day workshops to multi-week programmes.

AI Customer Service
Business Applications

AI Customer Service is the use of artificial intelligence technologies, including chatbots, virtual agents, and natural language processing, to automate and enhance customer support interactions. It enables businesses to provide faster responses, handle higher volumes, and deliver consistent service quality around the clock.

AI Data Ops
AI Operations

AI Data Ops is the set of operational practices, processes, and tools used to manage data throughout its lifecycle in AI production environments. It covers data ingestion, quality monitoring, pipeline automation, versioning, and governance to ensure that AI systems consistently receive the accurate, timely, and well-structured data they need to perform reliably.

AI Demand Forecasting
Business Applications

AI Demand Forecasting is the use of machine learning algorithms to predict future customer demand for products or services by analysing historical sales data, market trends, seasonal patterns, and external factors. It enables businesses to optimise inventory, production planning, and resource allocation.

AI Democratization
AI Strategy

AI Democratization is the organizational and technological movement to make artificial intelligence tools, knowledge, and capabilities accessible to a broad range of employees across the company, not just data scientists and engineers, enabling wider participation in AI-driven innovation and decision-making.

AI Development Environment
AI Infrastructure

AI Development Environment is an integrated set of tools, platforms, and infrastructure that provides data scientists and AI engineers with everything they need to build, experiment with, train, test, and deploy AI models, streamlining the development workflow from initial research through production deployment.

AI Documentation Standards
AI Operations

AI Documentation Standards is the set of practices and templates that define how AI systems, models, datasets, decisions, and processes are recorded and maintained throughout their lifecycle. Good documentation ensures that AI systems are transparent, reproducible, auditable, and manageable by anyone in the organisation, not just the original developers.

AI Ecosystem
AI Strategy

AI Ecosystem is the interconnected network of technology vendors, platform providers, consulting partners, data sources, research institutions, and internal teams that collectively support an organization's ability to develop, deploy, and scale artificial intelligence initiatives.

AI Enablement
AI Operations

AI Enablement is the set of organisational capabilities, processes, infrastructure, and cultural conditions that collectively support the successful adoption and sustained use of artificial intelligence across a business. It encompasses everything from data readiness and technology platforms to talent development and governance frameworks that allow AI initiatives to move from concept to production.

AI Endpoint
AI Infrastructure

AI Endpoint is a network-accessible interface, typically a URL, through which applications and services send data to a deployed AI model and receive predictions in response, serving as the connection point between your AI models and the software systems, applications, and users that consume their outputs.
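
To make that concrete, the hedged sketch below posts a JSON payload to a hypothetical endpoint URL and reads back a prediction. The URL, request fields, and response shape are illustrative assumptions, not any specific vendor's API, and a real call would need a valid API key.

```python
import json
from urllib import request

# Hypothetical endpoint and payload; real services publish their own schemas.
ENDPOINT_URL = "https://api.example.com/v1/models/churn-predictor:predict"
payload = {"customer_id": "C-1042",
           "features": {"tenure_months": 18, "monthly_spend": 79.0}}

req = request.Request(
    ENDPOINT_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <API_KEY>"},  # placeholder credential
    method="POST",
)
with request.urlopen(req) as resp:           # send the features to the deployed model
    prediction = json.loads(resp.read())     # e.g. {"churn_probability": 0.73}
print(prediction)
```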

AI Ethics
AI Governance & Ethics

AI Ethics is the branch of applied ethics that examines the moral principles and values guiding the design, development, and deployment of artificial intelligence systems. It addresses fairness, accountability, transparency, privacy, and the broader societal impact of AI to ensure these technologies benefit people without causing harm.

AI Evaluation (Evals)
AI Strategy

AI Evaluation, commonly called Evals, is the systematic process of testing and measuring AI system performance across quality, accuracy, safety, and reliability dimensions before and after deployment to ensure the system meets business requirements and user expectations.

AI Expense Management
Business Applications

AI Expense Management is the application of artificial intelligence to automate and improve how businesses process, categorise, audit, and analyse employee expenses and business spending. It uses optical character recognition, natural language processing, and machine learning to extract data from receipts, enforce policy compliance, detect anomalies, and provide spending insights with minimal manual effort.

AI Experimentation Culture
AI Strategy

AI Experimentation Culture is an organizational mindset and set of practices that actively encourages teams to form hypotheses, test AI solutions rapidly, learn from both successes and failures, and systematically apply those learnings to improve business outcomes and accelerate AI adoption.

AI Fairness
AI Governance & Ethics

AI Fairness is the practice of designing, developing, and deploying artificial intelligence systems that treat all individuals and groups equitably, without producing outcomes that systematically disadvantage people based on characteristics such as race, gender, age, or socioeconomic status.

AI Feedback Loop
AI Operations

An AI Feedback Loop is the continuous cycle where AI system outputs are evaluated by humans or automated processes, corrections are captured, and those corrections are used to improve the AI model over time. It is the mechanism that transforms AI from a static tool into a continuously improving system that gets smarter the more it is used.

AI Financial Planning
Business Applications

AI Financial Planning is the use of artificial intelligence and machine learning to automate and enhance financial analysis, budgeting, forecasting, and strategic financial decision-making. It enables businesses to process complex financial data faster, identify patterns humans might miss, and generate more accurate financial projections.

AI Forensics
AI Safety & Security

AI Forensics is the discipline of investigating AI system incidents, failures, and anomalies to determine their root causes, understand their impact, and gather evidence that supports remediation, accountability, and prevention of future occurrences.

AI Gateway
AI Infrastructure

An AI gateway is an infrastructure layer that sits between applications and AI models, managing routing, authentication, rate limiting, cost tracking, and failover to provide centralised control and visibility over all AI model interactions across an organisation.
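
A minimal sketch of the routing, rate-limiting, and failover idea follows, with invented provider names and a stubbed-out model call; real gateways layer on authentication, cost tracking, logging, and far more.

```python
import time
from collections import defaultdict

PROVIDERS = ["primary-llm", "backup-llm"]   # invented names: try primary first, then fall back
RATE_LIMIT_PER_MINUTE = 60
_request_log = defaultdict(list)            # team name -> timestamps of recent requests

def call_provider(provider: str, prompt: str) -> str:
    """Stand-in for a real model call; imagine an HTTP request to the provider here."""
    return f"[{provider}] response to: {prompt}"

def gateway(team: str, prompt: str) -> str:
    # Rate limiting: refuse requests once a team exceeds its per-minute budget.
    now = time.time()
    _request_log[team] = [t for t in _request_log[team] if now - t < 60]
    if len(_request_log[team]) >= RATE_LIMIT_PER_MINUTE:
        raise RuntimeError(f"Rate limit exceeded for team '{team}'")
    _request_log[team].append(now)

    # Failover: try each provider in order until one succeeds.
    for provider in PROVIDERS:
        try:
            return call_provider(provider, prompt)
        except Exception:
            continue
    raise RuntimeError("All providers failed")

print(gateway("marketing", "Summarise last quarter's sales"))
```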

AI Governance
AI Governance & Ethics

AI Governance is the set of policies, frameworks, and organisational structures that guide how artificial intelligence is developed, deployed, and monitored within an organisation. It ensures AI systems operate responsibly, comply with regulations, and align with business values and societal expectations.

AI Governance Framework
AI Strategy

An AI Governance Framework is a structured set of policies, processes, roles, and accountability mechanisms that an organization establishes to ensure its artificial intelligence systems are developed, deployed, and managed responsibly, ethically, and in compliance with applicable regulations.

AI Governance Platform
AI Safety & Security

An AI Governance Platform is a software solution that helps organisations manage AI risk, ensure regulatory compliance, and maintain oversight of all AI systems across the enterprise. These platforms centralise model inventories, automate compliance workflows, and provide dashboards for tracking fairness, transparency, and accountability at scale.

AI Guardrails
AI Safety & Security

AI Guardrails are the constraints, rules, and safety mechanisms built into AI systems to prevent harmful, inappropriate, or unintended outputs and actions. They define the operational boundaries within which an AI system is permitted to function, protecting users, organisations, and the public from AI-related risks.

AI Impact Assessment
AI Governance & Ethics

AI Impact Assessment is a structured evaluation process conducted before deploying an AI system to identify, analyse, and mitigate potential risks and effects on individuals, communities, and the organisation, ensuring that benefits are maximised while harms are minimised.

AI Incident Management
AI Operations

AI Incident Management is the structured process of detecting, responding to, resolving, and learning from failures or unexpected behaviours in production AI systems. It adapts traditional IT incident management frameworks to address the unique characteristics of AI, including model drift, data pipeline failures, biased outputs, and cascading errors that can affect business operations and customer trust.

AI Incident Reporting
AI Governance & Ethics

AI Incident Reporting is a systematic process for identifying, documenting, analysing, and communicating failures, near-misses, and unexpected behaviours of AI systems, enabling organisations to learn from problems, prevent recurrence, and maintain accountability to stakeholders and regulators.

AI Incident Response
AI Safety & Security

AI Incident Response is a structured organisational process for detecting, evaluating, containing, and recovering from failures, breaches, or harmful behaviours in AI systems. It extends traditional IT incident response to address the unique challenges posed by AI-specific risks.

AI Innovation Lab
AI Strategy

AI Innovation Lab is a dedicated team, facility, or organizational unit established to explore, experiment with, and prototype artificial intelligence solutions in a controlled environment before scaling successful ideas across the broader business.

AI Kill Switch
AI Safety & Security

An AI Kill Switch is a mechanism designed to immediately shut down, override, or disable an AI system when it behaves unexpectedly, causes harm, or operates outside its intended parameters. It ensures humans retain ultimate control over AI systems in critical situations.

AI Knowledge Base
Business Applications

AI Knowledge Base is an intelligent information management system that uses artificial intelligence to automatically organise, update, and retrieve organisational knowledge. Unlike static wikis and document repositories, AI knowledge bases learn from usage patterns, surface relevant information proactively, and keep content current, serving both internal teams and external customers.

AI Knowledge Transfer
AI Operations

AI Knowledge Transfer is the structured process of ensuring that critical knowledge about AI systems, including how they work, why design decisions were made, and how to maintain them, is effectively shared when team members change roles, leave the organisation, or when new staff join. It prevents the loss of institutional AI knowledge that can render systems unmaintainable and business-critical capabilities fragile.

AI Learning Path
AI Strategy

An AI learning path is a structured sequence of courses, workshops, and resources designed to progressively build AI skills from beginner to advanced. For companies, an AI learning path maps employee roles to specific training milestones over weeks or months.

AI Liability
AI Governance & Ethics

AI Liability is the legal framework and principles determining who is responsible when an artificial intelligence system causes harm, financial loss, or damage. It addresses questions of fault, accountability, and compensation across the chain of AI development, deployment, and operation.

AI Lighthouse Project
AI Strategy

An AI Lighthouse Project is a strategically selected, high-visibility AI initiative designed to demonstrate tangible business value, build organizational confidence in AI capabilities, and create a replicable blueprint for scaling AI adoption across the rest of the organization.

AI Literacy
AI Operations

AI Literacy is the ability to understand, evaluate, and effectively interact with artificial intelligence systems. It encompasses knowing what AI can and cannot do, how AI-driven decisions are made, how to interpret AI outputs critically, and how to identify appropriate use cases for AI within a business context.

AI Load Balancing
AI Infrastructure

AI Load Balancing is the process of distributing incoming AI inference requests across multiple servers or model instances to prevent any single server from becoming overwhelmed, ensuring consistent performance, high availability, and efficient use of computing resources.
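
The simplest balancing strategy is round-robin, sketched below with placeholder server names; production load balancers also account for server health, queue depth, and hardware capacity.

```python
import itertools

MODEL_SERVERS = ["inference-1", "inference-2", "inference-3"]  # placeholder instance pool
_rotation = itertools.cycle(MODEL_SERVERS)

def route_next_request() -> str:
    """Round-robin: each new inference request goes to the next server in the pool."""
    return next(_rotation)

for i in range(6):
    print(f"request {i} -> {route_next_request()}")
```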

AI Maturity Model
AI Strategy

An AI Maturity Model is a framework that assesses an organization's current level of AI capability across dimensions like data readiness, technology infrastructure, talent, and governance, helping leaders understand where they stand and what steps are needed to advance.

AI Meeting Assistant
Business Applications

An AI Meeting Assistant is an artificial intelligence tool that joins virtual or in-person meetings to automatically transcribe conversations, generate summaries, extract action items, and organise key decisions. Popular examples include Otter.ai, Fireflies, and Granola; these tools help teams capture meeting outcomes without manual note-taking.

AI Microservices
AI Infrastructure

AI microservices is an architectural approach that breaks AI functionality into small, independent, and separately deployable services, each handling a specific AI task such as text analysis, image recognition, or recommendation generation, allowing teams to develop, scale, and update each capability independently.

AI Model Lifecycle Management
AI Operations

AI Model Lifecycle Management is the end-to-end practice of governing AI models from initial development through deployment, monitoring, updating, and eventual retirement. It ensures that AI models remain accurate, compliant, and aligned with business needs throughout their operational life, not just at the point of initial deployment.

AI Native Application
AI Strategy

AI Native Application is software designed from the ground up with artificial intelligence as its core architecture, where AI capabilities drive the primary user experience and value proposition rather than being added as a secondary feature to an existing legacy application.

AI Observability
AI Infrastructure

AI observability is the practice of continuously monitoring and understanding the behaviour, performance, and data quality of AI systems in production, going beyond basic uptime metrics to detect model drift, data anomalies, prediction quality degradation, and fairness issues before they impact business outcomes.

AI Operating Model
AI Strategy

An AI Operating Model is the organizational design that defines how a company structures its teams, processes, governance, and technology infrastructure to develop, deploy, and continuously manage AI capabilities at scale across the business, ensuring alignment between AI initiatives and strategic objectives.

AI Ops Team Structure
AI Operations

AI Ops Team Structure is the organisational design that defines how roles, responsibilities, and reporting lines are arranged to manage AI systems effectively in day-to-day business operations. It encompasses the mix of technical and business-side talent, coordination models, and governance mechanisms needed to keep AI initiatives running smoothly and delivering value.

AI Performance Benchmarking
AI Operations

AI Performance Benchmarking is the practice of measuring and comparing how well AI systems perform against defined standards, historical baselines, industry averages, or competing solutions. It provides objective data on whether AI systems are delivering the expected business value and identifies areas where performance can be improved.

AI Pilot
AI Strategy

An AI Pilot is a controlled, limited deployment of an AI solution in a real business environment with actual users, designed to validate operational viability, measure business impact, and identify issues before committing to a full-scale rollout across the organization.

AI Pipeline Orchestration
AI Infrastructure

AI pipeline orchestration is the automated coordination and management of end-to-end machine learning workflows, from data ingestion and feature engineering through model training, evaluation, and deployment, ensuring each step executes reliably, in the correct order, and with proper error handling.

AI Platform
AI Infrastructure

An AI platform is an integrated suite of tools and services that provides everything needed to build, train, deploy, and manage artificial intelligence models in one environment, enabling businesses to develop AI solutions more efficiently without assembling disparate tools from multiple vendors.

AI Policy
AI Governance & Ethics

AI Policy is the formal set of organisational rules, guidelines, and procedures that govern how artificial intelligence is researched, developed, procured, deployed, and monitored within an organisation. It provides clear boundaries and expectations for AI use and serves as the operational backbone of AI governance.

AI Portfolio Management
AI Strategy

AI Portfolio Management is the strategic practice of managing a collection of AI initiatives as an integrated portfolio, balancing investments across different risk levels, business functions, and time horizons to maximize overall business value while managing resource constraints and organizational capacity for change.

AI Pricing Optimization
Business Applications

AI Pricing Optimization is the use of machine learning algorithms to analyse market conditions, competitor pricing, customer behaviour, and demand patterns to determine optimal prices for products or services in real time. It enables businesses to maximise revenue, improve margins, and respond dynamically to market changes.

AI Procurement
AI Strategy

AI Procurement is the structured process of evaluating, selecting, negotiating, and acquiring artificial intelligence solutions, services, and platforms from external vendors, ensuring alignment with organizational strategy, technical requirements, and budget constraints.

AI Proof of Value
AI Strategy

AI Proof of Value is a structured evaluation that goes beyond technical feasibility to demonstrate the measurable business impact of an AI initiative, quantifying financial returns, operational improvements, and strategic benefits to justify continued investment and broader organizational deployment.

AI Quality Assurance
Business Applications

AI Quality Assurance is the application of artificial intelligence and machine learning to detect defects, monitor quality standards, and ensure product and service consistency. It uses computer vision, sensor data analysis, and predictive models to identify quality issues faster and more accurately than traditional manual inspection methods.

AI ROI
AI Strategy

AI ROI is the measurement of the financial and strategic returns generated by artificial intelligence investments relative to their costs, encompassing direct savings, revenue gains, productivity improvements, and broader business value that AI initiatives deliver over time.

AI Readiness Assessment
AI Strategy

An AI Readiness Assessment is a systematic evaluation of an organization's preparedness to adopt artificial intelligence, examining data quality, technology infrastructure, talent capabilities, organizational culture, and governance frameworks to identify gaps and create an actionable plan.

AI Red Teaming
AI Safety & Security

AI Red Teaming is the practice of systematically testing AI systems by simulating attacks, misuse scenarios, and adversarial inputs to uncover vulnerabilities, biases, and failure modes before they cause harm in production environments. It draws on cybersecurity traditions to stress-test AI models and their surrounding infrastructure.

AI Regulation
AI Governance & Ethics

AI Regulation refers to the laws, rules, standards, and government policies that govern the development, deployment, and use of artificial intelligence systems. It encompasses mandatory legal requirements, voluntary guidelines, industry standards, and regulatory frameworks designed to manage AI risks while enabling innovation and economic benefit.

AI Retraining
AI Operations

AI Retraining is the process of updating an AI model with new data so that it continues to perform accurately as real-world conditions change over time. It addresses the reality that AI models degrade in performance after deployment because the patterns they learned from historical data may no longer reflect current conditions, customer behaviours, or business environments.

AI Risk Management
AI Governance & Ethics

AI Risk Management is the systematic process of identifying, assessing, mitigating, and monitoring risks associated with artificial intelligence systems throughout their lifecycle. It covers technical risks like model failure and bias, operational risks like data breaches, strategic risks like competitive disruption, and compliance risks from evolving regulations.

AI Risk Register
AI Safety & Security

AI Risk Register is a structured, living document that catalogues all identified risks associated with an organisation's AI systems, including their likelihood, potential impact, current mitigation measures, risk owners, and status, serving as the central tool for managing AI risk across the enterprise.

AI Risk Scoring
Business Applications

AI Risk Scoring is an automated system that uses machine learning to assess and assign numerical risk levels to entities such as customers, transactions, loans, suppliers, or projects. It analyses multiple data points simultaneously to produce consistent, objective risk assessments that support faster and more accurate business decisions.

AI Roadmap
AI Strategy

An AI Roadmap is a phased, time-bound plan that outlines the specific AI initiatives an organization will pursue, the sequence in which they will be implemented, the resources required, and the milestones that mark progress toward the organization's AI vision over a defined planning horizon.

AI Rollback Plan
AI Operations

AI Rollback Plan is a predefined set of procedures for reverting an AI system to a previous known-good state when a new deployment causes problems in production. It ensures that organisations can quickly undo problematic AI updates, restore stable operations, and minimise the business impact of failed deployments or unexpected model behaviour.

AI Runbook
AI Operations

AI Runbook is a documented set of standardised procedures for operating, monitoring, troubleshooting, and maintaining AI systems in production. It serves as the operational manual that enables teams to manage AI systems consistently, respond to incidents effectively, and maintain system health without depending on the specialised knowledge of any single individual.

AI Safety Testing
AI Safety & Security

AI Safety Testing is the systematic evaluation of AI systems to identify dangerous, unintended, or harmful behaviours before and after deployment. It involves structured test scenarios, stress testing, and adversarial probing to ensure AI systems operate within acceptable safety boundaries across a wide range of conditions.

AI Sales Forecasting
Business Applications

AI Sales Forecasting is the use of machine learning models to predict future sales revenue by analysing historical sales data, pipeline activity, market signals, and external factors. It produces more accurate and granular forecasts than traditional methods, enabling business leaders to make more confident decisions about resource allocation, hiring, budgeting, and growth strategy.

AI Sandbox
AI Governance & Ethics

An AI Sandbox is a controlled regulatory environment where organisations can test and experiment with AI systems under the supervision of a regulatory body, allowing innovation to proceed while managing risks and informing the development of appropriate regulations.

AI Scaling
AI Operations

AI Scaling is the process of expanding AI capabilities from initial pilot projects or single-team deployments to enterprise-wide adoption across multiple functions, markets, and use cases. It addresses the technical, organisational, and cultural challenges that arise when moving AI from proof-of-concept success to broad operational impact.

AI Security Audit
AI Safety & Security

AI Security Audit is a comprehensive, structured assessment of an AI system's security posture, examining its architecture, data handling, access controls, model integrity, deployment environment, and operational processes to identify vulnerabilities and verify compliance with security standards and regulations.

AI Service Level Agreement
AI Operations

An AI Service Level Agreement is a formal contract or internal commitment that defines measurable performance guarantees for an AI system, including availability, response time, accuracy, fairness, and support commitments. It adapts traditional IT SLA concepts to the unique characteristics of AI systems, where output quality and model behaviour matter as much as uptime.

AI Spend Tracking
AI Operations

AI Spend Tracking is the practice of monitoring, analysing, and optimising the costs associated with using AI APIs, cloud-hosted models, and related infrastructure across an organisation. It provides visibility into which teams, projects, and models are consuming resources so that businesses can control cloud AI expenses and maximise return on investment.

AI Strategy
AI Strategy

AI Strategy is a comprehensive plan that defines how an organization will adopt and leverage artificial intelligence to achieve specific business objectives, including which use cases to prioritize, what resources to invest, and how to measure success over time.

AI Supply Chain Security
AI Safety & Security

AI Supply Chain Security is the practice of ensuring that all third-party components used in AI systems, including pre-trained models, training datasets, software libraries, and cloud services, are trustworthy, uncompromised, and free from vulnerabilities that could affect the safety or performance of the final AI product.

AI Sustainability
AI Governance & Ethics

AI Sustainability is the practice of considering and minimising the environmental impact of artificial intelligence systems throughout their lifecycle, including the energy consumed during model training and inference, the carbon footprint of supporting infrastructure, and the broader ecological consequences of AI deployment at scale.

AI Talent Strategy
AI Strategy

AI Talent Strategy is a comprehensive plan for identifying, recruiting, developing, and retaining the human skills and expertise required to execute an organization's AI initiatives, encompassing technical roles like data scientists and ML engineers as well as AI-literate business professionals across the company.

AI Technical Debt
AI Operations

AI Technical Debt is the accumulated cost of shortcuts, workarounds, and deferred maintenance in AI systems that make future development, maintenance, and improvement more difficult and expensive. It arises from quick-fix decisions during AI development, inadequate documentation, tightly coupled components, and neglected infrastructure, and it compounds over time if not actively managed.

AI Testing Strategy
AI Operations

AI Testing Strategy is the systematic plan for validating that AI systems perform correctly, reliably, and fairly before and after they are deployed into production. It goes beyond traditional software testing to address the unique challenges of AI, including data-dependent behaviour, probabilistic outputs, model drift, and the need to test for bias and edge cases that can cause real-world harm.

AI Threat Modeling
AI Safety & Security

AI Threat Modeling is a systematic process for identifying, analysing, and prioritising security threats specific to AI systems throughout their lifecycle. It extends traditional threat modeling practices to address AI-unique vulnerabilities including data poisoning, model manipulation, adversarial attacks, and the novel risks introduced by machine learning systems.

AI Total Cost of Ownership
AI Strategy

AI Total Cost of Ownership is the comprehensive financial analysis that accounts for all direct and indirect costs of implementing, operating, and maintaining an AI system over its full lifecycle, including infrastructure, talent, data preparation, training, monitoring, and eventual decommissioning.

AI Training Data Management
AI Operations

AI Training Data Management is the set of processes and practices for collecting, curating, labelling, storing, and maintaining the data used to train and improve AI models. It ensures that AI systems learn from accurate, representative, and ethically sourced data, directly determining the quality and reliability of AI outputs.

AI Transformation Office
AI Strategy

AI Transformation Office is a dedicated organizational unit responsible for leading, coordinating, and accelerating the enterprise-wide adoption of artificial intelligence by aligning AI initiatives with business strategy, managing resources, and driving the cultural and operational changes required for successful AI integration.

AI Transparency
AI Governance & Ethics

AI Transparency is the principle and practice of openly communicating how artificial intelligence systems work, what data they use, how decisions are made, and what limitations they have. It encompasses both technical transparency about model behaviour and organisational transparency about AI policies, practices, and impacts.

AI Trustworthiness
AI Governance & Ethics

AI Trustworthiness is the degree to which an artificial intelligence system is reliable, fair, secure, transparent, and accountable across its entire lifecycle. A trustworthy AI system consistently performs as expected, treats all users equitably, protects data, and provides clear explanations for its outputs and decisions.

AI Upskilling
Business Applications

AI upskilling is the process of training employees to use artificial intelligence tools and techniques in their existing roles. Unlike reskilling (learning entirely new skills for a different role), upskilling enhances current capabilities with AI-powered methods and workflows.

AI Use Case
AI Strategy

An AI Use Case is a specific, well-defined business scenario where artificial intelligence can be applied to solve a problem or create value, describing the target process, the AI technique involved, the expected outcomes, and the measurable business impact it aims to deliver.

AI User Acceptance Testing
AI Operations

AI User Acceptance Testing is the process of validating an AI system with real end users in realistic conditions before deploying it to the full organisation or customer base. It verifies that the AI meets business requirements, produces acceptable outputs, integrates properly with workflows, and delivers a user experience that supports adoption.

AI Value Chain
AI Strategy

AI Value Chain is the complete sequence of interconnected activities through which artificial intelligence creates business value, from data collection and model development through deployment and continuous optimization, with each stage building on the previous one to deliver measurable outcomes.

AI Vendor Management
AI Operations

AI Vendor Management is the practice of selecting, contracting with, monitoring, and governing relationships with external companies that provide AI technologies, platforms, services, or expertise. It ensures that vendor relationships deliver value, that risks are managed, and that your organisation maintains appropriate control and understanding of AI systems that depend on third-party providers.

AI Vendor Selection
AI Strategy

AI Vendor Selection is the systematic process of evaluating, comparing, and choosing AI technology providers and solution partners based on criteria such as technical capabilities, cost, scalability, support quality, and alignment with your organization's specific business requirements and strategic goals.

AI Watermarking
AI Safety & Security

AI Watermarking is the practice of embedding imperceptible or subtle signals into AI-generated content — including text, images, audio, and video — that allow the content to be identified as machine-generated. It serves as a provenance mechanism to promote transparency and combat misinformation.

AI Whistleblowing
AI Governance & Ethics

AI Whistleblowing is the practice of establishing formal reporting mechanisms within organisations that enable employees, contractors, and stakeholders to raise concerns about AI ethics violations, safety risks, biased systems, or non-compliant AI practices without fear of retaliation.

AI Workflow Integration
AI Operations

AI Workflow Integration is the process of embedding artificial intelligence capabilities directly into existing business processes, tools, and systems so that AI becomes a natural part of how work gets done rather than a separate, standalone activity. It focuses on making AI accessible within the tools employees already use, reducing friction and maximising adoption.

AI-Assisted Decision Making
Business Applications

AI-Assisted Decision Making is the practice of using artificial intelligence to augment human decision-making by providing data-driven insights, predictions, and recommendations. It combines the analytical power of AI with human judgement, experience, and contextual understanding to produce better business outcomes than either humans or AI could achieve alone.

AI-First Strategy
AI Strategy

An AI-First Strategy is an organizational approach where artificial intelligence is treated as a primary driver of business decisions, product development, and operational processes rather than as a supplementary technology, fundamentally reshaping how the company creates value, serves customers, and competes in the market.

AI-Powered Analytics Dashboard
Business Applications

AI-Powered Analytics Dashboard is an interactive business intelligence interface that uses artificial intelligence to automatically surface insights, detect anomalies, generate narratives, and provide recommendations from business data. It goes beyond static charts and manual reporting by proactively highlighting what matters most and enabling users to explore data through natural language queries.

AI-Powered CRM
Business Applications

AI-Powered CRM is a customer relationship management system enhanced with artificial intelligence capabilities such as lead scoring, sales forecasting, sentiment analysis, and automated customer interactions. It helps businesses predict customer behaviour, personalise engagement, and improve sales and service outcomes by leveraging data-driven insights.

AI-Powered Chatbot
Business Applications

AI-Powered Chatbot is a conversational AI application that uses natural language processing and machine learning to interact with customers, employees, or other users through text or voice. Unlike rule-based chatbots that follow scripted responses, AI-powered chatbots understand intent, context, and nuance, enabling them to handle complex conversations, answer varied questions, and complete tasks autonomously.

AI-Powered Code Review
Business Applications

AI-Powered Code Review is an automated software analysis process that uses artificial intelligence to examine code for bugs, security vulnerabilities, performance issues, and style inconsistencies. It provides developers with actionable improvement suggestions in real time, reducing manual review effort and accelerating software delivery cycles.

AI-Powered Hiring
Business Applications

AI-Powered Hiring is the application of artificial intelligence to streamline and improve recruitment processes, including candidate sourcing, resume screening, skills assessment, interview scheduling, and hiring decision support. It helps businesses find qualified candidates faster while reducing bias and administrative burden.

AI-Powered Marketing
Business Applications

AI-Powered Marketing is the use of artificial intelligence to analyse customer data, automate campaign execution, and optimise marketing strategies in real time. It enables businesses to deliver personalised content, predict customer behaviour, and allocate budgets more effectively across channels.

AI-Powered Search
Business Applications

AI-Powered Search is an enterprise search technology enhanced by artificial intelligence that delivers more relevant, contextual, and personalised results compared to traditional keyword-based search. It uses natural language processing, semantic understanding, and machine learning to help employees and customers find the information they need faster and more accurately.

API
AI Infrastructure

An API, or Application Programming Interface, is a set of rules and protocols that allows different software applications to communicate with each other, enabling businesses to integrate AI services, connect systems, and build automated workflows without needing to build every capability from scratch.

Abstractive Summarization
Natural Language Processing

Abstractive Summarization is an advanced NLP technique that generates new, concise summary text by understanding and rephrasing the key points of a source document, as opposed to extractive summarization which simply selects and combines existing sentences from the original text.

Accent Adaptation
Speech & Audio AI

Accent Adaptation is the AI capability of adjusting speech recognition and synthesis systems to accurately handle the diverse accents and dialects spoken by different populations. It enables voice-enabled technology to work reliably for users regardless of their regional accent, native language influence, or speaking style.

Action Recognition
Computer Vision

Action Recognition is a computer vision technique that identifies and classifies human activities from video footage, such as walking, running, lifting, or operating equipment. It enables applications including workplace safety monitoring, customer behaviour analysis, security surveillance, and process compliance verification.

Active Learning
Machine Learning

Active Learning is a machine learning strategy where the model intelligently selects the most informative unlabeled examples for human experts to label, maximizing model improvement per labeled example and dramatically reducing the total amount of labeled data needed to train an accurate model.
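
A hedged sketch of the most common selection rule, uncertainty sampling, is shown below using scikit-learn on synthetic data. It assumes scikit-learn is installed and that a human reviewer would supply labels for the selected examples.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for a large pool of mostly unlabeled examples.
X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
labeled_idx = np.arange(50)            # pretend only 50 examples are labeled so far
pool_idx = np.arange(50, len(X))       # the rest are waiting for labels

# Train on the small labeled set, then measure uncertainty across the pool.
model = LogisticRegression(max_iter=1_000).fit(X[labeled_idx], y[labeled_idx])
probs = model.predict_proba(X[pool_idx])
uncertainty = 1 - probs.max(axis=1)    # least-confident predictions are most informative

# Pick the 10 examples the model is least sure about and send them to a human expert.
to_label = pool_idx[np.argsort(uncertainty)[-10:]]
print("Ask a human expert to label examples:", to_label)
```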

Actuator
Robotics & Automation

An Actuator is a device that converts electrical, hydraulic, or pneumatic energy into physical movement in a robotic system. Actuators are the muscles of a robot, driving every joint rotation, linear extension, and gripper action that enables the machine to interact with the physical world.

Adversarial Attack
AI Safety & Security

An Adversarial Attack is a technique where carefully crafted inputs are designed to deceive or manipulate AI models into producing incorrect, unintended, or harmful outputs. These inputs often appear normal to humans but exploit specific vulnerabilities in how AI models process and interpret data.

Adversarial Robustness
AI Safety & Security

Adversarial Robustness is the ability of an AI system to maintain correct and reliable performance when subjected to intentionally crafted inputs designed to deceive or manipulate it. It measures how well a model resists adversarial attacks without degrading in accuracy or safety.

Agent Benchmarks
Agentic AI

Agent Benchmarks are standardized tests and evaluation frameworks designed to measure AI agent capabilities across tasks such as reasoning, tool use, planning, and autonomous task completion, providing objective comparisons between different agent systems.

Agent Composition
Agentic AI

Agent Composition is the practice of building complex AI agent capabilities by combining simpler, specialized agent components together, much like assembling building blocks, so that each component handles a specific function and the composed system delivers sophisticated end-to-end behavior.

Agent Evaluation
Agentic AI

Agent Evaluation is the systematic process of testing, measuring, and benchmarking the performance of AI agents across dimensions such as task completion accuracy, reasoning quality, tool usage effectiveness, safety compliance, and end-to-end reliability in real-world scenarios.

Agent Framework
Agentic AI

An Agent Framework is a software library or platform that provides pre-built components, abstractions, and tooling for developers to create, configure, and deploy AI agents capable of reasoning, using tools, and completing multi-step tasks autonomously.

Agent Governance
Agentic AI

Agent Governance is the comprehensive framework of policies, controls, oversight mechanisms, and accountability structures that organizations put in place to manage the deployment, behavior, and impact of AI agents across the business.

Agent Grounding
Agentic AI

Agent Grounding is the practice of connecting AI agent outputs to verified, authoritative external data sources so that the agent produces responses based on real-world facts rather than relying solely on its training data, which may be outdated or incomplete.

Agent Guardrails
Agentic AI

Agent Guardrails are the safety constraints, rules, and boundaries specifically designed to control autonomous AI agent behavior, preventing agents from taking harmful, unauthorized, or unintended actions while allowing them to operate effectively within defined limits.

Agent Handoff
Agentic AI

Agent Handoff is the process of transferring an ongoing task, including its full context and conversation history, from one AI agent to another AI agent or to a human operator, ensuring continuity and avoiding the need for the user to repeat information.

Agent Loop
Agentic AI

Agent Loop is the continuous iterative cycle of perception, reasoning, and action that an AI agent follows to accomplish tasks, where the agent observes its environment, decides what to do, takes action, observes the result, and repeats until the objective is achieved.
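
A stripped-down sketch of that cycle appears below, with dummy observe, decide, and act functions standing in for the language model and tools a real agent would use.

```python
def observe(state: dict) -> dict:
    """Stand-in for perceiving the environment (reading a queue, an inbox, an API)."""
    return {"remaining_tasks": state["remaining_tasks"]}

def decide(observation: dict) -> str:
    """Stand-in for the reasoning step, which a real agent delegates to an LLM."""
    return "work_on_task" if observation["remaining_tasks"] > 0 else "stop"

def act(action: str, state: dict) -> dict:
    """Stand-in for using a tool and updating the state of the world."""
    if action == "work_on_task":
        state["remaining_tasks"] -= 1
    return state

state = {"remaining_tasks": 3}
while True:                              # observe -> decide -> act -> repeat
    action = decide(observe(state))
    if action == "stop":
        break
    state = act(action, state)
    print("completed a task,", state["remaining_tasks"], "left")
```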

Agent Marketplace
Agentic AI

Agent Marketplace is a platform or ecosystem where businesses can discover, evaluate, purchase, and deploy pre-built AI agents created by third-party developers, similar to an app store but specifically for autonomous AI agents that perform business tasks.

Agent Memory
Agentic AI

Agent Memory refers to the mechanisms that enable AI agents to store, retrieve, and utilize information from past interactions and experiences, allowing them to maintain context over time, learn from previous outcomes, and deliver increasingly personalized and effective results.

Agent Observability
Agentic AI

Agent Observability is the practice of monitoring, tracing, and analyzing the internal behavior of AI agents in production, including their reasoning steps, tool usage, decision paths, and performance metrics, to enable debugging, optimization, and reliable operation.

Agent Orchestration
Agentic AI

Agent Orchestration is the coordination and management of multiple AI agents working together, including task assignment, sequencing, resource allocation, error handling, and ensuring agents collaborate effectively to achieve a unified business objective.

Agent Persona
Agentic AI

Agent Persona is the defined role, personality, behavioral style, and communication characteristics assigned to an AI agent, shaping how it interacts with users, what tone it uses, and what boundaries it follows during conversations and task execution.

Agent Routing
Agentic AI

Agent Routing is the process of analyzing an incoming task or request and directing it to the most appropriate AI agent within a multi-agent system, based on factors such as agent capabilities, specialization, current workload, and the nature of the task.
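
A toy sketch of capability-based routing follows, with an invented agent registry and a simple keyword match standing in for the classifier or language model a production router would use.

```python
# Invented registry mapping each agent to the topics it can handle.
AGENT_CAPABILITIES = {
    "billing-agent": {"invoice", "refund", "payment"},
    "support-agent": {"error", "bug", "crash", "login"},
    "sales-agent": {"pricing", "demo", "upgrade"},
}

def route(request_text: str) -> str:
    """Send the request to the agent whose capabilities best match its wording."""
    words = set(request_text.lower().split())
    scores = {agent: len(words & topics) for agent, topics in AGENT_CAPABILITIES.items()}
    best_agent = max(scores, key=scores.get)
    return best_agent if scores[best_agent] > 0 else "general-agent"

print(route("I need a refund for a duplicate invoice"))  # -> billing-agent
```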

Agent Sandbox
Agentic AI

An Agent Sandbox is an isolated, controlled environment where AI agents can be tested, evaluated, and experimented with safely, without the risk of affecting production systems, real data, real users, or incurring unintended consequences from agent actions.

Agent State Management
Agentic AI

Agent State Management is the practice of tracking, storing, and maintaining all relevant context and information about an AI agent's current situation, conversation history, and progress across multiple interactions, enabling the agent to provide coherent and continuous experiences.

Agent Trust
Agentic AI

Agent Trust is the set of mechanisms, frameworks, and practices used to establish, measure, and maintain confidence that an AI agent will behave reliably, safely, and in alignment with its intended purpose within a business environment.

Agent-to-Agent Protocol (A2A)
Agentic AI

Agent-to-Agent Protocol (A2A) is a standardized communication framework that enables different AI agents to exchange information, delegate tasks, and coordinate actions with each other, regardless of which vendor or platform built them.

Agentic Workflow
Agentic AI

An Agentic Workflow is a multi-step business process where AI agents autonomously plan, execute, and adapt a sequence of tasks to achieve a defined outcome, making decisions at each stage rather than following a fixed script.

Agricultural Robot
Robotics & Automation

Agricultural Robot is an AI-powered autonomous or semi-autonomous machine designed to perform farming tasks such as planting, weeding, harvesting, spraying, and crop monitoring. These robots help farmers increase yields, reduce labour dependency, and adopt more sustainable practices across diverse agricultural environments.

Algorithmic Accountability
AI Governance & Ethics

Algorithmic Accountability is the principle that organisations deploying AI and automated decision-making systems must be answerable for the outcomes those systems produce, including maintaining transparency about how decisions are made and accepting responsibility when those decisions cause harm.

Algorithmic Bias Audit
AI Governance & Ethics

An Algorithmic Bias Audit is a systematic, independent evaluation of an AI or automated decision-making system to identify, measure, and assess unfair discrimination in its outcomes, processes, or underlying data, providing actionable findings for remediation.

Anomaly Detection
Machine Learning

Anomaly Detection is a machine learning technique that identifies unusual patterns, outliers, or unexpected behaviors in data that deviate significantly from the norm, enabling businesses to detect fraud, equipment failures, security breaches, and other critical events in real time.
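
As a hedged illustration, the snippet below flags an unusual transaction amount with scikit-learn's IsolationForest; the figures are synthetic and the library is assumed to be installed.

```python
# Illustrative sketch: flagging outliers with scikit-learn's IsolationForest.
# The transaction amounts below are made up purely for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

amounts = np.array([[25.0], [30.0], [27.5], [31.0], [29.0], [950.0]])  # one obvious outlier

model = IsolationForest(contamination=0.2, random_state=0).fit(amounts)
labels = model.predict(amounts)   # 1 = normal, -1 = anomaly

for amount, label in zip(amounts.ravel(), labels):
    print(amount, "anomaly" if label == -1 else "normal")
```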

Artificial Intelligence
AI Strategy

Artificial Intelligence is the broad field of computer science focused on building systems capable of performing tasks that typically require human intelligence, such as understanding language, recognizing patterns, making decisions, and learning from experience to improve over time.

Aspect-Based Sentiment Analysis
Natural Language Processing

Aspect-Based Sentiment Analysis is an advanced NLP technique that identifies sentiment toward specific features, attributes, or aspects of a product or service within text, going beyond overall sentiment to reveal precisely what customers like or dislike about individual components of the experience.

Attention Mechanism
Machine Learning

An Attention Mechanism is a technique in neural networks that allows models to dynamically focus on the most relevant parts of an input when making predictions, dramatically improving performance on tasks like translation, text understanding, and image analysis by weighting important information more heavily.
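
For readers who want the underlying calculation, this is a minimal NumPy sketch of scaled dot-product attention, the core operation inside attention layers; the matrices are random toy data.

```python
# Scaled dot-product attention in NumPy (illustrative; random toy matrices).
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # how relevant each key is to each query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: attention weights sum to 1
    return weights @ V                                # weighted mix of the values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4)), rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
print(attention(Q, K, V).shape)  # (3, 4): one blended output per query
```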

Audio Captioning
Speech & Audio AI

Audio Captioning is an AI technology that automatically generates natural language descriptions of the sounds and events in an audio recording, going beyond speech transcription to describe non-speech sounds like music, environmental noise, and acoustic events. It enables accessibility, content indexing, and automated audio understanding at scale.

Audio Classification
Speech & Audio AI

Audio Classification is an AI technique that automatically categorises sounds and audio events into predefined classes, such as speech, music, environmental sounds, or specific noise types. It enables businesses to monitor, analyse, and respond to audio environments at scale across applications like security, quality control, and customer experience.

Audio Deepfake
Speech & Audio AI

Audio Deepfake is AI-generated synthetic audio that mimics a real person's voice with high fidelity, making it difficult to distinguish from authentic recordings. It poses significant risks including fraud, misinformation, and identity theft, while also driving innovation in detection technologies and voice authentication systems.

Audio Embedding
Speech & Audio AI

Audio Embedding is a numerical representation of an audio signal as a fixed-length vector of numbers that captures its essential characteristics. These compact mathematical representations enable AI systems to compare, search, classify, and cluster audio content efficiently without processing the raw audio waveform directly.

Audio Fingerprinting
Speech & Audio AI

Audio Fingerprinting is a technology that identifies audio content by extracting a compact, unique digital signature from its acoustic characteristics. Like a human fingerprint uniquely identifies a person, an audio fingerprint uniquely identifies a piece of audio, enabling applications such as music identification, broadcast monitoring, and content rights management.

Audio Segmentation
Speech & Audio AI

Audio Segmentation is the AI process of dividing a continuous audio stream into distinct, meaningful segments based on characteristics such as speaker identity, content type, acoustic properties, or temporal boundaries. It enables structured analysis of audio content by identifying where transitions occur between different speakers, topics, or audio types.

AutoML
Machine Learning

AutoML (Automated Machine Learning) is a set of tools and techniques that automate the process of building machine learning models, including data preprocessing, feature engineering, model selection, and hyperparameter tuning, making it possible for organizations without deep ML expertise to develop effective AI solutions.

Automated Decision-Making
AI Governance & Ethics

Automated Decision-Making is the use of artificial intelligence and algorithmic systems to make decisions that affect individuals or organisations with limited or no human intervention. These decisions can range from routine operational choices to high-stakes determinations about credit, employment, insurance, and access to services.

Automatic Speech Recognition (ASR)
Speech & Audio AI

Automatic Speech Recognition (ASR) is an AI technology that converts spoken language into written text, enabling applications like voice-controlled interfaces, transcription services, and call centre analytics. ASR systems use deep learning to interpret audio signals and produce accurate text output across diverse accents, languages, and environments.

Autonomous Agent
Agentic AI

An Autonomous Agent is an AI system that independently perceives its environment, makes decisions, and takes actions to achieve specified goals over extended periods with minimal or no human intervention, while adapting its behavior based on feedback and changing conditions.

Autonomous Navigation
Robotics & Automation

Autonomous Navigation is the AI-powered capability that enables robots, vehicles, and drones to plan and execute movement through an environment independently, without human control. It combines perception, path planning, and control algorithms to enable safe, efficient, and adaptive movement in both structured and unstructured environments.

Autonomous Vehicle
Robotics & Automation

An Autonomous Vehicle is a self-driving vehicle that uses artificial intelligence, sensors, and software to navigate and make driving decisions without human intervention. These vehicles range from partially assisted cars to fully driverless trucks and shuttles, with significant implications for logistics, transportation, and urban planning.

B

Backpropagation
Machine Learning

Backpropagation is the fundamental algorithm used to train neural networks by computing how much each weight in the network contributed to prediction errors, then adjusting those weights to reduce future errors, enabling the network to learn complex patterns from data through iterative improvement.

Batch Inference
AI Infrastructure

Batch Inference is the process of collecting multiple AI prediction requests and processing them together as a group rather than one at a time, enabling significantly higher throughput and lower per-prediction costs for workloads that do not require immediate real-time responses.

Batch Normalization
Machine Learning

Batch Normalization is a technique used during neural network training that normalizes the inputs to each layer by adjusting and scaling activations across a mini-batch of data, resulting in faster training, more stable learning, and the ability to use higher learning rates for quicker convergence.

Beneficial AI
AI Governance & Ethics

Beneficial AI is the principle and practice of developing artificial intelligence systems that are intentionally designed to maximise positive outcomes for individuals, communities, and society while actively minimising harm. It goes beyond risk mitigation to proactively direct AI capabilities toward solving meaningful problems and improving quality of life.

Bias-Variance Tradeoff
Machine Learning

The Bias-Variance Tradeoff is a fundamental concept in machine learning describing the balance between a model that is too simple to capture real patterns (high bias, underfitting) and one that is too complex and memorizes noise (high variance, overfitting), with the goal of finding the optimal middle ground.

Big Data
Data & Analytics

Big Data is a term describing datasets so large, fast-moving, or complex that traditional data processing tools cannot handle them effectively. It encompasses the technologies, practices, and strategies organisations use to collect, store, analyse, and extract value from massive volumes of structured and unstructured information.

Business Intelligence
Data & Analytics

Business Intelligence is the combination of technologies, practices, and strategies used to collect, integrate, analyse, and present business data in a way that supports better decision-making. It transforms raw data into meaningful dashboards, reports, and visualisations that give leaders a clear view of organisational performance.

C

Chain of Thought
Agentic AI

Chain of Thought is a reasoning technique where AI models break down complex problems into a sequence of intermediate logical steps before arriving at a final answer, improving accuracy and transparency in decision-making processes.
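
A minimal illustration of the idea, with hypothetical prompts and a hypothetical model reply rather than any specific vendor's API:

```python
# Illustrative prompts only; no specific model API is assumed.
direct_prompt = (
    "A warehouse ships 240 orders per day and volume will grow 15% next month. "
    "How many orders per day will it ship then?"
)

# Asking the model to show intermediate steps typically improves accuracy:
cot_prompt = direct_prompt + " Think step by step and show your reasoning before the final answer."

# A model following the chain-of-thought instruction might reply:
# "15% of 240 is 36. 240 + 36 = 276. Final answer: 276 orders per day."
print(cot_prompt)
```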

Chatbot
Natural Language Processing

A Chatbot is a software application that uses NLP and AI to simulate human conversation through text or voice, enabling businesses to automate customer interactions, provide instant support, answer frequently asked questions, and handle routine transactions around the clock.

Chunking
AI Infrastructure

Chunking is the process of splitting documents into optimally sized pieces for ingestion into vector databases and retrieval-augmented generation systems, directly affecting how accurately AI can find and use your organisation's information when answering questions or completing tasks.
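
One common strategy is fixed-size chunks with a small overlap, sketched below; the chunk size, overlap, and placeholder document are illustrative choices, not recommendations.

```python
# A minimal fixed-size chunker with overlap (one common strategy among many).
def chunk_text(text, chunk_size=500, overlap=50):
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap   # overlap preserves context across boundaries
    return chunks

document = "Your policy document text goes here... " * 100   # placeholder content
pieces = chunk_text(document)
print(len(pieces), "chunks ready for embedding and indexing")
```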

Citizen AI Developer
AI Strategy

A Citizen AI Developer is a non-technical business professional who builds AI-powered solutions, automations, and workflows using low-code or no-code AI platforms without requiring formal programming skills, extending an organization's AI capabilities beyond the dedicated data science and engineering teams.

Classification
Machine Learning

Classification is a supervised machine learning task where the model learns to assign input data to predefined categories or classes, such as spam versus legitimate email, fraudulent versus normal transactions, or positive versus negative customer sentiment.

Cloud Computing
AI Infrastructure

Cloud computing is the delivery of computing services including servers, storage, databases, networking, and AI tools over the internet, allowing businesses to access powerful technology on demand without owning physical hardware, paying only for what they use.

Clustering
Machine Learning

Clustering is an unsupervised machine learning technique that automatically groups similar data points together based on shared characteristics, enabling businesses to discover natural segments and patterns in their data without requiring pre-defined categories or labeled examples.
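
A minimal sketch with synthetic customer data and scikit-learn's k-means, assuming the library is installed:

```python
# Illustrative sketch: grouping customers by spend and visit frequency with k-means.
import numpy as np
from sklearn.cluster import KMeans

# [monthly_spend, visits_per_month] -- synthetic figures
customers = np.array([[50, 2], [55, 3], [60, 2], [400, 12], [420, 15], [390, 11]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: a low-spend and a high-spend segment
```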

Cobotic Workspace Design
Robotics & Automation

Cobotic Workspace Design is the discipline of creating safe, efficient shared work environments where humans and collaborative robots operate together. It encompasses physical layout, safety systems, workflow design, and ergonomic considerations that enable humans and robots to work side by side productively.

Code Generation AI
Generative AI

Code Generation AI is artificial intelligence that writes, completes, debugs, and translates programming code based on natural language instructions or code context, enabling faster software development and making programming more accessible to non-technical team members.

Cohort Analysis
Data & Analytics

Cohort Analysis is an analytical technique that groups users who share a common characteristic or experience during a defined time period and tracks their behaviour over subsequent periods. It reveals patterns in retention, engagement, and revenue that aggregate metrics obscure.

Collaborative Robot (Cobot)
Robotics & Automation

A Collaborative Robot, or Cobot, is a robot specifically designed to work safely alongside humans in a shared workspace. Unlike traditional industrial robots that operate behind safety cages, cobots use advanced sensors and force-limiting technology to detect and respond to human presence, enabling flexible automation in manufacturing, logistics, and service environments.

Compound AI System
AI Strategy

Compound AI System is an architecture that combines multiple AI components, such as language models, data retrievers, code executors, and external tools, that work together to accomplish tasks no single AI model could handle reliably on its own.


Computer Use (AI)
Agentic AI

Computer Use (AI) refers to AI agents that can directly control a computer — moving the mouse, clicking buttons, typing text, and navigating software interfaces — just like a human operator would, enabling them to automate tasks across any application without requiring custom integrations or APIs.

Computer Vision
Computer Vision

Computer Vision is a field of artificial intelligence that enables machines to interpret and understand visual information from the world, such as images and videos. It powers applications ranging from quality inspection in manufacturing to automated document processing, helping businesses extract actionable insights from visual data.

Computer-Aided Manufacturing (CAM)
Robotics & Automation

Computer-Aided Manufacturing (CAM) is the use of software and AI-driven systems to plan, manage, and control manufacturing processes, translating digital designs into precise machine instructions. CAM bridges the gap between product design and physical production, enabling automated machining, 3D printing, laser cutting, and robotic fabrication with high precision and efficiency.

Confusion Matrix
Machine Learning

A Confusion Matrix is a table that visualizes the performance of a classification model by displaying the counts of correct and incorrect predictions organized by actual and predicted categories, making it easy to identify exactly where and how the model makes mistakes.
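
A small illustration using scikit-learn, with invented fraud-detection labels:

```python
# Illustrative sketch: rows are actual classes, columns are predicted classes.
from sklearn.metrics import confusion_matrix

actual    = ["fraud", "normal", "normal", "fraud", "normal", "normal"]
predicted = ["fraud", "normal", "fraud",  "fraud", "normal", "normal"]

print(confusion_matrix(actual, predicted, labels=["fraud", "normal"]))
# [[2 0]   2 frauds caught, 0 frauds missed
#  [1 3]]  1 false alarm, 3 normal transactions correctly cleared
```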

Consent Management (AI)
AI Governance & Ethics

Consent Management (AI) is the set of processes, tools, and governance practices that organisations use to obtain, record, manage, and honour user permissions for AI-related data collection, processing, and automated decision-making. It ensures that individuals have meaningful control over how their data is used by AI systems.

Constitutional AI
AI Safety & Security

Constitutional AI is an alignment technique that trains AI models to follow a defined set of principles or rules, reducing the need for extensive human feedback by allowing the AI to self-critique and revise its outputs against these guiding principles.

Containerization
AI Infrastructure

Containerization is a technology that packages an application and all its dependencies into a standardised, isolated unit called a container, ensuring it runs consistently across any computing environment, from a developer laptop to cloud servers in Singapore or Jakarta.

Content Moderation AI
AI Safety & Security

Content Moderation AI is the use of automated systems powered by artificial intelligence to detect, classify, and filter harmful, inappropriate, or policy-violating content across digital platforms. It helps organisations manage user-generated content at scale while maintaining safety standards.

Context Window
Generative AI

A context window is the maximum amount of text that an AI model can process and consider at one time, measured in tokens. It determines how much information -- including your input, any reference documents, and the model's response -- can fit into a single interaction with the AI.

Contract Analytics
Business Applications

Contract Analytics is the use of artificial intelligence, primarily natural language processing, to automatically read, analyse, and extract key information from legal contracts and agreements. It identifies critical terms such as obligations, deadlines, pricing, renewal clauses, and risk factors across large volumes of contracts, enabling faster review, better compliance, and more informed business decisions.

Conversational AI
Natural Language Processing

Conversational AI is an advanced form of artificial intelligence that enables machines to engage in natural, human-like dialogue across text and voice channels, combining NLP, machine learning, and dialogue management to understand context, maintain multi-turn conversations, and deliver personalized interactions.

Conversational AI Platform
Speech & Audio AI

Conversational AI Platform is an integrated software solution that provides the tools, services, and infrastructure needed to build, deploy, and manage AI-powered voice and text conversation systems. These platforms combine natural language understanding, dialogue management, speech processing, and integration capabilities into a unified development environment.

Conversational Agent
Agentic AI

Conversational Agent is an AI agent specifically designed to engage in natural language dialogue with users, understanding their intent, maintaining context across a conversation, and providing helpful responses or completing tasks through interactive discussion.

Conversational Commerce
Business Applications

Conversational Commerce is the use of AI-powered chat interfaces, messaging apps, and voice assistants to enable customers to browse products, ask questions, and complete purchases through natural conversation. It merges the convenience of messaging with the full buying experience.

Convolutional Neural Network (CNN)
Machine Learning

A Convolutional Neural Network (CNN) is a specialized deep learning architecture designed to process grid-like data such as images by using convolutional filters that automatically detect visual patterns like edges, textures, and shapes, making it the foundation of modern computer vision systems.

Coreference Resolution
Natural Language Processing

Coreference Resolution is an NLP technique that identifies when different words or phrases in a text refer to the same real-world entity, such as recognizing that "the company," "it," and "Grab" all refer to the same organization within a document.

Corporate AI Training
Business Applications

Corporate AI Training refers to structured education programmes designed specifically for company employees to learn AI skills within their business context. Unlike public courses, corporate training is customised to the company's industry, tools, use cases, and governance requirements.

Cross-Lingual NLP
Natural Language Processing

Cross-Lingual NLP encompasses Natural Language Processing techniques and models that work across multiple languages, enabling businesses to build NLP systems that transfer knowledge from one language to others, analyze multilingual content with unified models, and deploy language technology in markets where training data is scarce.

Cross-Validation
Machine Learning

Cross-Validation is a model evaluation technique that tests a machine learning model by systematically partitioning data into training and testing subsets multiple times, providing a more reliable estimate of real-world performance than a single train-test split.
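
A brief sketch of 5-fold cross-validation with scikit-learn on one of its bundled datasets:

```python
# Illustrative sketch: 5-fold cross-validation on a built-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression())

scores = cross_val_score(model, X, y, cv=5)   # train and test on 5 different splits
print(scores.mean())                          # average accuracy across the folds
```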

Customer Churn Prediction
Business Applications

Customer Churn Prediction is an AI-driven technique that uses machine learning to analyse customer behaviour, engagement patterns, and transaction data to identify customers likely to stop using a product or service. It enables businesses to take proactive retention actions before customers leave, reducing revenue loss and improving customer lifetime value.

Customer Data Platform (CDP)
Data & Analytics

Customer Data Platform (CDP) is a packaged software system that creates a persistent, unified customer database accessible to other systems. It collects customer data from all channels and touchpoints, consolidates it into individual customer profiles, and makes these complete profiles available for marketing, sales, and service personalisation across the entire organisation.

Customer Lifetime Value Prediction
Business Applications

Customer Lifetime Value Prediction is an AI-driven method of forecasting the total revenue a business can expect from a single customer over the entire duration of their relationship. It uses machine learning to analyse purchase history, engagement patterns, demographics, and behavioural signals to predict future spending, enabling more strategic decisions about customer acquisition, retention, and resource allocation.

D

Data Annotation (Vision)
Computer Vision

Data Annotation (Vision) is the process of labelling images and video with structured metadata such as bounding boxes, pixel masks, keypoints, and classifications to create training datasets for computer vision models. It is the essential foundation for any supervised computer vision project, directly determining model accuracy and reliability across all applications from quality inspection to autonomous navigation.

Data Augmentation
Data & Analytics

Data Augmentation is a set of techniques used to artificially expand the size and diversity of training datasets by creating modified versions of existing data. It improves machine learning model performance and robustness, particularly when the original dataset is too small or imbalanced to train effective models.

Data Augmentation (ML)
Machine Learning

Data Augmentation is a technique that artificially expands training datasets by creating modified versions of existing data through transformations like rotation, flipping, cropping, or adding noise, enabling machine learning models to learn more robust patterns and perform better with limited original training data.

Data Catalog
Data & Analytics

A Data Catalog is an organised inventory of an organisation's data assets, enriched with metadata such as descriptions, ownership, quality scores, and usage statistics. It enables data consumers to discover, understand, and trust available data without relying on tribal knowledge.

Data Democratization
Data & Analytics

Data Democratization is the practice of making data accessible to all employees across an organisation regardless of their technical expertise, enabling everyone to use data in their decision-making. It combines self-service tools, governance, and a data-literate culture to distribute analytical capabilities beyond specialised data teams.

Data Drift
Data & Analytics

Data Drift is the gradual change in the statistical properties of input data that a machine learning model receives in production compared to the data it was trained on. It causes model performance to degrade over time as the real-world patterns the model encounters diverge from its training assumptions.

Data Fabric
Data & Analytics

Data Fabric is an integrated data management architecture that uses automation, metadata, and AI to unify data access across disparate systems and environments. It provides a consistent layer for discovering, governing, and consuming data regardless of where it physically resides.

Data Governance
Data & Analytics

Data Governance is the framework of policies, processes, roles, and standards that ensures data across an organisation is managed properly, securely, and in compliance with regulations. It defines who can access data, how data is maintained, and what rules apply to its use, enabling organisations to treat data as a strategic asset.

Data Labeling
Data & Analytics

Data Labeling is the process of annotating raw data with meaningful tags, categories, or descriptions that teach machine learning models to recognise patterns. It is a critical step in building supervised AI systems, as the quality and accuracy of labels directly determine how well the resulting model will perform.

Data Lake
Data & Analytics

Data Lake is a centralised storage repository that holds vast amounts of raw data in its native format until it is needed for analysis. Unlike traditional databases that require data to be structured before storage, a data lake accepts structured, semi-structured, and unstructured data, providing flexibility for diverse analytics use cases.

Data Lakehouse
AI Infrastructure

A data lakehouse is a modern data architecture that combines the flexible, low-cost storage of a data lake with the structured data management and query performance of a data warehouse, providing a single platform for both analytics and AI workloads without duplicating data across systems.

Data Lineage
Data & Analytics

Data Lineage is the practice of tracking data from its origin through every transformation, movement, and aggregation it undergoes until it reaches its final consumption point. It provides a complete audit trail that shows how data flows through an organisation's systems and processes.

Data Mesh
Data & Analytics

Data Mesh is a decentralised data architecture that treats data as a product owned by domain-specific teams rather than a central data team. It distributes data ownership, governance, and quality responsibilities to the business domains that generate and best understand the data.

Data Monetization
Data & Analytics

Data Monetization is the process of generating measurable economic value from an organisation's data assets. This can involve directly selling data or data-derived products to external parties, or indirectly using data to improve internal operations, enhance products, reduce costs, and create new revenue streams.

Data Observability
Data & Analytics

Data Observability is the practice of monitoring, tracking, and ensuring the health and reliability of data as it flows through an organisation's pipelines and systems. It applies the principles of software observability — monitoring, alerting, and root cause analysis — to data infrastructure, enabling teams to detect and resolve data issues before they affect downstream consumers.

Data Pipeline
Data & Analytics

Data Pipeline is a series of automated steps that move data from one or more sources through transformation processes to a destination system where it can be stored, analysed, or used. It ensures data flows reliably and consistently across an organisation without manual intervention.

Data Poisoning
AI Safety & Security

Data Poisoning is an attack on AI systems where an adversary deliberately introduces corrupted, misleading, or malicious data into the training dataset to compromise the behaviour and integrity of the resulting AI model. It undermines the foundation that AI systems rely on to make accurate decisions.

Data Privacy
Data & Analytics

Data Privacy is the practice of handling personal data in a way that respects individuals' rights to control how their information is collected, used, stored, shared, and deleted. It encompasses the legal, technical, and organisational measures that organisations implement to protect personal data and comply with data protection regulations.

Data Quality
Data & Analytics

Data Quality refers to the overall reliability, accuracy, completeness, consistency, and timeliness of data within an organisation. High data quality means that data is fit for its intended use in operations, decision-making, analytics, and AI. Poor data quality leads to flawed insights, failed AI projects, and costly business mistakes.

Data Sovereignty
AI Governance & Ethics

Data Sovereignty is the principle that data is subject to the laws and governance structures of the country in which it is collected or processed. For AI systems, this means that training data, model outputs, and personal information used by AI must comply with the legal requirements of each jurisdiction where the data originates or resides.

Data Strategy
AI Strategy

Data Strategy is an organizational plan that defines how a company will collect, store, manage, govern, and leverage its data assets to support business objectives, with particular emphasis on creating the data foundation necessary for successful artificial intelligence and analytics initiatives.

Data Version Control
AI Infrastructure

Data Version Control is the practice of tracking and managing changes to the datasets used in AI model training and evaluation, providing a complete history of data modifications that enables experiment reproducibility, collaboration between team members, and the ability to trace any AI model back to the exact data it was trained on.

Data Virtualization
Data & Analytics

Data Virtualization is a technology approach that allows users and applications to access, query, and combine data from multiple disparate sources in real time without physically moving or copying the data into a central repository. It creates a unified virtual data layer that sits on top of existing systems, providing a single point of access to information spread across the organisation.

Data Warehouse
Data & Analytics

Data Warehouse is a centralised repository designed to store, organise, and manage large volumes of structured data from multiple sources, optimised specifically for fast querying and business reporting. It transforms raw data into a consistent, analysis-ready format that supports decision-making across the organisation.

Data Warehouse Automation
Data & Analytics

Data Warehouse Automation is the use of software tools and processes to automate the design, deployment, population, and ongoing management of a data warehouse. It replaces the traditionally manual and time-intensive work of building data warehouse infrastructure, enabling organisations to get analytical capabilities running faster and with fewer specialised resources.

Data Wrangling
Data & Analytics

Data Wrangling is the process of cleaning, structuring, enriching, and transforming raw data from various sources into a consistent, usable format suitable for analysis. Also known as data munging or data preparation, it addresses the messy reality that raw data is rarely in the format needed for business analysis and typically requires significant effort to make it reliable and useful.

Datasheets for Datasets
AI Governance & Ethics

Datasheets for Datasets is a standardised documentation framework that records the provenance, composition, collection process, intended use, and known limitations of datasets used to train AI systems, enabling informed decisions about data quality and appropriateness.

Decision Tree
Machine Learning

A Decision Tree is a machine learning model that makes predictions by following a series of yes-or-no questions about data features, creating a tree-like structure of decisions that is highly intuitive and easy for business stakeholders to understand and interpret.

Deep Learning
Machine Learning

Deep Learning is a specialized subset of machine learning that uses multi-layered neural networks to automatically learn hierarchical representations from large datasets, enabling breakthroughs in image recognition, natural language processing, and other complex pattern-recognition tasks.

Deepfake Detection
AI Safety & Security

Deepfake Detection is the set of technologies and techniques used to identify AI-generated or AI-manipulated media, including synthetic video, audio, and images that have been created to convincingly impersonate real people or fabricate events. It is a critical capability for combating fraud, misinformation, and identity-based attacks.

Dependency Parsing
Natural Language Processing

Dependency Parsing is an NLP technique that analyzes the grammatical structure of sentences by identifying relationships between words, determining which words modify or depend on others, enabling machines to understand how sentence components connect to convey meaning.

Depth Estimation
Computer Vision

Depth Estimation is a computer vision technique that determines the distance of objects from a camera, creating three-dimensional understanding from two-dimensional images. It enables applications such as autonomous navigation, augmented reality, robotics, and spatial analysis without requiring specialised depth sensors.

Descriptive Analytics
Data & Analytics

Descriptive Analytics is the most foundational form of data analytics, focused on summarising and interpreting historical data to understand what has happened in the past. It uses techniques such as aggregation, data mining, and visualisation to transform raw data into meaningful summaries, dashboards, and reports that provide a clear picture of business performance.

Diagnostic Analytics
Data & Analytics

Diagnostic Analytics is a form of data analysis focused on understanding why something happened by examining historical data in depth. It goes beyond descriptive analytics, which shows what happened, to investigate the underlying causes, correlations, and contributing factors behind observed outcomes, enabling organisations to learn from past events and address root causes rather than symptoms.

Dialogue Management
Natural Language Processing

Dialogue Management is the AI component that controls the flow, logic, and state of conversations between users and automated systems, deciding what the system should say or do next based on conversation history, user intent, and business rules.

Differential Privacy
AI Governance & Ethics

Differential Privacy is a mathematical framework that enables organisations to extract useful insights and patterns from datasets while providing formal guarantees that no individual's personal information can be identified or reconstructed from the results. It adds carefully calibrated noise to data or query results to protect individual privacy.

Diffusion Model
Generative AI

Diffusion Model is an AI architecture that generates high-quality images, videos, and other content by learning to gradually remove noise from random data, reversing a process of adding noise to training examples. It is the technology behind popular AI image generators like DALL-E, Stable Diffusion, and Midjourney.

Digital Transformation
AI Strategy

Digital Transformation is the process of integrating digital technologies across all areas of a business to fundamentally change how it operates, delivers value to customers, and competes in the market, often serving as the essential foundation for successful AI adoption.

Digital Twin
Robotics & Automation

A Digital Twin is a virtual replica of a physical asset, process, or system that uses real-time data and simulation to mirror its real-world counterpart. Digital twins enable businesses to monitor performance, predict failures, test changes, and optimise operations without disrupting actual production or infrastructure.

Dimensionality Reduction
Machine Learning

Dimensionality Reduction is a set of machine learning techniques that reduce the number of input features in a dataset while preserving the most important information, making data easier to analyze, visualize, and process while often improving model performance.

Document Automation
Business Applications

Document Automation is the use of AI and software systems to automatically generate, process, review, and manage business documents such as contracts, invoices, reports, and compliance filings. It reduces manual document handling, improves accuracy, and accelerates business workflows.

Document Classification
Natural Language Processing

Document Classification is an NLP technique that automatically assigns predefined categories or labels to documents based on their content, enabling businesses to organize, route, and manage large volumes of text data such as emails, contracts, reports, and support tickets efficiently and consistently.

Document Intelligence
Computer Vision

Document Intelligence is an AI-powered capability that goes beyond basic OCR to understand the structure, context, and meaning of documents. It can extract specific data fields, classify document types, interpret tables and forms, and process complex multi-page documents, enabling businesses to automate document-heavy workflows with high accuracy and minimal manual intervention.

Drone AI
Robotics & Automation

Drone AI refers to the artificial intelligence systems that enable unmanned aerial vehicles to fly autonomously, perceive their environment, make real-time decisions, and perform complex tasks without continuous human control. It combines computer vision, navigation algorithms, and machine learning to power applications from agricultural monitoring to infrastructure inspection.

Dropout
Machine Learning

Dropout is a regularization technique for neural networks that randomly deactivates a percentage of neurons during each training step, forcing the network to learn more robust and generalizable features rather than relying on specific neurons, thereby reducing overfitting and improving real-world performance.

Dynamic Pricing
Business Applications

Dynamic Pricing is an AI-driven pricing strategy that automatically adjusts prices in real time based on factors such as demand, competition, inventory levels, customer segments, and market conditions. It enables businesses to maximise revenue and margins by setting optimal prices that reflect the current market environment rather than relying on static price lists.

E

ETL
Data & Analytics

ETL stands for Extract, Transform, Load, a three-step process used to move data from source systems, convert it into a usable format, and load it into a destination system such as a data warehouse. ETL is the backbone of data integration, ensuring that data from disparate sources is unified, clean, and ready for analysis.

EU AI Act
AI Safety & Security

The EU AI Act is the world's first comprehensive legal framework for regulating artificial intelligence, adopted by the European Union in 2024 with obligations phasing in from 2025. It classifies AI systems into risk tiers and imposes strict transparency, accountability, and safety requirements on high-risk applications across all industries.

Edge AI
AI Infrastructure

Edge AI is the deployment of artificial intelligence algorithms directly on local devices such as smartphones, sensors, cameras, or IoT hardware, enabling real-time data processing and decision-making at the source without relying on a constant connection to cloud servers.

Edge Analytics
Data & Analytics

Edge Analytics is the approach of collecting, processing, and analysing data at or near its point of generation, such as on IoT devices, sensors, factory equipment, or local gateways, rather than sending all data to a centralised cloud or data centre for analysis. It enables faster insights, reduced bandwidth usage, and real-time decision-making where immediate response is critical.

Edge Detection
Computer Vision

Edge Detection is a fundamental computer vision technique that identifies the boundaries and outlines of objects in images by detecting sharp changes in brightness, colour, or texture. It serves as a building block for more advanced visual analysis, enabling applications in quality inspection, document processing, autonomous navigation, and any task where identifying object boundaries is essential.

Embedding
Generative AI

An embedding is a numerical representation of data -- such as text, images, or audio -- expressed as a list of numbers (a vector) that captures the meaning and relationships within that data. Embeddings allow AI systems to understand similarity and context, powering applications like search, recommendations, and classification.
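
As a simplified illustration, the vectors below are invented four-number "embeddings"; real embedding models produce hundreds or thousands of dimensions, but the similarity comparison works the same way.

```python
# Illustrative sketch: comparing two embeddings with cosine similarity.
# The vectors are made up; in practice they come from an embedding model.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

invoice_query = np.array([0.12, 0.85, 0.30, 0.05])
billing_doc   = np.array([0.10, 0.80, 0.33, 0.07])   # similar meaning -> high score
hr_policy_doc = np.array([0.90, 0.02, 0.10, 0.40])   # different meaning -> low score

print(cosine_similarity(invoice_query, billing_doc))
print(cosine_similarity(invoice_query, hr_policy_doc))
```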

Embodied AI
Robotics & Automation

Embodied AI refers to artificial intelligence systems that possess a physical form, typically a robot, enabling them to perceive, interact with, and learn from the real world through direct physical experience. Unlike purely digital AI that processes text or images on servers, Embodied AI systems act upon their environment, combining sensing, reasoning, and physical action.

Emotion Recognition (Voice)
Speech & Audio AI

Emotion Recognition (Voice) is an AI technology that analyses speech patterns, tone, pitch, tempo, and vocal cues to detect the emotional state of a speaker. It enables businesses to gauge customer sentiment in real time during calls, interviews, and interactions, improving service quality and decision-making.

End Effector
Robotics & Automation

End Effector is the device or tool attached to the end of a robotic arm that directly interacts with the workpiece or environment. It functions as the robot's hand, and can take the form of grippers, welding torches, spray nozzles, suction cups, or specialised tools designed for specific manufacturing tasks.

Ensemble Learning
Machine Learning

Ensemble Learning is a machine learning strategy that combines multiple individual models to produce predictions that are more accurate and reliable than any single model alone, similar to how a panel of experts provides better advice than a single consultant.

Ethical AI Design
AI Governance & Ethics

Ethical AI Design is the practice of incorporating ethical principles, such as fairness, transparency, privacy, accountability, and human welfare, into every stage of the AI development process, from initial concept and data collection through to deployment, monitoring, and retirement.

Explainable AI
AI Governance & Ethics

Explainable AI is the set of methods and techniques that make the outputs and decision-making processes of artificial intelligence systems understandable to humans. It enables stakeholders to comprehend why an AI system reached a particular conclusion, supporting trust, accountability, regulatory compliance, and informed business decision-making.

F

Facial Recognition
Computer Vision

Facial Recognition is an AI technology that identifies or verifies individuals by analysing the unique features of their faces in images or video. It is used in business applications including access control, attendance tracking, and customer identification, though it raises significant privacy and ethical considerations that organisations must carefully navigate.

Feature Engineering
Machine Learning

Feature Engineering is the process of selecting, transforming, and creating the input variables that a machine learning model uses to make predictions, directly determining model performance and often representing the most impactful step in any ML project.

Feature Pipeline
AI Infrastructure

A feature pipeline is an automated system that transforms raw data from various sources into clean, structured features that machine learning models can use for training and prediction, ensuring consistent and reliable data preparation across development and production environments.

Feature Store
Data & Analytics

A Feature Store is a centralised repository that stores, manages, and serves machine learning features consistently across training and production environments. It ensures that data scientists and engineers share a single source of truth for the computed data inputs that power predictive models.

Federated Learning
AI Infrastructure

Federated learning is a machine learning approach where AI models are trained across multiple decentralised devices or servers holding local data, without transferring raw data to a central location, enabling organisations to build powerful models while preserving data privacy and complying with data sovereignty regulations.

Few-Shot Learning
Generative AI

Few-Shot Learning is an AI technique where a model performs a new task after being shown only a small number of examples, typically 2-10, enabling businesses to customize AI outputs for specific use cases without expensive model training or large datasets.
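
A minimal sketch of a few-shot prompt; the tickets and labels are invented and no particular model API is assumed.

```python
# Illustrative few-shot prompt: a handful of labelled examples steer the model's
# output format and categories.
few_shot_prompt = """Classify the support ticket as Billing, Technical, or Account.

Ticket: "I was charged twice this month."        -> Billing
Ticket: "The app crashes when I upload a file."  -> Technical
Ticket: "How do I change my registered email?"   -> Account

Ticket: "My invoice shows the wrong company name." ->"""

print(few_shot_prompt)  # a model completing this text would typically answer "Billing"
```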

Few-Shot Object Detection
Computer Vision

Few-Shot Object Detection is a computer vision approach that enables AI models to learn to detect new types of objects from just a handful of example images, rather than the thousands typically required. It dramatically reduces the data and time needed to deploy custom object detection for specific business applications.

Fine-tuning
Generative AI

Fine-tuning is the process of further training a pre-trained AI model on a specific dataset to improve its performance for particular tasks or domains. It allows businesses to customize general-purpose AI models to understand their industry terminology, follow their guidelines, and produce outputs tailored to their needs.

Fleet Management AI
Robotics & Automation

Fleet Management AI is the use of artificial intelligence systems to coordinate, optimise, and monitor the operations of multiple robots, autonomous vehicles, or drones operating as a group. It handles task allocation, route optimisation, maintenance scheduling, and real-time coordination to maximise fleet productivity while minimising costs and operational disruptions.

Foundation Model
Generative AI

A foundation model is a large AI model trained on broad, diverse data that can be adapted for many different tasks and applications. Foundation models serve as the base layer upon which businesses build specialized AI solutions, reducing the cost and complexity of AI adoption significantly.

Fraud Detection
Business Applications

Fraud Detection is the use of AI and machine learning to identify suspicious activities, transactions, or behaviours that indicate fraudulent intent. AI-powered fraud detection analyses patterns in real time across large volumes of data to flag anomalies, reducing financial losses and protecting businesses and customers from increasingly sophisticated fraud schemes.

Frontier Model
Generative AI

A frontier model is an AI model that represents the most advanced capabilities available at the current state of the art, pushing the boundaries of what artificial intelligence can do. These models set the performance benchmarks that all other AI systems are measured against and typically require enormous resources to develop.

Function Calling
Agentic AI

Function Calling is a mechanism that enables large language models to generate structured requests to invoke specific software functions or APIs, allowing AI systems to translate natural language instructions into precise, executable actions within business applications.
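
A simplified sketch of the pattern follows; the schema style is a common JSON-schema convention, and the function name and model response are hypothetical rather than any vendor's exact format.

```python
# Illustrative sketch of function calling: the application describes a function in a
# structured schema, the model returns which function to call and with what arguments,
# and the application executes it.
import json

tool_definition = {
    "name": "get_order_status",
    "description": "Look up the delivery status of a customer order",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

# Given "Where is order A1234?", a model might respond with a structured call like:
model_response = '{"name": "get_order_status", "arguments": {"order_id": "A1234"}}'

call = json.loads(model_response)
if call["name"] == "get_order_status":
    print("Application would now call get_order_status with", call["arguments"])
```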

G

GPT
Generative AI

GPT (Generative Pre-trained Transformer) is a family of large language models developed by OpenAI that can generate human-quality text, answer questions, write code, and perform a wide range of language tasks. GPT models power ChatGPT and are widely used in business applications.

GPU
AI Infrastructure

A GPU, or Graphics Processing Unit, is a specialised processor originally designed for rendering graphics but now essential for AI and machine learning workloads, capable of performing thousands of calculations simultaneously, making it far more efficient than traditional CPUs for training and running AI models.

GPU Cluster
AI Infrastructure

A GPU cluster is a group of multiple GPUs connected through high-speed networking that work together as a unified system to train large AI models, enabling organisations to distribute massive computational workloads across many processors to dramatically reduce training time.

Generative AI
Generative AI

Generative AI is a category of artificial intelligence that creates new content such as text, images, code, and audio by learning patterns from large datasets. It enables businesses to automate creative and analytical tasks that previously required significant human effort and expertise.

Generative Adversarial Network (GAN)
Computer Vision

Generative Adversarial Network (GAN) is a machine learning architecture consisting of two neural networks that compete against each other to generate highly realistic synthetic images and other data. It enables businesses to create training data for AI models, generate product visualisations, enhance image quality, and produce realistic content for marketing and design without expensive photoshoots.

Geospatial Analytics
Data & Analytics

Geospatial Analytics is the practice of gathering, displaying, and analysing data that has a geographic or location-based component. It combines location data with business, demographic, and environmental information to reveal spatial patterns and relationships that are invisible in traditional tabular analysis, enabling better decisions about where to operate, invest, and serve customers.

Gradient Descent
Machine Learning

Gradient Descent is the fundamental optimization algorithm used to train machine learning models by iteratively adjusting model parameters in the direction that minimizes prediction errors, enabling the model to progressively improve its accuracy on real-world data.
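
A worked toy example: gradient descent finding the value of a single parameter that minimises a simple error curve.

```python
# Illustrative sketch: minimise the error curve f(w) = (w - 3)^2,
# whose gradient is 2 * (w - 3).
w = 0.0              # starting guess for the parameter
learning_rate = 0.1

for step in range(50):
    gradient = 2 * (w - 3)          # direction of steepest increase in error
    w -= learning_rate * gradient   # move the parameter the other way

print(round(w, 3))   # close to 3.0, the value that minimises the error
```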

Graph Database
Data & Analytics

Graph Database is a type of database that uses graph structures, consisting of nodes, edges, and properties, to store, map, and query relationships between data. Unlike traditional relational databases that use tables and rows, graph databases are purpose-built to traverse and analyse highly connected data efficiently, making them ideal for relationship-heavy use cases such as social networks, fraud detection, and recommendation engines.

Graph RAG
AI Infrastructure

Graph RAG is a retrieval-augmented generation approach pioneered by Microsoft that combines knowledge graphs with traditional RAG techniques, enabling AI systems to retrieve and reason over complex, interconnected data relationships rather than isolated text chunks, producing more accurate and contextually rich responses for business applications.

Grounding
Generative AI

Grounding in AI is the practice of connecting an AI model's outputs to verified, factual sources of information -- such as company databases, documents, or trusted external sources -- to ensure responses are accurate, current, and traceable rather than generated from the model's training data alone.

H

Hallucination (AI)
Generative AI

AI hallucination refers to instances where an artificial intelligence model generates information that sounds plausible and confident but is factually incorrect, fabricated, or not supported by its training data. Understanding and mitigating hallucinations is critical for businesses deploying AI in any context where accuracy matters.

Hardware-in-the-Loop Testing
Robotics & Automation

Hardware-in-the-Loop Testing is a validation method where real robot hardware components are connected to a simulated environment to test software, control algorithms, and system behaviour before full deployment. It bridges the gap between pure software simulation and physical testing, reducing development risk and cost.

Human Oversight of AI
AI Governance & Ethics

Human Oversight of AI is the set of governance mechanisms, processes, and organisational structures that ensure human beings maintain meaningful control over AI systems throughout their lifecycle. It encompasses the ability to monitor, intervene in, override, and ultimately shut down AI systems when necessary.

Human-in-the-Loop
AI Operations

Human-in-the-Loop is an AI design approach where human judgement is integrated into the AI decision-making process, ensuring that people review, validate, or override AI outputs before critical actions are taken. It balances the efficiency of automation with the accountability, ethical oversight, and contextual understanding that only humans can provide.

Humanoid Robot
Robotics & Automation

A Humanoid Robot is a robot designed with a human-like body shape, typically featuring a head, torso, arms, and legs, enabling it to operate in environments and use tools built for people. Humanoid robots are increasingly used in logistics, hospitality, and manufacturing to perform general-purpose tasks alongside human workers.

Hybrid Search
AI Infrastructure

Hybrid search is an information retrieval approach that combines traditional keyword-based search with modern semantic vector search, delivering more accurate and comprehensive results by matching both exact terms and conceptual meaning, making it the preferred method for enterprise AI and RAG systems.
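
A simplified sketch of score blending appears below; the documents, scores, and equal weighting are invented, and production systems typically compute keyword scores with BM25, semantic scores with an embedding index, and combine them with weighted sums or reciprocal rank fusion.

```python
# Illustrative sketch: blending a keyword score and a semantic (vector) score per document.
documents = {
    "doc_a": {"keyword_score": 0.90, "vector_score": 0.40},
    "doc_b": {"keyword_score": 0.20, "vector_score": 0.85},
    "doc_c": {"keyword_score": 0.65, "vector_score": 0.70},
}

def hybrid_score(scores, keyword_weight=0.5):
    return keyword_weight * scores["keyword_score"] + (1 - keyword_weight) * scores["vector_score"]

ranked = sorted(documents, key=lambda d: hybrid_score(documents[d]), reverse=True)
print(ranked)  # ['doc_c', 'doc_a', 'doc_b'] with equal weighting
```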

Hyperparameter Tuning
Machine Learning

Hyperparameter Tuning is the process of systematically finding the optimal configuration settings for a machine learning model -- settings that are chosen before training begins and significantly affect model performance, accuracy, and generalization to new data.
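
A brief sketch of one common approach, grid search with cross-validation, using scikit-learn and a bundled dataset:

```python
# Illustrative sketch: try every combination of two hyperparameters and keep the best.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
grid = {"n_estimators": [50, 100], "max_depth": [2, 4, None]}

search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```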

I

Image Captioning
Computer Vision

Image Captioning is an AI technique that automatically generates natural language descriptions of the content in images, bridging computer vision and language understanding. It enables businesses to automate media cataloguing, improve digital accessibility, enhance content management, and create searchable visual archives without manual effort.

Image Generation
Computer Vision

Image Generation is an AI capability that creates new, original images from text descriptions, sketches, or other inputs using deep learning models. It enables businesses to produce marketing visuals, product prototypes, design variations, and creative content at scale without traditional photography or graphic design.

Image Recognition
Computer Vision

Image Recognition is an AI capability that enables computers to identify and classify objects, scenes, and patterns within digital images. It allows businesses to automate tasks like product categorisation, brand monitoring, and quality inspection by teaching machines to understand visual content with human-level or better accuracy.

Image Segmentation
Computer Vision

Image Segmentation is an AI technique that divides an image into distinct regions or segments, assigning a label to every pixel. Unlike object detection which draws boxes around objects, segmentation precisely outlines their exact shapes, enabling applications like medical image analysis, autonomous navigation, satellite imagery interpretation, and precision quality control.

Image Super-Resolution
Computer Vision

Image Super-Resolution is an AI technique that enhances the quality, detail, and resolution of images beyond what was originally captured. It uses deep learning models to intelligently reconstruct fine details, enabling businesses to extract more value from existing imagery for applications in surveillance, medical imaging, satellite analysis, and media production.

In-Context Learning
Generative AI

In-Context Learning is the ability of AI models to adapt their behavior and learn new tasks based on the information, examples, and instructions provided within the prompt itself, without any modification to the underlying model, enabling real-time customization of AI outputs for specific business needs.

Industrial IoT
Robotics & Automation

Industrial IoT, or IIoT, refers to the network of connected sensors, instruments, machines, and systems in industrial environments that collect, exchange, and analyse data to improve manufacturing efficiency, quality, and safety. It is the foundation of smart manufacturing and Industry 4.0, enabling real-time monitoring, predictive maintenance, and data-driven operational decisions.

Industrial Robot
Robotics & Automation

Industrial Robot is a programmable, multi-purpose automated machine designed to perform manufacturing tasks such as welding, painting, assembly, and material handling with high precision, speed, and consistency. These robots form the backbone of modern factory automation and are transforming production across Southeast Asia.

Inference
Generative AI

Inference in AI is the process of running a trained model to generate outputs -- such as predictions, text responses, image classifications, or recommendations -- from new input data. It is the production phase of AI where the model delivers value to end users, as opposed to the training phase where the model learns.

Inference
AI Infrastructure

Inference is the process of using a trained AI model to make predictions or decisions on new, unseen data in real time, representing the production phase where AI delivers actual business value by processing customer requests, analysing images, generating text, or making recommendations.

Information Extraction
Natural Language Processing

Information Extraction is an AI technique that automatically identifies and pulls structured data such as names, dates, monetary values, and relationships from unstructured text sources like documents, emails, and web pages, converting free-form content into organized, queryable information.

Instance Segmentation
Computer Vision

Instance Segmentation is a computer vision technique that identifies and precisely delineates every individual object in an image, distinguishing separate instances even when they belong to the same category. It enables businesses to count, measure, and track individual items in complex visual scenes for applications like inventory management, crowd analysis, and automated inspection.

Intelligent Automation
Business Applications

Intelligent Automation is the combination of artificial intelligence technologies such as machine learning, natural language processing, and computer vision with automation tools like robotic process automation to create end-to-end automated workflows that can handle complex, judgement-intensive business processes. It extends automation beyond simple rule-based tasks to processes that require understanding, reasoning, and adaptation.

Intelligent Document Processing
Business Applications

Intelligent Document Processing is an AI-powered technology that automatically extracts, classifies, and processes information from unstructured documents such as invoices, contracts, forms, and receipts. It combines optical character recognition, natural language processing, and machine learning to convert documents into structured, actionable data.

Intent Recognition
Natural Language Processing

Intent Recognition is an AI capability that detects what action or goal a user is trying to accomplish from their natural language input, enabling chatbots, voice assistants, and automated systems to understand requests like "book a flight" or "check my balance" and respond appropriately.

Inventory Optimization AI
Business Applications

Inventory Optimization AI is the application of artificial intelligence and machine learning to determine the ideal stock levels for every product across every location in a business. It analyses demand patterns, supplier lead times, seasonal trends, and external factors to minimise stockouts and overstock situations while reducing carrying costs and waste.

K

K-Nearest Neighbors
Machine Learning

K-Nearest Neighbors (KNN) is a straightforward machine learning algorithm that classifies new data points by looking at the K most similar examples in the training data and assigning the majority class among those neighbors, operating on the principle that similar data points tend to share the same label.
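
For readers who want to see the mechanics, the sketch below implements the idea in a few lines of Python; the data points, labels, and the choice of K are made-up illustrations, not a production setup.

```python
# Minimal K-Nearest Neighbors sketch: classify a new point by majority
# vote among its k closest training examples (toy data, illustrative only).
import math
from collections import Counter

def knn_classify(query, points, labels, k=3):
    # Distance from the query to every labelled training example.
    neighbours = sorted(
        (math.dist(query, point), label) for point, label in zip(points, labels)
    )
    # Majority vote among the k nearest labels.
    votes = Counter(label for _, label in neighbours[:k])
    return votes.most_common(1)[0][0]

# Example: segment a customer by (monthly visits, monthly spend in hundreds).
points = [(1, 2), (2, 1), (8, 9), (9, 8)]
labels = ["low-value", "low-value", "high-value", "high-value"]
print(knn_classify((7, 8), points, labels, k=3))  # -> "high-value"
```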

Keyword Extraction
Natural Language Processing

Keyword Extraction is an NLP technique that automatically identifies the most important and relevant terms or phrases from a document or collection of text, helping businesses quickly understand content themes, improve search functionality, and organize large volumes of unstructured information.

Knowledge Graph
Natural Language Processing

A Knowledge Graph is a structured representation of real-world entities and the relationships between them, organized as a network of interconnected nodes and edges that enables machines to understand context, answer complex queries, and power intelligent applications like search engines, recommendation systems, and conversational AI.

Knowledge Management AI
Business Applications

Knowledge Management AI is the application of artificial intelligence to capture, organise, retrieve, and share organisational knowledge across a business. It uses natural language processing and machine learning to make institutional knowledge searchable, accessible, and actionable for employees and customers.

Kubernetes for AI
AI Infrastructure

Kubernetes for AI is a container orchestration platform adapted for managing AI workloads, enabling businesses to automatically deploy, scale, and operate machine learning models and training jobs across clusters of servers with high reliability and efficient resource utilisation.

L

Language Detection
Natural Language Processing

Language Detection is an NLP capability that automatically identifies the language or languages present in a given text, enabling systems to route content to the appropriate language-specific processing pipeline, select the correct translation model, or assign multilingual content to qualified human agents.

Language Model
Natural Language Processing

A Language Model is an AI system trained on large amounts of text data to understand, predict, and generate human language, serving as the foundation for applications ranging from autocomplete and chatbots to content generation and code writing.

Large Language Model
Generative AI

A Large Language Model (LLM) is an AI system trained on vast amounts of text data that can understand, generate, and reason about human language. LLMs power popular tools like ChatGPT and Google Gemini, enabling businesses to automate communication, analysis, and content creation tasks.

Learning Rate
Machine Learning

The Learning Rate is a hyperparameter that controls how much a machine learning model adjusts its internal weights in response to errors during each training step, acting as the pace at which the model learns -- too high causes instability, too low causes painfully slow training or getting stuck.
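
The toy gradient-descent loop below illustrates the effect; the target value, the number of steps, and the three learning rates are arbitrary choices for the example.

```python
# Toy gradient descent on a single weight, showing how the learning rate
# scales each update step (all numbers here are illustrative).
def train(learning_rate, steps=20):
    w = 0.0          # the model's single weight, starting untrained
    target = 3.0     # the value the weight should learn
    for _ in range(steps):
        gradient = 2 * (w - target)       # slope of the error (w - target)^2
        w -= learning_rate * gradient     # the learning rate sets the step size
    return w

print(train(0.1))    # well-chosen rate: ends close to 3.0
print(train(0.001))  # too low: barely moves after 20 steps
print(train(1.1))    # too high: overshoots and diverges away from 3.0
```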

LiDAR
Computer Vision

LiDAR (Light Detection and Ranging) is a remote sensing technology that uses laser pulses to measure distances and create precise three-dimensional maps of environments. It provides accurate spatial data for applications including autonomous vehicles, urban planning, agriculture, and infrastructure monitoring.

LoRA
Generative AI

LoRA (Low-Rank Adaptation) is an efficient fine-tuning technique that adapts large AI models to specific tasks by modifying only a small fraction of the model's parameters. This makes customizing AI models dramatically faster, cheaper, and more accessible for businesses that need AI tailored to their industry or use case.

Long Context Model
Generative AI

A Long Context Model is an AI model capable of processing extremely large amounts of text in a single interaction, ranging from 100,000 to over 1 million tokens. Models like Google Gemini 1.5 and Anthropic Claude can analyze entire books, codebases, or document libraries at once, enabling businesses to work with complete datasets rather than fragmented summaries.

Loss Function
Machine Learning

A Loss Function is a mathematical formula that measures the difference between a machine learning model's predictions and the actual correct answers, providing a single numerical score that guides the training process by quantifying exactly how wrong the model is so it can systematically improve.
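
As a concrete example, mean squared error is one of the most common loss functions; the sketch below uses invented sales figures.

```python
# Mean squared error: one common loss function. A score of 0.0 means the
# predictions match the actual values exactly; larger scores mean more error.
def mean_squared_error(predictions, actuals):
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(actuals)

forecast_sales = [100, 150, 200]   # what the model predicted (toy numbers)
actual_sales = [110, 150, 180]     # what actually happened
print(mean_squared_error(forecast_sales, actual_sales))  # ~166.67
```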

M

MLOps
AI Infrastructure

MLOps, short for Machine Learning Operations, is a set of practices and tools that combines machine learning, DevOps, and data engineering to reliably deploy, monitor, and maintain AI models in production, ensuring they continue to perform accurately and deliver business value over time.

Machine Learning
Machine Learning

Machine Learning is a branch of artificial intelligence that enables computers to learn patterns from data and make decisions without being explicitly programmed for every scenario, allowing businesses to automate predictions, recommendations, and complex decision-making at scale.

Machine Translation
Natural Language Processing

Machine Translation is an AI technology that automatically translates text or speech from one language to another, enabling businesses to communicate across language barriers, localize content for international markets, and process multilingual documents without relying entirely on human translators.

Master Data Management
Data & Analytics

Master Data Management is the discipline of creating and maintaining a single, authoritative, and consistent source of truth for an organisation's most critical shared data, such as customer records, product information, supplier details, and financial reference data. It ensures that every department and system across the business works with the same accurate information.

Medical Imaging AI
Computer Vision

Medical Imaging AI is the application of computer vision and deep learning to analyse medical scans and diagnostic images such as X-rays, MRIs, CT scans, and pathology slides. It helps healthcare providers detect diseases earlier, reduce diagnostic errors, speed up radiology workflows, and extend specialist expertise to underserved regions where radiologists and pathologists are scarce.

Membership Inference Attack
AI Safety & Security

A Membership Inference Attack is a privacy attack against machine learning models where an adversary attempts to determine whether a specific data record was included in the model's training dataset. It poses significant privacy risks, particularly when models are trained on sensitive personal or business data.

Mixture of Experts
Generative AI

Mixture of Experts (MoE) is an AI model architecture that divides the model into multiple specialized sub-networks called experts, activating only the most relevant ones for each input. This enables models to be extremely large and capable while remaining computationally efficient, because only a fraction of the model processes any given query.

Model Alignment
Generative AI

Model Alignment is the process of training and configuring AI models to produce outputs that are helpful, honest, and harmless, ensuring the AI behaves in accordance with human values, follows instructions as intended, and avoids generating harmful, biased, or misleading content.

Model Cache
AI Infrastructure

Model Cache is a system that stores pre-computed AI model outputs so that repeated or similar requests can be served instantly from stored results rather than running the full model computation again, significantly reducing response times and infrastructure costs.
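
A minimal sketch of the idea is shown below; run_model is a hypothetical stand-in for an expensive model call, not a real API.

```python
# Minimal model cache: identical requests are answered from stored results
# instead of re-running the model. run_model is a hypothetical placeholder
# for a slow, costly model call.
cache = {}

def run_model(prompt):
    return f"Model answer to: {prompt}"   # imagine seconds of compute here

def cached_inference(prompt):
    if prompt not in cache:               # cache miss: compute and store
        cache[prompt] = run_model(prompt)
    return cache[prompt]                  # cache hit: returned instantly

print(cached_inference("What is your refund policy?"))  # computed once
print(cached_inference("What is your refund policy?"))  # served from the cache
```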

Model Card
AI Governance & Ethics

A Model Card is a standardised documentation framework that describes an AI model's intended use, performance characteristics, training data, limitations, and ethical considerations, providing stakeholders with the information needed to understand and responsibly deploy the model.

Model Compression
AI Infrastructure

Model compression is a set of techniques for reducing the size and computational requirements of AI models while preserving most of their accuracy, enabling faster inference, lower costs, and deployment on resource-constrained devices such as mobile phones and edge hardware.

Model Context Protocol (MCP)
Agentic AI

Model Context Protocol (MCP) is a standardized, open protocol that defines how AI models connect to and interact with external tools, data sources, and services, enabling agents to access real-world information and take actions beyond their training data.

Model Deployment
AI Infrastructure

Model deployment is the process of taking a trained AI model from a development environment and making it available in a production system where it can process real-world data and deliver predictions or decisions to end users, applications, or business processes at scale.

Model Distillation
Generative AI

Model distillation is a technique for transferring the knowledge and capabilities of a large, powerful AI model (the teacher) into a smaller, faster, and more cost-effective model (the student). This enables businesses to deploy AI with near-equivalent quality at a fraction of the computational cost and latency.

Model Extraction Attack
AI Safety & Security

A Model Extraction Attack is a technique where an adversary systematically queries a deployed AI model to reconstruct a functional copy of it, effectively stealing the model's learned knowledge, capabilities, and intellectual property without authorised access to its parameters, architecture, or training data.

Model Inversion Attack
AI Safety & Security

A Model Inversion Attack is a privacy attack where an adversary exploits access to a trained AI model to reconstruct or approximate the sensitive data used during training. It can reveal personal information, proprietary data, or confidential records that the model was trained on.

Model Marketplace
AI Strategy

Model Marketplace is a platform such as Hugging Face, AWS Marketplace, or Azure AI Gallery where organizations can discover, compare, download, and deploy pre-trained AI models, significantly reducing the time and cost of building AI capabilities from scratch.

Model Monitoring
AI Infrastructure

Model monitoring is the ongoing practice of tracking the performance, accuracy, and behaviour of AI models in production to detect issues like data drift, prediction errors, and degrading accuracy, ensuring models continue to deliver reliable business outcomes over time.

Model Registry
AI Infrastructure

A model registry is a centralised repository for storing, versioning, and managing machine learning models throughout their lifecycle, providing a single source of truth that tracks which models are in development, testing, and production across an organisation.

Model Serving
Generative AI

Model serving is the infrastructure and process of deploying trained AI models in production environments so they can receive requests and return predictions or outputs reliably, efficiently, and at scale. It encompasses the technical systems needed to make AI models available to applications and end users.

Model Sharding
AI Infrastructure

Model Sharding is the technique of splitting a large AI model into smaller pieces called shards and distributing them across multiple machines or GPUs, enabling organisations to run models that are too large to fit on a single device while maintaining performance and efficiency.

Model Training
Machine Learning

Model Training is the process of teaching a machine learning algorithm to recognize patterns in data by iteratively adjusting its internal parameters to minimize prediction errors, transforming raw data and algorithms into a functional AI system capable of making accurate predictions.

Model Versioning
AI Infrastructure

Model versioning is the practice of systematically tracking and managing different iterations of AI models throughout their lifecycle, recording changes to training data, parameters, code, and performance metrics so teams can compare, reproduce, and roll back to any previous version.

Motor Control (Robotics)
Robotics & Automation

Motor Control (Robotics) is the AI and engineering discipline focused on precisely controlling the motors and actuators that drive robot movement, enabling smooth, accurate, and adaptive motion for tasks ranging from high-speed assembly to delicate surgical manipulation.

Multi-Agent System
Agentic AI

A Multi-Agent System is an architecture where multiple specialized AI agents work together, each handling distinct roles or tasks, to solve complex problems that would be difficult or impossible for a single agent to address effectively on its own.

Multi-Cloud AI
AI Infrastructure

Multi-Cloud AI is the strategy of distributing AI workloads across two or more cloud providers such as AWS, Google Cloud, and Azure, enabling businesses to leverage the best AI services from each provider while avoiding vendor lock-in, improving resilience, and meeting diverse regulatory requirements across different markets.

Multilingual ASR
Speech & Audio AI

Multilingual ASR (Automatic Speech Recognition) is a speech recognition technology capable of understanding and transcribing spoken language across multiple languages, often within the same conversation. Unlike single-language systems, multilingual ASR models are trained on diverse language data to handle the linguistic complexity of global and multicultural business environments.

Multimodal AI
Generative AI

Multimodal AI refers to artificial intelligence systems that can process, understand, and generate multiple types of content including text, images, audio, and video within a single model. This enables businesses to build AI applications that work with diverse data types, mirroring how humans naturally communicate and work.

Multimodal RAG
Generative AI

Multimodal RAG is an advanced form of Retrieval-Augmented Generation that retrieves and reasons over multiple data types including images, PDFs, tables, charts, and diagrams alongside text. This enables AI systems to answer questions using visual and structured information from business documents, not just plain text, delivering more complete and accurate insights.

Music Generation AI
Speech & Audio AI

Music Generation AI refers to artificial intelligence systems capable of composing, arranging, and producing music autonomously or collaboratively with human creators. These systems use deep learning models trained on vast musical datasets to generate original compositions across genres, enabling businesses to create custom audio content at scale.

N

Named Entity Recognition
Natural Language Processing

Named Entity Recognition is an NLP technique that automatically identifies and classifies key elements in text — such as people, companies, locations, dates, and monetary values — enabling businesses to extract structured data from unstructured documents like contracts, invoices, and news articles.

Natural Language Generation
Natural Language Processing

Natural Language Generation is an AI capability that automatically produces human-readable text from structured data or prompts, enabling machines to write reports, summaries, product descriptions, and other content that reads as though a person composed it.

Natural Language Processing
Natural Language Processing

Natural Language Processing is a branch of artificial intelligence that enables computers to understand, interpret, and generate human language in meaningful ways, powering applications from chatbots and document analysis to voice assistants and automated translation across multiple languages.

Natural Language Understanding
Natural Language Processing

Natural Language Understanding is a subfield of artificial intelligence that focuses on enabling machines to comprehend the meaning, intent, and context behind human language, going beyond simple word recognition to grasp nuance, ambiguity, and implied meaning in text and speech.

Neural Network
Machine Learning

A Neural Network is a computing system loosely inspired by the human brain, consisting of interconnected layers of artificial neurons that process information and learn complex patterns from data, forming the foundation of deep learning and many modern AI applications.

Noise Cancellation AI
Speech & Audio AI

Noise Cancellation AI is a technology that uses machine learning algorithms to identify and remove unwanted background noise from audio signals in real time. Unlike traditional noise reduction, AI-powered systems can distinguish between speech and specific noise types, preserving voice clarity while eliminating distractions in calls, recordings, and live communications.

O

OCR
Computer Vision

OCR (Optical Character Recognition) is an AI technology that converts text within images, scanned documents, and photographs into machine-readable digital text. It enables businesses to automate data entry, digitise paper records, and extract information from invoices, receipts, and forms, dramatically reducing manual processing time and errors.

Object Detection
Computer Vision

Object Detection is an AI technology that identifies and locates specific objects within images or video frames, drawing bounding boxes around each detected item. It enables businesses to count inventory, monitor safety compliance, track vehicles, and automate visual inspection by understanding both what objects are present and where they are positioned.

Object Grasping
Robotics & Automation

Object Grasping is the robotic capability of picking up, holding, and manipulating objects of varying shapes, sizes, weights, and materials. It combines AI-powered perception, grasp planning algorithms, and precise motor control to enable robots to handle items ranging from rigid industrial parts to soft, deformable objects.

Object Tracking
Computer Vision

Object Tracking is a computer vision technique that follows specific objects across consecutive video frames over time, maintaining their identity even through occlusions and appearance changes. It enables businesses to monitor movement patterns, measure speeds, analyse behaviour, and automate surveillance across applications from retail analytics to traffic management.

Open-Source AI Model
Generative AI

An open-source AI model is an artificial intelligence model whose underlying code, architecture, and trained weights are made publicly available for anyone to use, modify, and deploy. This gives businesses the freedom to run AI on their own infrastructure, customize models for specific needs, and avoid vendor lock-in.

Optical Flow
Computer Vision

Optical Flow is a computer vision technique that tracks the apparent motion of objects, surfaces, and edges between consecutive video frames. It calculates the direction and speed of movement at each pixel, enabling applications such as video stabilisation, motion detection, traffic analysis, and autonomous navigation.

Output Filtering
AI Safety & Security

Output Filtering is the process of screening, evaluating, and potentially modifying or blocking AI-generated content before it reaches end users, ensuring that harmful, inappropriate, inaccurate, or policy-violating material is intercepted and handled before it can cause damage.

Overfitting
Machine Learning

Overfitting is a common machine learning problem where a model learns the noise and specific details of training data too well, resulting in excellent performance on training data but poor generalization to new, unseen data, effectively memorizing rather than learning.

P

PEFT (Parameter-Efficient Fine-Tuning)
Generative AI

PEFT (Parameter-Efficient Fine-Tuning) is a collection of techniques for customizing large AI models to specific business needs while modifying only a small fraction of the model's parameters, dramatically reducing the computational cost, time, and data requirements compared to traditional full fine-tuning.

Panoptic Segmentation
Computer Vision

Panoptic Segmentation is a comprehensive computer vision technique that classifies every pixel in an image into either a "thing" (countable objects like people, cars, and products) or "stuff" (uncountable regions like sky, road, and vegetation). It provides complete scene understanding by combining instance segmentation and semantic segmentation into a single unified output.

Paraphrase Detection
Natural Language Processing

Paraphrase Detection is an NLP technique that determines whether two pieces of text convey the same meaning using different words or sentence structures, enabling applications like duplicate content detection, FAQ matching, plagiarism identification, and intelligent search that understands intent beyond exact keyword matches.

Path Planning
Robotics & Automation

Path Planning is the computational process of determining an optimal or near-optimal route for a robot or autonomous vehicle to travel from one point to another while avoiding obstacles and satisfying constraints. It is a foundational capability for mobile robots, drones, autonomous vehicles, and robotic arms operating in warehouses, factories, and outdoor environments.

Personalization Engine
Business Applications

A Personalization Engine is an AI-powered system that analyses user behaviour, preferences, and contextual data to deliver tailored content, product recommendations, and experiences to individual users in real time. It enables businesses to increase engagement, conversion rates, and customer loyalty through relevant, customised interactions.

Phoneme Recognition
Speech & Audio AI

Phoneme Recognition is the AI process of identifying individual speech sounds, or phonemes, within audio input. It serves as a foundational component of speech recognition systems, breaking continuous speech into its smallest meaningful sound units to enable accurate transcription and language understanding.

Pick and Place Automation
Robotics & Automation

Pick and Place Automation refers to robotic systems that use AI and computer vision to identify, grasp, move, and precisely position objects as part of manufacturing, packaging, or logistics operations. These systems combine robot arms, intelligent grippers, and vision systems to automate one of the most common and labour-intensive tasks in industry.

Planning Agent
Agentic AI

A Planning Agent is an AI agent that creates, manages, and executes multi-step plans to achieve complex goals, dynamically breaking down high-level objectives into ordered sequences of actions, adapting plans when circumstances change, and coordinating resources to reach the desired outcome.

Point Cloud Processing
Computer Vision

Point Cloud Processing is the analysis and manipulation of three-dimensional data sets composed of millions of individual spatial points captured by LiDAR, depth cameras, or photogrammetry. It enables businesses to create 3D models, detect objects, measure volumes, and monitor changes in physical environments with high precision.

Pose Estimation
Computer Vision

Pose Estimation is a computer vision technique that detects and tracks human body positions and joint locations from images or video. It enables applications such as workplace safety monitoring, fitness coaching, and gesture-based interfaces by mapping the skeletal structure of people in real time.

Precision and Recall
Machine Learning

Precision and Recall are complementary metrics for evaluating classification models, where Precision measures the accuracy of positive predictions (how many flagged items are truly positive) and Recall measures completeness (how many actual positives were successfully detected), together providing a balanced view of model performance.
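
The calculation itself is simple; the sketch below uses invented counts from a fraud-detection style classifier.

```python
# Precision and recall from raw prediction counts (invented numbers for a
# fraud-detection style classifier).
true_positives = 80    # fraud cases the model correctly flagged
false_positives = 20   # legitimate cases wrongly flagged as fraud
false_negatives = 40   # fraud cases the model missed

precision = true_positives / (true_positives + false_positives)  # 0.80
recall = true_positives / (true_positives + false_negatives)     # ~0.67

print(f"Precision: {precision:.2f}  Recall: {recall:.2f}")
```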

Predictive Analytics
Data & Analytics

Predictive Analytics is the practice of using historical data, statistical algorithms, and machine learning techniques to forecast future outcomes and trends. It enables organisations to anticipate what is likely to happen next, moving beyond understanding past performance to proactively preparing for future events and opportunities.

Predictive Maintenance
Business Applications

Predictive Maintenance is an AI-driven approach that uses sensor data, machine learning, and analytics to predict when equipment or machinery is likely to fail, allowing businesses to perform maintenance proactively. It reduces unplanned downtime, extends asset lifespan, and lowers maintenance costs compared to reactive or scheduled maintenance strategies.

Predictive Maintenance (Robotics)
Robotics & Automation

Predictive Maintenance (Robotics) is the application of AI and sensor data analysis to forecast when robotic systems will need servicing or component replacement before failures occur. It shifts maintenance from fixed schedules or reactive repairs to data-driven interventions that minimise downtime and extend equipment life.

Prescriptive Analytics
Data & Analytics

Prescriptive Analytics is the most advanced form of business analytics that goes beyond predicting what will happen to recommending specific actions to take. It uses optimisation algorithms, simulation, and decision science to evaluate multiple possible courses of action and suggest the best option to achieve a desired business outcome.

Privacy-Preserving AI
AI Safety & Security

Privacy-Preserving AI is a collection of techniques and approaches that enable organisations to train, deploy, and use AI systems while protecting the privacy of the individuals whose data is involved, ensuring that sensitive personal information is not exposed, leaked, or misused during any stage of the AI lifecycle.

Process Mining
Business Applications

Process Mining is an AI-powered analytical technique that uses event log data from business systems to automatically discover, visualise, and analyse how business processes actually operate. It reveals the difference between how processes are designed to work and how they work in reality, identifying bottlenecks, inefficiencies, and compliance violations.

Prompt Caching
Generative AI

Prompt Caching is an API optimization technique that stores and reuses the processed form of repeated prompt content, reducing both cost and latency for AI applications that send the same instructions, system prompts, or context with every request. This allows businesses to save up to 90 percent on repetitive API calls while getting faster responses.

Prompt Engineering
Generative AI

Prompt engineering is the practice of crafting effective instructions and inputs for AI models to produce accurate, relevant, and useful outputs. It is a critical skill for businesses seeking to maximize the value of generative AI tools without requiring deep technical expertise.

Prompt Injection
AI Safety & Security

Prompt Injection is a security attack where malicious input is crafted to override or manipulate the instructions given to a large language model, causing it to ignore its intended behaviour and follow the attacker's commands instead. It is one of the most significant security challenges facing AI-powered applications today.

Prompt Leaking
AI Safety & Security

Prompt Leaking is a security vulnerability where attackers extract hidden system instructions, proprietary prompts, or confidential configuration details from an AI system by crafting specific inputs designed to make the AI reveal its underlying instructions.

Prompt Management
AI Operations

Prompt Management is the discipline of versioning, testing, and optimising the text instructions sent to AI models across an organisation. It treats prompts as first-class software artifacts with formal review cycles, performance benchmarks, and collaborative workflows so that AI outputs remain consistent, high-quality, and aligned with business objectives.

Prompt Template
Generative AI

Prompt Template is a pre-designed, reusable instruction format for AI models that includes placeholder variables for customization, enabling teams to get consistent, high-quality AI outputs across the organization without each user needing to craft prompts from scratch.
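
A minimal sketch of the pattern in Python; the template wording and the placeholder names (company, max_words, ticket_text) are illustrative choices, not a recommended prompt.

```python
# A reusable prompt template with placeholder variables. The template text
# and variable names are illustrative only.
TEMPLATE = (
    "You are a customer support assistant for {company}.\n"
    "Summarise the following ticket in {max_words} words or fewer and "
    "classify its urgency as low, medium, or high.\n\n"
    "Ticket: {ticket_text}"
)

prompt = TEMPLATE.format(
    company="Acme Retail",
    max_words=50,
    ticket_text="My order arrived damaged and I need a replacement before Friday.",
)
print(prompt)  # every team fills the same template, so outputs stay consistent
```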

Proof of Concept
AI Strategy

A Proof of Concept is a small-scale, time-limited project designed to validate whether a proposed AI solution can technically work and deliver the expected results, typically completed in four to eight weeks before committing to a full-scale implementation.

Prosody
Speech & Audio AI

Prosody is the pattern of rhythm, stress, intonation, and timing in spoken language that conveys meaning beyond the words themselves. In AI, prosody analysis and generation are essential for creating natural-sounding speech synthesis and for understanding the emotional and contextual nuances of human communication.

Proxy Discrimination
AI Governance & Ethics

Proxy Discrimination is a form of AI bias where an algorithm produces discriminatory outcomes against protected groups by using seemingly neutral data features that are strongly correlated with characteristics such as race, gender, age, or religion, even when those protected characteristics are not directly included in the model.

R

RAG
Generative AI

RAG (Retrieval-Augmented Generation) is a technique that enhances AI model outputs by retrieving relevant information from external knowledge sources before generating a response. RAG allows businesses to ground AI answers in their own data, reducing hallucinations and keeping responses current without retraining the model.
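
The sketch below shows the shape of the pattern with a deliberately simplified retriever (keyword overlap instead of embeddings and a vector database); the documents and question are invented.

```python
# The RAG pattern in miniature: retrieve relevant passages first, then build
# a grounded prompt for the language model. The keyword-overlap retriever is
# a deliberate simplification; real systems use embeddings and a vector store.
documents = [
    "Refunds are processed within 5 business days of receiving the item.",
    "Our warehouse operates Monday to Saturday, 8am to 6pm.",
    "Loyalty members earn 2 points for every dollar spent.",
]

def retrieve(question, docs, top_k=1):
    words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

question = "How long do refunds take?"
context = "\n".join(retrieve(question, documents))
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
print(prompt)  # this grounded prompt is what gets sent to the language model
```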

RLHF (Reinforcement Learning from Human Feedback)
AI Safety & Security

RLHF is a machine learning training technique that uses human preference signals to fine-tune AI models, helping them produce outputs that are more helpful, accurate, and aligned with human values. It is a core method behind the safety and usability of modern large language models.

RPA
Business Applications

RPA (Robotic Process Automation) is a technology that uses software robots to automate repetitive, rule-based tasks typically performed by humans, such as data entry, invoice processing, and report generation. RPA bots interact with applications the same way a person would, following predefined workflows to complete tasks faster and with fewer errors.

Random Forest
Machine Learning

Random Forest is a popular machine learning algorithm that builds many decision trees on random subsets of data and combines their predictions through voting or averaging, delivering highly accurate and robust results that are resistant to overfitting.

ReAct Pattern
Agentic AI

The ReAct Pattern is an AI reasoning framework that combines Reasoning and Acting in an interleaved loop, where the AI model thinks about what to do, takes an action, observes the result, and then reasons again about the next step, enabling more reliable and transparent problem-solving.

Real-Time Analytics
Data & Analytics

Real-Time Analytics is the practice of analysing data immediately as it is generated or received, enabling organisations to monitor conditions, detect events, and make decisions within seconds or minutes rather than hours or days. It combines stream processing, in-memory computing, and live dashboards to deliver instant insights.

Real-Time Object Detection
Computer Vision

Real-Time Object Detection is a computer vision capability that identifies and locates objects in live video streams with minimal delay, typically processing 15 to 60 or more frames per second. It enables businesses to automate monitoring, trigger immediate responses to events, and make instant decisions based on visual information in applications from manufacturing quality control to retail analytics and security surveillance.

Real-Time Translation
Speech & Audio AI

Real-Time Translation is an AI technology that instantly converts spoken language from one language to another, enabling live cross-language communication. It combines speech recognition, machine translation, and text-to-speech to allow people speaking different languages to converse naturally with minimal delay.

Reasoning Model
Generative AI

A Reasoning Model is a type of AI model designed to think step-by-step before producing an answer, breaking complex problems into logical stages rather than responding instantly. Models like OpenAI o1, o3, and DeepSeek R1 use internal chain-of-thought reasoning to deliver more accurate and reliable answers for challenging business and technical questions.

Recommendation Engine
Business Applications

A Recommendation Engine is an AI system that analyses user behaviour, preferences, and contextual data to suggest relevant products, content, or services to individual users. It powers the personalised experiences consumers encounter on e-commerce sites, streaming platforms, and content services, driving engagement, conversion rates, and customer satisfaction.

Recurrent Neural Network (RNN)
Machine Learning

A Recurrent Neural Network (RNN) is a type of neural network designed to process sequential data by maintaining an internal memory state, enabling it to recognize patterns in time series, text, speech, and other ordered data where context from previous steps influences current predictions.

Reflection (AI)
Agentic AI

Reflection (AI) is a technique where an AI agent evaluates its own outputs, identifies errors or areas for improvement, and iteratively refines its work to produce higher-quality results without requiring external feedback.

Regression
Machine Learning

Regression is a supervised machine learning task where the model predicts a continuous numerical value based on input features, enabling businesses to forecast quantities like revenue, demand, prices, customer lifetime value, and other measurable outcomes.

Reinforcement Learning
Machine Learning

Reinforcement Learning is a machine learning paradigm where an agent learns optimal behavior through trial and error, receiving rewards for good actions and penalties for bad ones, making it ideal for sequential decision-making tasks like robotics, game playing, and dynamic resource optimization.

Relation Extraction
Natural Language Processing

Relation Extraction is an NLP technique that identifies and classifies the semantic relationships between entities mentioned in text, such as people, organizations, locations, and events, enabling businesses to automatically map connections and build structured knowledge from unstructured documents.

Reranking
AI Infrastructure

Reranking is an AI-powered technique that re-scores and reorders search results after initial retrieval, using specialised models to evaluate the relevance of each result to the original query with much greater accuracy, significantly improving the quality of information provided to large language models in RAG systems.

Responsible AI
AI Governance & Ethics

Responsible AI is the practice of designing, building, and deploying artificial intelligence systems in ways that are ethical, transparent, fair, and accountable. It encompasses governance frameworks, technical safeguards, and organisational processes that ensure AI technologies create positive outcomes while minimising risks to individuals and society.

Responsible AI Strategy
AI Strategy

Responsible AI Strategy is an organizational framework that integrates ethical principles, fairness, transparency, accountability, and societal impact considerations into every stage of AI development and deployment to ensure that AI systems are trustworthy and aligned with stakeholder values.

Responsible Disclosure (AI)
AI Safety & Security

Responsible Disclosure (AI) is the ethical practice of reporting discovered vulnerabilities, safety issues, or harmful behaviours in AI systems to the affected organisation in a structured and confidential manner, giving them reasonable time to address the problem before any public announcement.

Retrieval-Augmented Agent
Agentic AI

Retrieval-Augmented Agent is an AI agent that dynamically searches and retrieves relevant information from external knowledge sources during its reasoning process, enabling it to provide accurate, up-to-date, and contextually grounded responses rather than relying solely on its training data.

Revenue Intelligence
Business Applications

Revenue Intelligence is the use of AI and machine learning to automatically capture, analyse, and derive insights from sales activity data, customer interactions, and market signals. It helps businesses forecast revenue more accurately, identify pipeline risks, and optimise their go-to-market strategies.

Right to Explanation
AI Governance & Ethics

The Right to Explanation is a legal and ethical concept that gives individuals the right to receive a meaningful explanation of how an AI or automated system arrived at a decision that significantly affects them, enabling them to understand, challenge, and seek redress for those decisions.

Robot Calibration
Robotics & Automation

Robot Calibration is the process of measuring and correcting the differences between a robot's actual physical parameters and its theoretical design specifications to achieve maximum positioning accuracy. It ensures that the robot moves exactly where it is commanded to go, which is essential for precision manufacturing and multi-robot coordination.

Robot Operating System (ROS)
Robotics & Automation

The Robot Operating System (ROS) is an open-source framework that provides libraries, tools, and conventions for developing robot software. Despite its name, it is not an operating system but a middleware layer that simplifies building complex robotic applications by offering standardised communication, hardware abstraction, and a vast ecosystem of reusable software packages.

Robot Vision
Robotics & Automation

Robot Vision is the field of artificial intelligence that enables robots to perceive, interpret, and understand visual information from their environment using cameras and image processing algorithms. It allows robots to identify objects, navigate spaces, inspect products, and adapt their actions based on what they see.

Robot-as-a-Service (RaaS)
Robotics & Automation

Robot-as-a-Service (RaaS) is a subscription-based business model that provides access to robotic automation through regular payments rather than large upfront capital purchases. It includes the robot hardware, software, maintenance, and support as a bundled service, making automation accessible to businesses that cannot or prefer not to make large capital investments.

S

SLAM (Simultaneous Localization and Mapping)
Robotics & Automation

SLAM, or Simultaneous Localization and Mapping, is a computational technique that enables robots and autonomous vehicles to build a map of an unknown environment while simultaneously tracking their own location within it. It is a foundational capability for any mobile robot, autonomous vehicle, or drone that needs to navigate without pre-existing maps.

Safety-Critical Systems
Robotics & Automation

Safety-Critical Systems are computer-controlled systems where a malfunction or failure could result in death, serious injury, significant environmental damage, or major financial loss. In robotics and automation, these systems require rigorous engineering practices including formal verification, redundancy, and certification to ensure they operate reliably and safely under all conditions.

Satellite Image Analysis
Computer Vision

Satellite Image Analysis is the application of AI and computer vision to process and interpret earth observation imagery from satellites and aerial platforms. It enables businesses and governments to monitor environmental changes, assess agricultural conditions, plan urban development, manage supply chains, and make data-driven decisions about physical assets and natural resources across large geographic areas.

Scene Understanding
Computer Vision

Scene Understanding is a computer vision capability that enables AI systems to comprehend the overall context, layout, and relationships within images or video. It goes beyond identifying individual objects to interpret what is happening in a scene, supporting applications like autonomous navigation and smart retail.

Self-Improving Agent
Agentic AI

Self-Improving Agent is an AI agent that automatically learns from its past performance, user feedback, and operational outcomes to enhance its own capabilities over time without requiring manual retraining or reprogramming by developers.

Semantic Search
Generative AI

Semantic search is an AI-powered approach to search that understands the meaning and intent behind a query rather than simply matching keywords. It uses embeddings and natural language understanding to deliver more relevant results, even when the exact words in the query do not appear in the matching documents.

Semantic Segmentation
Computer Vision

Semantic Segmentation is a computer vision technique that classifies every pixel in an image into a predefined category, enabling machines to understand the full composition of a scene. It powers applications from autonomous navigation and urban planning to agricultural monitoring, giving businesses granular visual understanding far beyond simple object detection.

Semantic Similarity
Natural Language Processing

Semantic Similarity is an NLP technique that measures how close in meaning two pieces of text are, regardless of whether they share the same words, enabling applications like intelligent search, content recommendation, duplicate detection, and question-answer matching that understand intent rather than relying on exact keyword overlap.
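
Under the hood this usually means comparing embedding vectors, most often with cosine similarity; the three-dimensional vectors below are made-up stand-ins for real embeddings, which have hundreds or thousands of dimensions.

```python
# Semantic similarity as cosine similarity between embedding vectors
# (toy three-dimensional vectors; real embeddings are much larger).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

refund_request = [0.9, 0.1, 0.3]   # "I want my money back"
return_policy = [0.8, 0.2, 0.4]    # "How do I return an item?"
store_hours = [0.1, 0.9, 0.2]      # "What time do you open?"

print(cosine_similarity(refund_request, return_policy))  # high: similar meaning
print(cosine_similarity(refund_request, store_hours))    # low: unrelated topics
```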

Semi-Supervised Learning
Machine Learning

Semi-Supervised Learning is a machine learning approach that trains models using a small amount of labeled data combined with a large amount of unlabeled data, significantly reducing the cost and effort of data labeling while still achieving strong predictive performance.

Sensor Fusion
Robotics & Automation

Sensor Fusion is the process of combining data from multiple sensors to produce more accurate, reliable, and complete information than any single sensor could provide alone. It is a foundational technology for autonomous vehicles, robotics, and smart manufacturing systems, enabling machines to perceive and respond to complex environments.

Sentiment Analysis
Natural Language Processing

Sentiment Analysis is an NLP technique that automatically determines the emotional tone behind text — whether positive, negative, or neutral — enabling businesses to understand customer opinions, monitor brand perception, and track market sentiment at scale across reviews, social media, and surveys.

Sentiment Monitoring
Business Applications

Sentiment Monitoring is the continuous, real-time tracking and analysis of opinions, emotions, and attitudes expressed about a brand, product, or topic across digital channels such as social media, news, reviews, and forums. It uses natural language processing to classify mentions as positive, negative, or neutral, enabling businesses to respond quickly to shifts in public perception.

Serverless AI
AI Infrastructure

Serverless AI is an approach to running artificial intelligence workloads where the cloud provider automatically manages all underlying infrastructure, allowing organisations to run AI models without provisioning, scaling, or maintaining servers, paying only for actual compute time used.

Shadow AI
AI Operations

Shadow AI is the use of artificial intelligence tools and applications by employees without the knowledge, approval, or oversight of IT departments and organisational leadership. It creates unmanaged risks around data security, compliance, and quality while also signalling unmet needs that the organisation should address through its official AI strategy.

Simulation-to-Real Transfer (Sim-to-Real)
Robotics & Automation

Simulation-to-Real Transfer, commonly known as Sim-to-Real, is the process of training robots or AI agents in virtual simulated environments and then deploying the learned behaviours on physical robots in the real world. This approach dramatically reduces training time, cost, and risk by allowing thousands of hours of practice in simulation before any physical deployment.

Singing Voice Synthesis
Speech & Audio AI

Singing Voice Synthesis is an AI technology that generates realistic singing voices from musical scores, lyrics, and style parameters. It enables the creation of vocal performances without a human singer, opening new possibilities for music production, content creation, and entertainment across creative industries.

Slot Filling
Natural Language Processing

Slot Filling is an NLP technique that extracts specific data values from user utterances in conversational AI systems, identifying key parameters like dates, locations, product names, and quantities needed to fulfill a user request or complete a task.
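
The output of slot filling is typically a small structured record like the one sketched below; the utterance, slot names, and values are invented, and the hand-written dictionary stands in for what a real NLU model would produce.

```python
# What slot filling produces for a single utterance: a structured record of
# the parameters needed to complete the request. The values below are
# hand-written stand-ins for a real NLU model's output.
utterance = "Book a flight from Singapore to Bangkok on 12 March for 2 people"

filled_slots = {
    "intent": "book_flight",
    "origin": "Singapore",
    "destination": "Bangkok",
    "date": "12 March",
    "passengers": 2,
}
print(filled_slots)  # a booking system can act on these values directly
```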

Small Language Model (SLM)
Generative AI

A Small Language Model (SLM) is a compact AI model, typically with fewer than 10 billion parameters, designed to run efficiently on devices like laptops, smartphones, and edge servers without requiring expensive cloud infrastructure. Models like Microsoft Phi, Google Gemma, and small Llama variants deliver practical AI capabilities at a fraction of the cost of large language models.

Smart Contract
Business Applications

A Smart Contract is a self-executing digital agreement where the terms and conditions are written in code and stored on a blockchain. When predefined conditions are met, the contract automatically enforces the agreed actions, such as releasing payment or transferring assets, without requiring intermediaries.

Sound Event Detection
Speech & Audio AI

Sound Event Detection (SED) is an AI technology that identifies, classifies, and timestamps specific sounds within continuous audio streams, determining both what sounds are present and precisely when they occur. It enables automated monitoring for security, industrial safety, environmental protection, and smart city applications.

Speaker Diarization
Speech & Audio AI

Speaker Diarization is an AI technology that automatically identifies and segments audio recordings by speaker, answering the question "who spoke when." It analyses voice characteristics to distinguish between different speakers in a conversation, enabling structured transcripts for meetings, calls, and interviews.

Speaker Recognition
Speech & Audio AI

Speaker Recognition is an AI technology that identifies or verifies a person based on the unique characteristics of their voice. It analyses vocal patterns including pitch, cadence, and tone to determine who is speaking, enabling applications like voice-based authentication, personalised customer service, and security systems.

Speech Enhancement
Speech & Audio AI

Speech Enhancement is a collection of AI techniques that improve the quality and clarity of audio recordings by removing background noise, reducing echo, compensating for poor microphone quality, and isolating the target speaker's voice. It ensures that speech is clear and intelligible for both human listeners and downstream AI systems.

Speech Recognition
Natural Language Processing

Speech Recognition is an AI technology that converts spoken language into written text, enabling voice-controlled applications, automated transcription, voice search, and hands-free interaction with software systems across multiple languages and accents.

Speech Synthesis Markup Language (SSML)
Speech & Audio AI

Speech Synthesis Markup Language (SSML) is an XML-based markup language that provides detailed control over how text-to-speech systems render spoken output. It allows developers to specify pronunciation, prosody, pauses, emphasis, speaking rate, and other speech characteristics that plain text alone cannot convey.
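
The short example below (held in a Python string) uses a handful of common SSML tags; exact tag support and say-as formats vary between text-to-speech providers.

```python
# A small SSML document, held as a Python string, that would be sent to a
# text-to-speech engine instead of plain text. Tag support varies by provider.
ssml = """<speak>
  Your order total is <say-as interpret-as="cardinal">42</say-as> dollars.
  <break time="500ms"/>
  <emphasis level="strong">Thank you</emphasis> for shopping with us.
  <prosody rate="slow">Please keep your receipt for returns.</prosody>
</speak>"""

print(ssml)
```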

Stream Processing
Data & Analytics

Stream Processing is a data processing paradigm that analyses and acts on continuous flows of data in real time or near-real time, rather than storing data first and processing it in batches. It enables organisations to detect events, trigger actions, and generate insights as data arrives.

Streaming Inference
AI Infrastructure

Streaming Inference is the process of running AI predictions continuously on data as it arrives in real time, enabling immediate analysis and decision-making on live data streams such as sensor readings, financial transactions, user interactions, and social media feeds.

Structured Output
Agentic AI

Structured Output is the capability of an AI model to generate responses in predefined, machine-readable formats such as JSON, XML, or typed schemas, enabling reliable integration with downstream software systems, databases, and automated workflows.
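
A minimal sketch of the receiving side is shown below; the expected fields, the example model response, and the validation step are all illustrative assumptions.

```python
# Validating structured output before it reaches downstream systems.
# The expected fields and the example model response are illustrative.
import json

EXPECTED_FIELDS = {"customer_name": str, "intent": str, "urgency": str}

model_response = '{"customer_name": "Tan Wei Ling", "intent": "refund", "urgency": "high"}'

record = json.loads(model_response)                 # fails loudly if not valid JSON
for field, expected_type in EXPECTED_FIELDS.items():
    if not isinstance(record.get(field), expected_type):
        raise ValueError(f"Missing or invalid field: {field}")

print(record["intent"])  # now safe to hand to a CRM, database, or workflow
```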

Style Transfer
Computer Vision

Style Transfer is a computer vision technique that applies the visual style of one image, such as an artistic painting, to the content of another image using neural networks. It enables businesses to create distinctive visual content, automate design workflows, build interactive customer experiences, and generate consistent brand aesthetics across marketing materials.

Supervised Learning
Machine Learning

Supervised Learning is a machine learning approach where algorithms are trained on labeled datasets containing input-output pairs, enabling the model to learn the mapping between inputs and correct answers so it can make accurate predictions on new, unseen data.

Supervisor Pattern
Agentic AI

Supervisor Pattern is a multi-agent architecture where a single managing agent oversees, delegates tasks to, and coordinates the work of multiple specialized worker agents, ensuring the overall objective is achieved efficiently and correctly.

Supply Chain Optimization
Business Applications

Supply Chain Optimization is the application of AI and advanced analytics to improve efficiency, reduce costs, and enhance resilience across the entire supply chain, from procurement and production to logistics and delivery. It uses data-driven models to forecast demand, manage inventory, optimise routes, and identify risks before they disrupt operations.

Support Vector Machine
Machine Learning

A Support Vector Machine (SVM) is a machine learning algorithm that classifies data by finding the optimal boundary -- called a hyperplane -- that best separates different categories, maximizing the margin between groups to achieve robust and reliable classification results.

Surgical Robot
Robotics & Automation

Surgical Robot is a robotic system that assists surgeons in performing minimally invasive procedures with enhanced precision, control, and visualisation. These systems translate the surgeon's hand movements into precise micro-movements of surgical instruments, enabling complex operations through small incisions with improved patient outcomes.

Swarm Intelligence (AI)
Agentic AI

Swarm Intelligence (AI) is an approach where multiple decentralized AI agents work together collectively, mimicking the cooperative behavior seen in nature — such as ant colonies or bird flocks — to solve complex problems that no single agent could handle alone.

Swarm Robotics
Robotics & Automation

Swarm Robotics is a field of robotics in which large numbers of relatively simple robots coordinate autonomously to accomplish tasks collectively, inspired by the behaviour of social insects like ants and bees. It enables scalable, resilient automation for applications such as warehouse logistics, agriculture, and environmental monitoring.

Synthetic Data
Data & Analytics

Synthetic Data is artificially generated data that mimics the statistical properties and patterns of real-world data without containing actual records from real individuals or events. It is created using algorithms, simulations, or generative AI models and is used to train machine learning models, test systems, and enable analytics when real data is unavailable, insufficient, or too sensitive to use.

Synthetic Data Generation
Generative AI

Synthetic Data Generation is the process of using AI to create artificial datasets that statistically resemble real-world data but contain no actual personal or proprietary information. Businesses use synthetic data to train AI models, test software systems, and conduct analysis when real data is insufficient, expensive to collect, or restricted by privacy regulations.

Synthetic Media Detection
AI Safety & Security

Synthetic Media Detection is the use of specialised tools and techniques to identify AI-generated or AI-manipulated images, videos, audio recordings, and text, distinguishing them from authentic content created by humans.

System Prompt
Generative AI

System Prompt is a set of hidden background instructions provided to an AI model that defines its behavior, personality, capabilities, and constraints before any user interaction begins, functioning as the foundational programming that shapes how the AI responds to all subsequent inputs.

System Prompt Protection
AI Safety & Security

System Prompt Protection is the set of techniques and practices used to secure the hidden instructions that define an AI system's behaviour, preventing unauthorised users from extracting, viewing, or manipulating these instructions to compromise the system's intended operation.

T

TPU (Tensor Processing Unit)
AI Infrastructure

A TPU, or Tensor Processing Unit, is a custom-designed chip built by Google specifically to accelerate machine learning and AI workloads, offering high performance and cost efficiency for training and running large-scale AI models, particularly within the Google Cloud ecosystem.

Talent Intelligence
Business Applications

Talent Intelligence is the use of AI and data analytics to provide deep insights into workforce capabilities, talent market trends, skills gaps, and competitive labour dynamics. It helps organisations make data-driven decisions about hiring, workforce planning, employee development, and organisational design by analysing internal employee data alongside external labour market information.

Task Decomposition
Agentic AI

Task Decomposition is the process of breaking down a complex task into smaller, manageable sub-tasks that an AI agent can plan, prioritize, and execute individually, enabling the agent to tackle problems that would be too complex to solve in a single step.

Task Planning (Robotics)
Robotics & Automation

Task Planning (Robotics) is the AI discipline of determining the optimal sequence of actions a robot should perform to achieve a given goal. It involves breaking complex objectives into ordered steps, allocating resources, handling dependencies, and adapting plans when unexpected situations arise during execution.

Technology Due Diligence
AI Strategy

Technology Due Diligence is the systematic evaluation of a company's AI and technology assets, capabilities, architecture, and risks conducted during mergers, acquisitions, investments, or partnerships to assess the true value and viability of its technology stack.

Teleoperation
Robotics & Automation

Teleoperation is the remote control of a robot or machine by a human operator from a distance, using communication links to transmit commands and receive sensory feedback. It enables skilled operators to perform tasks in hazardous, remote, or inaccessible environments, and serves as a critical fallback when autonomous systems encounter situations beyond their capabilities.

Temperature (AI)
Generative AI

Temperature is a parameter in AI model settings that controls the randomness and creativity of outputs: lower values produce more predictable, focused responses, while higher values generate more diverse and creative but potentially less accurate results.
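
To make the effect concrete, here is a minimal NumPy sketch of how temperature rescales a model's next-word probabilities before sampling; the candidate scores are invented for illustration.

```python
import numpy as np

def apply_temperature(logits, temperature):
    """Turn raw model scores (logits) into probabilities at a given temperature."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())   # subtract the max for numerical stability
    return exp / exp.sum()

# Hypothetical scores for four candidate next words
logits = [4.0, 3.0, 2.0, 1.0]

print(apply_temperature(logits, 0.2))  # sharply peaked: almost always picks the top word
print(apply_temperature(logits, 1.0))  # moderate spread
print(apply_temperature(logits, 2.0))  # flatter: more variety, more risk of odd choices
```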

Test-Time Compute
Generative AI

Test-Time Compute is an AI technique that allocates additional computational resources when a model is generating an answer rather than during training, allowing the model to spend more time thinking through difficult problems. This approach enables more accurate responses on complex tasks by scaling compute dynamically based on question difficulty.

Text Annotation
Natural Language Processing

Text Annotation is the process of labeling or tagging text data with structured metadata to train and evaluate Natural Language Processing models, serving as the essential bridge between raw text and machine learning systems that need labeled examples to learn patterns for tasks like classification, entity recognition, and sentiment analysis.

Text Classification
Natural Language Processing

Text Classification is an NLP technique that automatically assigns predefined categories or labels to text documents, enabling businesses to organize emails, route support tickets, categorize feedback, and sort documents at scale without manual effort.
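
For readers curious how small this can be in practice, here is a minimal sketch using scikit-learn; the example tickets and labels are made up, and a real deployment would need far more training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: support tickets and their routing category
tickets = [
    "I was charged twice for my subscription",
    "The app crashes when I open the dashboard",
    "How do I add a new user to my account?",
    "Refund has not arrived after two weeks",
    "Error 500 when uploading a report",
    "Can I upgrade my plan mid-month?",
]
labels = ["billing", "technical", "account", "billing", "technical", "account"]

# Train a simple classifier that turns text into word-frequency features
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

print(model.predict(["I keep getting an error message on login"]))  # likely 'technical'
```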

Text Mining
Natural Language Processing

Text Mining is the process of using AI and statistical techniques to extract meaningful patterns, trends, and actionable insights from large collections of unstructured text data, transforming raw documents, emails, and social media posts into structured business intelligence.

Text Preprocessing
Natural Language Processing

Text Preprocessing is the foundational step in any Natural Language Processing pipeline that transforms raw, unstructured text into a clean, standardized format suitable for analysis by removing noise, normalizing variations, and structuring data for downstream NLP tasks.
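
A minimal sketch of typical preprocessing steps in plain Python is shown below; real pipelines vary by task, and the stopword list here is an illustrative fragment rather than a standard one.

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}  # illustrative subset

def preprocess(text):
    text = text.lower()                        # normalise case
    text = re.sub(r"[^a-z0-9\s]", " ", text)   # strip punctuation and symbols
    tokens = text.split()                      # split on whitespace
    return [t for t in tokens if t not in STOPWORDS]  # drop common filler words

print(preprocess("The Q3 report is ready, please review the figures!"))
# ['q3', 'report', 'ready', 'please', 'review', 'figures']
```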

Text Summarization
Natural Language Processing

Text Summarization is an NLP technique that automatically condenses long documents, articles, or conversations into shorter versions that capture the key information and main points, helping businesses process large volumes of text efficiently and make faster decisions.
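
Modern systems typically rely on large language models, but as a rough illustration of the older extractive approach, the sketch below scores sentences by word frequency and keeps the highest-scoring ones; it is indicative only.

```python
import re
from collections import Counter

def extractive_summary(text, max_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        # A sentence scores highly if it contains the document's most frequent words
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Return the selected sentences in their original order
    return " ".join(s for s in sentences if s in top)

report = ("Revenue grew 12% this quarter. Growth was driven by the new subscription tier. "
          "Marketing costs rose slightly. The subscription tier now accounts for a third of revenue.")
print(extractive_summary(report))
```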

Text-to-Image AI
Generative AI

Text-to-Image AI is a category of generative artificial intelligence that creates visual images from written text descriptions, also known as prompts. It enables businesses to generate marketing visuals, product concepts, social media graphics, and design prototypes without traditional graphic design expertise or expensive photo shoots.

Text-to-Speech (TTS)
Speech & Audio AI

Text-to-Speech (TTS) is an AI technology that converts written text into natural-sounding spoken audio. Modern TTS systems use deep learning to produce voices that closely mimic human speech patterns, intonation, and emotion, enabling applications from customer service automation to accessibility tools and content creation.

Text-to-Video AI
Generative AI

Text-to-Video AI is a category of generative artificial intelligence that creates video content directly from written text descriptions, enabling businesses to produce marketing videos, product demonstrations, training materials, and social media content without traditional video production equipment or expertise.

Time Series Analysis
Data & Analytics

Time Series Analysis is a statistical method for analysing data points collected or recorded at successive, equally spaced intervals over time. It enables organisations to identify trends, seasonal patterns, cyclical behaviours, and anomalies in time-ordered data, and to forecast future values based on historical patterns.
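
As a small illustration with pandas, the sketch below computes a rolling average to surface the underlying trend in noisy monthly figures; the sales numbers are invented.

```python
import pandas as pd

# Hypothetical monthly sales figures
sales = pd.Series(
    [102, 98, 110, 120, 115, 130, 128, 140, 135, 150, 149, 162],
    index=pd.date_range("2024-01-01", periods=12, freq="MS"),
)

# A 3-month rolling average smooths out month-to-month noise to reveal the trend
trend = sales.rolling(window=3).mean()

# A simple comparison: growth versus the value three months earlier
growth = sales.pct_change(periods=3)

print(pd.DataFrame({"sales": sales, "3m_trend": trend, "growth_vs_3m_ago": growth}).round(2))
```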

Token
Generative AI

In AI, a token is the basic unit of text that a language model processes. Tokens can be whole words, parts of words, or punctuation marks. Understanding tokens is essential for managing AI costs, context window limits, and performance, as most AI services charge and measure capacity in tokens.
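
The arithmetic below shows, under illustrative rather than actual prices, how token counts translate directly into cost, which is why prompt length matters commercially.

```python
# Illustrative pricing only; real per-token prices vary by provider and model
PRICE_PER_1K_INPUT_TOKENS = 0.003   # USD, hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # USD, hypothetical

def estimate_monthly_cost(input_tokens, output_tokens, requests_per_month):
    per_request = (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
                + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return per_request * requests_per_month

# A 1,500-token prompt with a 500-token answer, run 100,000 times a month
print(f"${estimate_monthly_cost(1500, 500, 100_000):,.2f} per month")  # $1,200.00
```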

Tokenization
Natural Language Processing

Tokenization is the foundational NLP process of breaking text into smaller units called tokens — such as words, subwords, or characters — which enables AI systems to process and understand language by converting human-readable text into a format that machine learning models can analyze.

Tokenizer
Generative AI

Tokenizer is the system that breaks down text into smaller units called tokens before an AI model can process it, determining how the model reads and interprets language and directly affecting pricing, context window usage, and multilingual performance in business AI applications.

Tool Use (AI)
Agentic AI

Tool Use in AI refers to the ability of AI models, particularly large language models, to invoke external tools such as APIs, databases, calculators, web browsers, and code interpreters to extend their capabilities beyond text generation and deliver accurate, actionable results.
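
Provider APIs differ in the details, but the pattern is broadly similar: the model returns a structured request naming a tool and its arguments, and the application executes it and feeds the result back. The sketch below is a generic, provider-agnostic illustration; the tool names and the model's JSON response are hypothetical.

```python
import json

# Tools the AI is allowed to call, registered by name
def get_order_status(order_id: str) -> str:
    return f"Order {order_id} shipped on 12 March."   # stub for illustration

def get_refund_policy() -> str:
    return "Refunds are available within 30 days of purchase."

TOOLS = {"get_order_status": get_order_status, "get_refund_policy": get_refund_policy}

# In a real system this JSON would come back from the language model,
# which decides which tool to call and with what arguments.
model_response = '{"tool": "get_order_status", "arguments": {"order_id": "A-1042"}}'

call = json.loads(model_response)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)  # passed back to the model so it can answer the user
```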

Top-K Sampling
Generative AI

Top-K Sampling is a technique used in AI text generation that limits the model to choosing its next word from only the K most probable options, providing a way to control the diversity and quality of AI outputs by filtering out unlikely and potentially nonsensical word choices.
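
A minimal NumPy sketch of the mechanism, with made-up probabilities for five candidate words:

```python
import numpy as np

def top_k_sample(words, probs, k, seed=0):
    rng = np.random.default_rng(seed)
    probs = np.asarray(probs, dtype=float)
    top = np.argsort(probs)[-k:]           # indices of the k most probable words
    kept = probs[top] / probs[top].sum()   # renormalise so the kept probabilities sum to 1
    return words[rng.choice(top, p=kept)]

words = np.array(["profit", "revenue", "growth", "banana", "xylophone"])
probs = [0.35, 0.30, 0.25, 0.07, 0.03]

# With k=3, the two unlikely words can never be chosen
print(top_k_sample(words, probs, k=3))
```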

Topic Modeling
Natural Language Processing

Topic Modeling is an unsupervised machine learning technique that automatically discovers abstract themes or topics within large collections of documents, helping organizations categorize and understand vast amounts of unstructured text without manual labeling.
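
A compact sketch using scikit-learn's Latent Dirichlet Allocation, one common topic-modeling algorithm, is shown below; the documents are invented, and real use requires far larger collections.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "invoice payment overdue billing account",
    "shipment delayed warehouse logistics delivery",
    "billing refund invoice charge payment",
    "delivery tracking shipment courier warehouse",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)

# Ask the model to discover 2 latent topics across the documents
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [words[j] for j in topic.argsort()[-4:]]
    print(f"Topic {i}: {top_words}")   # e.g. billing terms versus logistics terms
```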

Toxicity Detection
AI Safety & Security

Toxicity Detection is the use of AI systems to identify harmful, offensive, abusive, or inappropriate language in text-based communications. It enables organisations to automatically flag or filter toxic content to protect users, maintain community standards, and comply with regulatory requirements.

Transfer Learning
Machine Learning

Transfer Learning is a machine learning technique where a model trained on one task is repurposed as the starting point for a different but related task, dramatically reducing the data, time, and cost required to build high-performing AI models for specific business applications.

Transfer Learning (Vision)
Computer Vision

Transfer Learning (Vision) is a machine learning approach that applies knowledge from pre-trained computer vision models to new visual tasks, dramatically reducing the data, time, and cost required to build accurate custom models. It enables businesses to develop effective computer vision solutions with hundreds rather than millions of training images, making AI accessible to organisations without massive datasets or deep machine learning expertise.
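
A common pattern with PyTorch and torchvision (0.13 or later assumed) is sketched below: load a model pre-trained on ImageNet, freeze its layers, and replace only the final classification head for a new task, here a hypothetical three-class defect classifier.

```python
import torch.nn as nn
from torchvision import models

# Start from a network already trained on millions of general images
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained layers so their learned visual features are kept
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for our own task,
# e.g. classifying products as "good", "scratched", or "dented"
model.fc = nn.Linear(model.fc.in_features, 3)

# Only the new layer's parameters will be updated during training
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```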

Transformer
Machine Learning

A Transformer is a neural network architecture that uses self-attention mechanisms to process entire input sequences simultaneously rather than step by step, enabling dramatically better performance on language, vision, and other tasks, and serving as the foundation for modern large language models like GPT and Claude.
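
At the heart of the architecture is scaled dot-product self-attention, sketched below in NumPy in its simplest single-head form; real models stack many such layers with learned weight matrices, so this is illustrative only.

```python
import numpy as np

def self_attention(X):
    """Single-head self-attention where queries, keys and values are the inputs themselves."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)              # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ X                         # each output mixes information from all tokens

# Four tokens, each represented by an 8-dimensional vector (random for illustration)
X = np.random.default_rng(0).normal(size=(4, 8))
print(self_attention(X).shape)  # (4, 8): every token now carries context from the whole sequence
```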

Trustworthy AI
AI Safety & Security

Trustworthy AI is an overarching framework for developing and deploying AI systems that are reliable, fair, transparent, secure, and accountable, ensuring they consistently perform as intended while respecting human rights, ethical principles, and regulatory requirements across all conditions and contexts.

V

Vector Database
Generative AI

A vector database is a specialized database designed to store, index, and query high-dimensional vectors: numerical representations of data such as text, images, or audio. It enables fast similarity searches that power AI applications like recommendation engines, semantic search, and retrieval-augmented generation.
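
The core operation is similarity search over embeddings. The NumPy sketch below shows a brute-force cosine-similarity version of what a vector database does at scale with specialised indexes (see Vector Index below); the vectors are random stand-ins for real embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# In practice these vectors would come from an embedding model;
# here they are random stand-ins for 10,000 stored documents.
documents = rng.normal(size=(10_000, 384))
query = rng.normal(size=384)

def top_matches(query, documents, k=3):
    # Cosine similarity: how closely each stored vector points in the query's direction
    doc_norms = documents / np.linalg.norm(documents, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    scores = doc_norms @ q_norm
    return np.argsort(scores)[-k:][::-1]       # indices of the k most similar documents

print(top_matches(query, documents))
```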

Vector Index
AI Infrastructure

Vector Index is a specialised data structure designed to efficiently search through high-dimensional numerical representations of data, enabling AI systems to quickly find the most similar items among millions or billions of entries, powering applications like semantic search, recommendation engines, and retrieval-augmented generation.

Vertical AI
AI Strategy

Vertical AI refers to artificial intelligence models and products purpose-built for a specific industry such as healthcare, legal, or financial services, delivering deeper domain expertise and more accurate results on specialized business problems than general-purpose AI tools.

Vibe Coding
Agentic AI

Vibe Coding is a software development approach in which the developer describes what they want to build in natural language and lets an AI coding agent write the actual code, shifting the developer's role from writing syntax to directing intent and reviewing output.

Video Analytics
Computer Vision

Video Analytics is the application of AI and computer vision to automatically analyse video feeds, extracting meaningful insights about people, objects, and events in real time or from recorded footage. It transforms passive surveillance cameras into intelligent monitoring systems that can detect incidents, count visitors, measure dwell time, and trigger automated alerts.

Visual Inspection AI
Computer Vision

Visual Inspection AI is the application of computer vision to automated quality control, using cameras and deep learning models to detect defects, anomalies, and deviations in manufactured products. It replaces or augments manual inspection processes, delivering faster, more consistent, and more accurate quality assurance on production lines.

Visual Question Answering
Computer Vision

Visual Question Answering (VQA) is an AI capability that enables systems to answer natural language questions about the content of images or video. It combines computer vision and natural language processing to provide intelligent responses about visual content, supporting applications in accessibility, document analysis, and business intelligence.

Voice AI Agent
Agentic AI

A Voice AI Agent is an artificial intelligence system that conducts real-time spoken conversations with humans, understanding natural speech, responding in a human-like voice, and performing tasks like customer service, appointment scheduling, and sales outreach without requiring a human operator.

Voice Activity Detection
Speech & Audio AI

Voice Activity Detection (VAD) is an AI technique that determines whether a segment of audio contains human speech or only silence, background noise, or non-speech sounds. It serves as a critical preprocessing step in speech recognition, telecommunications, and voice assistant systems, improving accuracy and reducing computational costs.
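
A very simple energy-based version of the idea can be sketched in a few lines of NumPy: split the audio into short frames and flag those whose loudness exceeds a threshold. Production systems use trained models, so this is illustrative only and the threshold is arbitrary.

```python
import numpy as np

def detect_speech(audio, frame_size=400, threshold=0.02):
    """Return True/False per frame: does this 25 ms chunk (at 16 kHz) look like speech?"""
    n_frames = len(audio) // frame_size
    frames = audio[: n_frames * frame_size].reshape(n_frames, frame_size)
    energy = np.sqrt((frames ** 2).mean(axis=1))   # root-mean-square loudness per frame
    return energy > threshold

# One second of synthetic audio: quiet noise with a louder 'speech-like' burst in the middle
rng = np.random.default_rng(0)
audio = rng.normal(0, 0.005, 16_000)
audio[6_000:10_000] += rng.normal(0, 0.1, 4_000)

print(detect_speech(audio).astype(int))   # mostly 0s, with 1s around the burst
```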

Voice Assistant
Speech & Audio AI

A Voice Assistant is an AI-powered software application that uses speech recognition, natural language understanding, and text-to-speech to conduct conversational interactions with users through voice. Popular examples include Amazon Alexa, Google Assistant, and Apple Siri, but businesses increasingly deploy custom voice assistants for customer service and enterprise operations.

Voice Biometrics
Speech & Audio AI

Voice Biometrics is a security technology that uses the unique physical and behavioural characteristics of a person's voice to verify their identity. It analyses vocal patterns including pitch, frequency, cadence, and pronunciation to create a distinctive voiceprint, enabling secure, convenient authentication for banking, customer service, and access control systems.

Voice Cloning
Speech & Audio AI

Voice Cloning is an AI technology that creates a synthetic replica of a specific person's voice, enabling computer-generated speech that sounds like the original speaker. It uses deep learning models trained on recordings of the target voice to reproduce their unique vocal characteristics, intonation, and speaking style.

Voice Conversion
Speech & Audio AI

Voice Conversion is an AI technology that transforms the vocal characteristics of one speaker to sound like another while preserving the original speech content, intonation, and timing. It is used in entertainment, accessibility, privacy protection, and content localisation, though it also raises important security and ethical concerns.

Voice User Interface (VUI)
Speech & Audio AI

Voice User Interface (VUI) is a technology interface that allows users to interact with devices, applications, and services using spoken language rather than physical controls, keyboards, or touchscreens. It encompasses the design, technology, and interaction patterns that enable natural voice-driven communication between humans and machines.

Ready to put these AI concepts to work?

Understanding AI terminology is the first step. Let Pertama Partners help you turn knowledge into a practical AI strategy for your business.