Compiled by SafeQual Health Software Team 3/18/2024
Scope: Glossary of terms for understanding and reasoning about governance, selection, architecture, and implementation of robust AI systems.
A
Accuracy
The probability that a model produces a desired output.
Actionable intelligence
Information that can be used in decision making.
Adversarial example
A prompt designed to cause a model to produce an undesired completion, usually generated by adding a small, often imperceptible perturbation to a clean example.
Adversarial training
Using adversarial as well as clean examples in model training to improve adversarial robustness.
Agency
Capacity of a model to exhibit a degree of initiative or independence.
Alignment
Steering a model toward, or the quality of being consistent with, a group's intended goals, preferences, standards, and ethical principles, for example a safety culture or high-reliability organization (HRO).
B
Backward alignment
Ensures the practical alignment of trained models by evaluating their performance in realistic environments and implementing regulatory guardrails. (see also: Forward alignment)
C
Categorization
Assigning a category to a text.
Chaining
Mechanism of structuring information flow between AI components to accomplish complex tasks.
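A minimal sketch of chaining, in which each AI component consumes the previous component's output. The extraction and summarization functions below are hypothetical stand-ins for real model calls:

```python
# Sketch of chaining: the output of one AI component feeds the input of
# the next. extract_key_facts and summarize are illustrative stand-ins
# for real model calls.

def extract_key_facts(text: str) -> list[str]:
    # Stand-in for a model call that extracts facts (here: sentence split).
    return [s.strip() for s in text.split(".") if s.strip()]

def summarize(facts: list[str]) -> str:
    # Stand-in for a model call that condenses facts into a digest.
    return "; ".join(facts)

def chain(text: str) -> str:
    # Each step consumes the previous step's output.
    return summarize(extract_key_facts(text))

result = chain("The patient fell. No injury was observed.")
```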
Classification
Process or technique of assigning to a text one or more categories, from a finite set, that meaningfully describe its contents.
Completion
Output of a generative model in response to a prompt. (see also: Prompt)
Controlled vocabulary
A curated set of words or phrases applicable to a specific industry, application, or task.
Conversational model
A model optimized for generating completions in the context of past interactions.
Co-occurrence
The presence of distinct semantic elements in the same context.
D
Domain model
A model that is optimized for a specific industry or task.
E
Embedding
A high-dimensional numerical vector representation of text semantics that allows use of numerical algorithms in text analysis. (see also Semantics, Semantic Search, and Similarity).
Enrichment
Extracting meaningful or relevant information from text.
Extractive summarization
Identifies significant sentences or passages of a text and combines them verbatim to form a concise summary.
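A toy illustration of the extractive approach: score each sentence by the frequency of the words it contains and keep the top-scoring sentences verbatim. Real systems use far more sophisticated scoring; this merely shows the selection principle.

```python
# Toy extractive summarization: rank sentences by the total corpus
# frequency of their words, then keep the top n verbatim.
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 1) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    word_freq = Counter(w.lower() for s in sentences for w in s.split())
    # Score each sentence by the summed frequency of its words.
    scored = sorted(sentences,
                    key=lambda s: sum(word_freq[w.lower()] for w in s.split()),
                    reverse=True)
    chosen = scored[:n_sentences]
    # Emit selected sentences in their original order.
    return ". ".join(s for s in sentences if s in chosen) + "."
```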
F
Few-shot
A prompting or learning technique using a small number of in-context examples, in contrast to none or many. (see also: zero-shot, fine-tuning)
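A sketch of few-shot prompting: a handful of labeled examples is prepended to the query so the model can infer the task pattern from context. The event texts and category labels are illustrative.

```python
# Sketch of few-shot prompting: in-context examples precede the query.
# Examples and labels are illustrative.

examples = [
    ("Patient fell in hallway", "fall"),
    ("Wrong dose administered", "medication"),
]

def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    shots = "\n".join(f"Event: {e}\nCategory: {c}" for e, c in examples)
    # The trailing "Category:" cues the model to complete with a label.
    return f"{shots}\nEvent: {query}\nCategory:"

prompt = few_shot_prompt(examples, "Missed insulin dose")
```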
Fine-tuning
Modifying an existing foundational, pre-trained model by training it on additional industry- or context-specific data to improve its performance. (see also: zero-shot, few-shot, training set)
Foundational model
A baseline model trained on an unspecialized body of knowledge. (see also: fine-tuning, domain model)
Forward alignment
Aims to produce trained models that follow alignment requirements. (see also: Backward alignment)
G
Generative AI
Techniques and algorithms that learn from existing artifacts to produce new ones.
Generative summarization
Generates new text that distills the core content of a long text for easier comprehension, rather than quoting it verbatim. (see also: Extractive summarization)
Governance
Creation and enforcement of rules that ensure safe development and deployment of AI.
Grounding
Mapping completion output to available factual sources.
H
Hallucination
Fabricated content in completions that appears plausible and convincing but is, in fact, wrong or inaccurate.
Hallucitation
A special case of hallucination in which fabricated citations or references to sources are presented as fact. (e.g., 'New York lawyers sanctioned for using fake ChatGPT cases in legal brief')
L
LLM
Large language models are trained on vast amounts of textual data to achieve general-purpose generation and classification capabilities. (see also: Generative AI)
P
Prompt
Text used as input to a generative model. (see also: Completion)
Prompt engineering
Designing and optimizing input to produce effective and accurate completions, usually through extensive experimentation.
R
Retrieval-augmented generation (RAG)
Retrieving content from trusted knowledge sources outside the model's unspecialized body of knowledge and supplying it alongside the prompt to improve the quality of and confidence in output, often enabling citations and attributions.
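A minimal sketch of the retrieve-then-generate pattern: rank trusted passages against the query, then embed the best matches in the prompt so the completion can cite them. Retrieval here is naive keyword overlap; real systems typically rank by embedding similarity. Document IDs and policy texts are invented for illustration.

```python
# Sketch of retrieval-augmented generation (RAG). The knowledge base,
# document IDs, and ranking method are illustrative.

KNOWLEDGE_BASE = {
    "falls-policy": "Fall-risk patients must be reassessed every shift.",
    "med-policy": "High-alert medications require two-nurse verification.",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    # Naive relevance score: count of shared words with the query.
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    ranked = sorted(KNOWLEDGE_BASE.items(),
                    key=lambda kv: overlap(kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Retrieved passages are labeled so the completion can cite them.
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return f"Answer using only the sources below and cite them.\n{context}\nQuestion: {query}"
```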
RICE (Robustness, Interpretability, Controllability, and Ethicality)
Principles that characterize the objectives of alignment.
ROAI
Return on investment in AI. A pun on ROI. 😊
S
Semantic Search
Searching by query intent and contextual meaning, in contrast to lexical matching. (see also: Embedding, Similarity)
Semantics
Study of relations between linguistic forms and concepts and their mental representations. (see also Embedding, Similarity).
Sentiment
Overall disposition expressed in a text.
Sentiment analysis
Identifying sentiment in a text.
Similarity
A degree of semantic proximity between texts, usually expressed numerically. (see also: Semantic Search, Embedding)
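In practice, similarity is often computed as the cosine of the angle between embedding vectors. A sketch with tiny three-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
# Cosine similarity between embedding vectors: near 1 for vectors
# pointing the same way, 0 for orthogonal (unrelated) vectors.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```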
Summarization
Creating a short, accurate, and fluent digest of a longer text, which preserves its important information and overall meaning.
T
Temperature
A parameter setting the degree of unpredictability of completions, often determining the balance between accuracy and potential usefulness.
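Mechanically, temperature divides the model's logits before the softmax that produces token probabilities. A sketch with made-up logit values: low temperature concentrates probability on the most likely token, while high temperature flattens the distribution.

```python
# How temperature reshapes an output distribution: logits are divided
# by the temperature before the softmax. Logit values are illustrative.
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # sharp, near-deterministic
hot = softmax_with_temperature(logits, 5.0)   # flat, more unpredictable
```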
Test set
A collection of sample prompts, representative of expected challenges, used to evaluate the robustness and usefulness of a model.
Training set
A balanced collection of sample prompts and their desired completions used in fine-tuning models to recognize specific problem patterns. (see also: fine-tuning, domain model)
Z
Zero-shot
Prompting ability or technique that does not rely on any in-context examples. (see also: few-shot, fine-tuning)
Sources for this Document:
Created and maintained by SafeQual Health, www.safequal.net/gen-ai-glossary-for-healthcare-risk-management-leaders.html
Further Reading
For people with strong interest in Alignment, Ethics, and Robustness in AI.
Ji, J., Qiu, T., Chen, B., Zhang, B., Lou, H., Wang, K., … & Gao, W. (2023). AI alignment: A comprehensive survey. arXiv preprint arXiv:2310.19852.