[1hr Talk] Intro to Large Language Models
27 Nov 2023
Intro: Large Language Model (LLM) talk (0s)
- The speaker recently gave a 30-minute talk on large language models and, after positive feedback, decided to re-record it for YouTube.
LLM Inference (20s)
- Large language models consist of two key files: a parameters file and a code file to run the parameters.
- Uses the example of Meta AI's Llama 2 70B model, the largest of a series released in multiple sizes; it is an open-weights model, with its architecture and weights freely available.
- Highlights that unlike proprietary models such as ChatGPT, Llama 2 70B lets users run the model on a local machine without internet connectivity.
- The 140 GB parameters file contains 70 billion parameters stored as float16, i.e., 2 bytes per parameter, which accounts for the file size (70 billion × 2 bytes = 140 GB).
- The code file that runs those parameters is comparatively simple; it could be written in roughly 500 lines of C with no dependencies.
- Model training is more complex than inference, akin to compressing a large chunk of the internet.
- Training Llama 2 70B involved processing around 10 TB of internet text for about 12 days on roughly 6,000 GPUs, at a cost of about $2 million.
- The training is essentially a "lossy compression" of internet data—unlike a zip file that offers lossless compression.
- Training is based on predicting the next word in a sequence, and through this process, the model learns various aspects about the world.
- State-of-the-art models scale these numbers up by a factor of ten or more, multiplying the costs and computational requirements accordingly.
- Trained networks can generate new content by repeatedly predicting the next word, effectively "dreaming" up internet-like documents (see the sampling sketch after this list).
- These generated texts can include plausible yet invented content such as Java code snippets, product listings, or encyclopedic entries.
- The model uses its knowledge, acquired during training, to generate text that is not a verbatim copy from the dataset, making it hard to distinguish between "hallucinated" and correct information.
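To make the next-word loop concrete, here is a minimal sampling sketch in Python. It is a hedged illustration, not the talk's code: the Hugging Face model name, prompt, and generation length are assumptions, and the 7B variant stands in for 70B so it is feasible to run.

```python
# Autoregressive "dreaming": the model only ever predicts the next token;
# generating a document is just running that prediction in a loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"          # assumed; requires access approval
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids
for _ in range(50):                         # generate 50 tokens
    logits = model(ids).logits[0, -1]       # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    next_id = torch.multinomial(probs, 1)   # sample rather than argmax
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```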
How do they work? (11m22s)
- Transformer neural networks perform next-word prediction using a well-specified architecture with on the order of 100 billion parameters dispersed throughout the network.
- While the architecture and its mathematical operations are fully understood (a minimal attention sketch follows this list), how the parameters collaborate to produce behavior is not.
- Models appear to build and maintain some kind of knowledge database, but it works in strange and imperfect ways, as shown by the "reversal curse": a model may answer a question in one direction (e.g., who Tom Cruise's mother is) yet fail the reversed question (who Mary Lee Pfeiffer's son is).
- The interpretability field is attempting to understand neural network parts, but current understanding treats large language models (LLMs) as empirical artifacts, where behavior can be measured but not fully explained.
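To underline how simple and fully specified the operations themselves are, here is a minimal single-head causal self-attention in plain NumPy; dimensions and weight shapes are illustrative, and real models stack many such heads and layers.

```python
# Single-head causal self-attention: every step is plain, inspectable
# arithmetic; the opacity lies in what the trained weights collectively do.
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d); Wq, Wk, Wv: (d, d)."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # scaled dot products
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores[mask] = -np.inf                            # attend only to the past
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax per position
    return w @ V                                      # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                           # 4 tokens, d = 8
out = self_attention(x, *(rng.normal(size=(8, 8)) for _ in range(3)))
print(out.shape)                                      # (4, 8)
```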
Finetuning into an Assistant (14m14s)
- Assistant models are derived from pre-trained document generators through a process called fine-tuning.
- Fine-tuning involves the same optimization as pre-training but swaps the dataset to one containing high-quality Q&A pairs created manually per specific labeling instructions.
- Why fine-tuning turns a document generator into a helpful assistant is understood only empirically, but the resulting models answer questions in a helpful format while drawing on knowledge acquired in both training stages.
- Fine-tuning is described as aligning the model's output format from general internet documents to helpful assistant responses.
- Creating an assistant model involves two stages: pre-training and fine-tuning.
- Pre-training is expensive and involves compressing vast amounts of internet text into a neural network on specialized, costly GPUs.
- After pre-training, the base model is fine-tuned on around 100,000 high-quality Q&A pairs, a far cheaper and quicker process than pre-training (a training-step sketch follows this list).
- Fine-tuned assistant models are improved iteratively: when a deployed model misbehaves, a person writes the correct response, and that correction is inserted into the training data for the next round.
- Models are regularly updated during the fine-tuning phase, which is significantly less costly, allowing for frequent iterations.
- Companies like Meta have released both base models and fine-tuned assistant models; the latter can be used directly for Q&A interactions.
- Pre-training is conducted less frequently due to its high cost, whereas fine-tuning is regularly iterated for improvements.
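A minimal sketch of what a fine-tuning step could look like in PyTorch, assuming a generic causal LM: the objective is unchanged from pre-training; only the data changes, and the loss is masked so the model learns to produce the answer rather than the question. The chat format and function names are assumptions, not the talk's recipe.

```python
# Fine-tuning = the same next-token objective on curated Q&A data,
# with the loss computed only on the answer tokens.
import torch
import torch.nn.functional as F

def finetune_step(model, tokenizer, question, answer, optimizer):
    prompt = f"### Question:\n{question}\n### Answer:\n"   # assumed format
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, answer_ids], dim=1)

    labels = ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100               # ignore the prompt

    logits = model(ids).logits
    loss = F.cross_entropy(                  # shift: predict t+1 from tokens <= t
        logits[:, :-1].reshape(-1, logits.shape[-1]),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```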
Appendix: Comparisons, Labeling docs, RLHF, Synthetic data, Leaderboard (21m5s)
- Stage two of large language model training involves comparison labeling, where labelers find it easier to compare candidate answers than to generate their own.
- Stage three involves fine-tuning on these comparison labels in a process known as reinforcement learning from human feedback (RLHF); a sketch of the reward-model loss follows this list.
- Humans increasingly collaborate with models on labeling itself: models draft candidate labels while humans verify, correct, and oversee them, making label generation more efficient.
- A leaderboard ranks language models by Elo rating, comparing proprietary models like the GPT series with open models like the Llama 2 series.
- The current industry dynamic: closed proprietary models perform best but cannot be downloaded or fine-tuned freely, while open-weights models are more accessible yet less powerful.
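The talk does not spell out the RLHF math, but a common way comparison labels are used (e.g., in InstructGPT-style pipelines) is to train a reward model with a pairwise loss: the preferred answer should score higher than the rejected one. A minimal sketch, with `reward_model` assumed to map a token sequence to a scalar score:

```python
# Pairwise (Bradley-Terry) reward-model loss: push the score of the
# human-preferred answer above the score of the rejected answer.
import torch.nn.functional as F

def reward_loss(reward_model, chosen_ids, rejected_ids):
    r_chosen = reward_model(chosen_ids)        # scalar score per sequence
    r_rejected = reward_model(rejected_ids)
    # -log sigmoid(margin): near zero when chosen clearly outscores rejected
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```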
LLM Scaling Laws (25m43s)
- Scaling laws predict large language model performance from the number of parameters in the network (N) and the amount of training text (D); see the sketch after this list.
- Performance on next-word prediction tasks shows a smooth and predictable function, with larger models trained on more data continuing to show improvement.
- Next-word-prediction accuracy correlates with performance on many downstream evaluations, so scaling alone buys broader capability even without algorithmic improvements.
- The industry experiences a "Gold Rush," aiming to scale up computing resources and data to improve model performance.
- Current language models have evolved to use various tools to enhance their capabilities.
- For tasks beyond the model's own computation, it can call external tools: a browser for information, a calculator for arithmetic, and a code interpreter for data visualization (a minimal tool-dispatch sketch follows this list).
- As an example, a model can create a table of funding rounds, estimate valuations using calculations, and generate plots using mathematical libraries.
- Language models like ChatGPT integrate existing computing infrastructure to solve complex tasks.
- Tools like DALL-E, which can generate images from natural language, show how AI models can produce visual outputs relevant to given tasks.
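The smooth curves described above are often summarized with a parametric power law; the Chinchilla-style form below (Hoffmann et al., 2022) is one published example, shown here as a sketch with approximate coefficients rather than a definitive formula.

```python
# Chinchilla-style scaling law: predicted loss as a smooth function of
# parameter count N and training tokens D. Coefficients are approximately
# the published fits and are illustrative only.
def predicted_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    return E + A / N**alpha + B / D**beta

# Bigger model and more data => lower (better) predicted loss.
print(predicted_loss(N=7e9, D=2e12))    # smaller model
print(predicted_loss(N=70e9, D=2e12))   # larger model, same data
```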
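Tool use generally works as a loop: the model emits a structured tool request, the runtime executes it, and the result is appended to the context for the next model call. Below is a hypothetical minimal dispatcher; the `TOOL:` protocol, the tool names, and the `generate` function are all assumptions for illustration.

```python
# Minimal tool-use loop: the model asks for a tool, the runtime runs it,
# and the result is fed back so the model can continue.
import json

TOOLS = {
    # demo-only calculator; never eval untrusted input in real systems
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_with_tools(generate, prompt, max_turns=5):
    context = prompt
    for _ in range(max_turns):
        reply = generate(context)                  # assumed LLM call
        if not reply.startswith("TOOL:"):          # plain text: final answer
            return reply
        call = json.loads(reply[len("TOOL:"):])    # {"name": ..., "args": ...}
        result = TOOLS[call["name"]](call["args"])
        context += f"\n{reply}\nRESULT: {result}\n"
    return reply
```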
Multimodality (Vision, Audio) (33m32s)
- Large language models (LLMs) are improving along the multimodality axis, handling both text and images.
- Modern LLMs can both generate and interpret images, as demonstrated by an LLM writing working website code from a hand-drawn sketch of the site.
- These models can also engage in audio processing, allowing for both speech recognition and generation for conversational interactions, similar to the movie "Her".
Thinking, System 1/2 (35m0s) and Self-improvement, LLM AlphaGo (38m2s)
- LLM development is moving towards mimicking human cognitive processes known as System 1 (instinctive responses) and System 2 (deliberative thinking).
- Current LLMs function using "System 1" thinking, quickly producing responses without deep reasoning.
- Researchers are exploring how LLMs might gain "System 2" thinking, taking more time to produce more accurate responses (one simple sketch of this time-for-accuracy trade follows this list).
- The goal is to enable LLMs to self-improve beyond human imitation, inspired by the advances made by the Go-playing program AlphaGo.
- Unlike a closed environment such as Go, open-ended language tasks lack a simple reward function for evaluating outputs, which makes AlphaGo-style self-improvement hard to set up.
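What "System 2" will ultimately look like is open, but one existing way to trade inference time for accuracy is self-consistency: sample several reasoning paths and keep the most common final answer. A minimal sketch; `generate` and `extract_answer` are assumed helpers, not the talk's proposal:

```python
# Self-consistency: spend more compute by sampling many reasoning paths,
# then return the majority final answer.
from collections import Counter

def self_consistent_answer(generate, extract_answer, question, n=10):
    answers = [extract_answer(generate(question, temperature=0.8))
               for _ in range(n)]                  # assumed sampled LLM calls
    return Counter(answers).most_common(1)[0][0]
```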
LLM Customization, GPTs store (40m45s) and LLM OS (42m15s)
- Personalization of language models is another direction for development, allowing specialization in various tasks.
- OpenAI has introduced customization features for language models, including retrieving information from uploaded files and obeying custom instructions.
- The language model "OS" is envisioned as coordinating multiple resources, with potential capabilities like enhanced knowledge, internet browsing, software interaction, multimedia handling, deep thinking, self-improvement, and customization.
- Language models may become akin to an app ecosystem, with each app being an expert in its domain, paralleling modern operating systems.
LLM Security Intro (45m43s)
- With the promise of LLMs as a new computing paradigm come new security challenges.
- The field is anticipating a cat-and-mouse game of addressing security in the LLM domain, similar to the security issues faced in traditional operating systems.
- Jailbreak attacks trick language models like ChatGPT into providing harmful information, bypassing safety mechanisms through creative prompts.
- Examples include roleplaying to elicit forbidden information or using encodings such as base64 that the model understands but that its safety training did not cover.
- Researchers have identified diverse jailbreak tactics and showed how altering prompts with optimized suffixes or noise patterns can manipulate models.
- Large language models can even be influenced by images with encoded noise patterns, expanding the attack surface.
Prompt Injection (51m30s) & Data poisoning (56m23s)
- Prompt injection attacks mislead language models into executing hidden commands embedded within images or text, which may result in undesirable outcomes.
- Attackers use this method to hijack models like Bing or Bard and promote fraudulent activities by embedding instructions in searched web pages or shared documents.
- Data poisoning attacks implant trigger words within a model’s training data, causing the model to output erroneous or harmful responses when prompted with these words.
LLM Security conclusions (58m37s)
- There are defenses against prompt injection, jailbreak, and data poisoning attacks, and these countermeasures are regularly updated in response to new threats.
- Security for large language models (LLMs) involves an ongoing cat-and-mouse game, similar to traditional cybersecurity.
- The field of LLM security is rapidly evolving, with a wide range of emerging attack types actively being studied.