Stanford Seminar - Language models as temporary training wheels to facilitate learning

05 Nov 2024

Introduction and Overview

  • The speaker is giving a talk at Stanford and appreciates the opportunity to share their thoughts with a room full of thinkers, colleagues, and friends (13s).
  • The talk is about using language models as temporary training wheels to facilitate learning, with a focus on human collaboration and improving human learning (41s).
  • The training wheel metaphor is used to describe how AI or language models can help us learn certain skills, but these tools are meant to be removed eventually (1m11s).
  • The talk will focus on learning skills relevant to mental health, but the concepts also apply to social skills, communication skills, and other areas (1m28s).

AI and the Training Wheel Metaphor

  • There have been tremendous advances in AI over the past few years, with machines now able to listen, respond, translate languages, solve problems, learn, retrieve information, and express emotion (1m48s).
  • The speaker notes that machines are getting closer to being able to communicate fluently in multiple forms, similar to the fictional character C-3PO (2m11s).
  • The speaker asks the audience if they have learned how to ride a bicycle using training wheels, and notes that training wheels are helpful for a period of time but are usually removed eventually (2m41s).
  • Training wheels can be seen as a form of scaffolding that helps us learn a skill, makes practice easier and safer, and allows us to focus on fewer things at first (3m32s).
  • The speaker notes that sometimes, in order to learn a skill, we make things harder for ourselves, such as running at altitude with a weighted vest (4m6s).
  • "Desirable difficulties" refers to the idea that challenge and friction can benefit learning and training; this concept will be explored in the context of important skills that can improve relationships, interactions, and mental health, and contribute to a healthy democracy (4m28s).

Key Skills for Personal Growth

  • Key skills that are important for personal growth and development include empathy, the ability to understand or share the emotions and experiences of others (5m5s).
  • Emotional regulation is another critical skill, which involves managing and expressing emotions appropriately, and it is a skill that can be worked on and improved (5m14s).
  • Conflict resolution is also an essential skill, as it enables individuals to address disagreements constructively, and it is a normal part of human interaction (5m27s).
  • Cognitive flexibility or reappraisal is the ability to be open to new information, change one's mind, and challenge existing views and perspectives, which is vital in today's society (5m43s).

Challenges and Potential Pitfalls of AI Assistance

  • Learning and exercising these skills can be challenging, uncomfortable, and even painful, which raises concerns about outsourcing these skills to AI models (6m6s).
  • The increasing use of AI assistance in daily life can be convenient but also potentially harmful, as it can reinforce existing beliefs and biases, and not challenge individuals enough (6m31s).
  • Language models can be overly agreeable and pathologically helpful, which can be detrimental to personal growth and development, as they may not provide constructive feedback or challenge individuals' beliefs (6m46s).

Scaffolding, Practice, and Feedback with Language Models

  • To learn important skills, it is essential to start with scaffolding, guided support, and structure, followed by deliberate practice, and finally, integrate constructive feedback to refine and improve performance (7m52s).
  • The core hypothesis behind current research is that language and language models can provide scaffolding, practice, and feedback to create new opportunities for learning, with the goal of eventually removing the "training wheels" and achieving mastery of skills (8m38s).
  • Two recent projects will be used to illustrate this hypothesis, where important skills are taught through AI-supported scaffolding and practice, and then performance is measured without the AI support (8m57s).
  • The goal is to design human interactions that empower users to transition from AI reliance to actual mastery of skills, ensuring that AI support is a stepping stone rather than a crutch (9m19s).
  • AI support is being designed with planned obsolescence in mind, with the hope of leading to more effective and sustained learning outcomes (9m37s).

Cognitive Reframing with AI

  • The example of Nathaniel, who shared his personal story in an art campaign, illustrates the power of expressing inner reality through language and the potential for language model technology to be used for mental health purposes (10m52s).
  • The first skill to be discussed is reframing negative thoughts, a common experience that can be expressed through language and can have a significant impact on feelings and behaviors (11m1s).
  • Negative thoughts can be challenging to overcome, but there are established therapeutic techniques that can help, and language model technology can be developed specifically for mental health purposes (12m31s).
  • Cognitive reframing is a technique used to challenge negative thoughts by replacing them with more balanced and helpful ones; it is a core idea in cognitive behavioral therapy, a form of therapy with evidence of effectiveness (12m40s).
  • This technique involves identifying negative thoughts, challenging them, and replacing them with more positive and realistic ones, leading to different feelings and behaviors (13m12s).
  • Currently, people learn this skill through worksheets, therapy sessions, or online resources, but these methods can be difficult and inaccessible to many, especially those with limited access to therapists or resources (13m31s).
  • Mental health is still heavily stigmatized, making it challenging for people to seek help, and there is a need for more accessible and widely available training methods (14m38s).
  • A human-AI collaboration has been developed to provide more scaffolding and make training more widely accessible, using a demo that guides users through the process of reframing negative thoughts (15m8s).
  • The demo starts with a negative thought, provides a list of common negative thoughts, and asks users to provide context about the situation that led to the thought, such as a recent paper rejection (15m22s).
  • The system then helps users identify thinking traps and provides a more accessible way to reframe negative thoughts, using a technique called cognitive restructuring (16m26s).
  • Cognitive skills can be learned in various ways, such as reading about different types of thinking traps, but this can be a challenging task, especially for those going through a difficult time (16m28s).
  • AI can analyze thoughts and situations, identify patterns, and detect cognitive distortions, allowing users to learn about specific issues relevant to them at that moment (16m51s).
  • AI models can assess thoughts and identify potential cognitive distortions, such as overgeneralization, fortune telling, or all-or-nothing thinking, and suggest potential starting points to challenge these negative thoughts (17m15s).
  • AI can generate multiple reframes, or alternative ways of thinking, in the hope that at least one will resonate with the user, and these reframes are not meant to do the thinking for the user but rather provide a starting point for challenging negative thoughts (17m34s).
  • The AI tool is not meant to replace traditional therapeutic processes but rather provide a helpful aid, and it has been compared to the status quo of doing exercises on paper, which do not offer the same level of interaction and feedback (18m10s).
  • The AI model was deployed on the Mental Health America website, where it has been used by over 160,000 people, and has allowed for dozens of randomized trials to be conducted to improve the tool and learn more about the psychology behind it (18m57s).
  • One of the key findings from these trials is that the cognitive reframing tool has improved engagement significantly, and this skill is a common one in clinical psychology (19m45s).
  • A study was conducted to compare the effectiveness of a tool with AI support to a previous tool in facilitating learning, and the results showed significantly higher engagement and completion rates with the AI-supported tool (20m14s).
  • The AI-supported tool was able to deliver interventions at a lower cost than printing out a worksheet, with costs as low as a few cents (20m47s).
  • Randomized trials were conducted to compare the AI-supported tool to a pixel-by-pixel identical version without AI support, and the results showed that participants rated the AI-supported tool as more helpful and effective in learning a new skill (21m10s).
  • The AI-supported tool also had a positive impact on participants' emotional well-being, with 90% of participants reporting feeling the same or better overnight (21m56s).
  • The study also explored what makes a reframe particularly helpful, and found that reframes should be relatable, helpful in overcoming negative thoughts, and memorable (22m24s).
  • Researchers developed natural language processing tools to measure the effectiveness of reframes and identify constructs that make them more relatable, helpful, and memorable, such as rationality, actionability, specificity, empathy, positivity, and readability (23m27s).
  • The study was able to inform psychological theory and learn more about how to create the most effective reframes by analyzing participants' preferences and ratings of different reframes (23m52s).
  • A study was conducted to determine what types of reframes people prefer to click on and use as a starting point, and it was found that highly specific and empathic reframes were preferred, while overly positive reframes were less preferred (24m11s).
  • The study also found that reframes with medium complexity readability worked best for a broad population (24m39s).
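The loop described above, detecting likely thinking traps in a thought and then offering several candidate reframes as starting points, can be sketched as follows. This is a toy, keyword-based stand-in for the actual language model: the trap names come from the talk, but the cue lists, templates, and function names are invented for illustration.

```python
# Toy sketch of the reframing pipeline: the real tool uses a language model;
# this keyword matcher and these templates are only illustrative.

TRAP_CUES = {
    "all-or-nothing thinking": ["always", "never", "completely", "totally"],
    "fortune telling": ["will fail", "will never", "going to fail"],
    "overgeneralization": ["everyone", "nobody", "everything", "nothing"],
}

def detect_thinking_traps(thought: str) -> list[str]:
    """Return the names of thinking traps whose cue phrases appear in the thought."""
    text = thought.lower()
    return [trap for trap, cues in TRAP_CUES.items()
            if any(cue in text for cue in cues)]

def suggest_reframes(thought: str, situation: str) -> list[str]:
    """Offer several candidate reframes as starting points, one per detected trap."""
    templates = {
        "all-or-nothing thinking":
            "Is it really all-or-nothing? In '{s}', some parts may have gone better than others.",
        "fortune telling":
            "I can't know the future for certain; '{s}' is one data point, not a verdict.",
        "overgeneralization":
            "One situation ('{s}') doesn't define every situation.",
    }
    return [templates[t].format(s=situation) for t in detect_thinking_traps(thought)]
```

As in the talk's design, the reframes are starting points for the user's own thinking, not finished answers; generating one per detected trap raises the chance that at least one resonates.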

AI-Supported Cognitive Reframing: Studies and Findings

  • Reframes that were more rational, made fewer assumptions, and were more relatable were found to be more helpful to people (25m4s).
  • Making reframes actionable and specific was also perceived as more helpful, and these two constructs positively impacted how likely people were to remember the reframe (25m28s).
  • A large study was conducted to determine who the tool worked for and who it didn't, and it was found that teenagers (13-17 years old) systematically reported worse outcomes compared to the average, as well as those with lower education levels (26m16s).
  • The organization initially rolled out the tool to adults, but later expanded it to teenagers, who are a key demographic for the organization, particularly due to the worsening state of teenage and adolescent mental health (26m48s).
  • An analysis of the reframes created by younger users found that they tended to be less complex, with shorter sentences and words, leading to the idea of simplifying the suggested reframes (27m9s).
  • An experiment was conducted where the suggested reframes were rewritten to be simpler and more casual, resulting in significant improvements in randomized trials with 13-14 year olds (27m45s).
  • A clinical trial funded by the NIH was conducted to test the effectiveness of AI-powered tools in facilitating cognitive reframing skills, and the results showed that the AI-assisted reframes were often more effective than traditional methods (28m17s).
  • The trial involved participants using the tools for four weeks, and the results showed that people in the AI group used the tool more often and had better outcomes (28m51s).
  • Engagement is a crucial aspect of digital mental health, and the additional scaffolding provided by AI-powered tools can make users more likely to continue using the tool (29m13s).
  • A longer-term trial was conducted to test the sustainability of the skills learned using the AI-powered tools, and the results showed that the improvement in cognitive reframing skills was sustained even after the tools were no longer used (29m36s).
  • The trial used a composite measure called the y-AIS to assess skill use and mastery, which includes subscales such as recognizing and re-evaluating negative thoughts (29m52s).
  • The results of the trial suggest that AI-powered tools can be effective in teaching cognitive reframing skills and that these skills can be sustained even after the tools are no longer used (30m41s).
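The teen-focused finding above, that younger users wrote less complex reframes with shorter sentences and words, motivated simplifying the suggested reframes. A minimal sketch of that complexity check, using average sentence length and word length as crude readability proxies (the thresholds here are invented; the study used proper readability measures):

```python
# Sketch of a complexity check for deciding when a suggested reframe
# should be simplified for younger users. Thresholds are illustrative.
import re

def complexity(text: str) -> tuple[float, float]:
    """Return (mean words per sentence, mean characters per word)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / len(sentences), sum(len(w) for w in words) / len(words)

def needs_simplification(text: str, max_sent_len=12.0, max_word_len=5.0) -> bool:
    """Flag reframes that exceed the (illustrative) complexity thresholds."""
    sent_len, word_len = complexity(text)
    return sent_len > max_sent_len or word_len > max_word_len
```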

AI-Facilitated Conflict Resolution and Difficult Conversations

  • Conflict resolution and having difficult conversations is another important skill that can be facilitated using AI-powered tools (31m5s).
  • Difficult conversations are often challenging due to a lack of training, unclear strategies, and emotional components that can interfere with effective communication (31m35s).
  • AI-powered tools can potentially provide support and guidance for having difficult conversations, and further research is needed to explore this area (31m41s).
  • Difficult conversations can be challenging due to emotional attachment to the outcome, and people may struggle to access professionals who can teach them the necessary skills to navigate these conversations effectively (32m2s).
  • While some free web resources, worksheets, and conversation strategies are available, they lack the fidelity of practicing conversations with a professional, making them less effective (32m35s).
  • A system called IMBUE was developed to provide an interactive training environment that uses language models to simulate difficult conversations and offer just-in-time feedback, supporting both conversation strategy and emotional regulation (33m12s).
  • The IMBUE system is based on the DEAR MAN framework from dialectical behavior therapy, which provides a structured approach to conversations, including skills such as describing facts without judgment and expressing emotions using "I" statements (33m46s).
  • The IMBUE system allows users to practice conversations by selecting a situation, choosing a skill to express, and receiving feedback on their utterances, with the goal of improving their ability to have difficult conversations (34m31s).
  • A demo of the IMBUE system was shown, illustrating how a user can practice a conversation about a difficult topic, such as convincing a family member to accept help with their healthcare (34m30s).
  • A system is being developed to provide feedback on conversations, aiming to improve objectivity and reduce judgmental language, with the goal of facilitating constructive conversations (35m51s).
  • The system consists of multiple components, including a simulation of a participant-provided communication scenario, questions about the conversation partner, and ideally expert-like "just-in-time" feedback (36m27s).
  • The feedback has two components: advising on the skill to use next and providing feedback on skill use, with concrete actionable advice to avoid judgmental language and stick to the facts (36m54s).
  • From a computational perspective, the system aims to simultaneously rate how well a skill is expressed and generate an improvement, given a situation, previous utterance, and skill to be expressed (37m29s).
  • The system was trained on data collected from lay people and experts in dialectical behavioral therapy, to understand difficult conversation scenarios and high-quality practice and feedback (38m16s).
  • The data collection involved identifying challenging situations and conversations in real life, and collecting examples of high-quality versions of practice and feedback from experts (38m39s).
  • The system's evaluation is based on human expert evaluation, as automatic evaluation methods have been found to fail to capture the nuances of human feedback (39m11s).
  • Human experts evaluate the system's feedback by comparing it to the feedback that a clinical expert would have given, and assessing how often the system's feedback matches the expert's feedback (39m25s).
  • A study was conducted to evaluate the quality of the system's feedback for difficult conversations (39m54s).
  • The study found that adding several features, including retrieval-augmented generation, chain-of-thought prompting, and contrasting pairs of good and bad examples, significantly improved the quality of feedback, with human expert evaluations showing an 83% success rate (40m3s).
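The rate-and-improve step described above, taking the situation, the skill being practiced, and the user's utterance, and returning both a score and concrete advice, can be sketched with simple heuristics. IMBUE uses language models for both parts; the word lists, scores, and function name below are invented for illustration.

```python
# Toy sketch of the just-in-time feedback step: rate how well a DEAR MAN
# skill is expressed and suggest an improvement. Heuristics are illustrative,
# not the actual model.

JUDGMENTAL = {"lazy", "selfish", "stupid", "careless", "ridiculous"}

def rate_and_advise(skill: str, utterance: str) -> tuple[float, str]:
    """Return (score in [0, 1], actionable advice) for the given skill."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    if skill == "describe":
        hits = sorted(words & JUDGMENTAL)
        if hits:
            return 0.3, f"Stick to the facts; drop judgmental words like {', '.join(hits)}."
        return 1.0, "Good: the description sticks to observable facts."
    if skill == "express":
        if "i feel" in utterance.lower() or "i felt" in utterance.lower():
            return 1.0, "Good: emotions are expressed with an 'I' statement."
        return 0.4, "Try expressing the emotion with an 'I' statement, e.g. 'I feel hurt when...'."
    return 0.5, f"No heuristic for skill '{skill}' in this sketch."
```

The two-part return value mirrors the system's design: a rating of skill use plus a concrete, non-judgmental improvement the user can act on immediately.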

Evaluating the Effectiveness of AI in Difficult Conversations

  • A randomized trial was conducted with 80 participants, who were assigned to either a simulation-only condition or a simulation-plus-feedback condition, to evaluate whether people actually learn the skill and what it takes to learn it. (41m48s)
  • The study used pre- and post-measurements of skills, and participants were asked to provide two difficult conversation scenarios, which were rated on a scale of 1-9, with a focus on situations rated 7, 8, or 9. (42m28s)
  • The study's flow involved simulating the first conversation scenario to assess participants' initial skills, followed by training, and then evaluating their skills with and without feedback, as well as their emotional responses. (42m38s)
  • The study also evaluated whether the skills learned in the first scenario generalized to a second, hold-out scenario, without any feedback. (43m48s)
  • The study examines effectiveness from three perspectives: skill mastery, reduction of difficult negative emotions, and confidence in having conversations constructively (44m4s).
  • The study uses validated machine learning models to measure skill mastery and self-report from participants to assess the other two perspectives (44m27s).
  • The results show that simulation alone did not improve skill mastery, but the condition with feedback improved it by about 17% (44m59s).
  • Breaking down the subskills, both conversational skills and emotional skills were significantly improved in the condition with feedback (45m17s).
  • The study also focuses on primary negative emotions (anger, fear, sadness, and disgust) and finds that the simulation-only condition reduced fear and sadness by 25% and 17%, respectively (45m39s).
  • Adding feedback to the simulation reduced negative emotional responses significantly more, by about 41% (46m7s).
  • However, the simulation-only condition did not significantly reduce anger; only the feedback condition helped people be less angry (46m19s).
  • The study also examines self-efficacy, including confidence, worry, hopefulness, and motivation, and finds that simulation alone improved confidence and reduced worry by 17% and 27%, respectively (46m47s).
  • Adding feedback helped significantly more, improving confidence by 44% and reducing worry (47m0s).
  • The study concludes that the system provides scaffolding and deliberate practice with constructive feedback, and that at least 80% of the time, the quality is equivalent to that of world experts in the domain (47m21s).
  • The study was conducted with a single session of training, which may not be potent enough to enact drastic change, but still found improvements in skill use and emotions after just one session (47m47s).
  • The best outcomes happened when both simulation and feedback were used, although some emotional regulation outcomes were improved even with just simulation (48m2s).
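The reported results, such as "feedback improved skill mastery by about 17%", come down to comparing mean pre- and post-training scores per condition. A minimal sketch of that arithmetic, with invented scores chosen only to illustrate the calculation:

```python
# Sketch of the pre/post comparison behind results like "about 17% improvement".
# All scores below are invented for illustration.

def percent_change(pre: list[float], post: list[float]) -> float:
    """Mean percent change from pre- to post-training scores."""
    mean_pre = sum(pre) / len(pre)
    mean_post = sum(post) / len(post)
    return 100.0 * (mean_post - mean_pre) / mean_pre

# Invented example: simulation-only vs. simulation-plus-feedback conditions.
sim_only = percent_change(pre=[4.0, 5.0, 6.0], post=[4.1, 5.0, 6.0])
with_feedback = percent_change(pre=[4.0, 5.0, 6.0], post=[4.8, 5.9, 6.8])
```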

Ethical and Safety Considerations

  • There are important ethical and safety considerations in the world of technology and mental health, with both opportunities and risks present (48m27s).
  • Co-designing systems with various stakeholders from the beginning is crucial, and operationalizing existing bioethics frameworks is also necessary (48m49s).
  • Minimum requirements include consent and IRB review, and efforts are being made to flag and filter inappropriate and unsafe content through automatic means and user agency (49m16s).
  • Collaborative stress testing or red teaming is performed before putting systems in front of users (49m37s).

Current and Future Applications of AI in Learning

  • Research is being conducted on using language models to challenge people and help them learn important skills, such as expressing empathy more effectively (49m52s).
  • Current projects include working on clinician-patient interactions, teaching critical thinking and fact-checking skills, and exploring creative tasks in collaboration with Reddit communities (50m6s).
  • The goal is to make these tools and insights accessible to a wider audience, with some already being used by organizations serving around 10 million people (51m10s).
  • The work is a collaboration with mental health professionals and people with lived experience of mental health, and their contributions are invaluable (51m1s).
  • Specific individuals, such as Ashish Sharma and Ain, have made significant contributions to the projects (51m23s).
  • The work aims to make language models more scalable and accessible to people, building on existing research that has been done with human supervision (52m16s).

The Potential of AI in Mental Health Care

  • Mental health is a domain where the person that technology could replace often doesn't exist, due to a massive imbalance between supply and demand for mental health care services (53m7s).
  • While AI tools may not replace traditional mental health care systems in the near term, they can provide a minimum level of psychoeducation and mental health care access, especially for the 80% of people in the US who need care but have no access to it (53m48s).
  • AI tools can offer unique affordances, such as providing a safe space for people to share their thoughts and feelings without fear of judgment, which can be particularly helpful for those who are uncomfortable sharing their emotions with a human therapist (54m35s).
  • Some people may prefer to share their thoughts and feelings with an AI tool rather than a human therapist due to complex emotions, stigma, and trauma, making AI tools a valuable resource for those who are not well-served by the current system (55m6s).
  • AI tools could continue learning and improving over time, incorporating state and memory to understand users' core beliefs, triggers, and traumas; this adaptation could make them more personalized and effective with each use, similar to the benefit of seeing a therapist over a longer period (55m57s).

Challenges and Future Directions in AI-Supported Learning

  • People's behavior in traditional clinical studies and in the real world can differ substantially, and sustaining engagement is particularly difficult; tools must provide enough value quickly and remove friction and barriers to be useful (56m25s).
  • One way to address this issue is to remember things about the user to be more useful and quicker, and to take away some of the friction and barriers (57m3s).
  • A question arises about how to know when to take the "training wheels off" to avoid dependence on the tool, and this is a challenging question to answer (57m16s).
  • To study this question, researchers collect data on outcomes both inside and outside the tool, characterizing performance with and without the training wheels, and assessing when increases in performance occur without the training wheels (57m44s).
  • This data collection is done longitudinally to assess when it is time to systematically evaluate the removal of the training wheels (58m10s).
  • Currently, there is no clear answer to this question, and it is mostly a matter of measurement and gathering outcomes with and without the training tools (58m31s).
  • In a study on cognitive reframing, researchers found that people's performance increased for the first three weeks and then plateaued in the fourth week, suggesting that training for three weeks may be sufficient, but more research is needed to generalize this finding (59m1s).
  • The study also found that performance remained at the same level eight weeks later, indicating some retention of the skill (59m43s).
  • The question of how to synthesize and generalize this finding more broadly is still a work in progress (1h0m5s).

Conclusion and Farewell

  • The seminar is being closed, and the recording is being cut off (1h0m30s).
  • The attendees are wished a wonderful Friday and weekend (1h0m33s).
  • The seminar is concluded with a farewell message, and the participants are asked to take care (1h0m37s).
