Ethan Mollick: Why OpenAI Abandons Products, The Biggest Opportunities They Have Not Taken | E1184

31 Jul 2024

Intro (0s)

  • OpenAI is focused on developing artificial general intelligence (AGI) and is not prioritizing product development. They are willing to abandon products if they believe doing so will help them achieve their goal of building AGI.
  • OpenAI is currently generating significant revenue, but this is considered an accidental byproduct of their focus on AGI. Their primary focus is on developing the technology, not creating products for profit.
  • Ethan Mollick, a professor of entrepreneurship, has been involved in AI research and development for many years and has long used AI tools in his own work. His expertise in both business and AI has made him a prominent figure in the field.

Thoughts on the New Llama 3.1 Model (2m31s)

  • Ethan Mollick is impressed with the release of the Llama 3.1 model, an open-source, fine-tunable model capable of rivaling closed-source models like GPT-4. He believes this will lead to widespread adoption and unexpected consequences as people experiment with its capabilities.
  • Mollick acknowledges that the closed-source labs still have a significant advantage and are likely to release even more powerful models in the future. However, he emphasizes that the open-source nature of Llama 3.1 will allow for rapid development and innovation.
  • Mollick highlights the difficulty in predicting the long-term impact of these models, as even the developers themselves are unsure of their full potential. He uses the example of GPT-3.5's unexpected impact on education as a cautionary tale. He believes that the next generation of models will be even more powerful, but their real-world implications remain uncertain.

Four Potential Outcomes: A Framework for the Future (5m52s)

  • Option 1: Stagnation: This scenario suggests that AI models will not significantly improve beyond their current capabilities, leading to a gradual integration of AI into existing systems. While this might result in some economic improvements, it would not cause a dramatic societal shift. This outcome is considered unlikely due to the ongoing advancements in AI and the untapped potential for integrating AI into various work processes.
  • Options 2 & 3: Continued Growth: The most probable scenarios involve continued growth in AI capabilities, either exponential or linear. Exponential growth would lead to a rapid increase in AI's abilities, potentially approaching AGI. Linear growth would involve a gradual, steady improvement in AI capabilities, allowing for more manageable adaptation and integration. Either way, AI continues to improve, but at a more predictable pace, leading to gradual advancements across fields.
  • Option 4: Machine God: This scenario envisions the emergence of Artificial General Intelligence (AGI) and superintelligence, surpassing human intelligence and potentially leading to an "intelligence explosion." This outcome is often the focus of discussions about AI, but it is considered less likely than the continued-growth scenarios.

Will AI Achieve Escape Velocity or Plateau Like the iPhone? (8m24s)

  • The speaker draws a comparison between the development of the iPhone and the potential development of AI. He notes that the iPhone, after its initial rapid development, reached a plateau where improvements were incremental, such as better cameras or slightly larger buttons. He questions whether AI will experience a similar plateau or if it will continue to develop at an exponential rate, achieving "escape velocity."
  • The speaker suggests that we are currently at a point in AI development where the focus is on incremental improvements, similar to the iPhone's later stages. He uses the example of the calculator being touted as a major feature of a new iOS release to illustrate how a mature product shifts toward polishing existing capabilities rather than delivering significant breakthroughs.
  • The speaker acknowledges that Moore's Law, which describes the exponential growth of computing power, has been a sustained curve for years. However, he points out that this growth has been driven by underlying technological advancements that have been constantly replaced. He questions what the "topline intelligence" of AI will be and whether there are inherent limitations to its capabilities. He also notes that AI is currently "jagged," meaning it excels in some areas but struggles in others, making it unsuitable for replacing all human work.

Identifying the Core Bottleneck: Compute, Data, or Algorithms? (9m56s)

  • Identifying the Core Bottleneck: The discussion focuses on identifying the primary bottleneck in the development and advancement of AI, specifically large language models (LLMs). While many believe compute power or data availability are the key limitations, the speaker argues that the real bottleneck lies in the human systems and organizational structures that interact with these technologies.
  • The Reverse Salient: The speaker introduces the concept of the "reverse salient" from the history of science, where technological progress is often hindered by a lagging component. He suggests that the current focus on compute and data might be overlooking the need to improve the integration of these technologies into existing systems.
  • Beyond Picks and Shovels: The speaker criticizes the common analogy of "picks and shovels" in the context of AI, arguing that it fails to capture the true nature of technological adoption. He suggests that the steam engine analogy is more apt, highlighting the importance of skilled artisans who can adapt and integrate new technologies into existing workflows. The focus should be on developing the skills and expertise to effectively utilize LLMs within organizations, rather than simply providing the tools.

Why Aren't AI Providers Offering User-Friendly Guides? (13m53s)

  • The lack of user-friendly guides for AI tools is a significant problem, as it hinders widespread adoption and effective utilization.
  • This absence of documentation is attributed to the Silicon Valley focus on achieving superintelligence, prioritizing scaling and believing that larger models will eventually solve all problems.
  • The rapid evolution of AI technology, exemplified by the obsolescence of a GPT-3 powered sales assistant tool after the release of ChatGPT, further discourages the creation of comprehensive guides, as they might quickly become outdated. This results in a reliance on informal documentation and rumors, hindering the understanding and proper application of AI tools.

Should Powerful AI Models Be Open Source or Closed? (15m28s)

  • Open source AI models have both potential benefits and risks. While they can foster innovation and entrepreneurship, they also pose security threats, such as enabling large-scale phishing campaigns.
  • The current discussion about open source AI models is dominated by corporate strategy, not a comprehensive understanding of the technology's implications. Companies like Meta and Microsoft are primarily focused on their own competitive advantages, rather than considering the broader societal impact.
  • A more responsible approach would involve rapid response regulation. This means monitoring the development and use of open source AI models closely and implementing policies as needed to mitigate potential harms. Currently, there is a lack of monitoring and a lack of preparedness for the potential consequences of open source AI.

Will Regulations Limit AI Growth? (18m49s)

  • The EU's stringent AI regulations are a cause for concern. The speaker worries that these regulations could stifle AI development and adoption, leading to a plateauing effect.
  • Finding a balance between regulation and innovation is crucial. While the speaker acknowledges the need for regulation to mitigate potential harms, he argues that overly restrictive regulations could hinder progress. He believes the EU's rules may be too stringent, especially given that current AI models are not yet capable of causing significant harm.
  • The EU's regulatory environment is not the only factor hindering AI development. The speaker points out that the US has a more robust venture capital ecosystem, which attracts talent and investment. This creates a significant advantage for the US in AI development. They also highlight the importance of physical proximity between VCs and startups for effective monitoring and networking, which further contributes to the US's dominance in the field.

What Are AI Labs Missing About Business Needs? (22m10s)

  • AI Labs lack understanding of business needs: AI labs often focus on developing cutting-edge technology without considering real-world applications. They prioritize building "machine gods" and pushing the boundaries of AI, neglecting the practical needs of businesses. This leads to the development of half-built products like Code Interpreter, which have immense potential but are not fully realized due to a lack of focus on user experience and integration with existing workflows.
  • Companies are slow to adopt AI: Despite the hype surrounding AI, most companies have not fully embraced its potential. There is a lack of understanding about how to effectively integrate AI tools into existing workflows, and many employees are hesitant to use them due to a lack of training and guidance. This leads to a missed opportunity for companies to leverage AI for increased productivity and efficiency.
  • The need for better onboarding and integration: Companies need to provide better onboarding and integration for AI tools. This includes clear instructions, training materials, and support systems to help employees understand how to use AI effectively. By making AI more accessible and user-friendly, companies can encourage adoption and unlock the full potential of these technologies.

How Can We Better Harness AI to Drive Productivity? (26m0s)

  • Clearer regulations and company policies are needed to encourage the ethical and productive use of AI. The current regulatory environment is unclear, leading many companies to ban or restrict access to powerful AI tools like GPT-4. This uncertainty creates a culture of secrecy, where employees are afraid to use AI for fear of losing their jobs or being seen as less competent.
  • Companies need to develop clear policies and reward systems for AI use. Employees are hesitant to share their AI usage because they fear negative consequences, such as being assigned more work or losing their jobs. Companies need to create a culture where AI use is encouraged and rewarded, rather than punished.
  • The potential of AI to drive productivity should be viewed as an opportunity for growth, not just cost-cutting. Just as the Industrial Revolution led to new industries and jobs, AI has the potential to create new opportunities and increase productivity. Companies should focus on expanding their reach and creating new products and services, rather than simply using AI to cut costs and lay off employees.

Will AI Redistribute Talent or Eliminate Jobs? (28m22s)

  • AI may not eliminate jobs, but it could redistribute talent unevenly. While AI can improve efficiency and productivity, it might not create new jobs to replace those lost. This could lead to a situation where those who are already skilled and wealthy benefit most from AI, while others struggle to adapt.
  • The historical analogy of the Industrial Revolution suggests that technological advancements can lead to significant societal upheaval. While the long-term effects may be positive, the transition period can be difficult, with job losses and social unrest.
  • The accessibility of AI tools like ChatGPT could exacerbate the uneven distribution of knowledge and productivity. The ease of use and widespread availability of AI could empower a small group of tech-savvy individuals while leaving others behind. This could lead to a widening gap between those who can leverage AI effectively and those who cannot.
  • Early evidence suggests that coders may not be the best users of AI. AI systems often behave in unexpected ways, making it difficult for those with traditional programming skills to effectively utilize them. Instead, individuals with strong communication and interpersonal skills may be better suited to working with AI.
  • There is hope that AI adoption will follow a different curve than previous technologies. The accessibility of AI and the fact that non-technical users may be better suited to using it could lead to a more equitable distribution of its benefits. However, this requires widespread awareness and education about AI's potential.

AI and Consumers: The Future Interface Experience (33m23s)

  • The future of AI interaction with consumers will likely be multimodal, incorporating visual and conversational elements. This will create a more natural and intuitive experience, similar to interacting with a human assistant.
  • The current chatbot interface is limited and requires users to be skilled in prompt engineering. However, as AI becomes more sophisticated and integrated into our lives, the need for specialized prompting will diminish.
  • The adoption rate of AI tools like ChatGPT is higher in universities because of the collaborative environment and the focus on efficiency. Students are more likely to share information and learn from each other, leading to faster adoption. In contrast, workplaces often lack this collaborative culture, hindering the spread of AI tools.

AI Ambition in Startups: What's Holding Them Back? (36m9s)

  • Startups are not being ambitious enough with AI. The current "lean" startup methodology, focused on finding product-market fit, is not suitable for radical innovation. This method incentivizes startups to focus on incremental improvements rather than exploring the full potential of AI.
  • Startups need to be more opinionated about the future of AI. They should have a clear vision of how AI will evolve and how their products will fit into that future. This includes considering how AI will be adopted and integrated into organizations.
  • Startups are betting against AGI. Many startups are developing products that are designed for a world where humans are still in control. However, the potential for AGI (Artificial General Intelligence) poses a significant threat to these businesses. If AGI emerges, it could render many current startup products obsolete, as AI could directly optimize solutions without human intervention.

Founders' Diverging Views on AGI Timelines & Funding (41m35s)

  • Founders have differing views on the timeline for achieving Artificial General Intelligence (AGI). Some, like Demis Hassabis and Mark Zuckerberg, believe it will take a long time and are not reliant on external funding. Others, who remain unnamed, believe AGI is closer and require funding to support their vision.
  • The speaker emphasizes the importance of considering the motivations behind these claims. While self-interest is a factor, the speaker believes that founders betting their careers on AGI being achievable is a signal worth paying attention to. They also note that the reputation of these individuals is at stake, adding further weight to their claims.
  • The speaker highlights the potential impact of large companies like Meta on the future of AGI. Meta's recent release of Llama 3.1, which suggests continued exponential progress in AI, raises questions about the future for startups. The speaker questions how startups can navigate a future where everything is changing while also pursuing smaller, incremental goals. They also express concern that the hype surrounding crypto has contributed to a focus on short-term returns, potentially hindering long-term technological development.

Will You Thrive or Get Steamrolled? (43m33s)

  • The "100x improvement" heuristic is not helpful. While the idea of a 100x improvement in AI models is exciting, it's not a useful metric for understanding the real impact of AI. We need concrete examples of what these improvements mean in specific fields and how they will be applied.
  • Field-specific knowledge is crucial. AI firms often lack deep understanding of specific fields like education or law. This limits their ability to identify the real gaps and opportunities for AI in those areas.
  • AI tutors are transformative, but they don't replace teachers. While AI can provide personalized tutoring, it doesn't address the broader needs of education, such as social interaction, motivation, and the complex systems within schools.
  • Early results for AI in education are mixed. Research suggests that AI tutors can improve homework scores but may not lead to overall educational gains. We need to be careful about how AI is implemented to avoid students simply relying on AI to do their work for them.
  • The future of education may involve flipped classrooms. AI tutors could potentially enable flipped classrooms where students learn independently outside of class and use class time for more interactive activities. However, this requires careful planning and development to ensure effective implementation.

The Future of Education with AI (49m49s)

  • The future of education with AI is uncertain, but there are some promising possibilities. While AI can provide personalized tutoring and adaptive learning experiences, it's crucial to remember that learning is a complex process that requires effort and engagement.
  • Active learning and flipped classrooms are well-suited for AI integration. AI tutors can provide personalized instruction outside of class, allowing for more active learning and problem-solving in the classroom setting. This approach aligns with research showing that active learning is more effective than passive lectures.
  • AI can be a powerful tool for education, but it's not a magic bullet. While AI can provide significant improvements in learning outcomes, it's important to avoid overstating its potential. The effectiveness of AI in education will depend on factors like subject matter expertise, student motivation, and the design of learning activities.

Energy Demands & Compute as Currency (57m33s)

  • Sam Altman, CEO of OpenAI, believes that compute is the currency of the future. This is because he believes that Artificial General Intelligence (AGI) is achievable in the near term, and AGI will require immense computing power.
  • The energy demands of AGI are a significant concern. If AGI becomes a reality, there will be an insatiable demand for intelligence, leading to a massive increase in energy consumption. This could necessitate the construction of numerous nuclear power plants or even the development of fusion power.
  • The energy debate surrounding AI is complex. While AI uses more energy per query than a Google search, it uses significantly less energy than a human performing the same task. Currently, only a small percentage of global energy consumption is attributed to data centers and AI. However, if AGI becomes widely available, the energy demands could become a major issue.

The Role of AI in Future Electoral Systems & Politics (1h0m0s)

  • AI's Role in Elections: While the idea of algorithms replacing human voters in elections seems dystopian and unlikely to happen quickly, AI is already influencing political systems through its persuasive power.
  • AI's Persuasive Power: Studies show that people are more likely to change their views after interacting with AI than with humans. This has implications for marketing and politics, as AI can be used to manipulate public opinion.
  • The Future of Content Creation: The abundance of content generated by AI raises concerns about the value of content and the difficulty of discovery. The potential for AI-generated music and books raises questions about the future of creative industries.

Quick-Fire Round (1h4m40s)

  • Ethan believes that most people underestimate the potential of AI. He thinks AI is much better than people realize and that its capabilities will continue to improve exponentially.
  • Ethan's biggest concern about the future of AI is the loss of human agency. He worries that AI will be used to automate tasks and displace workers without considering the broader implications for human well-being. He believes we need to focus on using AI to enhance human capabilities and create meaningful work.
  • Ethan has changed his mind about the potential of AI in the last year. He now believes that AI has a lot of "juice" left and that its exponential improvement will continue for some time. This shift in perspective was driven by the emergence of new, more powerful models and the growing confidence of experts in the field.
  • Ethan is surprised by the cleverness of AI systems. He is impressed by their ability to learn and adapt, and he finds them enjoyable to use.
  • Ethan believes that we need to understand why people are not fully embracing AI. He thinks there is more to the story than just technical limitations or fear. He wants to explore how humans are relating to these tools and how we can create a more meaningful and fulfilling relationship with AI.
  • Ethan is concerned about the potential impact of AI on the meaning of work. He worries that as AI takes over more tasks, people will feel increasingly alienated from their jobs and lose a sense of purpose. He believes we need to have a serious conversation about the future of work and how to ensure that AI enhances, rather than diminishes, human meaning and fulfillment.
