Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419

18 Mar 2024

Introduction (0s)

  • Sam Altman believes compute will be the most precious commodity in the world.
  • He expects quite capable AI systems to be developed by the end of this decade.
  • Altman views the road to AGI as a power struggle, with whoever builds it first gaining significant power.
  • Lex Fridman questions whether Altman trusts himself with that much power.
  • Sam Altman discusses the composition of the OpenAI board.
  • Altman explains that the board is responsible for overseeing the company's mission and ensuring that it is aligned with its values.
  • The board also provides guidance and advice to the OpenAI team on technical and strategic matters.
  • Sam Altman discusses the release of GPT-4, a large language model from OpenAI.
  • GPT-4 is a transformer-based language model that has been trained on a massive dataset of text and code.
  • Altman highlights some of the impressive capabilities of GPT-4, such as its ability to generate human-like text, write code, and answer questions.
  • He also acknowledges that GPT-4 has limitations, such as its tendency to generate biased or inaccurate information.
  • Sam Altman discusses ChatGPT, OpenAI's chatbot, which launched on GPT-3.5 and now also runs on GPT-4.
  • ChatGPT gained significant popularity after launch, reaching one million users within its first five days.
  • Altman explains that ChatGPT is designed to be a helpful tool that can assist users with a variety of tasks, such as writing emails, generating ideas, and answering questions.
  • He also acknowledges that ChatGPT has limitations, such as its lack of access to real-time information.
  • Sam Altman discusses Sora, OpenAI's text-to-video model, which generates realistic video from text prompts.
  • Sora is trained on visual patches of video data rather than on text alone.
  • Altman explains that Sora demonstrates a surprisingly capable model of the physical world.
  • He also acknowledges that Sora is still early in development and has limitations, such as physical inconsistencies in the videos it generates.
  • Sam Altman discusses the November 2023 controversy surrounding the OpenAI board, in which Altman was briefly removed as CEO and several board members subsequently departed.
  • Altman explains that the controversy was due to disagreements over the company's mission and values.
  • He emphasizes that OpenAI is committed to its mission of developing safe and beneficial AI, and that the board is working to ensure that the company's values are upheld.
  • Sam Altman discusses his relationship with Elon Musk, who was a co-founder of OpenAI but left the company in 2018.
  • Altman explains that he and Musk have different views on the future of AI, with Musk being more concerned about the potential risks of AI and Altman being more optimistic about its potential benefits.
  • He also acknowledges that Musk has been a valuable contributor to OpenAI and that he respects his opinions.
  • Sam Altman discusses Ilya Sutskever, the Chief Scientist of OpenAI.
  • Altman explains that Sutskever is a brilliant scientist who has made significant contributions to the field of AI.
  • He also acknowledges that Sutskever is a very private person and that he doesn't like to be in the spotlight.
  • Sam Altman discusses the potential power of AGI (Artificial General Intelligence) and the importance of ensuring that it is developed safely and responsibly.
  • He explains that AGI could have a profound impact on society, and that it is important to consider the ethical implications of its development.
  • Altman also emphasizes the importance of international cooperation in developing AGI, and the need to ensure that it is not used for malicious purposes.

OpenAI board saga (1m5s)

  • Sam Altman describes the events of November 2023 as the most challenging professional experience of his life, involving chaos, shame, and upset, but also significant support from loved ones.
  • Despite the negativity, Altman found solace in the outpouring of love and support he received during that challenging time.
  • Altman believes the intense experience helped OpenAI build resilience and prepare for future challenges in developing Artificial General Intelligence (AGI).
  • He reflects on the personal psychological toll the situation took on him, describing a month-long period of drifting and feeling down.
  • Altman acknowledges the board members' good intentions but highlights the challenges of making optimal decisions under pressure and the need for a team that can operate effectively under such circumstances.
  • Changes were made to the OpenAI board structure to make it more accountable to the world, including forming a new, smaller board with more experienced members.
  • Technical savvy is important for some board members, but not all, as the board's role involves governance, thoughtfulness, and deploying technology for society's benefit.
  • Altman endured a grueling weekend during the public battle with OpenAI's board of directors, but tried to find the blessing in disguise and briefly considered stepping away to pursue a smaller, more concentrated AGI research effort.
  • The most challenging aspect of the situation was the constant state of uncertainty and the expectation that a resolution was imminent, only to be delayed repeatedly.
  • Altman emphasizes that the true essence of OpenAI lies in the consistent work and decisions made over time, rather than focusing solely on dramatic events.

Ilya Sutskever (18m31s)

  • Sam Altman admires Ilya's long-term thinking and dedication to the responsible development of Artificial General Intelligence (AGI), despite their differing plans.
  • Ilya takes the safety implications of AGI very seriously; Altman states plainly that Ilya has not seen AGI.
  • Altman values the importance of robust governance structures and processes, as highlighted by the recent OpenAI board drama.
  • Altman emphasizes the significance of surrounding oneself with wise individuals when making decisions, especially as power and money increase.

Elon Musk lawsuit (24m40s)

  • OpenAI's initial goal was to be a research lab without a clear plan for commercialization, but as technology advanced, the need for more capital and structural changes led to its current setup.
  • Elon Musk's motivations for criticizing OpenAI are unclear, potentially related to personal reasons stemming from the split between him and the organization.
  • OpenAI's mission, according to Sam Altman, is to put powerful AI in people's hands; he points to the free, ad-free tier of ChatGPT as evidence.
  • OpenAI is involved in an ongoing lawsuit filed by Elon Musk, which Altman believes is not legally substantial but serves as a means for Musk to make a point about the future of AGI and OpenAI's leading position in the field.
  • In response to criticism, Musk's xAI announced that it will open-source Grok this week.
  • Altman emphasizes the importance of friendly competition and expresses disappointment in Elon Musk's approach to the lawsuit, considering it unbecoming of a builder.
  • Altman acknowledges the demand for smaller, open-source models and predicts a coexistence of open-source and closed-source models in the AI ecosystem.
  • Altman discourages startups from adopting a nonprofit structure with a later transition to for-profit, citing potential legal complications.
  • Altman hopes for an amicable relationship with Elon Musk in the future, emphasizing friendly competition and collaboration in exploring AI ideas.

Sora (34m32s)

  • Sam Altman introduces the Sora AI system, which is trained on visual patches and demonstrates a surprisingly good model of the world, though it still has limitations (a minimal patching sketch follows this list).
  • Sora's approach differs from human thinking and learning but can be improved with larger models, better data, and advancements.
  • OpenAI's concerns about releasing Sora include potential dangers and the need for further research.
  • OpenAI aims to improve the efficiency of its systems to meet expectations.
  • Training AI should be considered fair use, but artists should have the option to opt out and receive compensation.
  • The economic system will evolve to reward human contributions, not necessarily monetarily.
  • AI will automate tasks and enable people to work at higher levels of abstraction and efficiency.
  • YouTube videos will likely incorporate AI tools but will still be driven by human creators.
  • AI-generated content may not fully replace human-generated content due to human empathy.
  • AI tools similar to Adobe's software suite may emerge to simplify video production.
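
Sora's public technical report describes representing videos as spacetime patches that play roughly the role tokens play in a language model. As a rough illustration only (the patch sizes and the plain reshape below are assumptions for the sketch, not Sora's actual pipeline), a few lines of numpy show the general idea:

```python
import numpy as np

def to_spacetime_patches(video: np.ndarray, pt: int = 4, ph: int = 16, pw: int = 16) -> np.ndarray:
    """Split a video of shape (T, H, W, C) into flattened spacetime patches."""
    T, H, W, C = video.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "patch sizes must divide the video dims"
    # Carve the video into a grid of (pt, ph, pw) blocks...
    blocks = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5, 6)
    # ...then flatten each block into one vector, yielding a token-like sequence.
    return blocks.reshape(-1, pt * ph * pw * C)

video = np.random.rand(16, 64, 64, 3)   # 16 frames of 64x64 RGB, values in [0, 1)
patches = to_spacetime_patches(video)
print(patches.shape)                     # (64, 3072): 64 patch "tokens" of 3072 values each
```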

GPT-4 (44m23s)

  • Sam Altman views GPT-4 as a significant milestone in AI history, but it still has limitations compared to the desired capabilities.
  • GPT-4 has potential as a creative brainstorming partner and for longer-horizon tasks, but its full development in these areas is ongoing.
  • Altman highlights the importance of both the underlying AI model and reinforcement learning fine-tuning in creating an effective product for users.
  • The context window expansion to 128K tokens in GPT-4 Turbo is notable, but most current usage doesn't come close to filling it (a token-budget sketch follows this list).
  • The long-term goal is to achieve a context length of several billion tokens for a comprehensive understanding of user history and preferences.
  • Altman believes in the exponential growth of technology leading to effectively infinite context beyond billions of tokens.
  • Younger individuals are using GPT-4 as their default starting point for various knowledge work tasks.
  • Altman finds GPT-4 more balanced and nuanced than Wikipedia for well-covered topics when used as a reading partner.
  • Fact-checking remains a concern due to GPT-4's tendency to generate convincing but false information.
  • Altman acknowledges the risk of reduced fact-checking as the model improves but trusts users' understanding of the limitations.
  • Altman criticizes the current state of journalism for rewarding quick, sensationalist content over in-depth reporting.
  • He encourages a shift towards more nuanced and responsible journalism while still celebrating individuals.
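
To make the 128K-token window mentioned above concrete, here is a small sketch using OpenAI's real tiktoken library and its cl100k_base encoding (the GPT-4-era tokenizer); the 4,000-token reply budget is an arbitrary assumption for illustration:

```python
import tiktoken

MAX_CONTEXT = 128_000  # GPT-4 Turbo's advertised context window, in tokens

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models

def fits_in_context(text: str, reply_budget: int = 4_000) -> bool:
    """Check whether `text` plus room for a reply fits in the context window."""
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens:,} of {MAX_CONTEXT:,} tokens used")
    return n_tokens + reply_budget <= MAX_CONTEXT

fits_in_context("Summarize the OpenAI board saga in one paragraph.")
```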

Memory & privacy (55m32s)

  • Sam Altman proposes giving AI models like GPT-5 the ability to remember conversations selectively, allowing them to accumulate knowledge and become more personalized to users over time.
  • Altman stresses the significance of user choice and transparency regarding privacy when AI systems access personal data.
  • Reflecting on a challenging period in November, Altman describes it as traumatic but chooses to view it as an opportunity for growth and important work.
  • He acknowledges the risk of lingering trust issues and paranoia resulting from negative experiences, drawing parallels to high-stress environments like the Putin administration during wartime.
  • Altman discusses the limitations of current language models in terms of slow, sequential thinking, suggesting the need for a new paradigm or a layer on top of existing LLMs.
  • He emphasizes the importance of allocating more compute to harder problems and explores the possibility of an LLM talking to itself to work through complex problems like mathematical proofs (a minimal sketch of this loop follows the list).
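
A minimal sketch of that self-dialogue loop, under stated assumptions: `complete` below is a hypothetical stand-in for any chat-completion API call, not a real SDK function, and the draft-critique-revise structure is one plausible reading of the idea, not OpenAI's method.

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; swap in a real API client to use it.
    return f"[model output for: {prompt[:48]}...]"

def solve_with_self_dialogue(problem: str, rounds: int) -> str:
    """Draft, critique, and revise in a loop; more rounds means more compute."""
    answer = complete(f"Attempt this problem step by step:\n{problem}")
    for _ in range(rounds):
        critique = complete(
            f"Problem:\n{problem}\nDraft answer:\n{answer}\n"
            "List any errors or gaps in the draft."
        )
        answer = complete(
            f"Problem:\n{problem}\nDraft:\n{answer}\nCritique:\n{critique}\n"
            "Write an improved answer."
        )
    return answer

# Allocate more rounds (compute) to the harder problem, as the bullet suggests.
easy = solve_with_self_dialogue("What is 17 * 23?", rounds=1)
hard = solve_with_self_dialogue("Prove that sqrt(2) is irrational.", rounds=5)
```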

Q* (1h2m36s)

  • Sam Altman says there is no secret nuclear facility at OpenAI, despite rumors.
  • OpenAI is not good at keeping secrets and has experienced leaks in the past.
  • Altman says OpenAI is working on better reasoning in its systems but hasn't cracked the code yet.
  • Altman believes that AI and surprise don't go together and that the world needs time to adapt to new technologies.
  • OpenAI's strategy of iterative deployment is intended to avoid shock updates and give the world time to think about the implications of AGI.
  • Altman acknowledges that people like Lex Fridman perceive leaps in progress, which suggests that OpenAI may need to release updates even more iteratively.
  • Altman understands the appeal of milestones and celebrations but believes OpenAI may be missing the mark in how it presents its progress.

GPT-5 (1h6m12s)

  • OpenAI will release an amazing model this year, but it may not be called GPT-5.
  • There are many challenges and bottlenecks to overcome before releasing GPT-5, including compute limitations, technical issues, and the need for distributed constant innovation.
  • OpenAI's strength lies in multiplying many medium-sized innovations into one giant thing.
  • It's important to zoom out and look at the entire map of technological frontiers to gain surprising insights and see new possibilities.

7 trillion of compute (1h9m27s)

  • Sam Altman believes compute will be the most valuable resource in the future, and sees nuclear fusion and fission as potential solutions to the energy puzzle.
  • Altman is concerned about the theatrical risks of AI, where some negative consequences may be exaggerated and politicized, leading to conflicts.
  • Despite these risks, he believes AI will have significantly more positive consequences than negative ones.
  • Altman emphasizes the importance of truth and how AI can help us understand it better.
  • He sees competition in the AI space as a driver of innovation and cost reduction but warns of the potential for an arms race.
  • Altman feels the pressure of the arms race and stresses the need to prioritize safety, especially in developing AGI.
  • He advocates for collaboration between different organizations to break down silos in AI safety research.
  • Altman acknowledges Elon Musk's contributions to humanity and his concern for AI safety but criticizes his unproductive behavior.
  • He hopes for less unproductive behavior as people work towards AGI and believes collaboration is essential for the benefit of humanity.

Google and Gemini (1h17m35s)

  • Sam Altman envisions AI's potential beyond search engines, aiming to help people find, synthesize, and act on information effectively.
  • Altman acknowledges the challenge of integrating chat clients like ChatGPT with search engines seamlessly.
  • He favors a business model where users pay for the service rather than relying solely on advertising, similar to Wikipedia's approach.
  • OpenAI is exploring sustainable business models without solely relying on advertising and is optimistic about finding a viable solution.
  • Altman emphasizes the importance of transparency and public input in defining the desired behavior of AI models to address concerns about safety, bias, and ideological lean.
  • He suggests writing out and making public the expected behavior of a model, so that deviations can be classified as bugs or intended features (a toy spec sketch follows this list).
  • Altman acknowledges the ideological bubble in San Francisco and the tech industry but feels fortunate that OpenAI is less caught up in culture wars compared to other companies.
  • As AI becomes more powerful, safety will become a primary focus for the entire company, with most employees considering safety in some broad sense.
  • OpenAI faces challenges such as technical alignment, societal impacts, and economic impacts, requiring collective effort and considering the full range of potential harms that AI could cause.
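
To make the "write out the expected behavior" idea concrete, here is a toy sketch; every rule and category name below is invented for illustration and is not OpenAI's actual policy or spec format:

```python
# Toy, invented behavior spec: each rule below is illustrative, not OpenAI policy.
BEHAVIOR_SPEC = {
    "must_refuse": {"weapon instructions", "targeted harassment"},
    "must_comply": {"factual question", "code review", "creative writing"},
}

def classify_deviation(request_category: str, model_refused: bool) -> str:
    """With a public spec, a surprising output can be labeled a bug or a feature."""
    if request_category in BEHAVIOR_SPEC["must_refuse"]:
        return "intended" if model_refused else "bug: should have refused"
    if request_category in BEHAVIOR_SPEC["must_comply"]:
        return "bug: over-refusal" if model_refused else "intended"
    return "unspecified: the spec needs a new rule"

print(classify_deviation("code review", model_refused=True))          # bug: over-refusal
print(classify_deviation("weapon instructions", model_refused=True))  # intended
```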

Leap to GPT-5 (1h28m40s)

  • Altman is excited about GPT-5's expected improvement across the board.
  • He feels GPT-5 will have a deeper understanding of the intent behind prompts.
  • He looks forward to improved programming capabilities in natural language.
  • Humans will still be programming in the future, but the nature of the work and the required skill set may change.
  • He is unsure how much the predisposition for programming will change.
  • He believes the best practitioners will use multiple tools, including natural language and traditional programming languages.
  • He considers it depressing if AGI can't interact with the physical world without human intervention.
  • He hopes humanoid or other physical-world robots will be part of the transition to AGI.
  • OpenAI has a history of working in robotics but has not focused on it recently due to resource constraints and the difficulty of robotics at the time.
  • Altman plans for OpenAI to return to robotics in the future.

AGI (1h32m24s)

  • Sam Altman believes discussing when systems will achieve specific capabilities is more useful than speculating on a vague concept of AGI.
  • He expects quite capable systems to be developed by the end of this decade, but doesn't believe they will immediately change the world.
  • Altman suggests that a major transition, such as the internet's impact through Google search, could indicate AGI's arrival.
  • He proposes that a significant increase in the rate of scientific discovery or novel scientific intuitions from an AGI system would be impressive.
  • Altman finds it challenging to specify what he would ask the first AGI but suggests starting with yes or no questions about fundamental scientific theories and the existence of alien civilizations.
  • He believes whoever builds AGI first will gain a lot of power and doesn't trust any one person to have total control over OpenAI or AGI.
  • Altman thinks no company should make decisions about AGI and that governments need to regulate its development.
  • He is not currently worried about the existential risk posed by AGI itself but acknowledges it is a possibility and that work needs to be done to mitigate this risk.
  • Altman believes other things need to be addressed before AGI can be safely developed, such as theatrical risks and the need for robust governance systems.
  • He discusses his unconventional habit of not capitalizing his tweets, attributing it to his upbringing as an "online kid" and the decline in capitalization over time.
  • Altman suggests that capitalization may become obsolete as communication becomes more informal.
  • He contemplates the philosophical implications of capitalization and its significance as a sign of respect or disrespect.
  • Altman concedes he may be the only CEO who doesn't capitalize his tweets, but doubts that many people think about it as much as Lex does.
  • He agrees that the ability of AI systems like OpenAI's "Sora" to generate simulated worlds somewhat increases the probability that we live in a simulated reality but doesn't consider it the strongest evidence.
  • Altman believes that the ability to generate increasingly realistic worlds should make people more open to the possibility of living in a simulation.
  • He discusses the concept of "simple psychedelic insights" that can lead to profound new understandings, such as the square root function.
  • Altman believes AI can serve as a gateway to new realities and new ways of seeing the world.
  • He is excited about his upcoming trip to the Amazon jungle, despite the potential dangers, because it allows him to appreciate the machinery of nature and the evolutionary processes that have shaped human existence.

Aliens (1h50m57s)

  • Sam Altman believes there are likely many intelligent alien civilizations.
  • He finds the Fermi Paradox puzzling and scary, as it suggests that intelligent civilizations may not be good at handling powerful technologies.
  • He thinks AI might help humans see intelligence in ways beyond IQ tests and simple puzzle-solving.
  • Altman finds hope in the progress humanity has made despite its flaws.
  • He believes that AGI could be a collective scaffolding that enhances human abilities, similar to how society's advancements have built upon each other.
  • He feels grateful for his life and the opportunity to witness and contribute to the creations of humans, including OpenAI's achievements.
  • If he knew he would die tomorrow, Altman would feel sad but mostly grateful for his life and the experiences he had.
  • He views his life as an awesome one, filled with remarkable human creations like ChatGPT and OpenAI's work.
