Sam Altman: What Startups Will be Steamrolled by OpenAI & Where is Opportunity | E1223

04 Nov 2024

Intro (0s)

  • OpenAI aims to continually improve its models, making them better and better over time, which may render some current business models or patches to existing shortcomings less important in the future (11s).
  • The company believes it is on a steep trajectory of improvement, and current model shortcomings will likely be addressed by future generations (18s).
  • OpenAI encourages people to be aligned with this trajectory and be prepared for the future (26s).
  • The OpenAI Dev Day event is being hosted, with Harry Stebbings of 20 VC interviewing Sam Altman (40s).
  • The interview will cover questions from the audience, with many questions already submitted (58s).

Will OpenAI's Future Focus Be on Smaller or Larger Models? (1m1s)

  • The future of OpenAI is expected to involve both smaller models like o1 and larger models, with a focus on improving performance across the board (1m1s).
  • Reasoning models are of particular importance, as they are expected to unlock significant advancements in various areas (1m16s).
  • The ability of reasoning models to contribute to new science, write difficult code, and drive progress forward is seen as a key factor in their importance (1m27s).
  • Rapid improvement is expected in the o-series of models, which is of great strategic importance (1m40s).

Will No-Code Tools Empower Non-Technical Founders? (1m47s)

  • Developing no-code tools for non-technical founders to build and scale AI apps is a future plan, with the first step being tools that make people who know how to code more productive, and eventually offering high-quality no-code tools (1m53s).
  • Currently, there are some no-code tools available, but they are not yet capable of supporting a full startup, which will take time to develop (2m17s).
  • OpenAI occupies a certain position in the stack, but it's unclear how far up the stack it will go, which makes this a brilliant question for founders who are spending time tuning their RAG systems (a minimal sketch of such a pipeline follows this list) (2m24s).
  • Founders who are building businesses that patch current small shortcomings in OpenAI's models may find that their work becomes less important in the future if OpenAI succeeds in making its models better (2m51s).
  • On the other hand, founders who build companies that benefit from OpenAI's models getting better and better may find that their businesses become more important and successful in the future (3m3s).
  • The general philosophical message to startups is that OpenAI believes it is on a steep trajectory of improvement, and that current shortcomings of the models will be taken care of by future generations (3m45s).
  • Founders are encouraged to be aligned with this vision and to focus on building businesses that will benefit from OpenAI's future improvements, rather than trying to patch current shortcomings (4m1s).
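
To ground what "tuning a RAG system" refers to in the bullets above, here is a minimal sketch of a retrieval-augmented generation pipeline, assuming the openai Python SDK (v1 interface) and an OPENAI_API_KEY in the environment; the model names, chunk ranking, and prompt wording are illustrative choices, not anything prescribed in the interview.

```python
# Minimal RAG sketch: embed documents, retrieve the most relevant chunks,
# and ask a chat model to answer using only that context.
# Assumes: openai Python SDK v1, numpy, OPENAI_API_KEY set in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts into vectors (model name is an illustrative choice)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def top_k(query: str, chunks: list[str], chunk_vecs: np.ndarray, k: int = 3) -> list[str]:
    """Rank chunks by cosine similarity to the query and keep the best k."""
    q = embed([query])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

def answer(query: str, docs: list[str]) -> str:
    """Retrieve relevant chunks, then ask the model to answer from them."""
    context = "\n\n".join(top_k(query, docs, embed(docs)))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return resp.choices[0].message.content
```

Much of the day-to-day tuning lives in choices like chunk size, the value of k, and the prompt wording; the argument in this section is that this glue may matter less as the underlying models improve.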

Where Will OpenAI Dominate & How Should Founders and Investors Prepare? (4m5s)

  • The potential for OpenAI to dominate certain markets and steamroll startups is a concern for founders and investors, who need to consider where OpenAI is likely to have an impact and where there are opportunities for growth (4m27s).
  • As an investor, it's essential to invest in opportunities that aren't going to get damaged by OpenAI's dominance, and to look for areas where OpenAI is not likely to have an impact (4m38s).
  • Many trillions of dollars of market cap will be created by using AI to build products and services that were previously impossible or impractical, and founders and investors should focus on building on top of this new technology (4m45s).
  • There are two sets of areas where OpenAI will have an impact: one where the models need to be improved to be more useful, and another where the models are already good enough to build incredible products and services (4m57s).
  • The rate of improvement in AI models has been rapid, and it's easy to forget how bad the models were just a couple of years ago, but this improvement is expected to continue (5m52s).
  • Founders and investors should focus on building products and services that deliver value, such as AI tutors or medical advisors, rather than trying to plug holes in the models (6m11s).
  • In the past, many startups were betting against the models getting better, but this has now reversed; people have internalized the rate of improvement and are now betting on the models getting better (6m22s).

Is Masa Son's $9 Trillion AI Value Prediction Realistic? (6m42s)

  • Masa Son predicted that $9 trillion of value will be created every year, which will offset the $9 trillion capex needed, and this prediction is intriguing to consider. (6m49s)
  • The creation of trillions of dollars of value is expected to happen with the current technological revolution, similar to past mega technological revolutions. (7m21s)
  • The development of next-generation systems, such as no-code software agents, will play a significant role in unlocking economic value for the world. (7m30s)
  • The ability to describe a whole company's worth of software without requiring extensive coding knowledge will make it more accessible and less expensive, creating significant value. (7m48s)
  • The impact of AI on industries like healthcare and education, which are worth trillions of dollars, will be substantial if AI can truly enable new ways of doing things. (8m10s)
  • The exact number of trillions of dollars of value created is not the point, but rather the potential for significant value creation through AI is undeniable. (8m28s)

What Role Will Open Source Play in AI & Should Models Be Open-Sourced? (8m43s)

  • Open source is a prominent method for delivering value in AI, and there's a place for open source models in the ecosystem, with some already existing and being really good (8m48s).
  • There's also a place for well-integrated, nicely packaged services and APIs, and people will pick whichever delivery mechanism works for them (9m17s).
  • Open source can be an option for customers and a way to deliver value, and agents are another way to deliver value (9m31s).
  • The definition of an agent is something that can be given a long-duration task and that needs only minimal supervision during execution (a minimal loop of this shape is sketched after this list) (9m47s).
  • People's intuition about agents is not well-developed, and they often get it wrong; a common misconception is that an agent is something that performs a simple task like booking a restaurant reservation (10m2s).
  • A more interesting example of an agent is something that performs a task a human practically couldn't, such as calling 300 restaurants to find the best option (10m54s).
  • Agents could fundamentally change the way SaaS is priced, as they can replace labor and provide value in a different way (11m55s).
  • The future of pricing for agents is uncertain, but it could be based on the amount of compute used to solve problems, rather than per seat or per agent (12m26s).
  • Specific models may need to be built for agentic use, but the current model is capable of doing great agentic tasks, and there's a huge amount of infrastructure and scaffolding to be built (12m42s).
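
To ground the agent definition above (a long-duration task carried out with minimal supervision), here is a minimal sketch of an agent loop that fans out over a large option space, in the spirit of the "call 300 restaurants" example. Everything in it (the evaluate_option step, the placeholder scoring, the stopping threshold) is a hypothetical stand-in for illustration, not an OpenAI interface.

```python
# Minimal "agent loop" sketch: work through many options for a long-duration
# task without asking the user at each step. All names and logic here are
# hypothetical placeholders for illustration.
from dataclasses import dataclass

@dataclass
class Result:
    option: str
    score: float
    notes: str

def evaluate_option(task: str, option: str) -> Result:
    """Hypothetical step: in a real system, a model plus tools (telephony,
    browsing, calendars) would do the work of checking this option."""
    score = float(len(option) % 5)  # placeholder scoring logic
    return Result(option, score, f"checked {option} for: {task}")

def run_agent(task: str, options: list[str], good_enough: float = 4.0) -> Result:
    """Evaluate options one by one, keeping the best so far and stopping early
    only when a clearly good result is found (minimal supervision: no human
    in the loop between steps)."""
    best = None
    for option in options:  # e.g. 300 restaurants
        result = evaluate_option(task, option)
        if best is None or result.score > best.score:
            best = result
        if best.score >= good_enough:
            break
    return best

if __name__ == "__main__":
    restaurants = [f"restaurant_{i}" for i in range(300)]
    print(run_agent("table for four on Friday, quiet room", restaurants))
```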

Are AI Models Depreciating Assets or Becoming Exclusive Due to High Costs? (12m52s)

  • The notion that AI models are depreciating assets due to commoditization is acknowledged, but the idea that they are not worth as much as they cost to train is disputed (12m56s).
  • The increasing capital intensity required to train models may lead to a reversion, where only a few people or organizations can afford to do so, making it more exclusive (13m4s).
  • There is a positive compounding effect in training models, as the skills and knowledge gained from training one model can be applied to improve the training of subsequent models (13m27s).
  • The revenue generated from a model can justify the investment, but this may not be true for everyone, particularly those who are behind in the field or lack a sticky and valuable product (13m32s).
  • The normal rules of business, such as having a sticky and valuable product, still apply to AI models, and those who do not meet these criteria may struggle to get a return on their investment (13m50s).
  • Having a large user base, such as the hundreds of millions of people using ChatGPT, can help amortize the high costs of training models across a large number of users (14m5s).

How Will OpenAI Continue Differentiating Its Models? (14m10s)

  • AI models will continue to differentiate over time; expanding that differentiation through reasoning is the current most important area of focus and is expected to unlock the next massive leap forward in value creation (14m11s).
  • To improve AI models, various methods will be employed, including multimodal work and adding other features that are considered super important to the ways people want to use these models (14m31s).
  • Multimodal work involves challenges, but the goal is to achieve reasoning in multimodality, which is possible as evidenced by babies and toddlers being able to do complex visual reasoning before they are good at language (14m47s).
  • The vision capabilities of AI models are expected to scale rapidly with the new inference-time paradigm set by OpenAI, with progress anticipated in image-based models (15m9s).
  • Rapid progress is expected in AI model development, with the potential for significant advancements in the near future, although the exact timeline is not specified (15m17s).

How Does OpenAI Advance Core Reasoning? (15m23s)

  • OpenAI makes breakthroughs in core reasoning through a unique approach that is considered the company's "special sauce," and while it is easy to copy existing techniques, it is harder to come up with new and unproven ideas (15m44s).
  • The ability to repeatedly do something new and unproven is a key aspect of OpenAI's culture, and this is considered one of the most important inputs to human progress (16m36s).
  • This approach is not limited to AI research, but can be applied to any field, and it is something that the world could benefit from having more of (16m57s).
  • Human talent is often wasted due to various factors such as working at a bad company, living in a country that doesn't support good companies, or other limitations (17m20s).
  • The development of AI is expected to help people reach their full potential, and OpenAI hopes to contribute to this by creating opportunities for talented individuals to work in the field, regardless of their background or circumstances (17m37s).
  • There are many people in the world who have the potential to be phenomenal AI researchers, but may not have had the opportunity to pursue this path due to various factors (17m48s).

How Has Sam’s Leadership Changed Over the Last Decade? (17m53s)

  • Over the last decade, leadership has undergone significant changes due to the rapid growth of the company, which had to scale from zero to 100 million in revenue in a short period, unlike a normal company that would have more time to grow (17m54s).
  • One of the most significant challenges was learning how to focus the company on growing 10x rather than just the next 10%, which requires a lot of active work and change (19m1s).
  • The rapid growth made it difficult for people to get caught up on the basics, and it was underappreciated how much work it took to keep moving forward while not neglecting other important tasks (19m30s).
  • Internal communication played a big role in sharing information and building structures to think about bigger and more complex projects every 8-12 months (19m50s).
  • Planning was also crucial in balancing short-term needs with long-term goals, such as building out compute or planning for office space in a rapidly growing city like San Francisco (20m10s).
  • There was no playbook for managing this kind of rapid growth, and it was learned on the fly, with many challenges and lessons along the way (20m33s).

Is Hiring Under 30s the Best Way to Build Companies? (20m52s)

  • The idea that hiring incredibly young people under 30 is a secret to building great companies was raised, an idea taught by Peter Thiel (20m59s).
  • This approach is seen as a mechanism to bring young, hungry, and ambitious people into a company, despite their lack of experience (21m13s).
  • However, it's also acknowledged that success can be achieved with both young and experienced people, as seen in the case of OpenAI (21m47s).
  • Young people in their low 20s can bring amazing fresh perspectives and energy to a company, as observed in a recent hire at OpenAI (22m0s).
  • On the other hand, when designing complex and expensive systems, it's not comfortable to take a bet on someone just starting out, and experience is preferred (22m35s).
  • The key is to have an extremely high talent bar for people of any age, rather than focusing solely on younger or older people (22m47s).
  • A strategy that only hires younger or older people is considered misguided, and it's better to look for high-potential individuals regardless of age (22m58s).
  • Inexperience does not inherently mean a lack of value, and there are incredibly high-potential people at the beginning of their careers who can create huge amounts of value (23m16s).
  • As a society, it's beneficial to bet on these young people and provide them with opportunities to grow and contribute (23m29s).

Are Anthropic Models Better for Coding? When to Choose OpenAI vs. Others? (23m34s)

  • Anthropic models have been cited as being better for coding, and they indeed have a model that excels in coding, which is impressive work (23m41s).
  • Developers often use multiple models, and it's unclear how this will evolve in a more AGI-fied world (23m58s).
  • The future may hold a lot of AI everywhere, and the current way of thinking and talking about AI might be incorrect (24m11s).
  • There may be a shift from discussing models to discussing systems, but this change will take time (24m24s).

How Much Longer Will Scaling Laws Hold for Model Iterations? (24m30s)

  • The trajectory of model capability improvement is expected to continue for a long time, following the current scaling laws, despite initial doubts about its sustainability (24m33s).
  • There have been instances where the model's behavior was not understood, training runs failed, and new paradigms had to be figured out, but these challenges were eventually overcome (25m7s).
  • One of the hardest challenges to navigate came while working on GPT-4, when issues caused consternation and it seemed unclear how to solve them, but a solution was eventually found (25m21s).
  • The shift to o1 and the idea of reasoning models was a long and winding road of research, but the excitement about building AGI has been a motivating factor for the team (25m34s).
  • Maintaining morale during difficult times, such as when training runs fail, is achieved by having a team excited to build AGI and a deep belief in the potential of deep learning, which eventually seems to work out despite stumbling blocks (25m55s).
  • The team's mindset is to "bet on deep learning" and believe that it is on the right side of progress, which helps to overcome challenges and stay motivated (26m21s).

What Unmade Decision Weighs on Sam’s Mind Most Often? (26m34s)

  • The heaviest things in life are not iron or gold but unmade decisions, and the unmade decision that weighs on one's mind can vary from day to day (26m35s).
  • Some big unmade decisions include whether to bet on the next product, how to build the next computer, or other high-stakes, one-way-door decisions that can be delayed for too long (26m53s).
  • The hard part is the volume of new decisions that come up daily, making it challenging to make the best choice, especially when one doesn't feel particularly likely to do better than someone else would (27m11s).
  • When faced with difficult decisions, it's not ideal to lean on one person for everything, but rather to have a network of 15 or 20 people with good instincts and context in specific areas to consult with (27m44s).
  • Having multiple experts to consult with allows one to "phone a friend" to the best expert in a particular area, rather than relying on a single person across the board (27m56s).

Is Sam Worried About Semiconductor Supply Chains & Global Tensions? (28m3s)

  • Concerns about semiconductor supply chains and international tensions exist, but they are not the top worry; rather, they sit somewhere in the top 10% of all worries (28m8s).
  • The top worry is the generalized complexity of the entire field, which feels like a very complex system that works fractally at every level (28m36s).
  • This complexity is evident in balancing power availability, networking decisions, chip availability, research readiness, and product development to utilize the research and pay for the system (29m0s).
  • The overall ecosystem complexity at every level is unlike anything seen in any industry before, and some version of this is the top worry (29m35s).
  • This complexity affects not only the entire field but also individual teams, including OpenAI, and requires careful management to avoid being caught off guard or unable to utilize the system (28m51s).
  • The complexity of the ecosystem makes it challenging to manage the supply chain, which is critical for the development and utilization of complex systems (29m25s).

Is $100 Billion a Realistic Entry Cost for Foundation Models? (29m46s)

  • The current wave of excitement and exuberance in the AI industry is often compared to the internet bubble, but the amount of spending is different, with Larry Ellison stating that it will cost $100 billion to enter the foundation model race as a starting point (29m57s).
  • However, it is believed that the actual cost will be less than $100 billion, and the use of previous technology revolutions as analogies for AI is a bad habit (30m9s).
  • The internet revolution was different from AI in that it was relatively easy to get started, whereas building AI itself is a more complex and expensive process (30m50s).
  • For many companies, AI will be a continuation of the internet, with AI models being used as a new primitive for building technology, but building the AI itself is a different story (31m1s).
  • The example of electricity is also used as an analogy for AI, but it doesn't make sense for many reasons (31m15s).
  • A more suitable analogy for AI is the transistor, a new discovery of physics with incredible scaling properties that seeped into many industries and products, even though the companies that use it are not typically thought of as "transistor companies" (31m31s).
  • The transistor industry involved a complex and expensive industrial process with a massive supply chain, but led to a gigantic uplift of the whole economy, and AI could have a similar impact (32m11s).
  • Most people don't think about the underlying technology, such as transistors or AI, when using products or services, but rather expect them to work and process information (32m31s).

Quick-Fire Round (32m35s)

  • If building a startup today, a good choice would be an AI-enabled vertical, such as the best AI tutoring product, AI lawyer, or AI CAD engineer, which could teach people to learn any category and unlock human potential (32m45s).
  • A book, if written, would be about human potential, as there is a need for a book that helps unlock it (33m23s).
  • An area in AI that deserves more focus is developing an AI that can understand a person's whole life, having access to all their data, and being able to provide assistance accordingly (33m36s).
  • A recent research result, which cannot be disclosed, has shown breathtakingly good results (33m50s).
  • The most respected competitor is not named; instead, there is an acknowledgment of the amazing work being done by incredibly talented and hardworking people across the field (34m2s).
  • The favorite OpenAI API is the new Realtime API, which is part of a large API business (34m26s).
  • The people most respected in AI are the Cursor team, who have built a product that delivers a magical experience and creates value in a way that others have not been able to (34m38s).
  • The tradeoff between latency and accuracy in AI should be user-controllable, allowing users to choose between faster responses or more accurate results depending on the situation (a hypothetical routing sketch follows this list) (35m27s).
  • When asked about insecurities as a leader and CEO, the acknowledged area of current uncertainty and struggle is product strategy (35m58s).
  • The current product strategy is considered a weakness, and the company needs a stronger and clearer vision from its leadership, despite having a great head of product and product team (36m4s).
  • Kevin, the newly hired product leader, is exceptional and world-class due to his discipline, focus, and ability to prioritize and make decisions on behalf of the user (36m34s).
  • In the next five years, OpenAI expects an unbelievably rapid rate of improvement in technology, leading to significant advancements in AI research and other scientific fields (37m26s).
  • Despite these rapid advancements, society itself is expected to change surprisingly little, with progress outperforming expectations but not necessarily leading to drastic changes in the short term (37m58s).
  • This expectation is based on past experiences, such as the passing of the Turing test, which did not lead to significant changes in society despite being a major milestone (38m11s).
  • In the long term, however, society is expected to undergo significant changes, but the exact nature and timing of these changes are difficult to predict (38m46s).
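
As a follow-up to the latency-versus-accuracy bullet above, the sketch below shows one way a user-controllable tradeoff could look: the caller picks an effort level and the request is routed to a faster or a more deliberate configuration. The effort levels, model names, and settings are hypothetical assumptions for illustration, not actual OpenAI parameters.

```python
# Hypothetical sketch of a user-controllable latency/accuracy tradeoff:
# the caller chooses an effort level; higher effort routes to a slower but
# typically more accurate configuration. All names and values are illustrative.
from typing import Literal

Effort = Literal["fast", "balanced", "thorough"]

ROUTES = {
    "fast":      {"model": "small-model",     "max_steps": 1,  "timeout_s": 2},
    "balanced":  {"model": "mid-model",       "max_steps": 4,  "timeout_s": 10},
    "thorough":  {"model": "reasoning-model", "max_steps": 32, "timeout_s": 120},
}

def route(query: str, effort: Effort = "balanced") -> dict:
    """Return the request configuration implied by the user's chosen tradeoff."""
    return {"query": query, **ROUTES[effort]}

if __name__ == "__main__":
    print(route("quick fact check", effort="fast"))
    print(route("draft a detailed contract analysis", effort="thorough"))
```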
