Remember the GPT Store? We don’t. 2024 was awesome for AI startups
Coming up (0s)
- It is now possible to start a company that can make tens of millions of dollars in just 24 months, with initial investments potentially as low as $2-5 million (7s).
- Many startups in previous batches were able to secure Enterprise proofs of concept or pilots, despite initial cynicism about whether these would translate into real revenue (14s).
- However, it has been observed that these pilots have now turned into real revenue, although it is still early days (26s).
- The current pace of progress in the field is extremely rapid, making it anyone's game (33s).
- The hosts of the episode, Gary, Jared, Harj, and Diana, have collectively funded companies worth hundreds of billions of dollars from the beginning (54s).
What made 2024 great for startups (1m0s)
- 2024 has been a great year for AI startups, with the consensus being that everything has broken in favor of startups, allowing them to build successful applications on top of existing models (1m7s).
- When ChatGPT launched two years ago, the immediate consensus was that all the value would go to OpenAI, and the GPT Store would crush every single person trying to build an AI application, leaving no opportunity for startups (1m22s).
- However, the GPT Store itself turned out to be insignificant, and many big AI applications have been built by startups, such as Perplexity, Glean, Casetext, Harvey, and PhotoRoom, showing that there are many opportunities for startups in the AI space (1m49s).
- It's now possible for a startup to make tens of millions of dollars in just 24 months from zero, with relatively low initial investment, as seen in the story of Opus Clip, which never had to raise a real Series A (2m21s).
- The emergence of Anthropic and its Claude models led to a consensus view that all the value would go to foundation model companies, and that the only way to compete in AI was to raise huge amounts of money, but this turned out to be completely untrue (2m56s).
- The open-sourcing of models, such as Meta's LLaMA, has driven the development of new models, like Vicuna, and has enabled people to do local development and run models on devices, increasing choice and competition in the AI space (3m33s).
- The turning point came in the summer of 2023, when LLaMA became the top foundation model in all rankings and benchmarks, showing that choice matters and that it's not just about the model, but also about product, sales, and other factors (4m26s).
- The ability to adjust to user feedback and reduce churn has become more important than capturing a light cone of all future value through a model, especially for startups working with large language models (LLMs) (5m8s).
- A year ago, startups were building model routers as an API to call specific models, mainly to reduce costs, but now it's clear that model routers have become a great entry point for building a new stack for LLM-powered apps (5m26s).
- Many applications don't want to be beholden to a specific model, and companies are now using multiple models for their applications, choosing the best one for speed, complexity, or specific tasks (6m27s).
- This multiple model architecture is similar to the concept of a model router but has evolved into more of an orchestration, where companies use the best model for the best task (6m51s).
- Examples of companies using this approach include Camper, which uses the fastest model for parsing PDFs and more complex models for other tasks, and companies doing fraud detection, which use a junior risk analyst model and a bigger model for more complex tasks (7m8s).
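The orchestration pattern described above can be sketched in a few lines. This is a hypothetical illustration, not code from any company in the episode: the model names and the route table are assumptions standing in for "fastest model for PDF parsing, junior-analyst model for triage, big model for escalations."

```python
from dataclasses import dataclass


@dataclass
class Task:
    kind: str      # e.g. "parse_pdf", "fraud_triage", "fraud_deep_review"
    payload: str


# Route table: task kind -> model chosen for speed vs. capability.
# All names here are illustrative placeholders.
ROUTES = {
    "parse_pdf": "fast-small-model",              # high volume, low complexity
    "fraud_triage": "junior-analyst-model",       # first-pass screening
    "fraud_deep_review": "large-frontier-model",  # escalations only
}


def route(task: Task) -> str:
    """Pick a model for the task, falling back to the most capable one."""
    return ROUTES.get(task.kind, "large-frontier-model")


print(route(Task("parse_pdf", "invoice.pdf")))  # fast-small-model
print(route(Task("unknown", "...")))            # large-frontier-model
```

The point of the pattern is that the routing layer, not any single model, becomes the stable interface: swapping in a better model is a one-line change to the table.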
- Another company, Variant, is working on teaching state-of-the-art open-source LLM models aesthetics, starting with icon generation, by building a post-training workflow that can adapt to smarter and better models (7m59s).
- Variant's approach involves teaching models aesthetics at the SVG level, which can translate into various aesthetics, and is an interesting and newer approach to post-training (8m37s).
- The idea that all value accrues to the model, especially given open-source models, has been called into question; it's a flashback to a year ago, when many startups in a batch would land Enterprise proofs of concept or pilots, but there was cynicism about whether those pilots would translate into real revenue (9m3s).
- The parallels between new technologies, such as blockchain and crypto, and Enterprises wanting to run pilots and proof of concepts, have been noted, with the chief innovation officer needing to check off that they have explored the new technology (9m26s).
- However, it has been observed that these pilots have turned into real revenue, and startups in the YC batch are now selling into real Enterprises faster than before, ramping up revenue and reaching milestones like a million dollars ARR faster than ever seen (9m43s).
- The summer and fall batches have demonstrated this trend, with the fall batch being the first to notice it, and in aggregate, the batches have grown 10% a week, which is a rare occurrence (10m17s).
- The time it takes to reach $100 million in annual revenue is trending down, and according to Ben Horowitz, the number of companies that can make it to $100 million in revenue per year has increased by 10x every decade, from 15 companies 20 years ago to potentially 1,500 companies a year now (11m12s).
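The Ben Horowitz figure quoted above is internally consistent: 10x growth per decade over two decades compounds to 100x, taking 15 companies to 1,500.

```python
# Sanity check of the quoted figure: companies reaching $100M/yr revenue,
# growing 10x per decade for two decades.
start = 15       # companies 20 years ago
per_decade = 10  # growth multiple per decade
decades = 2

now = start * per_decade ** decades
print(now)  # 1500
```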
- The growth of vertical AI is expected to enable this trend, with the potential for over 1,000 companies to bloom, as the value proposition of these products is incredibly strong, and companies are making rational decisions based on ROI calculations (11m42s).
- The truisms about Enterprise sales cycles and the difficulty of getting big Enterprise deals are being disproven, as companies are smart and will make rational decisions when the ROI is fantastic (12m10s).
- The reliability of LLMs has improved, and the concern that they are not reliable enough to deploy in the Enterprise has decreased, as they are no longer seen as too risky a technology (12m28s).
- AI startups have seen significant growth in 2024, with advancements in technology leading to real-world deployments and revenue generation (12m41s).
- The reliability of AI agents has improved due to various techniques, enabling them to handle thousands of tasks daily and operate at a large scale (12m47s).
- A notable trend in 2024 is the concept of thinking of AI as "agentic," which refers to AI systems that can perform complex tasks and interact with their environment (13m11s).
- This shift towards agentic AI has led to the development of AI-powered agents for various applications, moving beyond simple chat-like interfaces (13m28s).
- The capabilities of AI models continue to advance, allowing them to perform complex, multistep tasks, and even take control of computers and interact with other applications (13m40s).
- The growth of AI has also led to increased interest in regulation, although it seems that the industry has avoided significant regulatory hurdles for the time being (13m52s).
Tech and gov’t intersecting more (13m55s)
- The Biden Executive Order (EO) may not survive the Trump White House, which could impact the tech industry, particularly with regards to the regulation of advanced math and AI development (13m56s).
- The intersection of tech and politics has become increasingly significant, with national politics affecting startups and companies less than a year old, causing concern about the potential for regulatory capture by large players like OpenAI (14m24s).
- Startups were worried that they might not be able to build innovative AI applications due to potential regulations, but the current situation seems to favor them, allowing them to continue developing AI applications (14m43s).
- The platforms themselves, such as Windows, have access to APIs and statistics about what works on their platforms, giving them an advantage in building new features and potentially creating a monopoly like the Win32 monopoly (15m8s).
- Despite the current favorable situation for startups, the tech landscape is rapidly changing, and it's essential to continue working on these issues to ensure that startups can remain competitive (15m27s).
- Overall, 2024 has been a great year for AI startups, but it's crucial to remain vigilant and continue advocating for a favorable regulatory environment (15m36s).
Who else in tech had W’s in 2024? (15m37s)
- 2024 has been a great year for AI startups, with several big funding rounds, including OpenAI raising $6 billion, Scale AI raising $1 billion, and SSI raising $1 billion (15m45s).
- Scale AI is a classic startup story, where young programmers built a successful company through hard work and smart decisions, starting with a completely different idea and pivoting several times (16m22s).
- The founders of Scale AI, including Alexandr Wang, initially applied to Y Combinator (YC) with a healthcare-related idea, a website for booking doctor's appointments, but later pivoted to data labeling for self-driving car companies (17m18s).
- The idea for data labeling came from Alex's experience working at Quora, where he used Amazon Mechanical Turk for moderation and realized it had limitations, leading him to build a better version (17m33s).
- Scale AI gained early traction from a single customer, Cruise, which needed to label large amounts of image data for its self-driving cars, and later caught a second wave with the rise of large language models (LLMs) (18m14s).
- The company has been able to ride two waves of demand for labeled data, first for computer vision in self-driving cars and then for LLMs, positioning itself well for future growth (18m49s).
- Riding the LLM wave, this multi-billion dollar business is expected to grow into a hundred-billion-dollar-plus company, and many startups are pivoting into AI ideas that are taking off (19m6s).
- Founders who waited it out and found an idea they couldn't find before are seeing more success, such as a company that pivoted throughout the whole batch but didn't land on a great idea until six months after it ended, when they realized they could automate tasks at a dentist's office (19m31s).
- This company built an AI back office for dentist offices and is now experiencing fantastic week-over-week growth (19m52s).
- Many cases like this are springing up, with young technical founders willing to bet on a glimmer of an idea that the future is going to be in AI (20m8s).
- Some teams have pivoted into different spaces, such as computer vision, and are finding success, although it's still early (20m28s).
- Trends and waves that startups have been riding coming out of the batches include voice AI, which has been discussed previously (20m44s).
Voice AI has a lot of potential applications (20m48s)
- Voice AI has a lot of potential applications across various verticals, and it's unlikely to be a "win or take all" situation, with multiple companies emerging in different areas (20m48s).
- There are numerous applications for voice AI, including language learning, remote work, and customer support, with multiple companies likely to emerge in each area (21m34s).
- Customer support is not a single vertical, but rather multiple "flavors" with different workflows and requirements per industry, making vertical AI agents well-suited for this area (21m54s).
- Vertical AI agents are likely to flourish due to the specific types of workflows required per industry, with companies like Parel emerging in this space (22m15s).
- There will likely be both horizontal infrastructure companies and vertical applications in the voice AI space, with companies like Stripe providing a useful analogy for the value of horizontal infrastructure (22m45s).
- Horizontal infrastructure companies will make it easy for others to build their own voice AI applications, while vertical apps will provide valuable solutions for specific industries (23m8s).
Robotics is on the rise (23m17s)
- There is an increase in the number of founders building robots, with a notable example being the Weave Robotics team, which is working on a robot that can be shipped in 2025 and costs around $65,000 to $70,000 due to the high cost of actuators and safety features required for home use (23m17s).
- The development of robots is driven by the idea that large language models (LLMs) can serve as the "consciousness" of the robot, enabling it to interact with its owner and other people in the household, and perform tasks such as folding laundry (23m48s).
- Robotics is considered to be half AI and half hardware, with the AI component starting to work well, but the hardware still being expensive and challenging to develop (24m23s).
- Startups may have the opportunity to build only the AI or software piece of a robot and run it on commodity hardware, achieving great results, but the opposite scenario is also possible, where companies need to excel in both hardware and software (24m42s).
- Tesla is expected to be a major player in the robotics space, but multiple companies are trying to find creative ways to run models on commodity hardware for specific use cases (25m0s).
- The robotics industry has not yet reached its "ChatGPT moment," but self-driving cars have been successfully deployed in San Francisco, with thousands of people using them daily (25m20s).
- Despite the limited deployment of self-driving cars, with only a few thousand units in the entire world, they have become a regular mode of transportation for some people, including Tony from DoorDash (25m48s).
What were the big flops of 2024? (25m57s)
- The Apple Vision Pro and Quest were mentioned as examples of AR hardware that have not gained significant traction, with the main challenge being the need for a more lightweight form factor to fit the required computer and optics. (25m58s)
- The development of AR hardware is constrained by physics, making it difficult to fit all the necessary components into a small form factor, and more engineering and physics breakthroughs are needed to overcome these challenges. (26m22s)
- The lack of AR hardware in people's hands makes it unattractive for app developers to build apps, which in turn makes it less appealing for people to buy the hardware, creating a chicken-and-egg problem. (26m44s)
- One of the few successful use cases for AR hardware is using it as a large monitor, which works well for watching movies. (26m57s)
- The Meta Ray-Ban glasses are an exception, as they have been well-received for their audio and voice capabilities, and can be used in conjunction with voice modes for AI models like ChatGPT or Claude to have conversations on various topics (27m15s).
AI coding really broke out in 2024 (27m54s)
- 2024 was the year AI coding really broke out, with the majority of YC Founders now using Cursor or other AI IDEs, which exploded in popularity over the summer (27m54s).
- Devin proved that large programming tasks could be fully automated, a significant development that occurred in 2024 (28m5s).
- Replit agents continued to improve, with anecdotal stories of people building Replit apps on their way home from work and being impressed by the technology (28m11s).
- Replit popularized the technology among non-technical people for the first time, making it more accessible (28m24s).
- A lower-technical version of this is Anthropic's Artifacts, which lets users prototype simple apps, chat with Claude to build simple front pages, and create working versions of their ideas (28m30s).
- Artifacts enables one person to do much more, as they can prototype and show working versions of their ideas to their engineering team, potentially changing the nature of how startups are formed (28m40s).
Is startup hiring going to change? (29m0s)
- Some founders who recently raised their seed rounds are not following the classic advice of hiring people to fill specific roles; instead, they are having software engineers write processes that use large language models (LLMs) upfront, potentially delaying the need to hire specialized personnel until later rounds of funding (29m0s).
- Companies are looking for engineers who are familiar with AI coding stacks and can effectively use tools like LLMs, with some using pair programming as an interview check to assess a candidate's ability to work with these tools (30m2s).
- The use of AI coding agents has broken the standard programming interviews that companies have been using for years, sparking a debate about whether to penalize or prevent candidates from using these tools during interviews (30m42s).
- Some argue that the industry will adapt to the use of AI coding agents and that candidates will be measured on their absolute output, with the bar for productivity being raised (31m18s).
- Stripe is cited as an example of a company that shifted away from traditional computer science problems in interviews and instead focused on practical tasks, such as building a web application, and it is predicted that the industry will undergo a similar shift in response to the use of AI coding agents (31m26s).
- The impact of AI on startup hiring and scaling is still unclear, but it is noted that Amazon has a large number of internal LLM-powered applications, potentially indicating a future trend (32m18s).
- Jeff Bezos is mentioned as being involved in AI development at Amazon, and his comments suggest that the company is exploring the use of LLMs to improve its internal operations (32m21s).
- Amazon Web Services (AWS) has significantly changed how startups are built, and it is expected that they will release new applications and stacks for companies to build and scale on (32m46s).
- One application Amazon has worked on is a giant migration from an old version of a programming language, including upgrading different versions of databases, using large language models (LLMs) to do it (33m11s).
- The migration involved changing hundreds of thousands of lines of code, which would have taken an engineering project of 6 months or more, but was completed in weeks using LLMs (33m21s).
- Amazon is a perfect use case for LLM-empowered agents doing back-office processes, and they have launched their own foundation model, which is starting to top some benchmarks (33m30s).
- Despite Amazon's advancements in LLMs, some employees, especially those right out of college, do not have access to LLMs or are barred from using them in their day-to-day work (33m54s).
- This highlights the uneven distribution of access to new technologies, even within the same organization, which could be beneficial for open-source and self-hosting LLMs (34m9s).
- There is interest in building personal stacks of LLMs, such as using Apple Minis to run Llama on a personal cluster, but this requires significant hardware and setup (34m25s).
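A personal Llama stack like the one described above is typically exposed through an OpenAI-compatible HTTP endpoint by the local inference server. The sketch below is a hedged illustration: the URL, port, and model name are assumptions about a typical local setup, not details from the episode.

```python
import json

# Assumed local endpoint; adjust for whatever inference server you run.
LOCAL_URL = "http://localhost:11434/v1/chat/completions"


def build_chat_request(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON body for a chat completion against a local model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


body = build_chat_request("Summarize this document.")
print(json.dumps(body))

# To actually send it (requires a running local server):
# import urllib.request
# req = urllib.request.Request(
#     LOCAL_URL,
#     data=json.dumps(body).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read())
```

Because the wire format matches the hosted APIs, the same application code can point at a personal cluster or a cloud provider by changing only the base URL.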
YC in person Demo Day is back! (34m43s)
- Y Combinator (YC) has been operating in-person in San Francisco for some time, and they recently had a live demo day, marking the end of Zoom demo days and alumni demo days (34m43s).
- The in-person demo day was held at the Masonic Center, with 1,200 investors in attendance, creating a favorable ratio of around 10 investors per company, which was beneficial for the founders (35m4s).
- The energy of an in-person demo day is unique and cannot be replicated over Zoom, and the event also serves as a reunion for investors in Silicon Valley (35m37s).
- YC demo days are now held four times a year, attracting top early-stage investors from around the world to San Francisco, making it a celebration of the startup ecosystem (36m4s).
- The trend of remote work is declining, with late-stage startups prioritizing getting everyone back into the office, marking the end of the era of remote work being the norm (36m27s).
- There is a renewed sense of optimism in San Francisco, partly due to recent elections, contributing to the city's resurgence as a hub for startups and innovation (36m46s).
San Francisco optimism + outro (36m50s)
- There is a sense of optimism in San Francisco, particularly with the new mayor and a thin moderate majority on the Board of Supervisors, which could lead to positive changes in the city (36m50s).
- Some of the individuals who contributed to the "doom loop" in San Francisco are no longer in power, which is seen as a step in the right direction (37m3s).
- It is acknowledged that progress is often overestimated in the short term but underestimated in the long term, with significant changes potentially taking 10 to 20 years to materialize (37m20s).
- The growth of startups in the city is cited as an example of this, with the number of companies reaching $100 million in revenue increasing from 15 to 1,500 per year (37m30s).
- San Francisco is seen as a potential beacon for the world's smartest people, and it is hoped that the city can continue to build and attract talent (37m42s).
- The message concludes with a holiday greeting and a look forward to the new year (37m53s).