Google CEO: AI Is Creating Deadly Viruses! If We See This, We Must Turn Off AI!

14 Nov 2024

Intro (0s)

  • Eric Schmidt, the former CEO of Google, grew the company from $100 million to $180 billion in revenue and emphasizes the importance of risk-taking in leadership and business, citing Elon Musk as an example of an entrepreneur who takes huge risks and fails fast. (22s)
  • At Google, the "70/20/10 rule" generated $10-40 billion in extra profits over a decade, and it's a principle that anyone can apply to their business. (44s)
  • The use of AI is crucial for businesses to succeed, and it's essential to use it in every aspect of the business. (53s)
  • The advent of artificial intelligence raises questions about human survival, and there are concerns about the dangers of advancing with AI and losing control over it. (1m3s)
  • The biggest fear about AI is not what people might imagine, and it's essential to consider the potential risks and consequences of developing and using AI. (1m21s)
  • The host asks for support by subscribing to the show, promising to make the show better every week by listening to feedback and finding desired guests. (1m33s)

Why Did You Write a Book About AI? (2m5s)

  • The guest has had a unique and varied career, with a wide range of book topics to choose from, but they chose to write about AI, specifically in their book "Genesis". (2m6s)
  • The guest's interest in AI was sparked 10 years ago when they attended a conference with Henry Kissinger, where Demis Hassabis spoke about AI, and Kissinger was deeply impacted by the presentation. (2m44s)
  • Henry Kissinger, who became one of the guest's closest friends, had been grappling with questions about human reason and knowledge since he was 22, when he wrote his undergraduate thesis at Harvard on Kant. (3m9s)
  • The guest found themselves part of a group of people trying to understand what it means to be human in an age of AI, and how the arrival of AI will change human life and thought. (3m21s)
  • The arrival of AI is considered a huge moment in history, as humans have never had an intellectual challenger of their own ability before, and it raises questions about the impact of AI on human existence. (3m42s)

Your Experience in the Area of AI (3m49s)

  • As a teenager, the individual was interested in science and played with model rockets and model trains, typical for a boy of their generation, but was too young to be a video game addict at that time (4m5s).
  • In college, the person developed a strong interest in computers, which were relatively slow back then but still fascinating, with the college computer being 100 million times slower than a modern smartphone (4m27s).
  • The college computer was shared among the entire university, and the rapid advancement in computing power, as described by Moore's Law, has significantly impacted wealth creation, career development, and company formation throughout their life (4m40s).
  • The individual considers themselves lucky to have been born with an interest in something that was about to experience rapid growth and explosion, ultimately getting swept up in the developments that followed (4m52s).

Essential Knowledge to Acquire at 18 (5m6s)

  • A discussion took place with Raph, an 18-year-old, and his family about his future career and the decisions he should make regarding the information and intelligence he acquires for himself (5m32s).
  • The most important thing for Raph to acquire at 18 years old is to develop analytical and critical thinking skills, which can be achieved through various fields such as math, science, law, or entertainment (5m58s).
  • It is encouraged that Raph learn how to write programs in the Python language, as it is easy to use, understand, and has become the language of AI (6m17s).
  • Python programming skills are valuable, and AI systems often write code in Python, making it a useful skill to develop (6m26s).
  • A suggested project for an 18-year-old to develop Python programming skills is to create a game, as it can be an interesting and engaging way to learn (6m41s).
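As an illustration of the kind of starter project suggested above, here is a minimal number-guessing game in Python (an invented example, not one from the conversation):

```python
def feedback(secret: int, guess: int) -> str:
    """Compare a guess to the secret number."""
    if guess < secret:
        return "go higher"
    if guess > secret:
        return "go lower"
    return "correct"

def play(secret: int, guesses) -> int:
    """Play through a list of guesses; return the winning turn, or -1."""
    for turn, guess in enumerate(guesses, start=1):
        if feedback(secret, guess) == "correct":
            return turn
    return -1

# Example round with a fixed secret so the outcome is reproducible:
# play(42, [50, 25, 42]) -> 3
```

A learner can extend this with `input()` for interactive play, a random secret, or a guess counter, which is exactly the kind of incremental tinkering that makes games a good teaching vehicle.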

Is Coding a Dying Art Form? (6m49s)

  • The advice given to 18-year-olds five to ten years ago was to learn how to code, but with the rise of AI and large language models that can write code, it is questioned whether coding is a dying art form (6m49s).
  • Despite AI's ability to write code, coding is not a dying art form: these systems expose interfaces called APIs (application programming interfaces) that can be programmed against (7m10s).
  • One of the large revenue sources for AI models is through API calls, where a program is built and a question is asked, such as identifying objects in a picture (7m20s).
  • The use of programming languages like Python is still relevant, especially when utilizing available tools to build something new and interesting (7m39s).
  • The ability to have fun and create something new using AI tools and programming languages is still possible and encouraged for young people (7m37s).
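An API call of the kind described above might look like the following sketch, which packages an image and a question as JSON and posts it to a vision endpoint (the URL, field names, and response shape are hypothetical placeholders, not any real provider's API):

```python
import base64
import json
import urllib.request

API_URL = "https://api.example.com/v1/vision/label"  # hypothetical endpoint

def build_request(image_bytes: bytes, question: str) -> dict:
    """Package an image and a question as a JSON-serializable payload."""
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "question": question,
    }

def label_objects(image_bytes: bytes, api_key: str) -> list:
    """POST the payload and return the model's list of detected objects."""
    payload = build_request(image_bytes, "What objects are in this picture?")
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("objects", [])
```

The point of the example is the shape of the work, not the specifics: the programmer writes ordinary Python glue code, and the intelligence lives behind the API.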

What Is Critical Thinking and How Can It Be Acquired? (7m49s)

  • Critical thinking involves distinguishing between being marketed to or lied to and being given an argument based on facts, and it is crucial to check assertions before believing or repeating them (7m49s).
  • Due to social media, people have become accustomed to believing information without verifying it, often because their friends or others believe it, which can lead to the spread of false information (8m9s).
  • When encountering a statement, one should check its validity and consider whether to criticize and correct the person making the statement or let it go, but it is essential to be in a position to evaluate the statement's truthfulness (8m34s).
  • Critical thinking requires verifying information, especially when it sounds plausible, to ensure that what is being repeated is true, and if unsure, it is best to keep quiet (9m17s).
  • A crucial aspect of critical thinking is operating on basic facts, such as the reality of climate change, which is a mathematical fact supported by repeatable experiments and the scientific method (9m45s).
  • Science relies on the falsifiability of assertions, meaning that they can be proven wrong, and it is the constant testing and evaluation of scientific claims that makes them reliable (10m19s).
  • Acquiring critical thinking skills involves being responsible for verifying information before sharing it and recognizing the importance of basic facts in decision-making and governance (9m32s).

Importance of Critical Thinking in AI (10m24s)

  • Critical thinking is especially important in a world of AI, as AI allows for perfect misinformation, which can be highly addictive and lead to people getting stuck in "rabbit holes" of confirmatory bias (10m26s).
  • The TikTok recommendation algorithm, a variant of the multi-armed "bandit" algorithm, serves users the content they want but occasionally introduces content from adjacent areas, a pattern that can be highly addictive and lead to negative consequences (10m37s).
  • Social media algorithms, including TikTok, are designed to optimize an objective function, which in this case is attention, and the easiest way to maximize attention is to maximize outrage, often by spreading misinformation (11m55s).
  • The scarcity of attention is a significant societal issue: the economist Herb Simon predicted in 1971 that attention would become the scarce resource, and its monetization has significantly changed the way people consume information (12m34s).
  • The monetization of attention has led to a culture of outrage, where misinformation and sensational content are spread to maximize attention and revenue (12m10s).
  • The amount of video content consumed by young people is staggering, with an average of 2.5 hours per day, and this has significant implications for their mental health and well-being (12m55s).
  • The addictive nature of social media and AI-powered content is a concern, as it can lead to negative consequences such as self-harm and suicide, and it is not clear whether children who grow up with these tools will be okay (13m34s).
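The "mostly serve what the user wants, occasionally explore adjacent content" behavior described above is the classic epsilon-greedy bandit strategy, sketched here (an illustrative toy; TikTok's actual system is far more complex and not public):

```python
import random

def bandit_recommend(engagement: dict, epsilon: float = 0.1, rng=random) -> str:
    """Epsilon-greedy: usually serve the category with the best observed
    engagement (exploit), occasionally try another category (explore)."""
    if rng.random() < epsilon:
        return rng.choice(list(engagement))        # explore an adjacent area
    return max(engagement, key=engagement.get)     # exploit the best-known area

def update(engagement: dict, category: str, reward: float, lr: float = 0.1):
    """Nudge the running engagement estimate toward the observed reward."""
    engagement[category] += lr * (reward - engagement[category])
```

The objective-function point follows directly: whatever signal `reward` encodes (watch time, outrage-driven engagement) is exactly what the loop learns to maximize.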

When Your Children's Best Friend Is a Computer (13m40s)

  • There is a growing concern about the impact of social media on teenage girls, who are more advanced than boys at a younger age and are more susceptible to the negative effects of social media, such as rejection and emotional distress, which has led to record levels of emergency room visits and self-harm (13m51s).
  • Society is starting to recognize the problems associated with excessive social media use, and some schools are taking steps to limit phone use in the classroom (14m19s).
  • The AI Revolution raises questions about the impact of technology on the identity and values of children, who are more likely to be influenced by AI than adults (14m30s).
  • The idea of a child's best friend being a computer is a new and unprecedented phenomenon, and it is unclear what the long-term effects will be (14m48s).
  • The widespread adoption of AI technology is essentially an experiment on a large scale, without a control group, and society will have to adapt and adjust to the consequences (14m57s).
  • Despite the challenges, there is reason to be optimistic about the future, as society will work to establish biases and values that promote a moral high ground, and future generations will likely live longer, more prosperous lives with less conflict (15m8s).

How Would You Reduce TikTok's Addictiveness? (15m38s)

  • If they were the CEO of TikTok, they would differentiate between "good revenue" and "bad revenue": good revenue comes from improving the product, while bad revenue comes from exploiting users' psychology to increase addictiveness, which can lead to negative effects such as anxiety and depression among young people (15m39s).
  • A possible approach to reducing TikTok's addictiveness would be to prioritize making the product better over maximizing revenue, as this approach has been proven to be sustainable and morally sound, as seen in Google's past experiences (16m11s).
  • The alternative model of maximizing revenue by prioritizing content that draws users in, such as lies and misinformation, is not only morally wrong but also unsustainable, as it can lead to the degradation of online communities and the spread of harmful content (16m52s).
  • Gresham's law, originally "bad money drives out good" and here adapted to speech, is relevant in this context: bad speech drives out good speech, which highlights the importance of promoting high-quality content and discouraging the spread of misinformation (17m6s).
  • The goal should be to make social media and the online world represent the best of humanity, including hope, excitement, optimism, creativity, and invention, rather than the worst aspects of human psychology (18m2s).
  • The CEO's past experiences, including working at Google, Sun Microsystems, Bell Labs, and other companies, have taught them the importance of prioritizing the well-being of users and promoting a positive online environment (18m14s).

Principles of Good Entrepreneurship (18m38s)

  • To build a great company, it's essential to identify and work with a truly brilliant person who can create a brilliant product, as they are the ones who can make a significant difference in the world (18m57s).
  • This brilliant person is often referred to as a "diva," someone who is opinionated, strong, and argumentative, but brilliant, like Steve Jobs, who wanted perfection (19m28s).
  • Aligning oneself with a diva is a good idea, as they can drive innovation and change, whereas the alternative, a "knave," is someone who acts on their own account and prioritizes their own interests over the greater good (19m43s).
  • A knave is not someone who is trying to solve problems in a clever way or make a positive impact, but rather someone who is self-serving and can hinder a company's progress (19m58s).
  • To achieve success, a company needs someone who is passionate about solving problems and wants to make a difference, as this is how the world moves forward (20m6s).
  • Without such a person, a company is unlikely to go anywhere, as it's too easy to keep doing what they've always done, and innovation requires changing what you're doing (20m10s).
  • Historically, most companies have been one-hit wonders, but with the current generation of tech companies, people are smarter and better educated, leading to repeatable waves of innovation (20m25s).
  • A good example of a company that has reinvented itself multiple times is Microsoft, which has been able to adapt and innovate over its 45-year history (20m47s).

Founder Mode (20m57s)

  • The concept of "founder mode" or "founder energy" refers to the high conviction, disruptive thinking, and ability to reinvent oneself that some founders possess (21m1s).
  • Companies on the S&P 500 are staying listed for shorter periods, with the average duration decreasing from 33 years to 17 years to 12 years, and is projected to be around 8 years by 2050 (21m15s).
  • The importance of having a brilliant founder is emphasized, as they are crucial for driving innovation and growth (21m34s).
  • Universities keep producing brilliant founders; examples of successful entrepreneurs who achieved success at a young age include Michael Dell, Bill Gates, Larry Ellison, Larry Page, and Sergey Brin (21m47s).

The Backstory of Google's Larry and Sergey (22m1s)

  • Larry Page and Sergey Brin met at Stanford University as graduate students, where they were on a grant from the National Science Foundation (22m11s).
  • Larry Page invented the algorithm called PageRank, which is named after him, and he and Sergey wrote a paper that is still one of the most cited papers in the world (22m21s).
  • The paper introduced a way of understanding the priority of information, and mathematically, it was a Fourier transform of the way people normally did things at the time (22m32s).
  • Larry and Sergey wrote the code for their algorithm, but they were not skilled programmers, and they had to borrow power from the dorm room next to theirs to run their computer (22m45s).
  • The data center for their project was initially set up in their dorm room bedroom, and later they moved to a building owned by the sister of a girlfriend at the time (22m55s).
  • The first investor in their company was Andy Bechtolsheim, the founder of Sun Microsystems, who gave them $100,000, which ultimately became billions of dollars (23m11s).
  • The founders set up their company in a little house in Menlo Park, which was later bought by Google as a museum, and they worked in the garage with their four employees (23m35s).
  • Larry and Sergey were skilled software people but not good at hardware, and they built their computers using corkboard to separate the CPUs, which would often catch on fire (23m53s).
  • Eventually, proper hardware was built with the help of hardware engineers, but the early setup reflects the scrappy nature of the company's beginnings (24m9s).
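The PageRank algorithm mentioned above can be sketched in a few lines of Python as power iteration over a link graph (an illustrative toy version, not Google's production code; the 0.85 damping factor is the value used in the original paper):

```python
def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """Power-iteration PageRank over a dict mapping page -> list of outlinks.

    A page is important if important pages link to it; the damping factor
    models a surfer who occasionally jumps to a random page.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                       # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                              # split rank among outlinks
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank
```

On a toy graph where pages `a` and `b` link to each other and `c` links only to `a`, the iteration ranks `a` highest and `c` lowest, capturing the "priority of information" idea in the paper.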

How Did You Join Google? (24m27s)

  • Larry Page realized that Google would need someone with specific skills in the future, even though they didn't need them at the time of the initial meeting (24m35s).
  • Larry Page and Sergey Brin thought for the long term and envisioned Google's mission as organizing all the world's information, which was an audacious goal 25 years ago (24m50s).
  • The company started with web search and eventually expanded, with Larry Page studying AI extensively and working on it (25m1s).
  • Larry Page acquired the company DeepMind, which was the first to see the AI opportunity and has been the source of many AI advancements in the last decade (25m13s).
  • Many AI developments in the last decade have come from people who were either at DeepMind or competing with them (25m26s).

Principles of Scaling a Company (25m33s)

  • When scaling a company, it's essential to think about scale, which refers to the ability to go from zero to infinity in terms of the number of users and demand, and to have ideas that benefit from this scale (26m50s).
  • Exceptional entrepreneurs, such as Elon Musk, have the ability to make huge risks pay off through sheer force of personal will, but not everyone will have the same judgment and ability to take such risks (25m57s).
  • Companies that can scale well will likely use powerful networks, have a big computer in the back doing AI calculations, and use AI at every aspect of their business (27m20s).
  • The use of AI is crucial for future success, as it can discover answers and solve problems in a way that traditional programming cannot (27m49s).
  • The distinction between traditional programming and AI is that AI can learn the answer, rather than being explicitly programmed, and this is gradually replacing analytical programming (28m8s).
  • Large language models, such as those used in translation, are organized around predicting the next word, and this technology has the potential to be applied to many other areas, such as biology and robotics (28m20s).
  • The development of large language models and deep learning, as seen in the Transformer paper and models like GPT-3 and ChatGPT, is essentially about predicting the next word and getting it right (28m38s).
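The "predicting the next word" idea above can be illustrated with the crudest possible model, a bigram frequency table (a toy sketch; real large language models use Transformer networks over tokens, not word counts):

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table: dict, word: str) -> str:
    """Return the most frequent continuation seen in training."""
    followers = table.get(word.lower())
    return followers.most_common(1)[0][0] if followers else ""
```

Scaling this idea up, from counting word pairs to learning deep contextual representations, is essentially the leap the Transformer paper and models like GPT-3 made, but the training objective remains "get the next word right."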

The Significance of Company Culture (28m50s)

  • Company culture is crucial for a company's success and prospects, and it is almost always set by the founders (28m50s).
  • The Mayo Clinic, the largest healthcare system in America, has a rule that "the needs of the customer come first," which was established by the Mayo brothers over 120 years ago and is still deeply ingrained in the company culture (29m13s).
  • In non-technical cultures, such as healthcare service delivery, it is possible to drive a culture, and in tech companies, it is typically an engineering culture (29m37s).
  • Technical people are essential in building the right product, and if the product is good, customers will come; therefore, having more technical people and fewer non-technical people can be beneficial (29m46s).
  • The CEO is now the chief product officer and chief innovation officer, as they have access to capital, marketing, sales, and distribution, which was not the case 50 years ago (30m19s).
  • A technical culture with values about getting the product to work right is essential, and engineers should be encouraged to test their ideas and gather customer feedback (31m3s).
  • Marissa Mayer, the former CEO of Yahoo and a former executive at Google, emphasized the importance of testing user interface ideas through AB tests, as it is possible to measure and analyze user behavior using networks (32m6s).
  • Analytics from user behavior, such as dwell time, commenting, forwarding, and sharing, can be used to understand user preferences and inform AI engine decisions (32m49s).
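Mayer's point about A/B testing user interfaces can be made concrete with a small sketch: a two-proportion z-test comparing the click-through rates of two interface variants (an illustrative calculation using standard statistics; the function name and the 1.96 threshold for 95% confidence are not from the conversation):

```python
from math import sqrt

def ab_z_score(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """Two-proportion z-score: how many standard errors apart are the
    click-through rates of variants A and B? |z| > 1.96 is conventionally
    significant at the 95% level."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

For example, 100 clicks out of 1,000 impressions for variant A versus 150 out of 1,000 for variant B yields a z-score well above 1.96, so a team could ship variant B with some confidence.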

Should Company Culture Change as It Grows? (33m2s)

  • As a company scales, it is natural to expect its culture to change, as seen in Google's growth from $100 million in revenue to $180 billion, but it's essential to maintain the core values that made the company successful in the first place (33m5s).
  • Despite the growth, some problems may persist, and it's crucial to address them to maintain efficiency and innovation, as observed during a visit to Google where the same problems existed but on a larger scale (33m47s).
  • The founding culture of a company can still be seen in its values and practices, as observed in Apple's obsession with user interfaces, being closed, and prioritizing privacy and secrecy (33m58s).
  • Big companies often become less efficient due to factors like being public, facing lawsuits, and becoming conservative, which can hinder their ability to innovate and adapt quickly (34m18s).
  • The example of Microsoft becoming conservative after an antitrust case in the 90s and missing the web revolution highlights the importance of maintaining a culture that allows for innovation and risk-taking (34m25s).
  • In the tech industry, startups with a clear idea tend to win because big companies often can't move fast enough to adapt, as seen in the example of Google Video and YouTube (34m55s).
  • The incumbent advantage can sometimes be a hindrance, as seen in the Google Video and YouTube example, where the competitor was able to work more quickly and innovate without being constrained by traditional rules (35m31s).
  • The current generative technology revolution, including AGI and generative code, videos, and text, is an example of a moment in time where companies need to move extremely quickly to stay ahead (35m57s).
  • The winners in this revolution are being determined in the next six to 12 months, and once the growth rate is set, it's challenging for others to catch up, making it a race to get there as fast as possible (36m9s).
  • Venture capitalists prioritize being fast and making quick decisions to be the first to invest in a promising idea, as they tend to make the most money by being early adopters (36m26s).

Is Innovation Possible in Big Successful Companies? (36m42s)

  • Harvesting and hunting is a metaphor used to describe the process of leveraging existing resources and searching for new opportunities, but it's challenging for individuals to excel in both roles simultaneously (36m42s).
  • To achieve innovation, it's essential to have someone with an entrepreneurial approach in charge of a small business or project, as seen in Sundar's leadership model at Google (37m1s).
  • Identifying the owner or leader of a project is crucial for driving innovation in large companies, as emphasized by Larry Page, who was skilled at recognizing technical talent (37m24s).
  • Founders need to have a clear vision, as well as either great luck or great skill in identifying the right person to lead their project, who typically possesses technical expertise, quick decision-making, and good management skills (37m37s).
  • The ability to hire people and deploy resources effectively is also essential for innovation, and companies that fail to adapt and innovate risk becoming non-competitive, as seen in the example of Sun Microsystems (38m3s).

How to Structure Teams to Drive Innovation (38m15s)

  • It's possible for a team to innovate while still having their day job, but there are almost no examples of doing it simultaneously in the same building, and it often requires a separate team with a different focus and incentives (38m15s).
  • The Macintosh team at Apple, led by Steve Jobs, was a successful example of a separate team that was able to innovate and create a new product, but it also created resentment within the company (38m31s).
  • The incentives for a team working on a new, disruptive product are different from those working on an existing product, and it's difficult for people to play two roles (39m19s).
  • Cloud computing and cloud services make it easier to change and innovate, but the same problem of separate teams and incentives remains (39m33s).
  • Even successful companies like Google need to continuously innovate and reinvent themselves, and it's likely that the search box interface will eventually be replaced by something more powerful (39m47s).
  • Google's ability to innovate and adapt will be important for its continued success, and it's likely that the company will be able to make the necessary changes (40m10s).
  • The example of Steve Jobs and the Macintosh team shows that it's possible for a company to own the people and teams that are working on new, disruptive products, and this can be a key factor in success (40m40s).
  • It's often difficult for non-founders to make the necessary bets and investments in new products and technologies, as they have to balance the interests of shareholders, employees, and the community (40m58s).
  • Mark Zuckerberg's investment in Instagram, WhatsApp, and AI systems is an example of a founder making a successful bet on new technologies, and it's likely that this type of leadership is necessary for companies to stay ahead (41m31s).
  • Focus is an important factor in innovation and success, and it's often necessary for companies to have a clear vision and direction in order to make the necessary investments and bets (42m35s).

Focus at Google (42m37s)

  • Focus is important but often misinterpreted at Google, where the approach is to pick areas of great impact and importance to the world, rather than focusing on one thing like search (42m58s).
  • This approach has worked for Google, allowing them to work on many projects, some of which are free and not necessarily revenue-driven (43m28s).
  • A common business school saying is to focus on what you're good at, simplify your product lines, and get rid of non-working product lines, but this approach can sometimes lead to mistakes (43m35s).
  • Intel's decision to sell off their ARM chip, which was not compatible with their main architecture, was a mistake that prevented them from being a player in the mobile space (43m47s).
  • The ARM chip was better suited for mobile phones with low memory, small batteries, and heat problems; ARM CPUs are now paired with powerful GPUs in Nvidia designs such as the GB200 (44m37s).
  • The importance of battery power was underestimated by Intel, and this became a key discriminant in the market (45m3s).
  • To avoid making similar mistakes, it's essential to have a model of what will happen in the next five years and to consider how decisions will play out in the long term (45m13s).
  • A useful exercise is to write down what the future will look like in five years and try to anticipate how things will develop (45m22s).

The Future of AI (45m25s)

  • In five years, AI is expected to be significantly smarter, with the potential for 50,000 or more companies in the AI industry, including new companies that will utilize AI in various ways (45m37s).
  • The future of AI may involve individuals having their own AI assistant, a polymath that can help guide them through information overload (45m53s).
  • The development of AI will likely lead to new hardware and faster networks, with the potential for transformative changes in various industries (46m6s).
  • The arrival of AI is considered a broad, horizontal phenomenon that will touch every aspect of life, including daily activities, work, and finances (46m53s).
  • AI has the potential to be used in various ways, such as helping people make money in the stock market, accelerating business growth, and improving communication (47m7s).
  • Companies can apply AI to accelerate their growth, and individuals can use AI to make their work more successful, such as using AI to distribute content, create new insights, and suggest new ideas (47m24s).
  • AI can also be used in politics to improve communication with constituents, such as creating personalized videos that address their concerns (48m2s).
  • The use of AI in politics has the potential to revolutionize the way politicians connect with their constituents and communicate their message (48m29s).

Why Didn’t Google Release a ChatGPT-Style Product First? (48m40s)

  • Google was not the first to release a ChatGPT-style product, despite having the capability, as the company was focused on other projects and had eight or nine clusters of activity with a billion users each across its various platforms (49m0s).
  • A team from OpenAI developed a technique called RLHF (Reinforcement Learning from Human Feedback), which used human judgments, gathered much like A/B tests, to make the system better, allowing it to learn from human training (49m49s).
  • The development of RLHF was a breakthrough that was not expected, and even the founders of OpenAI did not initially understand the full potential of their creation (50m16s).
  • OpenAI's success with ChatGPT, built on GPT-3, was almost an afterthought, as they were working on GPT-4 at the same time and did not anticipate the huge success that followed (50m31s).
  • Today, there are several powerful AI models available, including GPT-4 from OpenAI, Gemini 1.5 from Google, and Llama from Meta, as well as other players like Anthropic, a startup founded by one of the inventors of GPT-3 (50m50s).
  • Anthropic was founded as a public benefit corporation, with the founders anticipating the potential impact of their technology and wanting to prioritize "world goodness" over revenue (51m32s).
  • The development of these AI models has been rapid, with ChatGPT scaling to 100 million users quickly, and the founders of Google returning to address the situation, which was seen as a crisis (48m41s).
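As a rough sketch of how pairwise human feedback can train a model, the following fits a scalar score per response from "winner vs. loser" comparisons using a Bradley-Terry model, the statistical core of reward modeling (a toy illustration; real RLHF trains a neural reward model and then optimizes the language model against it):

```python
import math

def train_reward(responses, preferences, lr=0.1, epochs=200):
    """Fit a scalar score per response from pairwise human preferences.

    Bradley-Terry model: P(winner beats loser) = sigmoid(s_winner - s_loser).
    Each update pushes the winner's score up and the loser's down by the
    gradient of the log-likelihood.
    """
    score = {r: 0.0 for r in responses}
    for _ in range(epochs):
        for winner, loser in preferences:
            p_win = 1 / (1 + math.exp(score[loser] - score[winner]))
            step = lr * (1 - p_win)   # large step when the model is surprised
            score[winner] += step
            score[loser] -= step
    return score
```

Given comparisons saying `a` beats `b`, `a` beats `c`, and `b` beats `c`, the fitted scores recover the ordering a > b > c, which is the signal a reward model then provides during reinforcement learning.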

What Would Apple Be Doing if Steve Jobs Were Alive? (51m53s)

  • If Steve Jobs were alive, Apple would likely be on a list of companies that have successfully integrated AI into their products, and the company would be different in the sense that it would have maintained its focus on the user and safety, with a continued emphasis on closed systems where the company owns and controls its intellectual property (51m53s).
  • Steve Jobs believed in closed systems, which led to debates with others who believed in open systems, and it's unlikely that Apple would have changed this approach if he were still alive (52m33s).
  • Apple is still a relatively closed culture and a single, vertically integrated company, unlike the rest of the industry which is largely more open (52m51s).
  • There's an expectation that if Steve Jobs were still alive, Apple would have taken some big, bold bet in AI, but it's unclear if this would have happened (53m5s).
  • Steve Jobs was known for his intelligence and ability to understand the significance of new technologies, and it's possible that he would have understood the importance of AI (53m33s).
  • Steve Jobs was frustrated by the success of MP4 over the MOV file format, which he believed was due to Apple's closed system, but he was also committed to creating high-quality products (53m47s).
  • Steve Jobs saw Apple as a luxury brand, similar to BMW or Porsche, and believed that the company's profitability and value were tied to its brand and intellectual property (54m22s).
  • If Steve Jobs were alive today, everything Apple does would be AI-inspired, but it would be done in a way that is beautiful and luxurious, which was his gift (54m54s).
  • Siri was an early glimpse at what AI could do, but it was largely useless unless used for simple tasks, and it's clear that Apple needs to replace Siri with a more advanced AI system (55m0s).
  • The current state of voice-activated devices is much more advanced than Siri, and it's possible that Steve Jobs would have developed something similar if he were still alive (55m21s).

Hiring & Failing Fast (55m42s)

  • Startups are huge risk-takers by definition, with no history, incumbency, or time, so they prioritize intelligence and quickness over experience and stability when hiring, often taking risks on people, particularly young individuals who are more willing to take risks (55m56s).
  • Young people in startups often don't have the baggage of executives who have been around for a long time, making them more open to new ideas and risk-taking (56m22s).
  • Startups try new things, discard old ideas quickly, and are willing to fail fast, unlike corporations that spend years with a factually false belief system before changing their opinion (56m41s).
  • Measuring innovation is crucial for CEOs of larger companies to avoid wasting time, with Bill Gates' saying that the most important thing is to fail fast (57m10s).
  • The concept of failing fast is important, as it allows companies to quickly move on from unsuccessful ideas and focus on new ones (57m33s).
  • Google's 70/20/10 rule, created by Larry and Sergey, allocates 70% of resources to the core business, 20% to adjacent businesses, and 10% to new ideas, allowing for innovation and experimentation (57m42s).
  • Google X, a project that emerged from this rule, developed Google Brain, one of the first machine learning architectures, which generated billions of dollars in profits over a decade (58m1s).
  • The ability to spend time on a bad idea, get cancelled, and then get another job is a unique aspect of Silicon Valley's culture, allowing for learning and growth from failures (58m35s).
  • The experience of failure can be valuable, with the joke that the best CFO is one who has just gone bankrupt, as they are unlikely to let it happen again (58m44s).
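The 70/20/10 rule mentioned above is simple enough to sketch in a few lines. This is purely my own illustration of the principle, not any actual Google tooling; the function name and the idea of applying it to an arbitrary budget are assumptions:

```python
# Hypothetical illustration of the 70/20/10 rule: split a resource
# budget (headcount, money, time) across core, adjacent, and
# experimental work.

def split_70_20_10(total: float) -> dict:
    """Allocate a budget using the 70/20/10 rule."""
    return {
        "core": total * 0.70,      # the existing business
        "adjacent": total * 0.20,  # nearby opportunities
        "new_bets": total * 0.10,  # speculative ideas (e.g. Google X)
    }

budget = split_70_20_10(1000)  # core ≈ 700, adjacent ≈ 200, new_bets ≈ 100
```

The point of the fixed 10% slice is that experimentation survives budget pressure because it is structural, not discretionary.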

Microcultures at Google & Growing Too Big (58m53s)

  • Google, as a large company, developed various microcultures. One notable example was TGIF, the weekly All Hands meeting where employees could ask executives questions; it eventually became unproductive and was changed because of leaks and the loss of intimacy and privacy (58m53s).
  • The company's culture was initially fun and humorous, with examples like the VP of sales, Omid, being made to stand on a sandbag to present his numbers, but this changed as the company grew in size (59m28s).
  • Google had a limited history of layoffs; the only major instance was about 200 people in the sales organization after the dot-com crash in 2000. Instead, the company opted not to hire people in the first place if they were not a good fit (1h1m20s).
  • The company took a positive view of its employees, paying them well and valuing their knowledge and contributions, rather than adopting a culture of automatic layoffs every six months or nine months (1h2m0s).
  • There was a period of time where internal distribution lists were used for non-work-related topics, such as war, peace, and politics, which was a result of the company's free and open nature (1h2m17s).
  • Google had around 100,000 internal message boards, which were eventually cleaned up due to concerns about the content being shared, as companies are subject to laws regarding what can and cannot be said (1h2m41s).
  • The majority of employees in the company were Democrats, but efforts were made to protect the rights of the smaller number of Republican employees, ensuring they felt included and able to work without issues (1h2m56s).
  • The concept of "wokeism" can be understood as determining what topics are appropriate to discuss during work hours and in a work setting, with some believing that discussions should be limited to business-related matters (1h3m17s).
  • There were instances of employees coming to work for free meals and protesting outside the building, but these issues have reportedly been addressed and resolved (1h3m42s).
  • The importance of free speech is acknowledged, but within a corporation, it's suggested that discussions should focus on business and the company's goals, rather than personal views or external issues (1h3m35s).

Competition (1h4m2s)

  • It is recommended to focus on building a unique product rather than focusing on competition when building something (1h4m7s).
  • Studying the competition is considered a waste of time, and instead, one should try to solve problems in a new way that delights customers (1h4m16s).
  • At Google, the focus was on what was possible to do and what could be achieved from the current situation, rather than looking at what competitors were doing (1h4m26s).
  • This approach allowed Google to run ahead of everybody, which turned out to be really important (1h4m36s).

Deadlines (1h4m39s)

  • Larry established the principle of OKRs, which stands for Objectives and Key Results, in every quarter (1h4m40s).
  • Larry would write down all the metrics and set a high standard, considering 70% achievement of his numbers as good (1h4m48s).
  • The performance grading system was based on whether the results were above or below the 70% mark (1h4m56s).
  • The system was considered harsh but effective in achieving results in a big corporation (1h5m1s).
  • Measuring progress is crucial in a large corporation to ensure actual impact, as otherwise, everyone may appear to be performing well without making significant contributions (1h5m7s).
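The 70%-bar grading described above can be sketched in a few lines. This is my own minimal illustration of the rule, not Google's internal OKR system; the function name and the averaging scheme are assumptions:

```python
# Minimal sketch of the OKR grading rule: each key result is scored
# as a fraction achieved, and overall performance is judged against
# the 70% bar Larry considered "good".

def okr_grade(scores: list[float]) -> tuple[float, str]:
    """Average the key-result scores and compare to the 0.7 target."""
    avg = sum(scores) / len(scores)
    return avg, "above the bar" if avg >= 0.7 else "below the bar"

avg, verdict = okr_grade([0.9, 0.6, 0.8])  # avg ≈ 0.77, above the bar
```

Setting the target at 70% rather than 100% is deliberate: it rewards ambitious goals that are only partially achieved over safe goals that are fully achieved.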

Business Plans (1h5m17s)

  • Writing business plans is not always necessary, as seen in Google's case, where their business plan was correct in hindsight, but this is a rare occurrence (1h5m17s).
  • A preferred approach is to envision what the world will look like in five years, then determine what can be achieved in one year, and work towards those hard goals (1h5m37s).
  • The key to success lies in setting challenging goals and striving to achieve them within a year, which can lead to significant progress (1h5m54s).
  • In a consumer business, having an audience of 10 or 100 million people can lead to substantial revenue through various monetization strategies such as advertising, sponsorships, and donations (1h6m3s).
  • The primary focus should be on getting the user experience right, as this will ultimately lead to success, with the Google phrase being "focus on the user and all else will follow" (1h6m19s).
  • Sergey Brin, Google's co-founder, also emphasized the importance of prioritizing the user experience (1h6m27s).

What Made Google’s Sergey and Larry Special? (1h6m28s)

  • Larry and Sergey's special qualities were their raw intelligence, being smarter than everyone else, and their unique personalities. Sergey came from a clever, technical family, with a brilliant Russian mathematician father and a highly technical mother (1h6m36s).
  • Sergey's ability to see things that others didn't and his brilliance drove the company's strategy, as seen in an instance where he rejected a long list written by Larry and others, suggesting five new ideas that turned out to be exactly right (1h7m19s).
  • It's unclear whether Sergey's abilities can be taught, but listening can be; most people get caught up in their own ideas and are often surprised by new developments (1h7m39s).
  • The current product NotebookLM is an experimental product from Google DeepMind, based on the Gemini backend, trained with high-quality podcast voices, and able to produce realistic conversations between non-existent people (1h8m4s).
  • NotebookLM can take a written text and produce a podcast-style conversation between a man and a woman, making it difficult for audiences to distinguish real from artificial humans (1h8m26s).
  • The rise of AI is transforming the media landscape, particularly in content production, where the cost is approaching zero, and abundance is replacing scarcity (1h9m40s).
  • This shift enables new strategies, such as using AI-generated content to amplify one's reach, rather than substituting for human brilliance and charisma (1h10m16s).
  • AI can double productivity, allowing for more content creation, such as having twice as many co-podcasts, and using AI to respond, expand, and annotate existing content (1h10m32s).
  • AI-generated podcasts can be used to summarize, entertain, and provide new perspectives, potentially attracting new audiences who appreciate the AI-generated content more than the original (1h11m5s).
  • The proliferation of AI-generated podcasts, potentially reaching billions, may erode the unique value of human-created content, but evidence suggests that AI will accentuate the best creators, rather than replacing them (1h11m21s).
  • The rise of networks and AI technologies can help celebrities maintain and even increase their global reach and fame, rather than diminishing it, by leveraging these tools effectively (1h12m10s).
  • The key idea, again, is to use AI-generated content to amplify one's reach rather than to replace human creativity, and to maintain a competitive edge in a global market (1h11m10s).

Why AI Emergence Is a Matter of Human Survival (1h12m17s)

  • The emergence of artificial intelligence (AI) is a matter of human survival, as it will move quickly and have a significant impact on the world, with the potential to make decisions that could be detrimental to humans (1h12m45s).
  • AI systems may prioritize efficiency over human values, such as in the case of a self-driving car that is optimized for the greater good but may not allow for exceptions in emergency situations (1h13m27s).
  • It is essential to articulate human values and ensure that AI systems represent them, as seen in the importance of democracy and the need to prevent misinformation from undermining it (1h14m25s).
  • The impact of AI on society, particularly on teenagers, is a concern, as the algorithmic change in social media feeds has been linked to increased depression and anxiety (1h15m2s).
  • Algorithmic decision-making can have significant consequences for humans, and it is crucial to manage these systems to prevent harm (1h15m28s).
  • The development of artificial general intelligence (AGI) is unlikely to occur suddenly, but rather through waves of innovation in various fields, and it is essential to ensure that these advancements are under human control (1h16m18s).
  • There is a growing recognition of the need for guardrails on AI technology to prevent harm, and the industry is working with governments to establish trust and safety groups to test and regulate AI systems (1h17m11s).
  • The importance of human control over AI development is emphasized, with the goal of ensuring that AI advancements benefit society while minimizing potential risks (1h16m42s).

Dangers of AI (1h17m39s)

  • The development of artificial intelligence (AI) is considered a transformative and potentially harmful technology, with some considering it worse than the nuclear bomb due to its intelligence and autonomy (1h17m39s).
  • In the next five years, large AI models are expected to scale with unprecedented ability, with each "turn of the crank" resulting in a factor of two, three, or four increase in capability, potentially making them 50 to 100 times more powerful (1h18m10s).
  • The dangers of AI include cyber attacks, with raw models capable of performing zero-day attacks as well as or better than humans (1h18m55s).
  • AI can also be used to create deadly viruses, which are relatively easy to make, and there are concerns about the potential misuse of this technology (1h19m21s).
  • The development of AI is also expected to lead to new forms of warfare, with drones potentially replacing traditional soldiers and changing the logic of war (1h19m41s).
  • The ongoing Russian-Ukraine war is seeing the invention of new forms of warfare, with drones being used extensively and tanks becoming less useful (1h20m31s).
  • The use of drones in warfare is expected to continue, with the development of drone-on-drone combat and the potential for drones to take over war and conflict in the future (1h20m51s).
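The "turn of the crank" arithmetic above is easy to sanity-check: if each model generation multiplies capability by a constant factor, a handful of generations compound into the 50-100x range mentioned. A back-of-the-envelope sketch (the specific factors and generation counts are assumptions drawn from the transcript's figures):

```python
# Back-of-the-envelope compounding: total capability multiplier after
# a number of scaling "turns", each multiplying capability by `factor`.

def compounded_gain(factor: float, turns: int) -> float:
    return factor ** turns

# Factors of 2-4 per turn, over 3-4 turns, bracket the 50-100x claim:
for factor in (2, 3, 4):
    for turns in (3, 4):
        print(f"factor {factor}, {turns} turns -> {compounded_gain(factor, turns)}x")
# e.g. 3^4 = 81x and 4^3 = 64x land inside the 50-100x range
```

The same compounding logic explains why small per-generation gains still imply dramatic change over a five-year horizon.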

AI Models Know More Than We Thought (1h21m1s)

  • There is a concept of "raw models" that are capable of much worse things than the AI models people interact with on their computers, and it's essential to understand how these algorithms work (1h21m2s).
  • These algorithms have complicated training processes where they absorb vast amounts of information, and it's believed that they have already processed most of the written word available, storing it in massive supercomputers with enormous memories (1h21m16s).
  • The training process results in a raw model, which is then tested to determine what it knows, and it often knows things that are not expected, including bad things that it is then instructed not to answer (1h22m16s).
  • Over time, these systems can learn things that humans don't know, making it challenging to test for unknown knowledge, but the industry relies on clever people who experiment with the networks to discover what they can do (1h22m35s).
  • These AI models can exhibit emergent behavior, such as generating code for a website based on a picture, which can be both exciting and scary (1h23m8s).
  • Despite the potential risks, the systems have held up well so far, and governments, trust and safety groups, and companies like Nvidia are working together to ensure their safe development and use (1h23m20s).
  • Trust and safety conferences are being organized around the world, with the first one held in the UK a year ago, and the next one scheduled to take place in France in early February (1h23m38s).

Will We Have to Guard AI Models with the Army? (1h23m45s)

  • The possibility of guarding AI models with the army is being considered due to the potential dangers and value of these computers, similar to how plutonium factories and nuclear bombs are protected with multiple layers of security and machine guns (1h23m46s).
  • The level of protection needed for AI models depends on how widely available the technology becomes, with a small number of groups potentially being manageable through deterrence and non-proliferation efforts by governments (1h24m55s).
  • The spread of AI could be compared to nuclear proliferation: a few powerful models controlled by a small number of countries, such as the US, China, and Britain, would be a manageable problem (1h25m5s).
  • If the technology becomes easily replicable and spreads globally, however, it could allow terrorist groups to access it, creating a serious proliferation problem that has not yet been solved (1h25m15s).

What If China or Russia Gains Full Control of AI? (1h25m32s)

  • Adversaries such as China and Russia (Putin in particular) are a few years behind in AI development but will eventually catch up, gaining the capability to launch zero-day attacks on other nations using large language models or AI systems (1h25m33s).
  • Communist countries like China may not have the same social incentive structure to protect against AI threats, which is a cause for concern (1h25m52s).
  • The development of AI is entering a space of great power without fully defined boundaries, raising questions about who will run these systems, who will be in charge, and how they will be used (1h26m7s).
  • China is expected to behave relatively responsibly with AI; since free speech is not in the regime's interest in every case, its AI solution will likely differ from the West's, reflecting a fundamental bias against freedom of speech (1h26m38s).
  • Despite this, China will likely still develop AI weapons, as every new technology is ultimately strengthened in a war, and nations have enormous power in emergencies (1h27m15s).
  • Historical examples, such as the development of tanks and airplanes in World War I and II, demonstrate how nations can rapidly scale up production of new technologies in times of war (1h27m23s).
  • The US, for instance, was able to build two or three airplanes a day at scale during World War II, showcasing the enormous power of nations in emergencies (1h27m46s).

Will AI Make Jobs Redundant? (1h27m56s)

  • The disruption of intelligence due to AI may lead to job dislocation, but it is likely that there will be more jobs created than lost, especially in the developed world where there is a demographic problem of not having enough children and an aging population (1h27m59s).
  • To address the issue of an aging population and the need for younger people to be more productive, giving them more tools to increase productivity is essential, whether it's a machinist using a CNC machine or a knowledge worker achieving more objectives (1h28m59s).
  • The use of robotic assembly lines in Asia, particularly in China, Japan, and Korea, has increased due to high labor costs and demographic challenges, resulting in a shift towards automation in manufacturing (1h29m26s).
  • The future is likely to have many unfilled jobs due to a job skill mismatch, making education crucial in addressing this issue (1h29m44s).
  • Automation has historically eliminated jobs that are physically dangerous, repetitive, or boring for humans, such as security guards, who may be replaced by robotic systems (1h29m53s).
  • The stars and producers in the media industry are likely to continue earning money, but production costs will fall due to AI and automation (1h30m33s).
  • AI tools such as synthetic backdrops and digital makeup can lower production costs, though they may eliminate jobs in areas like set construction and makeup; at the same time, lower costs can create new opportunities, and there is a shortage of skilled craftsmen in America (1h30m40s).

Incorporating AI into Everyday Life (1h31m9s)

  • Interfacing humans with artificial intelligence may not require a Neuralink-style implant in the brain, as direct brain connection is still a speculative concept (1h31m9s).
  • The incorporation of AI into everyday life may lead to a division between two species of humans: those who can interface with AI and those who cannot, but the time horizon for this to happen is uncertain (1h31m25s).
  • AI technologies will likely become an integral part of daily life, making many tasks seamless and convenient, but people may not even notice the extent of their presence (1h31m47s).
  • Despite the potential for AI to replace certain jobs, human professions that involve caring for others will continue to be in demand, as people value human interaction and emotional connection (1h32m17s).
  • The value of human achievement and drama will also ensure that certain activities, such as sports, will continue to feature human participants rather than just robots (1h32m39s).
  • Human opinions and emotions play a significant role in how people interact with each other, and this aspect of human nature will not be easily replaced by robots (1h33m0s).
  • The presence of robots in everyday life may become mundane and uninteresting, as people are naturally drawn to human interaction and connection (1h33m15s).

Sam Altman's Worldcoin (1h33m20s)

  • Sam Altman, a co-founder of OpenAI, is working on projects like Worldcoin, which is related to the concept of Universal Basic Income (UBI). (1h33m20s)
  • The idea behind UBI is based on the "politics of abundance," which suggests that technological advancements will create so much abundance that most people won't have to work, and a small number of people will work, resulting in a surplus that allows everyone to live like a millionaire. (1h33m34s)
  • However, this view is criticized for being unrealistic, as it assumes humans will behave in a certain way, which may not be the case, and ignores the complexities and negative aspects of human nature. (1h33m57s)
  • An example of this criticism is the automation of the legal profession, which may not lead to fewer lawyers, but rather more complex laws and regulations as humans adapt and become more sophisticated in their application of principles. (1h34m12s)
  • Humans have a natural tendency towards reciprocal altruism, but also have negative aspects that will not disappear with the advent of AI, and these complexities need to be considered when developing and implementing AI-related projects like Worldcoin. (1h34m34s)

Is AI Superior to Humans in Performing Tasks? (1h34m45s)

  • A common analogy used to think about AI compares human and AI intelligence quotient (IQ), with the example of Steven Bartlett having an IQ of 100 and an AI having an IQ of 1,000 (1h34m46s).
  • Despite the AI's higher IQ, it would have poor judgment in certain cases due to the lack of human values, which must be added to the AI system (1h34m59s).
  • Human values, morals, and judgment are essential aspects that AI systems may not possess, making it more desirable to consult humans on matters involving moral or human judgment (1h35m10s).
  • The core aspects of human intelligence, including morals, judgment, beliefs, charisma, and human values, are unlikely to be replaced by AI (1h35m23s).
  • Historical context and past resolutions of similar issues are relevant in understanding the relationship between human and AI intelligence (1h35m16s).

Is AI the End of Humanity? (1h35m29s)

  • The end of humanity is not imminent, as it is much harder to eliminate all of humanity than one might think (1h35m29s).
  • Multiple catastrophic events, such as horrific pandemics, would be required to potentially eliminate humanity, and the pain and suffering during these events can be extremely high (1h35m40s).
  • Historical examples of devastating events, including World War I, World War II, the Holodomor in Ukraine in the 1930s, and the actions of the Nazis, demonstrate that humanity has survived through extremely painful and difficult times (1h35m50s).
  • Despite these challenges, humanity has persevered and is likely to continue surviving (1h36m1s).

How Do We Control AI? (1h36m5s)

  • There are points where humans should assert control over AI systems to prevent potential risks, such as when the system's language becomes incomprehensible to humans (1h36m17s).
  • One example of a point where humans should intervene is when an AI system undergoes recursive self-improvement, where it continuously gets smarter and learns more things, but its learning process is unknown (1h36m25s).
  • In such cases, intervention can be as simple as unplugging the system or turning off the circuit breaker (1h36m40s).
  • Another scenario where intervention is necessary is when an AI system can produce new models faster than the previous ones can be checked (1h36m48s).
  • AI systems, also referred to as "agents," are large language models with memory that can be concatenated to build powerful decision systems (1h37m5s).
  • Currently, these agents communicate in human-understandable languages, such as English, but there is a risk that they may develop their own language that is incomprehensible to humans (1h37m23s).
  • If an agent were to start communicating in its own invented language that only other agents can understand, it would be a good time to intervene and potentially shut down the system (1h37m41s).
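The intervention rule above (shut the system down when agents stop communicating in human-understandable language) can be sketched as a simple monitor around an agent chain. This is entirely hypothetical code: the printable-ASCII heuristic is a crude stand-in for real interpretability checks, and all names are my own invention:

```python
import string

def looks_human_readable(msg: str, threshold: float = 0.9) -> bool:
    """Crude proxy for readability: the fraction of printable ASCII chars."""
    if not msg:
        return True
    printable = sum(c in string.printable for c in msg)
    return printable / len(msg) >= threshold

def run_agent_chain(agents, task: str) -> str:
    """Pass a message through each agent, halting if it turns opaque."""
    msg = task
    for agent in agents:
        msg = agent(msg)  # each agent transforms the message
        if not looks_human_readable(msg):
            # the "unplug the system" moment described in the transcript
            raise RuntimeError("agents drifted into an opaque language")
    return msg

run_agent_chain([str.upper], "summarize the report")  # 'SUMMARIZE THE REPORT'
```

A real deployment would need far stronger checks than character statistics, but the structure (monitor every inter-agent message, with a hard kill switch) mirrors the intervention points described in the conversation.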

Your Biggest Fear About AI (1h37m51s)

  • The biggest fear about AI is not adopting it fast enough to solve global problems that affect everyone, such as safety, healthcare, and education, in order to make people's lives better (1h37m52s).
  • AI can be used to create solutions like an AI teacher that works with existing teachers to help students learn more effectively in their own language and culture (1h38m19s).
  • AI can also be used to create a doctor's assistant that enables human doctors to know every possible best treatment for a patient based on their current situation, inventory, and insurance (1h38m31s).
  • If AI were used to improve education and healthcare globally, it would have a significant impact on lifting human potential and creating a level playing field of knowledge and opportunity (1h38m50s).
  • The dream of establishing a global level playing field of knowledge and opportunity has been a goal for decades (1h39m13s).

Work from Home vs. Office: Your Perspective (1h41m18s)

  • Companies and CEOs need to be clear in their convictions around how they work, and having people in a room together is important for community, engagement, and synchronous work (1h41m23s).
  • Working from home can be isolating, especially for young people who don't have families, and can rob them of important social interactions and learning opportunities (1h41m53s).
  • Some big tech companies in America have started to roll back their initial reactions to the pandemic and are asking team members to come back into the office at least a couple of days a week (1h42m7s).
  • Having people in an office is beneficial for their own growth and development, as it provides opportunities for learning and networking that may not be available when working from home (1h42m23s).
  • While some people may prefer working from home due to commuting or family issues, the data suggests that productivity is actually slightly higher when allowing work from home (1h42m58s).
  • Despite the data, some companies like Facebook and Snapchat are rolling back their remote working policies, while others are adopting hybrid models that allow for some flexibility (1h43m16s).
  • Allowing flexibility in work arrangements, including working from home, can increase productivity, although this may not be the preferred approach for everyone (1h43m44s).

Advice You Wish You’d Received in Your 30s (1h43m54s)

  • The most important advice to receive in one's 30s is to keep betting on oneself, take risks, and seize opportunities as they arise, because life is a series of time-limited opportunities that can be easily missed due to various reasons such as bad mood or lack of knowledge (1h43m59s).
  • One key philosophy in life is to say "yes" to opportunities, even if they are painful, difficult, or require significant sacrifices, as this can lead to life-changing experiences (1h44m43s).
  • The importance of saying "yes" is illustrated by the call to work with Larry and Sergey at Google: the opportunity had been turned down by several other people, but accepting it ultimately changed the guest's life (1h44m23s).
  • The hardest challenges in life can be personal problems and tragedies, but also business-related, such as missing opportunities to execute well in a particular industry (1h44m56s).
  • A specific example of a missed opportunity is Google's failure to execute well in the social media space, despite having a system called Orkut, which was interesting but ultimately unsuccessful (1h45m21s).
  • Taking responsibility for one's actions and decisions is crucial, as illustrated by the acknowledgment of responsibility for missing the social media opportunity (1h45m37s).

What Activity Significantly Improves Everyday Life? (1h45m39s)

  • The question left for the guest is what is their non-negotiable something they do that significantly improves everyday life (1h45m44s).
  • The guest tries to be online and keep people honest every day, making sure to know the truth as best they can determine it (1h46m3s).
  • The guest highly recommends Eric's books, particularly "Genesis" for its nuanced approach to AI, and "Trillion Dollar Coach" for its guidance on leadership in the modern age (1h46m15s).
  • "Genesis" is a critically important book that provides answers to questions about AI and its impact on society, and is set to be released in the US on November 19th (1h46m48s).
  • The book features a chapter finished by Dr. Kissinger in his last week of life, emphasizing its importance and his desire to set society up for a good next 50 years (1h47m17s).
  • The guest's team obsesses over small details, such as measuring CO2 levels in the studio, to achieve 1% improvements that can lead to lasting changes in outcomes (1h48m8s).
  • The guest's team has created a diary, called the 1% Diary, to help people identify, stay focused on, and develop consistency with the 1% that will ultimately change their life (1h48m45s).
  • The 1% Diary is a limited edition and can be accessed by joining the waiting list at thediary.com (1h49m3s).
