If LLMs Do the Easy Programming Tasks - How Are Junior Developers Trained? What Have We Done?

03 Oct 2024

Introduction and Guest Introductions

  • The podcast "What Have We Done?" explores the impact of technology on the future; this first episode examines the effects of large language models (LLMs) on software development, featuring guests Anthony Alford and Roland Meertens (17s).
  • Anthony Alford is a director of development at Genesys, working on AI and ML projects, with over 20 years of experience designing and building scalable software, and a PhD in electrical engineering with a specialization in intelligent robotics software (1m8s).
  • Roland Meertens is a tech lead at Wayve, working on embodied AI for self-driving cars, with prior experience in robotics, safety for dating apps, and predictive analytics (1m34s).

LLMs and the Future of Software Development

  • In a future where LLMs have solved the problems of writing code, the software development lifecycle may involve bots automatically finding issues, raising PRs, and accepting improvements, potentially making code less readable to humans (2m26s).
  • Anthony assumes that LLMs will automate tasks that people find dull, such as writing tests, documentation, and naming variables, freeing up human engineers to focus on important tasks (3m23s).
  • Roland notes that LLMs may take care of tasks like pull requests, code reviews, and writing documentation, but human engineers will still be needed for high-level decisions (3m31s).
  • The guests discuss their experience with GitHub Copilot, with Anthony using it for side projects but not for day-to-day work (4m10s).
  • Companies may avoid tools like GitHub Copilot over concerns that proprietary code could end up in training datasets, and over copyright issues with the generated code (4m32s).
  • Traditionally, novice programmers are trained by working on easy and dull tasks, but with the automation of these tasks, the question arises of how to train programmers in the future (5m6s).
  • GitHub Copilot is a code-generating LLM that can produce entire functions from a comment describing the desired functionality, as in the sketch after this list (5m42s).
  • The term "copilot" can also refer more generally to an LLM that assists programmers in real time, potentially serving as a code reviewer or debugging assistant (5m56s).
  • One possible model for training novices is to have senior programmers mentor them, as LLMs may save senior programmers time, allowing them to focus on mentoring (6m14s).
  • Unlike other engineering fields, programming often involves working on new and unique projects rather than repetitive tasks (6m30s).
  • However, programmers do frequently encounter similar problems, which is why resources like Stack Overflow are popular (8m1s).
  • Despite the repetition, new technologies and innovations are constantly being developed, requiring insight and creativity, and it's unclear how LLMs can take these into consideration if they've never seen them before (8m35s).
  • The question is raised about how often developers recall information from their early days of programming, such as punch card programming, in their daily work, with the conclusion that it's not often (9m0s).
  • The point is made that LLMs are unlikely to come up with something entirely new, but rather build upon existing ideas and concepts (9m20s).
  • The potential impact of LLMs on the training of future senior developers is discussed, with the question of whether they will learn faster by focusing on coding 100% of the time, or if they will lack a thorough understanding of code and machine functionality (9m40s).
  • It's noted that software developers often solve problems by putting together existing pieces in novel ways, and that LLMs can aid in this process by generating code for common patterns and tasks (10m30s).
  • The comparison is made between LLMs and frameworks like Rails and Django, which provide common patterns and tools for developers to work with (10m46s).
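
As a concrete illustration of the comment-to-function workflow mentioned above, here is a minimal sketch: the comment and signature are what a developer might type, and the body is the kind of completion a Copilot-style tool could suggest (illustrative only, not the output of any particular model).

```python
# Parse a log line like "2024-10-03 12:01:22 ERROR disk full" into
# (timestamp, level, message).
def parse_log_line(line: str) -> tuple[str, str, str]:
    # A plausible tool-suggested completion:
    date, time, level, message = line.split(" ", 3)
    return (f"{date} {time}", level, message)

print(parse_log_line("2024-10-03 12:01:22 ERROR disk full"))
# ('2024-10-03 12:01:22', 'ERROR', 'disk full')
```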

The Role of LLMs in Requirements Gathering and Prototyping

  • Software development can be broken down into three main activities: inserting levels of indirection, trading off space and time, and figuring out what customers really want (13m17s).
  • Product managers often write requirements that need to be translated into something that can be implemented, and this process can be time-consuming and require back-and-forth communication (13m36s).
  • LLMs could potentially translate product managers' requirements into something implementable, which would be a huge benefit (13m38s).
  • However, product managers and customers often don't understand the technology, and just because a requirement can be written in simple English doesn't mean it's easy to implement (14m8s).
  • There are two possible options for using LLMs here: generating examples or mockups that product managers can use to clarify their requirements (see the sketch after this list), or serving as an advanced prototyping tool (14m41s).
  • LLMs could potentially be trained to generate prototypes or mockups based on samples and feedback, but this would require solving the problem of dealing with vagueness and uncertainty (16m10s).
  • Machine learning systems handle vagueness poorly, and prototyping is inherently uncertain, so it's unclear how to solve this problem (16m22s).
  • This problem is similar to the one that used to exist with search engines, where people had to learn to use specific keywords to get the results they wanted, but ideally, the software should be able to deal with human vagueness (16m48s).
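
A minimal sketch of the mockup option, assuming a hypothetical complete() function that stands in for whatever LLM API is available; the point is the loop of requirement, disposable mockup, and product-manager feedback, not any particular provider.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion API."""
    raise NotImplementedError("wire in an LLM provider here")

def requirement_to_mockup(requirement: str) -> str:
    # Ask for a self-contained, throwaway HTML page the PM can react to.
    prompt = (
        "Produce one self-contained HTML page that mocks up the following "
        "requirement using placeholder data. This is a disposable prototype "
        "for clarifying requirements, not production code.\n\n" + requirement
    )
    return complete(prompt)

# Usage: show the page to the product manager, collect corrections, regenerate.
# html = requirement_to_mockup("Customers see their open orders, newest first.")
```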

Reliability and Challenges of LLMs in Code Generation

  • Some people are already using LLMs like ChatGPT to answer questions and generate text, which raises questions about how this technology will be used in the future (17m20s).
  • LLMs can generate text that is, say, 80% true and 20% fabricated, making it hard to distinguish fact from fiction; in one example, an LLM-generated bio was mostly accurate but embellished the person's accomplishments (17m34s).
  • Getting facts out of an LLM can resemble a game of "two truths and a lie," which raises concerns about the reliability of the information these models retrieve (18m16s).
  • Developers have been writing bugs since the early days of programming; LLMs may likewise generate code that makes no sense, but test cases can catch this (18m32s).
  • One possible solution is to have one LLM generate test cases and another generate code (see the sketch after this list), or to have a human code reviewer check the output of an LLM (19m0s).
  • Code reviews are a challenging task that requires understanding the assumptions of the code and spending time reading it, and it is hoped that LLMs can assist with this task in the future (19m25s).
  • However, there is a risk that LLMs could be used to automatically generate requirements and code, leaving the reviewer to spot any errors or flaws (19m57s).
  • Requirements analysis is an art form that involves asking open-ended questions to pull out the client's needs, and LLMs could potentially assist with this process (20m30s).
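
A rough sketch of that separation of duties, with a hypothetical generate() standing in for both LLM calls. The tests are produced independently of the implementation and gate it; a human reviewer still reads both artifacts before anything is merged.

```python
import pathlib
import subprocess
import tempfile

def generate(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

def build_and_check(spec: str) -> bool:
    # One model writes the tests from the spec alone...
    tests = generate("Write pytest tests for this spec, importing from a "
                     "module named solution. Tests only:\n" + spec)
    # ...another writes the implementation, without seeing the tests.
    code = generate("Implement this spec as a module named solution:\n" + spec)
    workdir = pathlib.Path(tempfile.mkdtemp())
    (workdir / "solution.py").write_text(code)
    (workdir / "test_solution.py").write_text(tests)
    result = subprocess.run(["pytest", "-q", str(workdir)], capture_output=True)
    return result.returncode == 0  # green tests are necessary, not sufficient
```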

LLMs as Prototyping Tools and Simulators

  • LLMs could be used as rapid prototypers and simulators, generating code that may or may not be reusable, and training software developers in the process (21m11s).
  • This approach could involve using an LLM to pretend to be an API, for example (see the sketch after this list), and could be a useful tool for software development (21m15s).
  • Prototyping with LLMs can be beneficial because it is rapid, but it's essential to be prepared to throw away the entire prototype if needed (21m41s).
  • When considering the next steps after prototyping, it's crucial to think about the "ilities" such as security, scalability, and reliability, which can be challenging to design and implement (22m0s).
  • LLMs can potentially be used to automate security tasks, such as reading logs to detect intrusions, and may be useful in generating load at scale to test scalability (22m16s).
  • Humans often rely on external resources, such as InfoQ, to learn from experts and find solutions to problems, and LLMs can summarize and provide this information, making them useful for brainstorming and idea generation (23m2s).
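
One way the pretend-API idea might look in practice, again with a hypothetical complete() call; the inventory service being imitated is invented purely for illustration.

```python
import json

def complete(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

class FakeInventoryAPI:
    """Imitates a not-yet-built inventory service well enough for a demo."""

    def get_item(self, item_id: str) -> dict:
        reply = complete(
            "You are simulating a REST inventory service. Reply with only a "
            f"JSON object for GET /items/{item_id} containing the fields "
            "id, name, quantity, and warehouse."
        )
        return json.loads(reply)

# A prototype can call FakeInventoryAPI today and swap in the real service later.
```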

LLMs as Idea Generators and Checkers

  • LLMs can be used as "idea generators" and "checkers" to ensure that humans have considered all aspects of a problem, including the "ilities," and can provide a checklist of potential issues to address (23m43s).
  • The use of LLMs in code generation and review may lead to a future where junior developers rely on AI-generated code, potentially making human developers feel obsolete or humiliated (24m23s).
  • A potential risk of relying on generated code is that when something goes wrong, nobody may know how to solve it, leading to absurd and unpredictable errors (25m15s).
  • The use of LLMs in code generation may lead to a situation where overall errors decrease, but the ones that do occur are more severe and unpredictable, similar to the potential risks associated with autonomous vehicles (25m28s).

The Analogy of Self-Driving Cars and the Potential Risks of Over-Reliance on LLMs

  • The problem with self-driving cars and humans sharing the road is that nuances and local customs can cause confusion, such as the "Pittsburgh left," where the first car turning left goes before oncoming traffic when a light turns green; a self-driving car trained on data from another city may not understand this custom (26m1s).
  • Another example is when Sweden switched from driving on the left side of the road to the right side overnight in 1967; humans adapted, but it's unclear how a self-driving car would handle such a change (26m55s).
  • End-to-end learned driving is emphasized as a way to capture all the nuances of different areas; the easy case is a world where everything is automated and predictable (27m12s).
  • However, in a world where some things are automated and others are not, problems arise, and it's unclear if there's an analogy with self-driving cars and LLMs generating code (27m50s).
  • The problem with relying on LLMs to generate code is that humans may lose the ability to think about the code and the problem, and may not understand what they want to achieve (28m9s).
  • Using tools like ChatGPT to generate code can lead to a loss of capability, similar to how people have lost the ability to remember 10-digit phone numbers because they store them in their phones (28m49s).

The Impact of LLMs on Programming Skills and Knowledge

  • The younger generation may not know how to remember phone numbers or perform simple calculations, and it's possible that in the future, people will be surprised that others can still do these things (29m10s).
  • The question is raised of how elegant code generated by LLMs like ChatGPT will be, and whether it will be spaghetti code that is difficult to understand (29m51s).
  • A recent development in AI involved training a reinforcement learning agent to generate code, resulting in a sorting algorithm faster than the fastest human-written version (likely a reference to DeepMind's AlphaDev), but it's unclear whether the system knows when to use this sort or whether it will be applied blindly in all cases (30m10s).
  • Sorting algorithms are not one-size-fits-all; the right choice depends on the specific situation, such as whether the data is already nearly sorted (see the first sketch after this list) (30m47s).
  • The use of AI-generated code raises questions about what is being optimized and whether the AI is rejecting certain inputs or customers, highlighting the need for logging and understanding what the AI is doing (31m51s).
  • In a world where LLMs are part of the software development pipeline, it may be necessary to check the prompts used to generate code into version control, allowing changes to be tracked (32m14s).
  • Archiving the prompts used to generate code, as well as the LLM itself, may be necessary to ensure reproducibility and accountability, similar to how the military archives software used to create designs (33m1s).
  • The use of LLMs as co-pilots in software development raises questions about responsibility and understanding of the generated code, and whether the LLM will become like compilers, assumed to work without needing to understand the underlying mechanics (33m43s).
  • The increasing use of AI in tools and databases may lead to a faster understanding of how these systems work, but also raises concerns about the need for understanding and accountability in AI-generated code (34m11s).
  • Using LLMs for programming tasks can lead to interactive learning, where developers learn through AI-generated code and commands and can ask the AI to fix errors when something goes wrong (35m2s).
  • However, relying on LLMs may cause developers to lose knowledge of meta-problems, such as optimizing database queries by adding indices, denormalizing data, or using hints for performance (see the second sketch after this list) (35m22s).
  • This loss of knowledge can lead to a lack of understanding of the context and limitations of the tools being used, making it difficult to troubleshoot and optimize code (35m53s).
  • The concern is that as LLMs become more prevalent, developers may lose essential skills, such as writing machine code or assembly, and may not be able to adapt to situations where the tools are not sufficient (36m3s).
  • The analogy is made to the introduction of compilers, which also raised concerns about the loss of skills, but ultimately increased productivity and efficiency (36m46s).
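
The sorting point can be demonstrated directly: Python's built-in sort (Timsort) is adaptive, so nearly-sorted input finishes much faster than shuffled input of the same size. An algorithm applied blindly, without knowing the state of the data, gives up exactly this kind of win.

```python
import random
import timeit

N = 100_000
nearly_sorted = list(range(N))
for _ in range(100):  # perturb only a handful of positions
    i, j = random.randrange(N), random.randrange(N)
    nearly_sorted[i], nearly_sorted[j] = nearly_sorted[j], nearly_sorted[i]

shuffled = nearly_sorted[:]
random.shuffle(shuffled)

# Timsort detects the existing runs in the nearly-sorted list.
print("nearly sorted:", timeit.timeit(lambda: sorted(nearly_sorted), number=20))
print("shuffled:     ", timeit.timeit(lambda: sorted(shuffled), number=20))
```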
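
And the database-tuning knowledge at stake, in miniature: with SQLite, the same query goes from a full table scan to an index search once an index exists, which EXPLAIN QUERY PLAN makes visible.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
db.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
               [(i % 1000, i * 0.5) for i in range(10_000)])

plan = "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = ?"
print(db.execute(plan, (42,)).fetchall())  # SCAN orders: full table scan

db.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(db.execute(plan, (42,)).fetchall())  # SEARCH orders USING INDEX ...
```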

Economic Efficiency and the Adoption of LLMs

  • The key is to find a balance between using LLMs as tools to aid learning and productivity, while still maintaining essential skills and knowledge of the context and limitations of the tools (37m6s).
  • Economic efficiency will likely drive the adoption of LLMs, as the cost of programming projects is often the highest expense, and LLMs can help reduce this cost (38m36s).
  • However, it is essential to be aware of the potential risks of over-reliance on LLMs, such as the loss of essential skills and the inability to adapt to complex problems (38m28s).
  • The concept of "unknown unknowns" is mentioned, highlighting the importance of understanding the limitations of the tools being used and the potential risks of not knowing what you don't know (38m13s).
  • A software outage can be extremely costly, and shortsighted companies may not survive; the question remains how companies will evolve to adapt to new technologies (38m56s).
  • The path to evolution may involve suffering, and it is uncertain how companies will know what they don't know, but economic efficiency may drive the adoption of new technologies (39m36s).
  • Many managers think programmers are interchangeable and may see an economic incentive to replace them with technologies like LLMs, which can automatically generate code (39m49s).
  • The use of LLMs may lead to the loss of tribal knowledge in companies, but managers may not care about this loss (40m46s).
  • The difference between a good senior developer and a bad one may be their ability to use restraint when accepting or rejecting automatically generated code proposals (41m19s).

The Future of Programming with LLMs

  • Reading code and quickly understanding what's happening will become a more important skill, and code should be written with the reader in mind, as Donald Knuth's concept of literate programming suggests (41m54s).
  • Programmers should take the time to read and learn from others' code, as this skill will be essential in the future world of programming (42m55s).
  • The conversation raises questions about the future of programming with the increasing use of LLMs and whether it's possible to restrict or stop their development, with the atomic bomb mentioned as a comparison (43m18s).
  • The Air Canada bot story is mentioned, where the company tried to claim they weren't responsible for the bot's actions, but the judge ruled that they were, highlighting the potential for lawsuits and accountability in the use of LLMs (44m1s).
  • A possible path to a bright future with LLMs is to let machines write code but have humans write really good tests to ensure the code is reliable and secure, with experienced developers ("old-timers") on hand to debug and make tweaks as needed (44m35s).
  • A division of labor between humans and LLMs is suggested, where humans focus on writing tests and LLMs generate code that must pass them (see the sketch at the end of this list), but this requires LLMs to learn to write testable code (45m23s).
  • The importance of testing is emphasized, including unit tests, scenario tests, and use case tests, and the need for LLMs to be able to generate code that can be tested in these ways (45m41s).
  • The value of human skills, such as testing user interfaces, is also highlighted as an important aspect of programming (46m4s).
  • Personal experience with GitHub Copilot has shown that it can be a useful tool for learning new programming tricks and improving skills, but it's essential to use it responsibly and not accept every suggestion without thinking critically (46m11s).
  • The need for restraint and critical thinking when using LLMs is emphasized, as accepting every suggestion without evaluation can lead to writing bad code (46m47s).
  • The importance of being aware of the potential for addiction to LLMs and using them responsibly is also noted (47m20s).
  • Training new developers involves teaching them to exercise restraint and respect when working with technology, which can be achieved through experience and mentorship, allowing them to make mistakes in a safe environment (47m31s).
  • To teach the next generation of programmers, it's essential to give them an environment where they can make mistakes without catastrophic consequences, and this can be done by mentoring them and throwing them into challenging situations (48m27s).
  • The fear of what has been done with technology is related to losing control and letting automation take over, which can have severe consequences, such as financial losses for a company (50m11s).
  • The concern is not about robots ending life, but rather about the potential negative impact on companies and the world if technology is not used responsibly (50m19s).
  • LLMs have already changed the way developers work, making some tasks easier but also taking away the fun and challenge of certain projects (49m23s).
  • The use of LLMs can automate tasks that previously took a long time, such as generating simple websites or building tech prototypes, which can be done in just a few minutes (49m44s).
  • The fear of losing control and the potential consequences of technology is a concern that should be addressed by exercising restraint and staying in control (50m7s).
  • The goal is to make the world a better place by getting people to think about the implications of technology and its use (50m57s).
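
A sketch of the human-written half of that division of labor; slugify and the solution module are hypothetical examples, not from the episode. The human pins down the behavior first, and no generated implementation is accepted until these pass.

```python
from solution import slugify  # the generated module under test (hypothetical)

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("What Have We Done?") == "what-have-we-done"

def test_collapses_repeated_separators():
    assert slugify("a  --  b") == "a-b"

def test_empty_input_stays_empty():
    assert slugify("") == ""
```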
