AI Integration for Java: To the Future, From the Past

07 Oct 2024

Introduction to AI and Java Integration

  • The webinar focuses on the integration of artificial intelligence (AI) with Java, exploring how AI can be used and integrated into applications to create solutions that were not possible several years ago. (17s)
  • Sa, from JetBrains, leads the evaluation team responsible for tools that measure and track system quality, and has spent the past four years applying machine learning to improve development tools. (38s)
  • Asir Selvasingh from Microsoft is responsible for Java on Azure, helping developers and customers build and modernize AI applications. He has been with Microsoft for 20 years, focusing on Java and ensuring customers have the necessary resources on Azure. (1m7s)
  • Dov Katz from Morgan Stanley leads the Enterprise Application Infrastructure Group, focusing on developer experience and core programming language stacks. He has over 20 years of experience at Morgan Stanley, with a background in Java programming. (1m44s)
  • Jonathan Schneider, co-founder of Moderne, works on automating large-scale code changes to support strategic initiatives. He previously worked at Netflix and on the Spring team, contributing to projects like OpenRewrite and Micrometer. (2m19s)
  • Erik Costlow, an editor on InfoQ's Java team, also heads product management for Azul's JVMs and Intelligence Cloud, which helps identify unused code in applications. (3m4s)

AI Applications and Use Cases

  • AI is a relatively new technology, and there is strong interest in understanding its applications. Two broad uses are AI-assisted development and intelligent apps, which integrate AI to enhance functionality and deliver improved user experiences. (3m22s)
  • AI can be used for content generation, including creating blogs, articles, and social media posts, as well as for summarizing large amounts of information quickly and accurately. (4m34s)
  • AI is capable of generating code snippets and automating various tasks, enhancing productivity in software development. (4m50s)
  • Semantic search powered by AI improves search results by understanding the context and meaning of queries. (5m0s)
  • Mercedes-Benz uses AI, particularly on Azure, to develop connected cars with generative AI-powered in-car assistants, allowing for rapid software updates and enhanced driver experiences. (5m20s)
  • American Airlines has modernized its customer hub using AI on Azure with OpenAI, handling millions of real-time messages and service calls, reducing taxi times, saving fuel, and improving passenger experience. (6m1s)
  • AI is being used to create intelligent applications that deliver significant business outcomes and improve user experiences, making it an exciting time for developers involved in AI development. (6m42s)

AI Tools and Libraries for Java Developers

  • Java engineers are encouraged to explore AI development by using libraries and technologies such as LangChain4j, which supports various providers and recent techniques, and Spring AI, which simplifies AI integration for developers (a minimal example follows this list). (7m4s)
  • Microsoft offers Semantic Kernel to assist in building applications, and there is ongoing work to enhance AI integration capabilities for Java developers, ensuring they are not limited to Python. (8m17s)
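
As a concrete starting point for the libraries mentioned above, the following is a minimal sketch of calling a chat model from a Spring service using Spring AI's ChatClient. It assumes a Spring AI model starter (for example, the OpenAI starter) is on the classpath and auto-configured; the service name and prompt are illustrative, and API details can vary between Spring AI versions.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

// Illustrative service: delegates a summarization prompt to whatever chat model
// Spring AI has auto-configured (e.g., via the OpenAI starter).
@Service
class SummaryService {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder when a model starter is present.
    SummaryService(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    String summarize(String text) {
        return chatClient.prompt()
                .user("Summarize the following text in three bullet points:\n" + text)
                .call()
                .content();
    }
}
```

LangChain4j and Semantic Kernel expose comparable chat-model abstractions, so the same pattern applies with different client and builder types.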

Challenges of Maintaining Legacy Applications

  • Many Java engineers work on existing applications that have been around for a long time, and maintenance of these applications can be time-consuming due to technical debt that has built up over the years (8m58s).
  • Artificial intelligence can be used to benefit Java engineers directly, not just by integrating it into applications for users, but also by automating tasks such as migrations and reducing technical debt (9m22s).
  • Development teams often struggle with delivering enough software features due to being overworked by technical debt and security findings, but using AI in code authorship can increase developer productivity (9m51s).
  • The use of AI in code authorship has accelerated the trend of increased developer productivity, delivering the most significant step-function increase since IDEs introduced rule-based refactoring 20 years ago (10m16s).
  • OpenRewrite recipes can be used to automate tasks such as migrations, and AI can be used to generate new recipes instead of making changes directly (10m31s).
  • A single Java 8 to 17 migration at a banking customer affected 19,000 files, and using AI to generate recipes can help automate such tasks (10m51s).
  • AI can be used to generate recipes for tasks such as porting from one XML library to another, and those recipes can then be deployed at scale across tens of thousands of repositories (a minimal recipe skeleton follows this list) (11m26s).
  • Using AI to automate tasks can eliminate boring and time-consuming work for individual engineers, allowing them to focus on more exciting tasks (11m58s).
  • When deciding whether to use AI or allocate people to do maintenance tasks, considerations include the error-proneness of doing tasks inconsistently and the book of work that needs to be done versus business features that need to be implemented (12m28s).
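
To make the recipe idea more tangible, here is a purely illustrative skeleton of an imperative OpenRewrite recipe in Java. The rename rule it applies is hypothetical and far simpler than the migrations discussed in the webinar, which are typically composed from existing recipe modules; API details may differ between OpenRewrite versions.

```java
import org.openrewrite.ExecutionContext;
import org.openrewrite.Recipe;
import org.openrewrite.TreeVisitor;
import org.openrewrite.java.JavaIsoVisitor;
import org.openrewrite.java.tree.J;

// Hypothetical recipe: renames identifiers called "log" to "logger".
// Real migration recipes (Java 8 -> 17, Spring Boot 2 -> 3) compose many such steps.
public class RenameLogIdentifiers extends Recipe {

    @Override
    public String getDisplayName() {
        return "Rename `log` identifiers to `logger`";
    }

    @Override
    public String getDescription() {
        return "Illustrative skeleton of a custom OpenRewrite recipe.";
    }

    @Override
    public TreeVisitor<?, ExecutionContext> getVisitor() {
        return new JavaIsoVisitor<ExecutionContext>() {
            @Override
            public J.Identifier visitIdentifier(J.Identifier ident, ExecutionContext ctx) {
                if ("log".equals(ident.getSimpleName())) {
                    return ident.withSimpleName("logger");
                }
                return super.visitIdentifier(ident, ctx);
            }
        };
    }
}
```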

AI-Assisted Code Refactoring and Re-platforming

  • The problem of code management is divided into two categories: refactoring, which involves upgrading dependencies, and re-platforming, which involves changing programming languages or entire implementations. (13m4s)
  • Re-platforming can include moving from proprietary systems to open-source solutions like Spring Integration or Apache Camel, and AI can assist in starting this process. (13m23s)
  • Generative AI is used to understand existing code requirements and implement them differently, but scaling this to large codebases is expensive. (13m48s)
  • A scalable, deterministic, and repeatable approach is necessary, requiring multiple tools in the toolchain to effectively solve the problem at scale. (14m11s)
  • Generative AI can help envision tasks, but known and trusted methods are needed to execute them at scale, leading to significant time savings. (14m33s)
  • A Java upgrade across 15,000 codebases saved approximately 50,000 hours, equivalent to 25 full-time consultants for a year, highlighting the efficiency of using AI and modern tools. (14m46s)
  • There are various motivations for updating codebases, such as upgrading frameworks and responding to new vulnerabilities, necessitating a combination of tools and knowledge to address these challenges cost-effectively and predictably. (15m24s)

Automating Tasks with AI

  • The OpenRewrite approach provides a way to measure time savings, making it easier to justify the efficiency of the process, even if estimates are not perfectly accurate. (15m51s)
  • The speaker values their developers' time and would rather have them focus on high-level tasks, even if it means handing mundane tasks to AI tools that are 90% inaccurate, as it still saves time in the long run (16m23s).
  • The Spring Boot 2 to 3 recipe has over 2,300 steps associated with it, including property changes and idiosyncrasies, which can be overwhelming for developers and highlights the need for AI integration (16m40s).
  • AI tools like GitHub Copilot can help developers focus on what matters by handling tasks such as searching documentation, finding problems, fixing issues, upgrading, and writing tests (17m12s).
  • Recipes and AI-assisted features can be combined to help developers upgrade Java, Spring, and Spring Boot, allowing them to focus on adding intelligence to applications (17m50s).

AI and Developer Productivity

  • Engineers have used AI capabilities in tools like IntelliJ to delegate tasks such as test generation, writing commit messages, and documentation, but many prefer to write code themselves as it is part of their passion (18m29s).
  • AI can be used to automatically summarize commits and write nice messages, but may require additional context to provide more information about the changes made (19m18s).
  • Future AI tools will likely be able to memorize more information and provide better assistance to developers, with potential applications in test generation and code coverage (19m48s).
  • There is ongoing research into using AI to generate tests that cover more code, with potential rewards for developers who use these tools, but this approach is still in the experimental phase (20m11s).
  • When generating code with AI, it is challenging to compare expected results with actual outputs because there are many valid ways to solve the same problem. Instead, the generated code must be executed, covered by tests, or scored through computed metadata such as syntactic correctness and method generation; test coverage is a crucial aspect of this process (a small example of such a check follows this list). (20m42s)
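
As one example of the kind of metadata check mentioned above, the sketch below uses the JavaParser library (an assumption; any Java parser would do) to score whether model-generated code is at least syntactically valid Java. The class name and sample snippet are hypothetical.

```java
import com.github.javaparser.JavaParser;
import com.github.javaparser.ParseResult;
import com.github.javaparser.ast.CompilationUnit;

// One cheap, deterministic metric over generated code: does it parse at all?
public class GeneratedCodeChecks {

    private static final JavaParser PARSER = new JavaParser();

    /** Returns true if the generated source parses as a valid Java compilation unit. */
    public static boolean isSyntacticallyValid(String generatedSource) {
        ParseResult<CompilationUnit> result = PARSER.parse(generatedSource);
        return result.isSuccessful();
    }

    public static void main(String[] args) {
        String candidate = "class Foo { int add(int a, int b) { return a + b; } }";
        System.out.println("syntactically valid: " + isSyntacticallyValid(candidate));
    }
}
```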

Testing in Large-Scale Code Refactoring

  • In large-scale code refactoring, the level of testing required is significant. Although the goal is often 100% test coverage, it is rarely achieved, with 90% being a more realistic target. A moderate level of testing is generally maintained. (21m21s)
  • For production systems like inventory or stock trading systems, existing tests are crucial during refactoring. These tests matter whether the system is being upgraded using OpenRewrite recipes or broken down from a monolith into microservices for cloud scalability. (22m3s)
  • Code-level unit testing changes during refactoring, requiring new test cases based on the new code. However, functionality, integration, and compliance test cases remain intact and must be completed before transitioning to a new production environment. (22m57s)
  • The final artifact of work can be considered as both the functioning code and the tests, which serve as documentation that does not require regular updates. (23m24s)

Improving Refactoring Confidence with Lossless Semantic Trees

  • Confidence in large-scale refactoring can be enhanced by using a data structure called the lossless semantic tree (LST), which includes more information than just the text of the code. It resolves dependencies and includes properties like statement coverage and reachability, which help assess the risk of refactoring operations. (23m54s)
  • Refactoring code over a common base can be prioritized by calculating a risk score from factors such as reachability and coverage, and then merging the repositories with the lowest risk scores first to focus manual review effort (a toy illustration of this ranking follows this list) (24m48s).
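
The risk-based prioritization described above might look roughly like the following toy sketch. The record fields, weights, and repository names are invented for illustration; they are not the actual LST schema or a published scoring formula.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical ranking: lower test coverage and higher reachability of the changed
// code mean more risk, so low-risk repositories get merged (and reviewed) first.
public class RefactorRiskRanking {

    record RepoChange(String repo, double statementCoverage, double reachableFraction) {
        double riskScore() {
            return (1.0 - statementCoverage) * 0.6 + reachableFraction * 0.4;
        }
    }

    public static void main(String[] args) {
        List<RepoChange> changes = List.of(
                new RepoChange("billing-service", 0.85, 0.30),
                new RepoChange("reporting-batch", 0.40, 0.70),
                new RepoChange("internal-tools", 0.95, 0.10));

        changes.stream()
                .sorted(Comparator.comparingDouble(RepoChange::riskScore))
                .forEach(c -> System.out.printf("%s -> risk %.2f%n", c.repo(), c.riskScore()));
    }
}
```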

AI-Assisted Code Modularization and Componentization

  • AI can help modularize and componentize code and name the resulting modules by identifying cut points and breaking large codebases into smaller pieces, with the goal of training AI systems on established naming conventions such as Spring's (25m28s).
  • AI tools like GitHub Copilot can help break down code into smaller functions, identify areas for modularization, and provide suggestions based on an organization's internal and private code base (26m14s).
  • GitHub Copilot can also assist with onboarding developers, generating documentation, and summarizing code changes to facilitate collaboration and project management (26m21s).

AI for Understanding and Documenting Legacy Code

  • AI can help with understanding legacy code by generating documentation, and tools like JetBrains' AI Assistant can aid in this process (28m26s).
  • AI can be used to back into requirements by analyzing code and generating documentation, which has been effective in re-platforming and in experimenting with approved connections (28m48s).
  • Generative AI can be used to reverse engineer code for the sake of forward engineering, allowing for the conversion of code from one language to another, such as from Java to Python, and even improving the code in the process (29m9s).

Automating Undesirable Tasks with AI

  • AI can be used to automate tasks that developers do not enjoy, such as naming, testing, and documentation, freeing up time for more enjoyable tasks (30m5s).
  • AI can also be used to assist in architecture disciplines, such as reviewing architecture decision records (ADRs) and ensuring they follow architecture principles (31m8s).

AI in Architecture and Code Review

  • In the future, AI can be used as a first line review for pull requests, providing feedback on code quality and allowing human reviewers to focus on more complex issues (31m31s).
  • AI can be integrated into the software development life cycle (SDLC) to provide a higher confidence level that code is worth reviewing, and to automate tasks such as code quality checking (31m45s).
  • AI can review code and provide context on whether it meets enterprise scope, similar to how a human would, and provide feedback on code quality (32m10s).
  • AI can be used to improve code quality by providing feedback on code written by humans, similar to how a human would provide feedback on code written by another human (32m23s).
  • AI integration can provide a qualitative review of code, calling out developers when they are not following set standards, and this capability can be embedded in the products developers already use (32m30s).

AI in the Software Development Life Cycle

  • A principal engineer once compared reminding developers to follow standards to telling kids to pick up their socks, highlighting the need to make developers feel like they're creating, not just following rules (33m0s).
  • AI can review pull requests, write summaries, and even run offline before the pull request is opened, making it easier for developers to review and implement changes (33m27s).
  • AI can also generate tests, then code to pass those tests, and then further tests designed to break that code, creating a feedback loop that improves the results (34m12s).
  • Research is being done on using AI to generate code and tests in this way, and interesting results are expected in the near future (34m38s).

Managing Shared Libraries with AI

  • OpenRewrite can be used to upgrade shared libraries across different projects, but it can be difficult to use on a one-repository-at-a-time basis, and a sequencing problem can occur when merging shared libraries (35m0s).
  • A solution to this problem is to use an impact analysis recipe to determine the dependency ordering between projects, and then run the migration recipe in that order, merging projects accordingly (a sequencing sketch follows this list) (35m36s).
  • This approach helps avoid missing the forest for the trees when running a recipe on one repository at a time, and can be used to manage shared libraries across different projects (35m55s).
  • A past example of the challenges of managing shared libraries is the Google Guava issue at Netflix, which brought engineering to a halt due to dependencies on the library (36m7s).
  • A common library was used by many applications, but it became a roadblock for progress due to its numerous transitive dependencies, making it difficult to update without breaking other applications or libraries (36m19s).
  • To resolve this issue, a series of recipes were created to eliminate the changed API surface area between different versions, allowing the library and applications to be updated independently (36m34s).
  • When taking a large organization with many repositories and trying to find a sequencing solution, it's essential to consider the software supply chain and the inter-relationships between repositories to determine the correct order of updates (37m10s).
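
The sequencing idea described above boils down to a topological ordering of repositories by their shared-library dependencies. The sketch below is a self-contained illustration with an invented dependency map; in practice an impact analysis would supply the real dependency data.

```java
import java.util.*;

// Toy sequencer: producers (shared libraries) are upgraded and merged before the
// repositories that consume them.
public class UpgradeSequencer {

    static List<String> upgradeOrder(Map<String, List<String>> dependsOn) {
        Map<String, Integer> remaining = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();
        for (var e : dependsOn.entrySet()) {
            remaining.put(e.getKey(), e.getValue().size());
            for (String dep : e.getValue()) {
                dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
                remaining.putIfAbsent(dep, 0);
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        remaining.forEach((repo, n) -> { if (n == 0) ready.add(repo); });

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String repo = ready.poll();
            order.add(repo);
            for (String dependent : dependents.getOrDefault(repo, List.of())) {
                if (remaining.merge(dependent, -1, Integer::sum) == 0) {
                    ready.add(dependent);
                }
            }
        }
        return order; // repositories with no unmet dependencies come first
    }

    public static void main(String[] args) {
        Map<String, List<String>> dependsOn = Map.of(
                "app-a", List.of("shared-core"),
                "app-b", List.of("shared-core", "shared-http"),
                "shared-http", List.of("shared-core"),
                "shared-core", List.of());
        System.out.println(upgradeOrder(dependsOn));
    }
}
```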

Integrating AI Services into Applications

  • A clever way to break the linkage between shared libraries is to use institutional knowledge to determine the correct sequence of updates and roll them out in a way that minimizes impact (37m34s).
  • After solving maintenance problems, organizations can focus on integrating AI services to enhance their business features, and it's essential to consider how to take advantage of AI in a way that benefits the organization (38m0s).
  • When integrating AI into an application, it's crucial to consider what can be done locally on the machine and what requires a backend service, and to understand the audience and their potential concerns about using AI (38m36s).
  • AI is a polarizing topic, and some people are eager to try it, while others are hesitant, so it's essential to know the audience and understand their needs and concerns when integrating AI into a product (38m51s).
  • The key to successful AI integration is to understand who the target audience is and whether they will be happy to use AI or if it will make their work less enjoyable, and to consider the importance of privacy and security for certain customers (39m30s).

Local vs. Server-Based AI Solutions

  • Currently, it's not possible to have a general-purpose AI that can do state-of-the-art things locally on machines or laptops, but it's possible to train a model from scratch or tune an open-source model to solve a particular task or a couple of them, which can be done on a local machine (40m49s).
  • Two major features have been delivered: full line code completion, which is available for Java and included in the JetBrains product, and semantic search, which is integrated into Search Everywhere, is based on embeddings, and runs as a completely local solution (a simplified sketch of embedding-based search follows this list) (41m16s).
  • Full line completion is included in the regular subscription, and there is no additional running cost since inference happens entirely on the local machine (41m27s).
  • While it's still not possible to have everything locally, server-based applications are being explored to move as much logic as possible to the server, making it easier to iterate and update logic and change models or model providers (41m53s).
  • The client cannot be made completely thin, because context is what differentiates AI-assisted features from regular ones, and that context lives inside the IDE (42m27s).
  • Differentiation comes from everything specific to the individual working in the IDE, which stays inside the IDE, while functionality that can be shared between multiple users usually lives on the server (42m42s).
  • The server side includes more integration with language model providers, selecting proper models and prompts, hosting model inference for in-house models, and indexes for documentation (43m29s).
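
To illustrate the embedding-based semantic search mentioned above, here is a self-contained sketch that ranks snippets by cosine similarity to a query vector. The embed method is a crude stand-in for a real local embedding model; it is not the actual implementation discussed in the webinar.

```java
import java.util.*;

// Minimal semantic-search shape: map text to vectors, rank by cosine similarity.
public class LocalSemanticSearch {

    /** Placeholder embedder (hashed bag of words); a real system uses a trained model. */
    static float[] embed(String text) {
        float[] v = new float[64];
        for (String token : text.toLowerCase().split("\\W+")) {
            v[Math.floorMod(token.hashCode(), v.length)] += 1f;
        }
        return v;
    }

    static double cosine(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-9);
    }

    public static void main(String[] args) {
        List<String> snippets = List.of(
                "open a file and read it line by line",
                "sort a list of integers in descending order",
                "connect to a database and run a query");
        float[] q = embed("how do I read lines from a file");
        snippets.stream()
                .sorted(Comparator.comparingDouble((String s) -> -cosine(q, embed(s))))
                .forEach(System.out::println); // most similar snippet first
    }
}
```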

AI for Java Engineers: Accessibility and Use

  • For Java engineers, it's not necessary to know a lot about artificial intelligence to use AI services, and they can focus on using the library or service (44m10s).
  • The amount of information needed to know about artificial intelligence depends on the specific use case, but in general, Java engineers can learn just enough to use the library or service effectively (44m29s).
  • Developers are encouraged to start integrating AI into applications immediately, as AI is set to reinvent existing apps and enable the creation of new ones that were previously impossible. (44m37s)

Roles in Building Intelligent Applications

  • Three key roles are involved in building intelligent applications: AI Engineers, who develop and deploy AI models; Data Engineers, who manage and process data; and App Developers, who integrate AI into applications to enhance user experiences. (45m5s)
  • App Developers are particularly crucial as they are responsible for infusing intelligence into applications by building user interfaces and application logic that leverage AI models. (45m59s)
  • The integration process involves combining an app platform, a data platform, and AI services, which is especially relevant for Java developers due to the prevalence of enterprise applications in Java. (46m22s)

Integrating AI into Java Applications

  • Java developers, including those using frameworks like Spring, Quarkus, and Jakarta EE, can start integrating AI today using tools like Spring AI and LangChain4j, with additional support from Microsoft’s Semantic Kernel. (46m53s)
  • While creating a working prototype with language models can be quick, transitioning to production-level quality is complex and unpredictable, often requiring a data-driven development approach similar to test-driven development. (47m32s)
  • It is important to measure and adjust prompts carefully to avoid unintended consequences, such as formatting issues, when integrating AI into applications (a small prompt-evaluation sketch follows this list). (48m41s)
  • The rapid development of AI models necessitates regular updates and evaluations to ensure compatibility and effectiveness, which can be streamlined through semi-automated processes. (49m21s)
  • User experience should be designed with the understanding that AI suggestions may not always be perfect, allowing users to tweak or reject suggestions rather than forcing them. (49m54s)
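
A lightweight version of the "measure your prompts" approach might look like the sketch below: run a prompt template over a small evaluation set and track how often the output meets a format requirement. The model is abstracted behind a Function and stubbed out; the template, format check, and pass-rate metric are illustrative assumptions, not a prescribed framework.

```java
import java.util.List;
import java.util.function.Function;

// Data-driven prompt development in miniature: score a prompt against a format check.
public class PromptEval {

    static double passRate(Function<String, String> model,
                           String promptTemplate,
                           List<String> inputs) {
        long passed = inputs.stream()
                .map(in -> model.apply(promptTemplate.replace("{input}", in)))
                .filter(PromptEval::hasThreeBullets)
                .count();
        return (double) passed / inputs.size();
    }

    /** Example format requirement: output must contain exactly three bullet lines. */
    static boolean hasThreeBullets(String output) {
        return output.lines().filter(l -> l.trim().startsWith("-")).count() == 3;
    }

    public static void main(String[] args) {
        // Stubbed model for demonstration; replace with a real chat client.
        Function<String, String> fakeModel = prompt -> "- a\n- b\n- c";
        double rate = passRate(fakeModel,
                "Summarize in exactly three bullet points:\n{input}",
                List.of("text one", "text two"));
        System.out.printf("format pass rate: %.0f%%%n", rate * 100);
    }
}
```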

Keeping Up with AI Model Developments

  • Keeping up with new AI models involves evaluating them against benchmarks to identify potential issues and ensure they meet the application's needs. (51m14s)
  • AI engineers play a crucial role in selecting appropriate models based on business requirements, using tools like AI Studio for experimentation and ideation. (52m0s)
  • When integrating AI into applications, the focus should be on enhancing the end-user experience, such as in a retail store where the goal is to facilitate cart filling, order placement, and customer retention. (52m59s)
  • Azure AI Studio provides access to various models, allowing for ideation workshops and experimentation to improve user engagement and application experiences. (53m48s)
  • Simulations can be run to evaluate different AI models based on business requirements, such as increasing conversion rates or adding more items to a cart, to select the most effective model. (54m10s)

Selecting and Deploying AI Models

  • The process of selecting technology should start with defining outcomes, requirements, and use cases, and then choosing the technology that best meets these needs, as companies like Microsoft offer a wide range of options. (55m16s)
  • Often, multiple models are selected to create a pipeline that limits the search space and reduces computational requirements, such as in developing a feature for code base recommendations. (55m42s)
  • Embedding models are used to cluster method declarations, allowing for the selection of diverse samples and avoiding repetitive questions for similar methods. These models are efficient, requiring only 1 to 2 gigabytes of RAM, and can be run locally at zero marginal cost. More sophisticated models are reserved for later stages in the process (a toy illustration of diverse sampling follows this list). (56m42s)
  • A free 35-page book titled "AI for Mass Scale Code Refactoring and Analysis" is available from O'Reilly, providing detailed guidance on selecting, deploying, and evaluating models for code refactoring. (57m25s)
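
The diverse-sampling step described above can be approximated with a greedy max-min selection over embeddings, as in the toy sketch below. The embeddings here are placeholders; in practice they would come from the small local embedding model clustering method declarations.

```java
import java.util.*;

// Greedy max-min selection: repeatedly pick the item farthest from everything
// already chosen, so near-duplicate methods are not all sent to the expensive model.
public class DiverseSampler {

    static List<Integer> pickDiverse(List<float[]> embeddings, int k) {
        List<Integer> chosen = new ArrayList<>(List.of(0)); // seed with the first item
        while (chosen.size() < k && chosen.size() < embeddings.size()) {
            int best = -1;
            double bestMinDist = -1;
            for (int i = 0; i < embeddings.size(); i++) {
                if (chosen.contains(i)) continue;
                double minDist = Double.MAX_VALUE;
                for (int c : chosen) {
                    minDist = Math.min(minDist, dist(embeddings.get(i), embeddings.get(c)));
                }
                if (minDist > bestMinDist) {
                    bestMinDist = minDist;
                    best = i;
                }
            }
            chosen.add(best);
        }
        return chosen;
    }

    static double dist(float[] a, float[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        List<float[]> embeddings = List.of(
                new float[]{0, 0}, new float[]{0.1f, 0}, new float[]{5, 5}, new float[]{9, 1});
        System.out.println(pickDiverse(embeddings, 3)); // indices of well-spread items, e.g. [0, 3, 2]
    }
}
```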

Conclusion and Future Plans

  • The discussion included the possibility of reverse engineering from code to use cases, particularly in the context of automated code refactoring at scale. (58m5s)
  • Future plans include publishing more developer and AI news articles on InfoQ and hosting events with experts, such as QCon. Attendees are encouraged to stay connected through social networks like LinkedIn and Discord. (58m31s)
