Democratizing AI at Thomson Reuters: Empowering Teams and Driving Innovation
06 Dec 2024
Maria's Background and Team Philosophy
- Maria is a mathematician and engineer at heart who approaches everything with logic. She works at Thomson Reuters in the product engineering team, which is responsible for the AI platform, the RBI platform, the content platform, the data platform, and the CoCounsel product (1m9s).
- Maria's career started as a quantitative analyst and developer writing models for traders in C++; she then moved into managing teams and building platforms and products, with roles at Paner and HSBC. She now has around 500 people in her team (1m43s).
- Maria believes that great teams are made up of a mix of diverse individuals, including those who are "hacky" and those who focus on stability, and that a sense of ownership is crucial for driving great quality in their work (2m51s).
- For Maria, a sense of ownership means that each team member takes responsibility for their work, writing high-quality code, creating great applications, and caring about the users of their code (3m13s).
Team Structure and Goals
- Maria also emphasizes the importance of equipping teams with the right structure, including an optimal team size of around 7-10 people, a clear goal, and defined deliverables broken down into manageable chunks (3m54s).
- Maria's team works on various applications that support the use of data, content, and AI, both internally and with customers, and she is excited about the possibilities of generative AI across the ecosystem (1m35s).
- Agile practices are used to keep the work structured, combining individual ownership with a process perspective so that teams can be effective and have everything they need at their disposal (4m13s).
- To keep teams engaged and aligned, a principle is used where everyone has a goal they're driving towards, and goals are expressed in a way that includes both new and existing systems (5m7s).
- Goals are set to cover both reliability of existing systems and driving AI forward, so that the importance of each is recognized (5m40s).
Maintaining Team Engagement and Communication
- Releases and wins are communicated across all teams so that everyone feels empowered and happy about their work, whether it is a legacy application or a new AI feature (6m10s).
- Teams working on historical applications are encouraged to think about how they would redo the application from scratch, and how AI can help them in that journey (6m49s).
- This approach helps teams shift to newer technologies and modern stacks, similar to those used for more modern applications (7m14s).
Democratizing AI within Thomson Reuters
- The goal of democratizing AI within Thomson Reuters is to make sure everyone in the company understands the importance of AI and has a goal to leverage it, not just in products but across the organization (7m49s).
- The democratization of AI is not just about having an AI feature in products, but about making AI a part of the company's overall strategy and goals (8m0s).
- To promote and take AI products to market, the sales team needs to understand what AI is, and using it themselves is a good way to gain that understanding; this led to the development of a tool to make AI more accessible to non-technical teams (8m5s).
- The tool, initially built by the TR Labs community, was taken enterprise-wide; it started with 400 users who were primarily on the technical side and could query GPT-3.5 (8m41s).
- To make the tool enterprise-wide, efforts were made to put in place data separation, storage, and reusable components, making it easy to integrate new models and extend capabilities (9m3s).
- The tool now allows users to create an application with just two to three clicks (see the sketch after this list), and technology work was done to enable the organization to use it effectively (9m23s).
- In addition to the technology work, training was provided to the organization, with over 50 training sessions and workshops delivered to date, at a pace of at least one to two per week, on using large language models and typical use cases (9m56s).
- A community called "Idea Space" was created where users can share ideas and use cases, such as using the tool to create a C++ Guru or to write better QA tests (10m14s).
- The tool, called OpenArena, is also used by data scientists to connect to different databases and compare responses across various models (10m44s).
- Certifications, documentation, and workshops, including a "Pizza Workshop", were created to support users and promote the tool (10m59s).
- The goal was to shift the culture and provide support throughout the journey, and the community's success stories are used to promote the tool (11m13s).
- Today, the tool has 12,500 users, up from 400 roughly a year ago, and over 50% of employees access the tool monthly (11m29s).
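To make the "application in two to three clicks" idea concrete, the sketch below shows one way the pieces described above (a pluggable model registry, per-workspace data separation, and a user-defined system prompt) could fit together. It is a minimal, hypothetical Python illustration: the class names, the stub backend, and the overall shape are assumptions made for this write-up, not Thomson Reuters' actual OpenArena implementation.

```python
# Minimal sketch of the "app in a few clicks" idea described above.
# All names (AppDefinition, AppPlatform, etc.) are hypothetical and are
# not taken from the real OpenArena implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict

# A model backend is just a callable mapping a prompt to a completion.
ModelBackend = Callable[[str], str]

@dataclass
class AppDefinition:
    """What a user effectively chooses with 'two to three clicks'."""
    name: str
    system_prompt: str   # e.g. "You are a C++ guru."
    model: str           # which registered backend to use
    workspace: str       # keeps each team's data separated

@dataclass
class AppPlatform:
    """Reusable components: a model registry plus per-workspace storage."""
    backends: Dict[str, ModelBackend] = field(default_factory=dict)
    storage: Dict[str, list] = field(default_factory=dict)

    def register_model(self, name: str, backend: ModelBackend) -> None:
        # New models can be plugged in without touching existing apps.
        self.backends[name] = backend

    def run(self, app: AppDefinition, user_input: str) -> str:
        prompt = f"{app.system_prompt}\n\nUser: {user_input}"
        answer = self.backends[app.model](prompt)
        # Store the interaction in the app's own workspace only.
        self.storage.setdefault(app.workspace, []).append((user_input, answer))
        return answer

if __name__ == "__main__":
    platform = AppPlatform()
    # A stub backend stands in for a real LLM call.
    platform.register_model("gpt-3.5", lambda p: f"[stub completion for: {p[:40]}...]")
    cpp_guru = AppDefinition(
        name="C++ Guru",
        system_prompt="You are an expert C++ reviewer.",
        model="gpt-3.5",
        workspace="engineering",
    )
    print(platform.run(cpp_guru, "Why does this template fail to compile?"))
```

The design point suggested here is that the model registry and the per-workspace storage are the reusable components: a new model is registered once, and every existing app definition can switch to it without change.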
Impact of LLMs on Development and Testing
- Teams and individuals are using large language models in the development space, with benefits including faster code writing and easier testing; tools like GitHub Copilot have increased testing coverage and code reliability (11m55s).
- GitHub Copilot has made testing easier and faster, allowing developers to focus on other tasks, and has also enabled teams to resolve bugs more efficiently by providing suggestions for fixes (12m10s).
- Developers are also leveraging other tools like OpenArena, an application giving access to a wide range of different language models, to perform various activities, such as replacing queries they would historically have sent to Stack Overflow (13m27s).
- Some models are considered better than others, and developers can compare them and choose the best one for a given task, which has driven higher usage of AI development tooling (13m47s).
- OpenArena is being used as a starting point for development: users create a simple system prompt, select a large language model to get a response, and then iterate and evaluate using other LLMs or a human in the loop (see the sketch after this list) (14m3s).
- QA work is also being done in this space, including using LLMs to write tests, for example by taking screenshots of applications and asking the LLM to write the testing steps (14m31s).
- The technology supports a wide variety of use cases, allowing users to be more efficient in their workflow regardless of their role, whether it is QA, development, writing, debugging, or testing (14m56s).
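The compare-and-evaluate loop described above can be pictured roughly as follows: the same prompt goes to several models, and another LLM (or a human reviewer) judges the answers. This is an illustrative Python sketch under assumed names; it is not OpenArena's real API, and the stub lambdas stand in for actual model calls.

```python
# Illustrative sketch of the compare-and-evaluate loop; names and stub
# lambdas are assumptions for this write-up, not OpenArena's real API.
from typing import Callable, Dict

ModelFn = Callable[[str], str]  # a model is just prompt -> completion here

def compare_models(system_prompt: str, task: str,
                   models: Dict[str, ModelFn]) -> Dict[str, str]:
    """Send the same prompt to several models and collect their answers."""
    prompt = f"{system_prompt}\n\n{task}"
    return {name: fn(prompt) for name, fn in models.items()}

def judge(answers: Dict[str, str], judge_model: ModelFn) -> str:
    """Ask another LLM (or hand this text to a human reviewer) to pick the best answer."""
    listing = "\n\n".join(f"### {name}\n{text}" for name, text in answers.items())
    return judge_model(
        "You are reviewing candidate answers. Name the best one and explain why.\n\n"
        + listing
    )

if __name__ == "__main__":
    # Stub backends stand in for real LLM calls.
    models = {
        "model-a": lambda p: "Step 1: open the login page...",
        "model-b": lambda p: "1. Launch the app. 2. Enter credentials. 3. Submit.",
    }
    answers = compare_models(
        system_prompt="You write QA test steps from a screenshot description.",
        task="The screenshot shows a login form with username, password and a Submit button.",
        models=models,
    )
    print(judge(answers, judge_model=lambda p: "[stub verdict: model-b is clearer]"))
```

In practice the judge step can just as well be a human in the loop, and the same pattern covers the screenshot-to-test-steps QA use case by describing the screenshot in the task text (or by passing the image itself to a multimodal model).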
The Role of Humans in the Age of LLMs
- The use of LLMs is not seen as a replacement for human workers, but rather as a tool that can augment human capabilities, with humans always involved in some capacity, such as evaluating LLMs or writing code (15m29s).
- Humans are involved in the development of Large Language Models (LLMs), and as technology evolves, tasks become simpler, but people find more complicated things to do, which is a natural progression of work (15m46s).
- The definition of "complicated" changes over time, and tasks that were once considered complicated become simpler with advancements in technology (16m8s).
- As technology improves, things become simpler, but people find more complicated tasks to undertake, and this trend is expected to continue (16m42s).
- At Thomson Reuters, 12,500 people are using LLM tools, which are embedded in their day-to-day workflows, making tasks easier and increasing efficiency (16m58s).
- The LLM tools are used for various tasks, including writing job specifications, creating presentations, and summarizing content, which saves time and provides a starting point for strategy and other types of work (17m18s).
- The tools are also used for customer support, providing assistance and improving the experience for both the support team and the end-user, making it a win-win situation for everyone (18m35s).
- There are over 3,000 distinct use cases for the LLM tools, demonstrating their versatility and value to the organization (19m22s).
Connecting with Maria and Learning More
- Maria can be found at Thomson Reuters in Switzerland, and she can also be contacted through LinkedIn for questions about her work or the company's approach to AI and development (19m38s).
- Maria and her team are working on creating a blog that discusses their technical approach to large language models, multicloud choices, and other relevant topics (19m58s).
- The blog aims to address common questions and concerns that people may have when starting their AI journey, such as choosing between one provider or multiple providers, dealing with security, and managing costs (20m13s).
- The team plans to publish articles on these topics, and a link to the article will be made available (20m31s).
- Maria and her team are open to continuing the conversation and sharing their knowledge and experiences with others (19m34s).