#HappyCoding

19 Nov 2024

Introduction (0s)

  • Amanda Silver is excited to be at GitHub Universe; she has worked on behalf of developers at Microsoft for two decades, focusing on the tools, programming languages, and platforms that developers use and love (14s).
  • Her mission is to unleash the creativity of developers so that they can bring their ingenuity to life, with a focus on accelerating each code, build, and debug cycle (30s).
  • Visual Studio Code and the Visual Studio family have become the most widely adopted and beloved tools in the industry, with Visual Studio Code having the second most contributors of any project on GitHub (48s).
  • Visual Studio Code is the most used and most admired open-source developer tool, with over 6.5 million developers using it monthly (59s).
  • TypeScript, an open-source project Silver worked on at Microsoft, has become the third most popular language on GitHub (1m8s).
  • Contributing to open source has become ubiquitous throughout Microsoft, which is involved in 25% of the top 20 open-source projects and has over 68,000 employees on GitHub, many of whom regularly contribute to open-source repositories (1m17s).
  • Microsoft was the number two contributor to Kubernetes and the CNCF ecosystem last year, with many employees contributing to open-source projects (1m29s).

AI revolution in Development (1m37s)

  • AI-assisted development is transforming the way developers work, and GitHub Copilot is an example of this technology (1m38s).
  • AI is changing everything in the development process, from roles and processes to team collaboration, and is disrupting industries (1m46s).
  • This technological shift is the most profound of the current generation, and broad adoption is still in its early stages (1m58s).
  • AI is becoming an essential tool for every developer, helping them onboard faster, learn more quickly, and accomplish repetitive tasks with little effort (2m8s).
  • With AI assistance, developers can focus on the creative side of coding, and every application will be reimagined with AI as a building block (2m26s).
  • Next-generation developers will have AI as their co-pilot, guiding them at every step from idea to code to the cloud (2m35s).
  • The partnership between human ingenuity and artificial intelligence will lead to faster development cycles, higher quality software, and more satisfied end users (2m51s).

Demo of GitHub Copilot Features, Testing Automation & Debugging (3m5s)

  • Burke Holland demonstrates an application that uses GitHub Copilot to help build and create content, including video descriptions with timestamps, by downloading timestamps and captions from YouTube (3m12s).
  • The application is a real project that Holland has been working on, and he showcases how GitHub Copilot can assist with the entire coding process, not just writing code (4m29s).
  • The application initially has zero tests, but Holland uses GitHub Copilot to set up tests; Copilot recommends a testing framework and guides him through the process step by step (4m34s).
  • GitHub Copilot recognizes the project's Vue.js front end and Azure Functions back end and recommends Vitest as a testing framework (5m7s).
  • Holland applies the recommended changes, and GitHub Copilot makes the necessary configurations, including installing Vitest and running it to create a test (5m22s).
  • After setting up the test, GitHub Copilot recommends installing the Vitest extension, which integrates testing with Visual Studio Code (6m2s).
  • Holland installs the Vitest extension, which enables testing within the editor, making it easier to set up and use testing without requiring prior knowledge (6m12s).
  • Copilot can help generate tests for a project, and when a test file is missing, it can create one and add it to the correct folder, in this case the services folder, with the test file named "auth.service.test" (6m50s).
  • The Test Explorer in VS Code can show the status of tests and indicate whether they are passing or failing, and having a failing test is better than having no test at all (7m30s).
  • Copilot can also help set up debugging for a project, and it can create a launch configuration for the back end or front end, in this case a Vue.js front end running on port 4280 (8m42s).
  • The generated launch configuration can be saved to the project's launch configurations, and a breakpoint can be set to start debugging (9m7s).
  • The debugger can be started, and a browser instance will open, allowing for live debugging and testing of the application (9m35s).
  • The demo is live, and everything shown can be tried at home, with the AI features demonstrated live in front of the audience (9m51s).
  • Visual Studio Code lets developers focus on writing code without worrying about writing launch configurations by hand, and it also provides debugging features like stepping through and stepping over code (10m18s).
  • The application being demonstrated lacks documentation, which is a crucial aspect before putting it into production (10m35s).
  • GitHub Copilot's new editing experience in Visual Studio Code can be used to document code, and it allows users to select from various models to achieve the best results (11m2s).
  • The documentation process involves dragging and dropping files, selecting a model, and reviewing the generated documentation, which can be accepted or discarded as needed (11m12s).
  • The o1-mini model is chosen for documentation as it provides good results, but users can experiment with different models to find the one that works best for them (11m27s).
  • The new editing experience also allows users to work with single files and add new features, such as using the download service to add the ability to download comments from the YouTube API (12m46s).
  • VS Code Speech is a free, cross-platform feature that enables users to interact with the code editor using voice, and it can be used to add new features and functionality to the code (13m30s).
  • A review of code is being conducted, with the reviewer pointing out that the API key is missing and needs to be added securely from the process environment variable (13m55s).
  • The code is iterated on together with the model, which can iterate as well, allowing for a collaborative back-and-forth (14m12s).
  • After accepting the changes and saving, the application is tested again to see if it works (14m25s).
  • The application is checked for comments, and it is confirmed that comments are available and can be analyzed for insights (14m57s).
  • A feature to analyze comments for insights is added, and it is noted that the model can handle misspellings, such as the word "analyze" being misspelled (15m3s).
  • The entire process, including testing, debugging, documentation, and adding a feature, is completed within a short span of time, approximately 8-10 minutes (15m26s).
  • The presenter expresses gratitude and appreciation for the opportunity to demonstrate the coding process (15m29s).
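The demo scaffolds Vitest tests for a Vue.js/Azure Functions project; as a language-neutral sketch of the same idea, here is a minimal unit test (Python's standard `unittest`, with a hypothetical timestamp-parsing helper of the kind a captions app might contain; neither the helper nor the tests are from the demo):

```python
import re
import unittest

def parse_timestamp(ts: str) -> int:
    """Convert an 'XhYmZs' style video timestamp (e.g. '1m37s') to seconds."""
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", ts)
    if not match or not any(match.groups()):
        raise ValueError(f"unrecognized timestamp: {ts!r}")
    h, m, s = (int(g) if g else 0 for g in match.groups())
    return h * 3600 + m * 60 + s

class ParseTimestampTest(unittest.TestCase):
    def test_minutes_and_seconds(self):
        self.assertEqual(parse_timestamp("1m37s"), 97)

    def test_hours(self):
        self.assertEqual(parse_timestamp("1h2m3s"), 3723)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_timestamp("not-a-time")

# Run with: python -m unittest this_file.py
```

Even a small suite like this gives the Test Explorer something to show, which is the point made above: a failing test beats no test.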

Deploying with GitHub Copilot for Azure (15m37s)

  • GitHub Copilot allows developers to focus on the creative aspects of coding, such as problem-solving, by automating repetitive tasks like documentation, testing, and configuration, enabling them to work more efficiently and fluidly (15m38s).
  • GitHub Copilot provides deep integration with code editors, giving developers access to actions and information about their code, project context, and other relevant code, making it easier to work on complex applications (15m57s).
  • Modern cloud-native applications often have dependencies on managed resources and continuous integration and continuous deployment pipelines, making local debugging insufficient; GitHub Copilot aims to make it easier to build, deploy, and manage these applications in the cloud (16m22s).
  • Shayne Boyer demonstrates GitHub Copilot for Azure, showcasing its capabilities in building and deploying cloud-native applications, including multifile editing, comments, and more (16m57s).
  • The demonstration involves deploying a complex application with Azure OpenAI infrastructure, static web apps, and functions from VS Code, using GitHub Copilot for Azure (17m26s).
  • The application uses GPT-4o as its model, and the demonstration checks for region availability and access to the model in the Azure subscription (17m47s).
  • The demonstration also involves searching for templates and code samples to help with the application, using a prompt to create a chat app with GPT-4o (18m49s).
  • GitHub Copilot for Azure provides a template for chat plus vision using Azure OpenAI, which is close to the required application, and offers a quick start using the Azure Developer CLI (19m14s).
  • A GitHub repository is initialized with the name "GitHub testing app", and a validated sample from the AI app gallery is used: a full app architecture that follows Azure best practices and uses managed identity, Azure Container Apps, and the Azure Developer CLI (19m35s).
  • The sample code is reviewed; it includes documentation and infrastructure code using Bicep and Terraform, and is deployable to Azure (20m18s).
  • The application is deployed to Azure using the Azure Developer CLI, which prompts for subscription and location information, and the deployment process takes around 4 minutes to complete (20m39s).
  • The deployed application is a full cloud-native application architecture, including roles, Azure Container Apps instances, an Azure OpenAI instance, and Log Analytics (21m44s).
  • The application is demonstrated to be running, with a vision app that can answer questions about images, such as counting the number of plants in an office background image (22m6s).
  • The sample is attributed to Pamela Fox, and is one of the templates available using the API (22m27s).
  • After deployment, the cost breakdown of the services is reviewed using GitHub Copilot for Azure, which looks at the currently logged-in subscription and provides a breakdown of the services and costs for a specified time period, such as August of the current year (22m59s).
  • The demonstration showed how to deploy an application in about four or five minutes and view its costs in the cloud, with cognitive search accounting for 27% of the cost (23m25s).
  • The cost management feature can be used to look at a specific app by specifying the app name, allowing for easy tracking of costs (23m45s).
  • With these tools, Azure becomes easier to learn: resources can be provisioned, deployed, and diagnosed, issues resolved quickly, and cost information obtained (23m58s).
  • GitHub Copilot for Azure is available at aka.ms/getGitHub, providing a link to get started (24m7s).
  • Azure AI samples offer many samples across Python, JavaScript, Java, and .NET, including examples that integrate common AI building blocks, making it easy to get started building an AI app (24m20s).
  • GitHub Copilot can be used to start building an AI app, with guidance at every step, and is available at aka.ms/AIapps (24m42s).
  • GitHub Copilot can be augmented and grounded with an Azure environment defined in a GitHub repository with infrastructure as code, and deployed via GitHub Actions for Azure (24m52s).
  • The Azure Resource Graph, the Azure CLI, and diagnostic and cost information can be used to give GitHub Copilot dynamic knowledge of an Azure tenant and subscription (25m6s).
  • This integration makes it easier to learn, navigate, and modify an Azure environment, all within the context of a coding editor (25m19s).
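The "27% of cost" figure above is simply each service's share of total spend for the period; a minimal sketch of that arithmetic (the service names and amounts below are invented for illustration, not the actual numbers from the demo):

```python
def cost_breakdown(costs: dict[str, float]) -> dict[str, float]:
    """Return each service's share of total spend, as a percentage."""
    total = sum(costs.values())
    return {name: round(100 * amount / total, 1) for name, amount in costs.items()}

# Illustrative monthly figures (USD), not real Azure billing data.
monthly = {
    "Cognitive Search": 54.0,
    "Container Apps": 82.0,
    "Azure OpenAI": 44.0,
    "Log Analytics": 20.0,
}
shares = cost_breakdown(monthly)  # Cognitive Search works out to 27.0% here
```

In the demo, Copilot for Azure pulls the underlying per-service figures from the logged-in subscription's cost data rather than from a hand-written dictionary.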

Developer experience for building Intelligent apps (25m33s)

  • Intelligent apps provide a level of personalization and usability that traditional apps, which are static and costly to update, can't match: they are dynamic and adaptive, with natural user interfaces such as natural-language input and voice, and ambient awareness (25m46s).
  • These apps use integrated data analysis to learn from user behavior, anticipate needs, and continuously improve, making them more responsive, efficient, and user-centric (26m8s).
  • The development of intelligent apps requires a shift in coding and development, evolving from translating requirements into code and manual testing to evaluating models, curating data sets, testing on unseen data, and continuously optimizing models and user experience (26m23s).
  • To achieve the promises of intelligent apps, developers need to adopt a strategic mindset, enhance their technical skills, and focus on creating solutions that have maximum impact (26m57s).

Social Assistant with VS Code features (27m6s)

  • Sujin Choy is welcomed to the stage to demonstrate how VS Code and GitHub Copilot can help build more intelligent applications (27m7s).
  • The goal is to create an app that not only generates content based on a blog post or YouTube video but also has knowledge of VS Code features (27m28s).
  • To achieve this, Sujin is using notebooks inside VS Code, which provide an interactive coding environment for fast iteration and testing (27m50s).
  • The first step is to mimic a simplified version of Burke's app by generating code to fetch the contents of a blog post given a URL and output it in JSON using Beautiful Soup (28m10s).
  • GitHub Copilot is used to generate some of the code, which is then accepted and saved for later use (28m46s).
  • The model is given the role of a social assistant that generates LinkedIn posts, and GPT-4o mini is chosen from the GitHub Models catalog (29m5s).
  • To use the model an API key is needed, but since the code is connected to GitHub Codespaces, the key is already injected, providing a more secure way to access the model (29m27s).
  • The model is then used to generate a LinkedIn post based on the blog post content (30m7s).
  • The process starts with a text search using BM25, a widely used ranking function for text search (30m19s).
  • To improve the search, key topics are extracted from the blog post using the model, and these topics are used as a search phrase (30m36s).
  • The search base is created using VS Code release notes that have been saved (31m7s).
  • Code is written with the assistance of Copilot, which anticipates and completes code based on comments, making the process more efficient (31m21s).
  • Data Wrangler is a tool used to view and manipulate data in a sandbox environment, allowing for safe data preparation and experimentation (31m42s).
  • The data loaded includes features implemented in the last two years, specifically those containing "2023" and "2024", which are filtered using Data Wrangler (32m20s).
  • The filtered data is then exported back into the notebook, and a cell is run to apply the filter (32m53s).
  • A text search function is created using Copilot to perform a search over the filtered data using the BM25 algorithm, a common method for text search (33m22s).
  • The function takes input variables from the notebook context and returns the top 10 documents based on the search key terms (33m40s).
  • The top 10 documents are generated quickly, demonstrating the efficiency of the process (34m7s).
  • To further refine the results, a re-ranking of the data is performed to select the top three most relevant features to link in a LinkedIn post (34m40s).
  • A model is used again to re-rank the data, with a different message, to select the most relevant features (34m53s).
  • The top three features are selected and can be linked in the LinkedIn post (35m10s).
  • A blog post is used to test the social assistant, passing the post and the retrieved features to the assistant, which is expected to return an error (35m15s).
  • The error is then fixed with the help of Copilot, which suggests the correct function to call despite not having been trained on the specific GitHub Models APIs being used (35m32s).
  • Copilot is able to deduce the correct fix by looking at the code from previous cells and suggesting the right function to call (36m1s).
  • The corrected code is then run, and the output is displayed, showing the post and the features that can be tried out (36m10s).
  • Different models from the GitHub Models page are compared and contrasted to evaluate their performance in generating social content (36m28s).
  • The models are evaluated in the context of the application being built to determine which one is best suited for the task (36m45s).
  • The optimization process for the chosen model is started, and its behavior in the app is fine-tuned (37m1s).
  • The entire process is done within the context of Visual Studio Code (37m5s).
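The retrieval flow described above (filter the release notes to the last two years, rank them against the query with BM25, keep the top results for re-ranking) can be sketched in pure Python; the release-note strings below are hypothetical stand-ins, not the actual data from the demo:

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def bm25_search(query: str, docs: list[str], k: int = 10,
                k1: float = 1.5, b: float = 0.75) -> list[tuple[float, str]]:
    """Score docs against query with the BM25 ranking function; return top k."""
    doc_tokens = [tokenize(d) for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in doc_tokens) / n
    df = Counter()                       # document frequency per term
    for toks in doc_tokens:
        df.update(set(toks))
    scores = []
    for doc, toks in zip(docs, doc_tokens):
        tf = Counter(toks)
        score = 0.0
        for term in tokenize(query):
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append((score, doc))
    return sorted(scores, key=lambda s: s[0], reverse=True)[:k]

# Hypothetical stand-ins for the saved VS Code release notes.
notes = [
    "2024: Copilot chat can now edit multiple files",
    "2023: sticky scroll enabled by default in the editor",
    "2022: merge editor rewritten",
    "2024: voice dictation with VS Code Speech",
]
# Keep only features from the last two years, as done with Data Wrangler.
recent = [d for d in notes if "2023" in d or "2024" in d]
top = bm25_search("copilot edit files", recent, k=3)
```

In the demo the query terms come from model-extracted key topics, and the top BM25 hits are handed back to the model for a second-pass re-ranking down to three features.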

New feature for Azure (37m14s)

  • In August, developers gained the ability to quickly discover, learn about, and experiment with advanced AI models; GitHub developers can access Azure AI models through the GitHub Marketplace for their gen AI apps and APIs, with the option to experiment with them for free (37m15s).
  • Developers can now seamlessly set up and log into their Azure account to deploy their AI apps at scale with Enterprise-level security and monitoring (37m29s).
  • A new capability has been introduced to the Azure AI platform: native model evaluation and experimentation that can be used directly in GitHub and in the Visual Studio Code editor (37m39s).
  • The evaluation can be manual or integrated into CI workflows, which can be triggered automatically after every code change, allowing for pre-production evaluation against test data and generating evaluation results for metrics such as fluency and coherence (37m45s).
  • The results are posted back directly into GitHub after each run, and experimentation and GitHub actions can be integrated into CD workflows to trigger automatically after a deployment completes (37m57s).
  • Out-of-the-box AI models with all the metrics are provided to make it easy to get started, and this feature can be used in the context of GitHub Copilot for Azure to help learn how to use it (38m20s).
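As a rough sketch of how such a pre-production evaluation step could sit in a CI workflow triggered on every code change (the tool name `my-eval-tool` and all step details below are placeholders, not the actual Azure AI evaluation action shown in the keynote):

```yaml
# Sketch only: command names are hypothetical placeholders.
name: evaluate-model
on:
  pull_request:          # run pre-production evaluation after every code change
jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Evaluate against test data
        run: my-eval-tool run --data test_data.jsonl --metrics fluency,coherence
      - name: Post results back to the pull request
        run: my-eval-tool report --format github-comment
```

The CD side works the same way, with the workflow triggered after a deployment completes instead of on a pull request.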

Demo of chatbot app (38m34s)

  • A demo chatbot app is showcased to measure user satisfaction, featuring a new thumbs up and thumbs down feature added to the app (38m35s).
  • GitHub Copilot is used to define a metric for measuring user satisfaction in an A/B experiment, which is then added to the config file (38m44s).
  • Modifying a prompt is as easy as changing a text file, and an A/B experiment is conducted on the "adventurous" prompt (38m56s).
  • The code changes trigger the CI process, and the pre-production environment tests the evaluation of the model (39m5s).
  • The model is deployed and runs in the CD environment, allowing for the evaluation of results in production (39m17s).
  • The experiment's results show it was not successful, and the particular version will not be shipped (39m27s).
  • The capability allows for easy scaling of AI applications and models, and getting user feedback once deployed into production (39m38s).
  • Microsoft Ignite is mentioned as an upcoming event where access to this capability will be provided (39m48s).
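The keynote doesn't show how the satisfaction metric is defined; a minimal sketch of a thumbs-up/thumbs-down metric with a ship/no-ship rule for the A/B experiment (the metric definition, lift threshold, and counts below are all invented for illustration):

```python
def satisfaction_rate(thumbs_up: int, thumbs_down: int) -> float:
    """Fraction of rated responses that received a thumbs up."""
    total = thumbs_up + thumbs_down
    return thumbs_up / total if total else 0.0

def should_ship(control: tuple[int, int], variant: tuple[int, int],
                min_lift: float = 0.02) -> bool:
    """Ship the variant only if it beats control by at least min_lift."""
    return satisfaction_rate(*variant) >= satisfaction_rate(*control) + min_lift

# Invented counts: the "adventurous" prompt variant underperforms the control,
# so, as in the demo, this version would not be shipped.
ship = should_ship(control=(480, 120), variant=(300, 140))
```

A production setup would also apply a significance test to the two rates rather than a raw lift threshold, but the shape of the decision is the same.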

AI application platform (39m57s)

  • The AI application platform allows users to build intelligent apps using familiar tools like Visual Studio and GitHub, on the same infrastructure that OpenAI and GitHub Copilot are built on (39m58s).
  • This platform utilizes foundational models from leading AI innovators across the industry, as well as the user's own data, with orchestration and tooling to integrate everything (40m15s).
  • The platform also provides an industry-leading approach to responsible AI, AI safety, and security, allowing users to wrap their applications with these capabilities (40m29s).
  • The AI application platform is part of a broader ecosystem that enables developers to bring joy back to coding with GitHub Copilot, become cloud-native developers with GitHub Copilot for Azure, and transition to intelligent app development with the Azure AI platform and Visual Studio Code tools for AI (40m48s).
