Pull requests have become a central part of many teams' workflows. We love how they let us group changes into a single transactional chunk so they can be communicated, discussed, and improved. But they also create overhead: it takes time to put them together, to review them, and sometimes it's hard to even merge them.
A good pull request description can help reviewers turn your changes around quickly and can help the rest of your team keep up with progress. We're thinking about how Copilot can make it easier to write a great description. There seem to be different kinds of information that developers include in pull request descriptions and the rules vary between teams. In order to meet developers where they are, we've built a feature that allows developers to insert marker tags in their pull request description. When the description is saved, Copilot (powered by OpenAI's new GPT-4 model) will expand the marker into a description of the changes in the pull request. Developers can then review or modify the suggested description.
- copilot:all showcases all the different kinds of content in one go.
- copilot:summary expands to a one-paragraph summary of the changes in the pull request.
- copilot:walkthrough expands to a detailed list of changes, including links to the relevant pieces of code.
- copilot:poem expands to a poem about the changes in the pull request.
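For example, a developer might draft a description like the sketch below, using the marker tags listed above (the section headings are illustrative; Copilot replaces each marker with generated content when the description is saved):

```markdown
## What's changed

copilot:summary

## Walkthrough

copilot:walkthrough
```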
Fixing a bug is great. Completing a new feature is great. But if you don't add tests to firm up these changes, the bug might soon be back and the feature gone again.
And yet writing tests often feels like a chore, and many PRs don't test the progress they bring to a codebase. "Ok, but please add tests" is a common refrain.
We want to help you with those tests, so new PRs don't increase your project's test debt. Our prototype uses AI to identify changes in your PR that may be lacking tests, and suggests tests for you to build on or use directly.
Copilot users have come to expect "ghost text" (the subdued, inline suggestions that appear as you type in the editor) everywhere they work. We're working on bringing this UX to the pull request experience, so that developers can get suggestions for their pull request descriptions as they type.
Someone just filed an issue on your repository: "Replace TensorFlow with PyTorch". Seems like a big job, and maybe you're not sure where to start? We think that AI can help.
We are prototyping functionality that automatically describes how to solve an issue and even suggests the changes you need to make. In this video we file the issue about moving to PyTorch and use our AI to explain how it might be done, to generate code suggestions, and to raise a new pull request.
Considerable amounts of developer time are spent on code review and on preparing a PR for code review. We think that AI can help. In this video we use our AI to describe the changes in the pull request and to review the code. See how it makes actionable suggestions for improvement that you can just click to accept. Maybe this won't replace human reviewers, but we think it can cut down the time spent in the review cycle.
Some PRs flow almost automatically from one or two lines of edits. Add a comment to one function? AI can "complete the job" and add comments to all your functions, throughout all your code. Change your package.json from Express to Koa? Today's AI models can show you how to adjust your code to match.
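The indicative edit in that last case might be as small as one dependency swap in package.json (version numbers here are illustrative):

```diff
 {
   "dependencies": {
-    "express": "^4.18.0"
+    "koa": "^2.14.0",
+    "@koa/router": "^12.0.0"
   }
 }
```

The real work, which the AI would then carry through the rest of the codebase, is rewriting each Express-style `(req, res)` handler into Koa's `ctx`-based middleware form.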
We’re working to empower our AI bot with the capability to complete pull requests given a few indicative edits. We’ve mocked this up in VS Code and are looking to integrate this into our bot and other delivery channels.
We know the PR process can damage your flow. There is inevitably a delay while you wait for a review, or you wait for Actions to run. And many PRs go back and forth a few times. Each time you switch in and then switch out again is disruptive. We want to know if we can use artificial intelligence to fill in the gaps and reduce the number of switches.
How many times have you submitted a change and forgotten to update the unit tests? Or the documentation? Or introduced linter errors? Perhaps we can fix that for you… watch this space!
Is someone nit-picking your changes? What if we could auto-generate changes in response to their requests to add documentation or even another test?