
Stepping Into the Future: Pair Programming with AI

If we do this right, AI can make our jobs and our lives easier and give us time back to do the things we have reason to value. Pair programming with AI serves as a practical example.

With the realization of AI’s power comes well-justified concerns about how AIs will figure into our lives – and in particular our work. Look to any media outlet and you’ll find a dense fog of articles, videos, podcasts, and think pieces about whether, when, and how AIs will take people’s jobs, and whose jobs are most at risk right now.

In this darkness, let me put up a bright beacon on the horizon of possibility and give you a glimpse of what a future of human-AI collaboration can look like.

Explain it to me

You bump up against a problem at work: an Excel formula you’ve forgotten, an inscrutable data processing script written by people no longer on the team, the right way to invoke a particular JavaScript function while being mindful of state. These situations are common, and they consume significant time and cognitive resources. They are also what I call “robot work”: the kind of repetitive, rote work you can imagine a robot doing.

Now imagine having a skilled co-worker on call, at all times, ready to help you find and explain the right formula, document that inscrutable script, and refactor or even build from scratch that JavaScript function you need.

That’s what AI can be for us: Just-In-Time assistants for all the tedious, time-consuming, and rote robot work taking up our valuable time and cognitive capacity.

If you’re a developer, you can experience this future today via various AI integrations including GitHub Copilot and ChatGPT.

GitHub Copilot Labs panel in VS Code.

GitHub Copilot coupled with the new GitHub Copilot Labs extension in VS Code gives you a pair programming assistant right in your development environment. Highlight any block of code, and in the Copilot Labs panel you can ask for an explanation of the code, have it translated into another (applicable) programming language, or apply a series of “brushes” to it, including making the code more readable, adding types, cleaning, chunking, and even documenting it. You can even use Copilot to write and run tests on your code.

A myriad of ChatGPT extensions, including Ali Gençay’s ChatGPT for VS Code, do much the same via a slightly different route. Authenticate the extension with OpenAI’s ChatGPT API, highlight any code, and you can ask ChatGPT to add tests, find bugs, optimize, explain, and add comments automatically. You also get the ability to start a full chat with ChatGPT in a dedicated panel right inside the editor, where you can talk to the AI in more detail about whatever you want.

Features from the ChatGPT VS Code extension.

Time and Energy

This past week I’ve been working on a large project involving code written by someone else. The project’s JavaScript is crammed into two giant files and is an unruly mix of carbon copies of standard components and functions, modified code copied from documentation sites, and custom code. As is often the case, the documentation is lacking where it’s most needed.

For my project, I need to refactor (rewrite code so it does the same thing in a different way) large parts of the codebase and add extensive documentation along the way. That means figuring out how the code works, adding comments to clearly document and explain it, reorganizing the code so it can be split into components, cleaning it up and updating it to modern standards, and finally splitting it into discrete modules. This is robot work: rote, repetitive tasks consuming significant time and cognitive energy better spent on what the code is for: serving as an example for a course on JavaScript build processes.
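To give a sense of what that modernization looks like, here is a minimal, hypothetical sketch (not code from the actual project) of a legacy helper brought up to ES6 standards:

```javascript
// Hypothetical legacy helper of the kind found in the project:
// var, a function expression, and manual string concatenation.
var oldGreet = function (name) {
  var message = "Hello, " + name + "!";
  return message;
};

// The same helper refactored to ES6: const, an arrow function, and a
// template literal. In the real project, each refactored helper would
// then be exported from its own module.
const greet = (name) => `Hello, ${name}!`;

console.log(oldGreet("Ada")); // "Hello, Ada!"
console.log(greet("Ada"));    // "Hello, Ada!"
```

Multiply this by hundreds of functions and you see why having an assistant do the first pass saves so much time.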

In the past, I’d end up investing several days doing this work. Now, with the help of AI, the workload is significantly reduced. This, I think, is an instructive example of how our future with AI can unfold: with AI assisting us as we do our work rather than taking over that work.

Refactor this

Both GitHub Copilot and ChatGPT are conversational AIs. You chat with them as you would with a person. You can ask questions, give instructions, and ask them to perform tasks for you. Using AIs as pair programmers, you do a combination of all of this and more.

If you’re using the VS Code extensions mentioned above, they are already set up for the correct context. In the case of ChatGPT, you can also use it as a stand-alone pair programmer with some basic setup:

To start, set the stage (literally) by instructing the AI on how you want it to behave. In a new chat, provide a prompt similar to this:

“You are a pair programmer specializing in JavaScript. You help refactor code to bring it up to ES6 standards. Is that understood?”

Setting the stage for ChatGPT to provide useful feedback.

The format here is “you are a [some role]. You [perform some skill/action]. Is that understood?” The last question gives the AI an opportunity to state how it is setting itself up based on your instructions and gives you an opportunity to provide further instructions. In my case ChatGPT responded as follows:

“Yes, that is understood. As a language model, I am able to understand and provide assistance with various programming languages, including JavaScript and its latest version, ES6. I can help refactor code to bring it up to current standards, as well as provide guidance on best practices for writing maintainable, efficient code.”

From here you can, for example, paste in a JavaScript function and ask the AI to help find an error and resolve it:

ChatGPT finds and explains issues in JavaScript.

In response, ChatGPT provides an explanation of the errors it discovered, prototype examples of solutions to the issues, and finally a full refactoring of the pasted code with the issues resolved.

This kind of contextual response not only helps solve immediate problems, but also teaches you what’s wrong and how to fix it.
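To make this concrete, here is a hypothetical example (not the code from the screenshot above) of the kind of bug such a session surfaces, an assignment hiding inside a condition, along with the kind of fix a pair programmer would propose:

```javascript
// Hypothetical buggy function of the kind you might paste in.
// The single = in the condition assigns instead of comparing,
// so the check is always truthy.
function isAdminBroken(user) {
  if ((user.role = "admin")) { // bug: should be === 
    return true;
  }
  return false;
}

// The repaired and simplified version: strict comparison, and the
// boolean expression returned directly.
function isAdmin(user) {
  return user.role === "admin";
}

console.log(isAdminBroken({ role: "viewer" })); // true (wrong!)
console.log(isAdmin({ role: "viewer" }));       // false
console.log(isAdmin({ role: "admin" }));        // true
```

The explanation that comes with the fix is the real value: you learn to spot the pattern yourself next time.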

This is invaluable for people learning to code and people working with code in any capacity, which is why I’d strongly discourage any teacher or manager who is right now trying to figure out how to block people from using AIs in their work. AIs reduce the need for Googling or looking up code examples on documentation sites, coding forums, and open source repositories. Instead, they give you contextual explanations and references related to your specific code, and even help you with refactoring. This is the future of work, and it gives us more capabilities as workers.

  • Have some code you can’t make heads or tails of? AI can explain what it does. Computers are much better at parsing logic-based languages than humans, and conversational AIs like ChatGPT are specifically constructed to output human-sounding language, making them ideal tools for decoding complex code for human consumption.
  • Have some code in need of documentation? AI can write a function description, inline comments, or whatever you prefer based on your instructions.
  • Need to refactor based on specific parameters? AI can get you started.
  • I could go on but I think you get the idea.

I’ve worked alongside these AI pair programmers for the past year and a bit, and I can say with absolute conviction that these tools and materials will make our lives better if we use them right and integrate them into our lives as helpers for, rather than replacements of, human labor.

In my experience, pair programming with an AI feels like working with an overly polite person with encyclopedic knowledge of coding and no awareness of what they don’t know. And this constitutes just our first timid steps into the infinite possibility space we are entering as AIs become our assistants.

The beginning of the beginning

As you interact with AI today, be constantly aware of where you are: at the beginning of the beginning of a new era. While these tools are powerful, they are not omnipotent. Far from it. They are shallow, error-prone, and while they sound convincing they cannot be trusted. A good mental model for what they produce right now is bullshit as defined by Harry G. Frankfurt: it looks true, and it may be true, but some of the time it’s just plain wrong and the AI will still present it as the truth. While they talk like humans, AIs are not conscious, sentient, or aware. They have no human comprehension of your question or their answer. They are advanced pattern-recognition systems that, whenever a prompt is provided, tumble down enormously complex decision trees to issue human-sounding strings of text (or code) with a statistically high probability of being the kind of answer their human trainers consider correct.

When I asked ChatGPT to correct a function containing a deprecated method, it corrected the syntax of the function but kept the deprecated method. When I told it the method was deprecated, it omitted it and refactored the code, but the result used a similar-sounding method that serves a very different purpose and was therefore non-functional and just plain wrong.

When I asked ChatGPT to find an error in a large body of code, it found two real errors and invented a third one, going as far as referencing use of a method that wasn’t even in the code provided.

These examples highlight why I see AIs as tools and materials rather than replacements for human labor. They have no understanding, no contextual awareness, no ability to do creative or lateral thinking. A human still needs to be in the loop to make sure the output meets parameters, does what was intended, and follows best practices and standards (not to mention to hold ethical responsibility for the work created). These things we call “AI” are very much artificial, but they are not intelligent.

Intelligence is added by the human operator.

Even so, the pair programming offered by these prototype AIs is an enormous leap forward for human workers. And you can easily see how this type of AI-driven assistance can be extended to other work and other tasks. 

I’ve come to think of them as overconfident colleagues with a lack of self-awareness. Because of how they are “trained” – being fed large volumes of data from a corpus lifted off the internet – their “knowledge” is limited to the coding world of two years ago. When it comes to modern features, frameworks, techniques, and standards released in the past two years, our current AIs know naught, and more importantly do not know what they do not know. So if you’re writing code on the bleeding edge of standards, you’re on your own. Or better yet: you’re training your future AI pair programmer! So the pressure is on to get it right!

The future is today

Having seen what AIs can do today, I desperately wish I had a looking glass to see what the future of work looks like. The potential here is infinite. The very best AI tools we have today are prototypes and MVPs trained on old data and limited in their scope. The AIs we’ll have a year from now, five years from now, ten years from now will be beyond anything we can imagine. And with these tools and materials in hand we can choose to build meaningful futures for everyone where we all have the capabilities to be and do what we have reason to value.

The future we are stepping into today is a future where AI is part of our lives, our work, our communities, and our society. If you are alive today, and especially if you find yourself in a job, you are in the right place at the right time: these next few years are when we collectively get to figure out how AI fits into our lives and our work. This is when we set the stage for our futures with AI, and we all have a part to play. The work starts by asking yourself in what parts of your life you act like a robot, and whether you’re willing to part with that work and let an AI do it for you so you can do something else.

If we do this right, AI will allow us to reclaim our time to be human.

Cross-posted to LinkedIn.

By Morten Rand-Hendriksen

Morten Rand-Hendriksen is a Senior Staff Instructor at LinkedIn Learning (formerly lynda.com), specializing in AI, bleeding-edge web technologies, and the intersection between technology and humanity. He also occasionally teaches at Emily Carr University of Art and Design. He is a popular conference and workshop speaker on all things tech ethics, AI, web technologies, and open source.