The advent of AI coding assistants has been somewhat surreal. Suddenly, after years of raiding Stack Overflow, dog-earing textbooks, and confessing my deepest, darkest secrets to a rubber ducky, we have tools that are smart enough to generate code for us.
This sounds great in theory, but in practice, plenty of questions remain to be answered.
(...And even if you're eager to jump in, how does one even know which coding assistant to choose?)
Well, we all have to start somewhere! I initially settled on learning GitHub Copilot - a leading AI coding assistant on the market (at least, as of February 2024, when my journey started).
Earlier this year, I went on a winding journey of trial and error so that I could ultimately, and definitively, answer one question: "How useful or reliable is GitHub Copilot really?"
I gave a Cisco Live talk on this topic, but in this article, I'll answer the top 10 questions I'm asked most frequently about adopting GitHub Copilot.
Let's dive in!
1. Are they training on your data?
If you tend to mistrust corporations and their handling of our data, I won't try to convince you otherwise. However, I can point you to GitHub Copilot's Responsible AI policy, which currently states: "The model that powers Copilot is trained on a broad collection of publicly accessible code, which may include copyrighted code, and Copilot's suggestions (in rare instances) may resemble the code its model was trained on." In settings, you can also disable prompt and suggestion collection.
2. Why Copilot over another coding assistant?
LLMs are constantly improving and competing with one another to be "the best" or the gold standard. In most cases, I don't advocate specifically for Copilot, but for using a coding assistant that was trained specifically for coding, which improves the accuracy of its suggestions. Beyond that, whether you should use Copilot comes down to "it depends." Are you working in GitHub? You may appreciate the GitHub-specific features. Do the alternatives you're considering work within your IDE? If not, Copilot will provide more contextually relevant responses, because it does.
3. Should new developers be using Copilot?
Yes, but only after learning foundational concepts and writing their first functional app or script. While many may shudder at the suggestion that a novice use an AI coding assistant, what I actually preach is that beginners use Copilot to learn and ask questions via Copilot Chat. Students already rely heavily on Google, and in my experience, Copilot Chat far exceeds Google in both the quality of its answers and the speed of finding a good one.
4. Do you have to pay for a subscription?
Yes. The subscription for individuals is currently $10 USD/month, but anyone can do a free trial, and it appears to be free for certain populations (e.g., students). There are also Business ($19/user/month) and Enterprise ($39/user/month) plans.
5. Is it just intelligent autocomplete?
The autocomplete is great, but it really just scratches the surface of why you would benefit from using Copilot in your workflow. Copilot can assist with documentation and test generation, with debugging, and with refactoring (e.g., improved variable names). The chat functionality is phenomenal for learning and is contextually relevant to your codebase. I also go into what I call "Comment-Driven Development" in my Cisco Live talk; a sketch of the idea follows below.
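To make that concrete, here is a minimal sketch of what comment-driven development looks like: you write a descriptive comment (and maybe a signature) stating your intent, then let the assistant propose the body. The function name and the suggested implementation below are hypothetical stand-ins for what Copilot might offer; whatever it actually generates, you still review and edit before accepting.

```python
# Comment-Driven Development: describe the intent first, then let the
# assistant propose an implementation you can review and edit.

# Prompt (written by me):
# Parse a list of "hostname,ip_address" strings and return a dict
# mapping hostname -> ip_address, skipping malformed lines.

def parse_inventory(lines: list[str]) -> dict[str, str]:
    # A suggestion along these lines is what an assistant typically offers;
    # treat it as a draft, not a final answer.
    inventory: dict[str, str] = {}
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) != 2:
            continue  # skip malformed entries
        hostname, ip_address = parts[0].strip(), parts[1].strip()
        if hostname and ip_address:
            inventory[hostname] = ip_address
    return inventory
```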
6. Can I actually trust the answers it provides?
It depends on what you're trying to do, as well as what you mean by "trust." If by trust you mean blindly generating and accepting code suggestions, the answer, at least for now, is no. To boil down a long talk - the smaller the task you're asking Copilot to complete, the more likely it is to generate something helpful. But at the end of the day, LLMs use advanced machine learning algorithms to make predictions; no matter how effective those algorithms are, there is always a chance that the predicted - and provided - response is wrong.
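One practical way to build that trust is to verify small suggestions with an equally small test before accepting them. Below is a hedged illustration: the function is a hypothetical example of the kind of small, well-scoped ask that tends to come back correct, and the quick checks are how I would confirm it actually does.

```python
# Hypothetical suggestion for a small, well-scoped ask: convert a dotted-quad
# netmask to a prefix length. Small tasks like this are the ones most likely
# to come back correct -- but verify anyway.
def netmask_to_prefix(netmask: str) -> int:
    return sum(bin(int(octet)).count("1") for octet in netmask.split("."))

# A few quick checks before accepting the suggestion (run with pytest).
def test_netmask_to_prefix():
    assert netmask_to_prefix("255.255.255.0") == 24
    assert netmask_to_prefix("255.255.252.0") == 22
    assert netmask_to_prefix("0.0.0.0") == 0
```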
7. Will it create entire applications from scratch?
No (at least, not at the time of writing). I've meticulously tested various prompts and - except in very basic use cases - was unable to scaffold entire scripts or apps without significant review and modification. In my opinion, that makes full code scaffolding not worth the effort at this point in time.
8. Do you have to use it with GitHub, or can you use another repo?
Copilot works within your IDE, so as long as you're using a compatible IDE, you're good to go! (Keep in mind, however, that depending on the plan you're subscribed to, you may be missing out on some GitHub-specific features, like pull request summaries.)
9. Is using Copilot just asking to introduce security vulnerabilities?
Copilot is trained on public data, meaning you do run the risk of adopting insecure code or coding practices. That being said, this is really no different from copying and pasting code from the internet. As you will hear me say often - what matters most is that you are not blindly accepting generated code without evaluating it. (In fact, in my Cisco Live talk, I mention how Copilot can be used to _improve_ code security.)
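As an illustration of the kind of review that matters, here is a hypothetical pair of snippets: the first shows the string-built SQL query pattern that appears in plenty of public code (and that an assistant could plausibly suggest), while the second is the parameterized version you would want to accept instead. Both function names are invented for this example.

```python
import sqlite3

def get_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern common in public code (and in some suggestions): building SQL
    # with string formatting leaves the query open to SQL injection.
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def get_user_safer(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, so untrusted input
    # cannot change the structure of the statement.
    query = "SELECT * FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```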
10. Do we still need to learn to code?
Absolutely. In order for code generation and assistance to be effective, we have to keep in mind that "assistance" is the key word here. You are still the brain. Your knowledge, skill, and experience are required, both for effectively leveraging Copilot and for evaluating the output it generates. (And as far as job security and all that... well... we're nowhere near being replaced quite yet.)
BONUS - 11. Does GitHub own my code if I use Copilot?
Not according to GitHub Copilot's Responsible AI policy. "If a suggestion is capable of being owned, our terms are clear: GitHub does not claim ownership."