Jayant Bhawal for Middleware

🙅 Why I don't use AI as my copilot 🤖

Jesus, take the wheel. 🚗
And GitHub Copilot, take the IDE. 💻

GitHub says 92% of US devs are using AI coding tools like Copilot.
What. Seriously?!
When did you ever hear of 92% of any demographic using one single thing?

Unless of course... you say 100% of all people who ever existed consumed dihydrogen monoxide.
(There's exactly one way this line gets dark. Don't go there. 👀)

Join me on a quick journey as I talk about:

🔄 When the machines took over the world in 2024

After a quick Google search, it seems like most devs are using AI assistance in writing code. I'd be lying if I said I haven't used AI to write code at all. Of course, I have. I don't live under a rock.

I've seen devs get oddly comfortable with the idea of sharing their code-related data with third-party cloud services, which are often not SOC 2 (or similar) certified, and which make vague, unprovable claims of privacy at best.

Things like GitHub Copilot (and Copilot Chat), Bito.ai, and several other AI code extensions on the VS Code marketplace have more than 30 million installs combined. Crazy! 🤯

And then there's me. I've not made AI assistance a part of my regular coding workflow. A couple of times I've taken help from GPT to get some boilerplate written, sure. But those times are the exception. Things like GitHub Copilot, or any kind of code review, code generation, PR creation, or commit-assistance tooling, aren't part of my IDE or CLI flow.

Maybe it'll change with time. We'll see.

"But… why?"


😟 What I'm ACTUALLY worried about

The answer is simple. 👇

1. I fear my programming skills will get rusty

I am concerned that the way I write and read code will suffer if I get too used to AI assistance.

  • I'm concerned I'll begin to overlook flaws in code that I could catch otherwise.
  • That I'll start to take AI-generated code for granted.
  • And that looking up APIs, built-in methods, or other documentation will start to feel like a chore.

I fear… that I'll start slipping.

2. I'm not comfortable sharing all of my code with a third-party service

Companies can be damn smart about inferring things from the data you provide. Sometimes they'll know things about you that even your family doesn't.

The idea that sensitive business logic may get leaked to a third-party service, which may eventually be used to make inferences I'm not comfortable with, or just... straight-up leaked? I mean, software gets hacked all the time.

I think I'm being very reasonable in thinking that I don't want to expose something as sensitive as code, in an unrestricted manner, to a third-party company. Even if that company is Microsoft, because even they f*ck up.

👀 From the Experienced Devs' point of view

This isn't a take that is unique to me, either!

1. More experienced devs tend not to want to lean on "crutches" to write their code.

I've even had the pleasure of working with senior devs who didn't want to use colored themes in their IDEs because they thought it would hurt their ability to scan, read, or debug code! (That was a bit much for me too.)

After all, "programming skills" is a lot more than just writing code.

2. Older devs have seen all kinds of software get hacked, data leaked, etc.

I mean, when haveibeenpwned.com has been sending you emails about your credentials, emails, and other data being leaked, every year, for over 10 years... MANY TIMES from billion-dollar corporations...
When you hear "when you're not paying for the product, you are the product" for the bazillionth time, and it's then backed by yet another company selling that data to some third party...

Yeah... it gets tiring.
And it gets real easy to just disconnect as many wires as you can and go back to the stone age.

(meme: aging Matt Damon)
"Older devs"? Am I… Am I getting old?
Nah, I'm still 22 and this is 2016 or something… right? Right??

Btw, the answer to the question in the title is 👆 this. Congrats! The post is over! On to the next one…

Buuuuut… if you want to continue reading…
(meme: Joey saying "there's more!")

🚶 Let's take a step back for a moment...

I think my fears may be exaggerated.

Let's keep the whole data privacy angle aside for now, because that's a whole other topic on its own that I feel rather passionately about.

I personally don't have enough data to empirically say that using AI assistance will bring about the doom I fear… that it'll downgrade me from what I am today to an SDE1.

But I've seen patterns.

  • I've seen AI-generated, sub-par quality code go through code reviews and end up on the main branch.
  • I've seen library functions used without properly understanding what they do, or what alternatives exist, just because an LLM generated them.
  • I've even seen code generated to solve a problem for which a utility function already existed in the codebase, but wasn't used, because discovering that the utility existed was a lot more work than asking GPT to generate it for you.

💎 ~~Diamonds are~~ Bad code is forever

"Wait a damn minute… I've seen this movie before!"
(meme: déjà vu)

  • LLMs are a pretty new thing… but 💩 code has been eternal!
  • Every. Single. Dev. Ever. Has used library functions without fully understanding them or looking at alternatives. You and I are both guilty of that. (What? You thought Array.prototype.sort was the best way to sort anything? It's just sufficient in most cases! See the sketch right after this list.)
  • A piece of logic gets reinvented (re-copy-pasted) all the damn time! Just that it used to be from StackOverflow; now it's from ChatGPT.
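
Since Array.prototype.sort came up: here's a minimal sketch of why "sufficient in most cases" is doing a lot of work in that sentence. The default comparator coerces elements to strings, which silently mangles numeric sorts.

```typescript
// With no comparator, Array.prototype.sort converts elements to
// strings and compares them lexicographically.
const scores = [5, 100, 25, 1];

console.log([...scores].sort());
// [1, 100, 25, 5]  ("1" < "100" < "25" < "5" as strings. Oops.)

// For numbers, pass an explicit numeric comparator.
console.log([...scores].sort((a, b) => a - b));
// [1, 5, 25, 100]
```

This is exactly the kind of thing you only catch if you actually read what the generated (or copy-pasted) code does.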

🤷 So, what's the big fuss about?

"Will using ChatGPT make me a bad programmer?"

I think, no.

The takeaway is that you just need to care about what you build.
Take pride in what you build.

🤖 Where the heck does LLM/AI fit in?

LLMs are not inherently evil.
In fact, they can be pretty damn useful if used responsibly:

  • Quality Code: An LLM might handle edge cases that a less diligent developer wouldn't consider.
  • Comprehensive Tests: LLMs might write tests that are more comprehensive than what some devs would produce.
  • Comprehensive Types: It might even write types more "completely" than an average dev would write on their own, or would even have the skill to write. (See the sketch right after this list.)
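
To make that last point concrete, here's a hypothetical sketch (the names are mine, not from any real codebase or LLM output): the "complete" version of a type spells out every state, where a rushed dev might just reach for any.

```typescript
// A rushed dev writes: type ApiResponse = any;
// A more complete version makes every state of a request explicit.
type ApiResponse<T> =
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; error: { code: number; message: string } };

function render(res: ApiResponse<string[]>): string {
  // The compiler now forces every case to be handled.
  switch (res.status) {
    case "loading":
      return "Loading...";
    case "success":
      return res.data.join(", ");
    case "error":
      return `Error ${res.error.code}: ${res.error.message}`;
  }
}
```

Whether this comes from an LLM or a human, it still needs a reviewer who cares.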

However, the responsibility lies with the developer to ensure that the code output is guarded and well-monitored. Someone who doesn't care would have done a shoddy job at any point in history. The presence of LLMs doesn't change that.

😎 The art of actually giving a f*ck

There are a lot of devs out there who don't care.
But you're no such dev. You DO care.
Else you wouldn't be here on dev.to learning from people's experiences.

I recently wrote about what new devs should care about to grow in their career. It's a LOT MORE than just code.

Maybe I'll introduce some AI into my VS Code setup.
I think it's a matter of when, not if.

What's more important is… as long as I care about making sure my programming output is readable, performant, high quality, and easily reviewable, I think I'll be fine. And so will you.


👇 P.S.

If you want an example of something I care deeply about, and that has both great code 💪 and… less-than-excellent code 🤣, take a look at our open-source repo!

It's something that lets you spot how long it takes you to deliver your code, how many times PRs get stuck in a review loop, and just generally how well your team ships code.

GitHub: middlewarehq / middleware

✨ Open-source DORA metrics platform for engineering teams ✨


Open-source engineering management that unlocks developer potential


Join our Open Source Community


Introduction

Middleware is an open-source tool designed to help engineering leaders measure and analyze the effectiveness of their teams using the DORA metrics. The DORA metrics are a set of four key values that provide insights into software delivery performance and operational efficiency.

They are:

  • Deployment Frequency: The frequency of code deployments to production or an operational environment.
  • Lead Time for Changes: The time it takes for a commit to make it into production.
  • Mean Time to Restore: The time it takes to restore service after an incident or failure.
  • Change Failure Rate: The percentage of deployments that result in failures or require remediation.
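
To make those definitions concrete, here's a rough back-of-the-napkin sketch of three of the four metrics (mine, for illustration only; the field names are assumptions, not Middleware's actual data model or code):

```typescript
// Illustrative only. Mean Time to Restore is omitted because it needs
// incident data, not just deployment events.
interface Deployment {
  deployedAt: Date;
  firstCommitAt: Date; // earliest commit included in this deploy
  failed: boolean;     // deploy required remediation or rollback
}

function doraSnapshot(deploys: Deployment[], periodDays: number) {
  if (deploys.length === 0) return null; // nothing shipped this period

  // Deployment Frequency: deploys per day over the period.
  const deploymentFrequency = deploys.length / periodDays;

  // Lead Time for Changes: average commit-to-production time, in hours.
  const leadTimeHrs =
    deploys
      .map((d) => (d.deployedAt.getTime() - d.firstCommitAt.getTime()) / 36e5)
      .reduce((a, b) => a + b, 0) / deploys.length;

  // Change Failure Rate: share of deploys that needed remediation.
  const changeFailureRate =
    deploys.filter((d) => d.failed).length / deploys.length;

  return { deploymentFrequency, leadTimeHrs, changeFailureRate };
}
```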


Top comments (26)

Erick Rodriguez

This article is an antithesis of what a real developer should be. In the first instance, leverage AI and learn from it. However, you should avoid taking what AI produces for granted.

You should start with the documentation and build up from there.

Personally, I don't feel threatened by AI, as I make decisions based on human experience, while AI only produces content as a reflection of past experiences.

Should a dev be afraid of AI? Not at all.

Jayant Bhawal

Good take, Erick.
Devs just need to be responsible when introducing AI tools into their workflows. Starting small and seeing how it works, what it's good at, and what it's not.

Though I am iffy on using terms like "real developer".

Aravind Putrevu

I slightly disagree. I think devs should make mistakes with AI and then learn from them. But of course, mistakes can be costly for orgs. That's where PR reviews and senior expertise come in, right?!

Jayant Bhawal

Devs should definitely not avoid AI of course.

But I think maybe starting with a production-grade codebase isn't the best idea, regardless of whether senior devs are involved or not.

Devs should spend enough time learning how to use AI as a tool responsibly in their own personal projects, and get feedback from their community, friends, and coworkers, before involving AI in production-grade code.

Samad Yar Khan

I am also in those 8% of devs not using Copilot, and my main concerns are the same as the ones mentioned here. But I still use AI every day at work to make my life easier. LLMs can boost your productivity 10x if you are able to understand the generated solutions and iterate on them to make them better. Great read :)

Jayant Bhawal

The trick is to not copy/paste production secrets into this thing. 🤣👇


Quooston

Such a divisive topic. Clearly there are pros and cons, and both are valid, actually.

I would say that having an AI generate most of your code is not a good thing. And I would also say that not incorporating AI would be a lost opportunity. So my take is to use it simply as a pair programmer. I'm driving; the AI might suggest some things I don't agree with, and I readily ignore those. Other times I might look at the code I've written and ask the AI if there is a simpler way to do something, and it, again, may come up with something valuable, or some utter nonsense that I don't want in my codebase.

I think there is real risk in letting it write your code for you, but I see real value in using it as a tool to potentially improve parts of your work while you're doing it. It's great to pair program, and now we all have access to a pair programmer. Just use the tool appropriately, because all it is, is a tool.

Jayant Bhawal

Nuance appears to be a skill lost to time. 😄

Thank you for your balanced perspective.

Ų­Ų°ŁŠŁŲ© • Edited

The competencies of new code learners are getting weaker day by day.
Things like patience, searching for a month about an issue you hit while coding, watching a 100-video course or a 10-hour crash course (crash :))...
all these wonderful moments, they won't get to live them.
I speak from my experience with my classmates, and I feel that the harm of AI is stronger for beginners than for professionals.
Yes, they save time and submit their assignments in the best shape, but they lose skills over two years of studying.

AI became an alternative mind that thinks instead of them.

Jayant Bhawal

I wouldn't say competencies are getting weaker for everyone...
Whether patience is or not... well, there might be some merit in that, but that's probably not driven by AI alone.

Before AI, copy-pasting from SO without understanding what it actually did was so common that it was a meme. IIRC, there was even a meme package that would import the code snippet from the accepted answer to a question or something...

Those that were responsible and cared about understanding what they were shipping did so even with SO, or other pre-AI sources.

An argument could be made that they have a better chance of learning what their "copy-pasted" code does now, because they can just ask the AI for an explanation (though that could be flawed too; LLMs have indeed covered some distance).

=============

Where I'm more aligned with you is that it's now far easier for someone to manage not to learn things along the way, and just deliver something that works "for now". Meeting timelines at the cost of everything else, even the learning that comes as a byproduct of researching solutions.

Daniel Karlsson

I've largely given up on using Copilot Chat to generate code. For me, it is a documentation tool and something I use to validate my decisions. The autocomplete feature is handy, but mostly for repetitive work. Although it's not revolutionary (yet), I couldn't live without it!

Jayant Bhawal

Do you happen to use some other AI tool to generate code? Like, perhaps GPT or Gemini?

Thabo

I'm a self-taught MERN stack developer. Without ChatGPT and Copilot, I wouldn't have been able to release a SaaS platform that's now generating some revenue for my startup. As a solo founder and developer, AI helps me write 96% of my code. I'm very comfortable with this approach because I review and correct the AI-generated code line by line before using it. I've become extremely efficient at prompting AI, and there's no going back for me. When I start hiring developers, those who don't use AI will be let go. I'm serious about this! 😄

Samuel Jarvis

Beautifully stated. As a MERN developer with years of experience from before ChatGPT and Copilot, I can say these tools are priceless in terms of the speed and assistance they provide when writing code.

Jotty John

Relying solely on AI for all your coding needs risks undermining your development as a programmer, potentially reducing you to a copy-and-paste operator rather than a thoughtful, creative coder. Personal coding style and logic are crucial for fostering innovation and problem-solving skills.

Kingsley-Eghianruwa

Fear surrounding the current state of AI is largely unfounded. Language models and generative models are simply tools, and being skeptical of them is akin to using a rock to drive in a nail when you have a hammer available.

In reality, AI and language models are not inherently intelligent. Without creative input and context, they are essentially empty shells. This means that if you are not already skilled at something, AI is unlikely to make you significantly better.

However, AI and language models can be incredibly useful in augmenting our existing skills. For instance, I love writing code, but I often struggle with writing clear and concise documentation and comments. Here, AI and language models have been incredibly helpful. If I provide the model with a rough idea of what I want to say, it can generate a draft that I can then refine and edit. This saves me a significant amount of time and mental energy, freeing me up to focus on other creative tasks.

In my view, the development of AI and language models should be encouraged, rather than stifled. By improving these tools, we can achieve more. As with any technology, there are certainly risks and challenges to be addressed, but I believe that the potential benefits far outweigh the drawbacks.

In short, AI and language models are not a solution to any of our problems, but they can be incredibly useful tools when used in the right way, which is augmenting current skills and creative pursuits. By embracing these technologies and working to improve them, we can do more.

Gyuri Lajos

Uncle Bob observed that the number of people working in software development doubles every five years.

Therefore, at any given time, half of them have less than 5 years of experience.

That half has no experience with which to evaluate what the AI generates, so it sure looks like a doom loop to me.
