As a JavaScript developer, you know how challenging it can be to work with existing codebases. Often, the code is poorly documented, making it difficult to understand what it does and how it works. That's why I've added two new AI-powered features to the JS Assistant for Visual Studio Code: AI-generated JSDoc comments and AI-generated explanations for code snippets.
AI-generated JSDoc comments
The JS Assistant can generate documentation for functions and methods. This feature accelerates writing JSDoc comments by providing a pre-filled template you can refine.
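For example, invoked on a small utility function, the generated template might look something like this (a hypothetical illustration, not verbatim tool output):

```javascript
// Hypothetical example of a generated JSDoc template (illustrative only,
// not verbatim output from the JS Assistant):

/**
 * Formats an amount in cents as a localized currency string.
 *
 * @param {number} cents - The amount in cents.
 * @param {string} [currency="USD"] - An ISO 4217 currency code.
 * @returns {string} The formatted amount, e.g. "$12.34".
 */
function formatPrice(cents, currency = "USD") {
  return new Intl.NumberFormat("en-US", { style: "currency", currency })
    .format(cents / 100);
}
```

You would then review the generated descriptions and correct anything the tool got wrong before committing.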
AI-generated code explanations
You can generate explanations of JavaScript code snippets to help you understand complex or challenging code passages. While such AI-generated descriptions can sometimes be inaccurate or misleading, they can help accelerate making sense of a legacy codebase. They are best used together with your own understanding of the code, e.g., to provide ideas that get you started or unstuck.
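For instance, for a dense one-liner, the kind of explanation you might get could read like this (the explanation below is hand-written here for illustration, not actual tool output):

```javascript
// A dense snippet you might ask the assistant to explain:
const words = ["to", "be", "or", "not", "to", "be"];
const counts = words.reduce((acc, w) => ((acc[w] = (acc[w] ?? 0) + 1), acc), {});

// A plausible explanation (hand-written stand-in for AI output):
// "Builds an object that maps each word in `words` to the number of times
// it occurs, using reduce with a comma expression to both update and
// return the accumulator in a single step."
console.log(counts); // { to: 2, be: 2, or: 1, not: 1 }
```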
These AI-powered actions can help you work more effectively in complex legacy JavaScript code today.
Code explanations and comment generation are early, experimental features. I would love to hear how you think they could be helpful!
Top comments (16)
I think this is a great application for AI! Especially in situations where the code has gotten more convoluted over time.
But it also shouldn't be used as a hammer.
Comments that explain what the code does are redundant if the code is written clearly. The part that is missing, and the part the AI will also struggle with, is the external context, or "why" the code is written the way it is.
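For instance (a contrast of my own, with an invented scenario):

```javascript
let counter = 0;

// Redundant comment: it restates what the code already says.
// Increment the counter by one.
counter += 1;

// Useful comment: it captures the external "why" the code can't express.
// The vendor's webhook double-fires, so callers dedupe events before this runs.
counter += 1;
```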
Counterintuitively, if you used this tech everywhere, code quality could go down in a number of situations.
Like many tools and patterns, there's always a best case and a worst case.
JSDoc comments are used for documentation (and also things like IntelliSense, etc.), which means if you're building a library you 10000000% need exceptional comments.
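For instance, with a `// @ts-check` directive at the top of a plain .js file, VS Code reads the JSDoc types for checking and completion (a minimal sketch; the function is invented for illustration):

```javascript
// @ts-check

/**
 * @param {{ id: number, name: string }} user
 * @returns {string}
 */
function greet(user) {
  return `Hello, ${user.name}!`;
}

greet({ id: 1, name: "Ada" }); // the editor autocompletes `name` from the JSDoc type
// greet({ id: 1 });           // ...and would flag the missing `name` property
```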
Wow...
These days, a lot of AI tech tools have been coming out.
I'm excited, but at the same time, I'm also kind of scared.
They're making their way into the real world.
Anyway, I should go talk about this tool with my team.
It's amazing
We're pretty close to AI generating a README from the source code.
wow this is really awesome man ❤️
I think it's a wonderful tool; it'll help me a lot, especially to describe utils or hooks. I develop some complex modules, and sometimes I forget what they do or how they do certain things.
Very cool!
Game Changing.
I’m worried about the lack of context.
For reference, I recently needed to replace an argument. In order to do that, I needed to find the right argument in the calling class... which was created in the virtual override of the grandparent class, which was an implementation of the abstract great-grandparent, and that process only happened in reaction to a global Rx stream firing with the right payload, which dispatched that method.
Learning the name of the argument to use spanned ~8 files. My concern is that comments often obscure that kind of thing, because we are conditioned to read the English and not the code.
If the system, instead, could essentially do a reverse proxy through the files to tell me what I am looking for and where I will find it, in a deep inheritance codebase, that would be wonderful.
Interesting case! From what I understand, you would have liked to know what kinds of objects the function gets called with, so you can more easily change the argument, right?
I suppose the simplified case is roughly this (all names are placeholders):
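```javascript
// Reconstructed sketch of the scenario described below; every name here
// is invented for illustration.
const { Subject } = require("rxjs");

const stream$ = new Subject();

// Abstract great-grandparent.
class A {
  buildArgs() {
    throw new Error("abstract");
  }
}

// Grandparent: the virtual override that actually creates the argument.
class B extends A {
  buildArgs() {
    return { xs: [1, 2, 3] }; // in reality, the xs came from a separate service
  }
}

class Component {
  render(args) {
    console.log("rendering with", args.xs);
  }
}

// C reacts to the global stream and dispatches the method.
class C extends B {
  constructor(component) {
    super();
    this.component = component;
    stream$.subscribe((payload) => {
      if (payload.kind === "trigger") {
        this.component.render(this.buildArgs());
      }
    });
  }
}

// main: wiring that, in the real codebase, was spread across ~8 files.
const c = new C(new Component());
stream$.next({ kind: "trigger" });
```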
This would be the closest to barebones that I think I could get, except that none of the instantiation or usage was in main; they were spread out in different places, the xs were loaded in a different service located elsewhere, called by a different thing, which internally triggered the stream, and the component was injected into C. Technically, this is Angular, so Component was used in the HTML template of C, but that's hard mode. And all of these classes are, of course, in different files in an MVC folder structure.
Oh wow, thanks for sharing! Definitely a complex case. I will think about ways to solve this, but my current take is that automated tools are still a few years away from helping with something this complex.
Yeah, I guess my concern is: if each of these files is already ~700 lines of code, and all of the comments explain what each line is doing, then you will have 1,400-line files where nothing explains what the system is doing, and nobody will read past the English explaining the line... because humans like to optimize.
It would be handy, though, to have an IDE plugin with tooltips/popups where you could get an ELI5 explanation and a simplified use case of any file or highlighted selection, like an MDN article on steroids. It wouldn't help me here, but it could help people understand less complicated code while keeping the source less cluttered.
Amazing! This would really help me.
Neither of these features is very compelling, and they really just highlight VS Code's shortcomings more than anything. I mean, my IDE already makes docblock generation trivial: just type `/**` and hit enter. The main difference between the IDE and the AI is that with the AI you have to delete all of the inaccurate text it vomited onto the screen, versus being provided accurate text.
To be fair, this is at least half generated by AI and is almost 100% inaccurate or redundant:
- The `action` param is literally describing its type (very helpful, AI).
- The `message` param description is incorrect. It just so happens that a lot of existing implementations name things `message` if they intend to show them to the user. An intelligent being would have picked this up based on the use context.
- The labour of producing accurate documentation manually is about the same, if not less, because you eliminate the cognitive overload of considering the accuracy of some randomness...
And the second gif doesn't explain the code. The tool is an interpreter: it doesn't explain the code, it translates it into plain English. This is exactly like somebody speaking to me in French and an interpreter translating their statements. The role of the interpreter isn't to explain... the ability to understand still lies in the listener's ability to comprehend.
Go back to your AI-generated "explanation" and see. Nothing in that generated text gives any context as to:
- why the code exists,
- what problem it solves, or
- how it fits into the rest of the system.
If I asked you to explain some code to me, I'd expect your answer to cover at least two of these points.
So yeah, while these examples are cute, they're just another demonstration of how far away these tools are from being anywhere near as useful as the ones made by humans.
I have to second this, yes. The generated comments from the examples given in this post make the code worse by adding noise while providing little to no value (and I would reject this style of comment in contributions to my open-source projects).
I would also especially caution against using tools like this if you're just starting out and looking to build a portfolio. This style of commenting is sometimes required in teaching environments specifically, and (if mistaken for manually written comments) it signals that you are an absolute beginner with little experience outside of that setting.
(It is fine to use the AI privately to help you understand the code, as long as you make sure that it is actually telling the truth each time. Doing so, you'll quickly develop the routine to read the code faster than you can read and verify the explanation in prose.)