My new best friend ChatGPT has so far helped me write a Blender plug-in even though I have no Python experience. I know that it works, but I can't test it or tell whether any of this code is safe or idiomatic Python.
So therein lies the 'why': we need experienced people to operate the factory machines. It's one thing to spew out code, but you still need experience to quality-control and sanity-check it, something AI still has to work hard on.
However, here's the issue: I'm happy to release my Blender plugin without that experience. For all I know, I have to trust that this AI is not unintentionally injecting malicious code, and that's interesting, isn't it?
There are no human errors in AI because there are no humans, but it may still be possible for a bad actor to inject nasty bits of code that might not be checked to the same degree… This code suffers from the ageing-product problem: I didn't write it, but I must trust my peers, and that's the trust that could be exploited. It's new and kind of scary.
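For context, here's roughly the shape the generated add-on takes. This is a minimal sketch, not my actual plugin; the names and what the operator does are invented for illustration:

```python
# Minimal Blender add-on skeleton, roughly the shape ChatGPT produces.
# All names and the operator's behaviour are illustrative only.
import bpy

bl_info = {
    "name": "Example Add-on",   # hypothetical add-on name
    "blender": (3, 0, 0),       # minimum Blender version
    "category": "Object",
}

class OBJECT_OT_example_move_up(bpy.types.Operator):
    """Move the selected objects up by one unit."""
    bl_idname = "object.example_move_up"
    bl_label = "Example Move Up"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        for obj in context.selected_objects:
            obj.location.z += 1.0
        return {'FINISHED'}

def register():
    bpy.utils.register_class(OBJECT_OT_example_move_up)

def unregister():
    bpy.utils.unregister_class(OBJECT_OT_example_move_up)

if __name__ == "__main__":
    register()
```

Even at this size, reviewing it means knowing what `register()` hooks into and what `execute` is allowed to touch, which is exactly the experience I'm talking about.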
Top comments (17)
I have a rule: if you don't understand the code or what it does, don't use it. Working out what the code does is a good learning experience in itself.
This is a very good viewpoint to have. The same goes for Ctrl-C/Ctrl-V from Stack Overflow. I've used ChatGPT over the past week to refine some of my code; it comes up with some good ways and some terrible ways, but for any code it generates, make sure you understand it first.
How do you do it? Do you copy the code in and ask ChatGPT "how would you improve it?"
Well, you can; it will do a better job than I would, and in rapid time. I'm not a bad programmer, I'm very experienced, but I don't pretend that I am better than the AI.
No, you would be a better programmer than an AI like GPT. Remember, it's only able to produce results based on the information it has in its current state, whereas you can problem-solve beyond that. Ask it a question that's not in its training data and it will struggle.
I've thrown some simple, some not-so-simple functions at GPT and asked it "can you improve this?" or something to that effect. Sometimes it has given me slightly better methods to solve a problem, maybe in a few fewer lines (see the sketch after this comment); other times it's given me solutions that don't quite meet what you would call industry standards; and sometimes it gives me my own method back, saying this is a better way.
The way I see it, AI-assisted coding has great potential, in the future or even now I guess, to support a developer by reducing what I might call the 'laborious' side of coding, as in the actual writing of the code, while still having a developer direct how to write it, if that makes sense. Almost like how a low-code app works.
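For example, here's a made-up before-and-after in Python, representative of the kind of small refactor it suggests (the function itself is invented, not something from my codebase):

```python
# A made-up "before" function, the sort of thing I've pasted in:
def squares_of_evens(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n * n)
    return result

# The tightened version it tends to suggest back, a few lines shorter:
def squares_of_evens_v2(numbers):
    return [n * n for n in numbers if n % 2 == 0]

# Same behaviour, fewer lines; whether it's "better" is still my call.
assert squares_of_evens([1, 2, 3, 4]) == squares_of_evens_v2([1, 2, 3, 4]) == [4, 16]
```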
The imposter in me wants to concede to my AI overlord, but of course you are 100% correct.
I think what I'm doing is a great example of the struggle: Blender's Python APIs change a lot, and theoretically it should not be able to answer questions about Blender 3, although it seems to in a limited way. I can feed it docs extracts, but then we are pair programming at that point (I always loved pair programming).
You touch on industry standards, and I think because it has no ego or morality it will occasionally do things like extend a global built-in thing (there's a sketch of this after this comment), and if you're not paying attention, that should work… but humans don't do it because… well, we don't.
This reminds me a lot of my son, who's autistic; he would not stick to conventional thinking, and that can be a great thing. But ChatGPT, I guess, isn't thinking… still, what a time to be alive!
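Here's a contrived Python sketch of the kind of thing I mean; it's not from my plugin, just an illustration of why extending a global built-in "works" but makes humans wince:

```python
# Contrived example: bolting a helper onto the builtins module so it is
# magically available in every module. Generated code sometimes does this.
import builtins

def flatten(nested):
    """Flatten one level of nesting."""
    return [item for sub in nested for item in sub]

builtins.flatten = flatten  # works, but silently pollutes every namespace

print(flatten([[1, 2], [3]]))  # [1, 2, 3]

# The conventional habit is to keep the helper in a module and import it
# explicitly, so a reader can always see where the name comes from, e.g.
#   from my_plugin.utils import flatten   # hypothetical module path
```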
Yeah, I agree. I think for me there's a percentage of understanding that I feel comfortable with, maybe 80%, because I trust that I will either understand it soon or refactor it into something less performant but more ergonomic and easier to understand.
Thank you, I think you hit the nail on the head. This is exactly like Stack Overflow, only a little better in some ways: more like asking for help in a judgement-free environment, with instantaneous answers. It's certainly an empowering tool but, as we have both mentioned, occasionally error-prone. I'm definitely learning Python and Blender's API, but it's starting to feel like I should just put this down and go look at the docs. I know enough now to be dangerous.
I've certainly said this before, but in practice I've now learned some Python by daring to solve the problem without worrying about the code; I did have to study and scan it, and I did learn from it. That rule, I think, applies to a pre-GPT world; we may now have to learn by reviewing.
We'll need people who understand code. "It will steal your job" is the wrong way of thinking about it. The way I think about it is that it'll make devs more productive. Psst, for some demos check out this one ...
For the CEO of a company like yours to see the true potential as an additive, not a replacement, is encouraging. Of course I know and agree, as this post's 'humour' shows. I'll have a look at your job opportunities; you see the value in people.
Thx mate :)
I can agree with the "wrong way of thinking about it" statement.
To me it's like that comic where two people are talking; I don't remember the exact sentences, but in essence I think of AI stuff like ChatGPT mostly as a new way of using devices, a new interface, rather than "robots replacing us". Like a mouse or a keyboard. Instead of using a really specific form of non-natural-language text to give instructions through a keyboard (or voice, or other forms that exist today), we might later use AI to give instructions in a natural-language way and leave the rest to the PC. We still need requirements, and someone who decides on the details, regardless of the form and interface of the input. I'm not optimistic about doing everything through it, but it could work for things like "write me an image upload and compression for AWS Lambda", where you then use it for the non-critical parts of a system.
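To make that concrete, here's a hypothetical sketch of what such a generated handler might look like. It assumes Pillow is available (e.g. via a Lambda layer), and the bucket name is made up:

```python
# Hypothetical AWS Lambda handler: compress an image that was just
# uploaded to S3. boto3 ships with the Lambda runtime; Pillow would
# need to come from a layer. Bucket names here are invented.
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by an S3 put event; read the uploaded object.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    # Re-encode as JPEG at lower quality to shrink the file.
    image = Image.open(io.BytesIO(original)).convert("RGB")
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=70, optimize=True)
    buffer.seek(0)

    # Write the compressed copy to a (hypothetical) output bucket.
    s3.put_object(
        Bucket="my-compressed-images",  # made-up name
        Key=key,
        Body=buffer,
        ContentType="image/jpeg",
    )
    return {"statusCode": 200, "compressedBytes": buffer.getbuffer().nbytes}
```

The parts it leaves out, permissions, retries, error handling, are exactly the critical parts where I'd still want a person deciding.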
Both COBOL and SQL were attempts to replace software developers; they were invented in 1959 and the early 1970s respectively ...
"There are no human errors in AI because there are no humans" and how do you think AI is trained? - by human code. It might sound possible for AI to train and adapt to be better but I would not bet on it. I pretty sure it would take like 2 years on projects using AI generated code to start noticing troubles which by then would mean either full refactor or failure of the project.
A lot of things get overlooked, or just aren't problematic yet, because projects expand at different speeds. I'll give you an example: running a monolith in a smaller company will be faster and more cost-effective in most cases, while distributing the system would be overkill and would lead to managers thinking it was a waste of money. But the point is that throughput is an expected problem for the project, and it takes about a year to get enough traffic to notice it.
"This" is a common problem amongst people and due to the way AIs are trained would expect to become a problem there as well.
When I say "this" I mean the unpredictability of the future, where you don't really know what's going to happen: sometimes taking risks like over-engineering pays off, while sometimes it's more valuable to just take it slow and see what happens. Could AI help? Probably to some degree, but I would definitely not trust it to be better, just faster.
The reason this is so hyped up is that people are lazy and want to generate revenue overnight: devs want to ask AI to write a bunch of code, while CEOs want to remove devs, generate full projects, and save money on salaries. We can see what happens when we don't care and shut out the people who warn about this, in examples like Facebook, Twitter, the crypto thing with that weird guy, and many other companies that took a huge hit due to a lack of responsibility ten years ago (or more in some cases). Stocks are down and things are crashing because of "move fast, break things", and a lot of people are suffering. Do we really need another crypto-Facebook issue to stop the hype train? I mean, there are already a couple of companies built around ChatGPT.
There's a lot to digest here and I'm sorry if I don't cover it all, but there is one thing that stands out to me. I'm aware that the model used to train ChatGPT is vast and that many humans were involved with that; my hope was that my meaning would be read as post-training. I want to say a quick word about improvement: I believe many professionals of yesteryear theorised about compilers that could in turn compile better versions of themselves. I believe AI is capable of writing and creating models, and will soon not need humans apart from the one who says "write me a general-purpose AI"; however, I do not know whether the AI it produces will be better or worse than itself.
It's my 2 cents, but I'm glad you made your comment, you know your serif.
Yeah, I know it was a bit of a long "explanation", but in short: I like AI and use it every day, but for things like translating and processing text from images and such. Having said that, I use it for personal stuff, and those tools are built over much longer periods with a specific use case. I'm not using it for anything professional or anything I need for legal purposes.
I have no doubt that AI could write another AI, and that a "generic"/"general purpose" one might be good to have, but I do have doubts about the quality of these things, and I'm afraid they will end up in products that damage users quite soon, given the human wish to do everything as fast as possible. Call me paranoid, but I've seen it happen way too many times (although not with AI).
And I'm not sure what "you know your serif" means :D. Anyway, I did read it as post-training, but I still wanted to chip in with a comment. It's a community, so interaction is good.