Or it will be, when we learn more about the implications, both ethical and societal.
Already, we find ourselves growing suspicious of the content we see, questioning whether it was generated by a human or a machine.
How will this affect the way that we, those who were around for this paradigm-shifting moment in human history, think and see the world, compared to generations before and after us? Only time will tell.
Reading this article got me thinking. Chris Diaz deserves credit for calling out the issue of responsibility directly, and it's encouraging to see industry leaders like Microsoft set a strong example with their values.
While we can, and will, debate the ethical responsibilities of Microsoft and other organizations providing similar products and services, we must also accept responsibility ourselves.
We are now the adults in the room. As Diaz says, this technology is still new. This child is growing and learning, and we are its parents.
We are still very far from creating an AGI, or anything close to it, and I personally doubt we ever will. But this technology is already shaping the way we act.
We can use these tools to be more productive, but we still have to research and understand the code we generate.
The tool learns from other code on GitHub. So if a common coding error appears in a lot of that code, and the LLM trains on it, the same error will be duplicated, mutated, and suggested to the authors of other code.
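To make that concrete, here is a hypothetical illustration using one of the most widespread bugs in public Python code: the mutable default argument. Because the pattern appears so often in real repositories, an assistant trained on that code could plausibly reproduce it. The function names are my own, chosen for the example.

```python
# Pattern an assistant might suggest (buggy): the default list is
# created once, at function definition time, and shared across calls.
def append_item_buggy(item, items=[]):
    items.append(item)
    return items

# Corrected version: use None as a sentinel and build a fresh list
# inside the function body on each call.
def append_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_buggy(1))  # [1]
print(append_item_buggy(2))  # [1, 2]  <- state leaked between calls
print(append_item(1))        # [1]
print(append_item(2))        # [2]
```

The buggy version looks perfectly reasonable at a glance, which is exactly the problem: a suggestion that compiles and passes a quick skim can still carry a defect the model absorbed from its training data.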
This is where the work gets hard. When we come up with a solution to a problem, we as engineers break it down and understand each piece. We must remember to do the same with code generated by a machine.
Confidently wrong. This is, after all, the same technology behind ChatGPT, and its suggestions have a certain compelling quality to them. As humans, we tend to build a repetition bias: the more often we get good results from AI-generated code, the more we trust it, and that is dangerous. It is not logical to conclude that a result is right simply because the source has been reliable recently.
At least for the foreseeable future, we are the ones responsible for our code, not the AI and not the vendor. Even as these tools speed up certain tasks, we must remember to slow down and maintain our engineering rigor to make sure the results are correct.
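One practical form that rigor can take is refusing to accept a generated function until we have written our own tests for it, including the edge cases a happy-path suggestion tends to miss. A minimal sketch, assuming a hypothetical `slugify` helper an assistant might have produced for us:

```python
import re

# Hypothetical assistant-generated helper: turn a title into a URL slug.
def slugify(title: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics to '-', trim dashes."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Tests we write ourselves, before trusting the suggestion.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  everywhere ") == "spaces-everywhere"
assert slugify("---") == ""  # degenerate input the suggestion may ignore
print("all checks passed")
```

Writing the assertions first forces us to state what the code should do, independently of what the machine happened to generate, which is the whole point.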