“It would be pretty damn maddening if it turns out programmers are easier to automate than lawyers.” - Professor Alejandro Piad Morffis
The adoption of large language and generative AI models such as ChatGPT, Microsoft Bing, Google Bard, and Stable Diffusion has risen sharply. While the advantages of these models cannot be refuted, their rise has led to an exaggerated and harrowing, but not baseless, fear among members of the public that these AI models could jeopardize job security for millions of workers worldwide.
As noted above, the threat AI poses to human jobs, while exaggerated and harrowing, isn't baseless. AI's ability to perform repetitive tasks, process large amounts of information, and mimic human-like decision-making makes it a very good tool for enhancing creativity, productivity, and efficiency.
To answer the question "Will AI take our jobs?", I have enlisted the help of an expert: Professor Alejandro Piad Morffis, a professor of AI at the University of Havana, Cuba. The professor is a mentor, teacher, friend, and, most importantly, an inspiration to me.
How I hope to approach this
The questions will be prefixed with "Q" and the answers with "A". As for the questions, I hope to cover both technical and philosophical ground, as Professor Morffis also has an affinity for philosophy. It is also important to note that I will provide links to certain concepts that are complex to grasp, for the sake of understanding.
Let us begin!
Q: Firstly, could you tell us a bit about yourself, your professional qualifications, and such?
A: My name is Alejandro Piad. I majored in Computer Science at the School of Math and Computer Science at the University of Havana, Cuba. I did a Master's in Computer Science at the same school in 2016, and in 2021 I earned a double PhD: one in Computer Science at the University of Alicante and one in Math at the University of Havana. My PhD was in knowledge discovery from natural language, specifically focused on entity and relation extraction from medical text.
Since grad school I've been teaching at the University of Havana, where I've been the main lecturer in Programming, Compilers, and Algorithm Design, and an occasional lecturer on Machine Learning and other subjects. Since 2022 I've been a full-time professor there. I was also one of the founders of the new Data Science degree program, the first of its kind in Cuba, and I wrote its entire Programming and Computing Systems curriculum. I keep doing research in NLP, right now focusing on neuro-symbolic approaches to knowledge discovery, mixing LLMs with symbolic systems.
Q: How long have you been working with AI systems?
A: I played with AI for games as an undergrad student and did a couple of student projects with computer vision and metaheuristics. After graduating, I started my master's in Computer Graphics, but as a side project I did some research in NLP, specifically on sentiment analysis on Twitter. After finishing the master's, I started thinking about doing a PhD and went all in on machine learning. So you could say it's been around 10 years since I started taking AI seriously. My oldest paper related to this stuff is from around 2012.
Q: That is intensely impressive! You worked with AI way before it became cool. What do you believe is the singular most significant technical advancement in AI which has contributed to its current mainstream adoption and the consequent job displacement threats?
A: Well, it was always cool, just not outside academia. I'd say the intersection of two orthogonal developments: the discovery of artificial neural network architectures such as the Transformer, which solved many of the scalability problems of previous architectures, and the invention of hardware that can run those specific architectures at scale super efficiently.
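(Author's note: for readers unfamiliar with the Transformer, its core building block is "scaled dot-product attention." The following is only a minimal NumPy sketch of that idea for illustration; it is not the professor's code and glosses over everything that makes real Transformers work at scale.)

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Toy version of the attention operation at the heart of the Transformer."""
    d_k = queries.shape[-1]
    # Score every query against every key, scaled to keep values in a stable range
    scores = queries @ keys.T / np.sqrt(d_k)
    # Softmax turns scores into attention weights that sum to 1 for each query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of the value vectors
    return weights @ values

# Tiny self-attention example: 3 tokens, embedding size 4
tokens = np.random.default_rng(0).normal(size=(3, 4))
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # (3, 4)
```

Notably, this operation is mostly matrix multiplication, which is exactly what modern accelerators (GPUs and TPUs) do efficiently at scale, hence the professor's point about the two developments meeting.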
Q: Fascinating! In your professional opinion as an educator and an AI researcher, what industries stand the risk of being replaced by AI?
A: I don't know if any industry will be replaced entirely but I'm sure there will be massive changes. In the long term, of course, no one can say anything. But in the short and mid-term (5-10 years), with what we're seeing with language models, my bet is that anyone whose job is predicated on the shallow filtering and processing of natural language will have some reckoning to do. This includes all sorts of managerial roles, including anyone whose job is to read emails, summarize, and build reports. Any kind of secretary who doesn't go beyond note-taking and task scheduling. Copywriters who work with templated content.
Basically, any content creation task below the level of actual human creativity will be cheaper to automate than paying a human stochastic parrot. So those will go away. One single copywriter using the ChatGPT of the near future will, hypothetically, be able to craft 3x to 10x more content with the same quality. Not because the model will give them the final quality they aim for, but because the model will give them 90% of the quality, and then real human creativity comes as the cherry on top and adds the final 10%. Education has to change considerably, too. We can talk more about that if you want.
Q: This is a really nice angle. You're an educator, and from your piece titled "Rethinking college education," you obviously know how change-averse educational institutions are. Do you think formal education can be depended upon for survival in the post-AI world?
A: Yeah, academia will adapt. It is the longest-living institution in Western civilization. It predates all our mainstream religions, and it has survived all major civilization changes. It will change substantially, as it has changed across the ages.
Q: How important is it for society to consider the ethical implications of AI and job displacement?
A: All technology has potential issues, and the more advanced the tech, the more pressing it is to consider them. AI is a very powerful technology with the potential to disrupt all our economic relationships. It is something at the level of an industrial revolution, so it will have massive implications, and the concern must be at the same level. One thing that is different from previous disruptive tech is that, historically, new tech has mostly automated the jobs that require the least cognitive skill; that is what happened with agriculture, manufacturing, mining, etc.
However, this time we are on the brink of replacing a large number of white-collar jobs while leaving lots of blue-collar jobs undisrupted. So we will have lots of people who are used to working in offices finding that an AI can do their job as well (or maybe slightly better) and much cheaper, so they will either have to upgrade their skills significantly or turn to less skilled jobs. There are other ethical considerations, too: there is a lot of potential for misuse of AI technologies for misinformation, fake news, social disruption, etc. I don't think we are prepared for a massive number of human-like chatbots taking over Twitter; it is already starting to happen.
There are also bias issues. As these systems become more and more pervasive, the harms can fall disproportionately on minorities, so not everyone will reap the benefits of AI to the same degree; some minority groups will bear the downsides more heavily than those outside them.
Q: So, in other words, we should pay attention because, unlike past forms of automation, AI has the potential to disrupt cognitively demanding jobs as well.
A: Yeah, especially those jobs. It will automate more white-collar jobs than blue-collar jobs, at least in the near term. That's something new, and society isn't used to dealing with that kind of job disruption. These are folks who went to college and more or less got convinced their jobs were safe, or at least safer than those of taxi drivers, pizza delivery boys, gardeners, you name it.
Q: This makes sense. Let's hit a little closer to home: do you believe that an increase in AI capability will ultimately lead to a decrease in overall employment for software engineers/developers?
A: In the very long term, all jobs will evolve in unpredictable ways, including software engineering and development. AI and technological advancements will transform these professions to the point where they may seem to have disappeared.
However, in the short to midterm, a decrease in software engineers is unlikely due to the increasing demand for software across various industries. This growing need for skilled professionals far surpasses the current number of trained individuals capable of building software.
The AI revolution will follow a pattern similar to previous technological breakthroughs in computer science, such as compilers, integrated development environments, cloud computing, containers, code completion, and IntelliSense. These innovations made programming more accessible to those without highly formal backgrounds and expanded opportunities for developers.
Over the next 20 years, we can expect an explosion of people entering the field of software development. Although job roles may change somewhat with evolving technology trends, there will likely be continued growth for those interested in learning how to program and write code.
Q: This is incredible, although the release of generative AI models such as GitHub Copilot and the GPT family of models has prompted (forgive my pun) rumours about the possibility of software developers losing their jobs to AI. What do you say about this?
A: Look at the numbers. All I'm seeing are more job ads for software developers. The trend is still climbing.
Q: Jeff Clune, an ex-OpenAI engineer, recently made a prediction at the AI Safety Debate conference: he stated that there was a 30 percent chance that AI will be capable of handling "50% of economically valuable work" by the year 2030. What would this mean for the overall developer labour market?
A: First, I have no idea how you would even wrap your head around what a 30% chance of automating 50% of jobs looks like. Is it a 15% expectation of losing your job?
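(Author's note: the naive reading the professor is poking at simply multiplies the two figures, which is where the "15% expectation" comes from. A rough illustration, not a forecast:)

```python
# Naive expected-value reading of Clune's prediction (illustrative only)
p_capability = 0.30   # stated chance the capability exists by 2030
share_of_work = 0.50  # share of "economically valuable work" it could handle
print(p_capability * share_of_work)  # 0.15, i.e. the "15% expectation" mentioned above
```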
Q: I guess the numbers do make for a confusing scene. But the essential point is: software developers have lots of reasons to be worried about their job security, and many of the tasks they currently spend lots of time on are being automated. The pace at which that occurs will accelerate.
A: Yeah, but the thing is, many of the tasks developers spend most of their time on are pretty low value, and we would be much better off if they were automated: debugging, writing tests, doing pesky code optimizations. As we automate all of that, we'll have more time for the really important parts of software development, which was never really about writing code.
Q: Could you speak more about those parts?
A: High-level design and architecture, user experience, human-computer interaction, and that's just about the software itself. Software engineering is really about the relationship between software and people: both the people who make software and the people who use it. So software skills are only half of the story. Understanding your users and colleagues is the other half.
Q: This is similar to how the core job of accountants is to communicate financial information and not create financial statements, fascinating! It's a fair bet to say AI capabilities will increase in 10-20 years. How prepared are we as a society & species to address the potential job displacement/loss brought on by the potential adoption of AI? How does this affect our sense of purpose as human beings?
A: Very hard to say, of course. We're in the middle of an industrial revolution at least as big as the microprocessor revolution or the internet revolution; no one in 1960 could imagine what 1980 would look like.
Society is never ready for change, by definition. That's what a system is, something that strives to maintain its status quo. But humans are the most adaptable social species out there, so I think we'll manage. Lots of people will suffer, and that's something we have to work on, definitely, but nothing apocalyptic in my opinion will happen.
Q: There has been a lot of talk about the dystopian potential of AI. Why do you say nothing apocalyptic will occur?
A: I still haven't seen any really compelling arguments for the doomsday scenario. Lots of the arguments seem to be predicated on reasoning like "we don't know how this is going to evolve so it will probably kill us all" and that's a classic logical fallacy: you're basically making an inference from lack of knowledge.
Q: This is true. But the AI alignment problem does seem plausible.
A: I think we will solve it, at least well enough to avoid apocalyptic scenarios. The most severe alignment issues require you to believe in a powerful version of the orthogonality thesis that I don't believe plausible.
Q: Fascinating! Going back to automation, how can we leverage AI to augment human work rather than replace it, and what industries are ripe for this kind of collaboration?
A: I think that's only natural, as we automate more and more of the menial cognitive work (e.g., summarizing documents or finding relevant references) we humans will get to work on the most creative parts of our jobs. Some jobs have very little of that to begin with, and there I see a challenge because maybe those will be completely or almost completely automated away. But most knowledge work has a creative side, the part where you actually do something novel.
As to which fields are ripe for this, I can't talk about much else but in education at least I think we're bound for a long-needed revolution. We professors no longer need to be gatekeepers of information. Instead of spending most of our time grading the same essays over and over, we can now focus on giving the best possible personal feedback to each student.
Q: What are the possible ways AI could revolutionise the educational system? Perhaps more teaching techniques adapted optimally to students.
A: There are a few easy ways and then some not-so-easy ones. The first is just a matter of increasing access to knowledge. For almost anything you want to learn, you can now find relevant information on the internet, at least to begin with, but it is often split across many sources with disparate levels of detail, contradictory claims, different linguistic styles, etc. The first relatively easy application is simply: here, take this bunch of sources on some topic and give me a high-level overview of the main takeaways, with links to dive deeper, etc. We are pretty close to that (barring the hallucinations, which are a significant problem).
Another way is by simply freeing educators from menial tasks to give them more time to focus on creating learning experiences. But by far the most important thing, I believe, is the potential for personalized learning. You could have an AI assistant and tell it "I want to learn how to make a rocket," and it could create a very detailed plan especially for you, based on what it already knows that you know. It would tell you: first watch this video, now take this short course, now read this chapter of this book, and so on, and guide you for 3 months to learn something very specific.
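(Author's note: as a concrete illustration of that last idea, here is a minimal sketch of asking a language model for such a plan using the openai Python client. The model name, prompt, and overall setup are my own assumptions for illustration, not something the professor prescribes, and a real personalized tutor would need far more than a single prompt.)

```python
# Hypothetical sketch: asking an LLM for a personalized study plan.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

goal = "I want to learn how to make a rocket"
background = "I know high-school physics and some basic Python."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a patient tutor who designs personalized study plans."},
        {"role": "user", "content": f"{goal}. My background: {background} "
                                    "Give me a 3-month, week-by-week plan with specific resources."},
    ],
)
print(response.choices[0].message.content)
```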
Q: This is truly promising! You make a solid case for humans adapting. You spoke about bias; is it fair to say large-scale AI adoption will disproportionately affect minorities? If yes, how can this be combated?
A: Yeah, definitely. Machine learning is by definition trained on the majority, so it will always hurt most those whose use case doesn't fit the majority for any reason. In particular, whenever you train models to predict human behaviour or interact with humans, they tend to work better for the subpopulations that are best represented in the data.
What can you do? Start by raising awareness of these issues and make sure to thoroughly test your models for bias. Be very careful about how you collect data: don't take the easy way and just crawl the web; make an effort to find good, high-quality, high-diversity sources of data.
But more than anything include diverse people with diverse points of view in your team. You can't solve a problem you can't see.
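(Author's note: one simple, concrete form of the bias testing the professor mentions is comparing an error metric across subgroups. The sketch below uses made-up data purely for illustration; real bias audits involve much more than this.)

```python
# Toy sketch: compare a model's accuracy across demographic groups.
from collections import defaultdict

# (group, true_label, predicted_label) triples, e.g. collected on a held-out test set
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in predictions:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in sorted(total):
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")
# A large gap between groups is a red flag worth investigating before deployment.
```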
Q: This makes me think: is there a possibility that access to these AI tools will be restricted to the financially capable?
A: I'm hoping the open-source community will make the tools available to all. We already have seen what having access to a free operating system, a free office suite, a free game engine, a free code editor, etc., does for the creative kids of the poorer parts of the world. I trust we will have open-source AI tools as good as commercial ones, the same way we have open-source dev tools as good as commercial ones.
Q: This seems really feasible. What advice would you give to your students to prepare them for the workforce in a post-AI world?
A: If you are already studying computer science, the basic advice is to focus on fundamentals, not just tools. Tools will change but the fundamentals will remain relevant for a long time. If studying something else, learn how AI can improve your productivity, and learn a lot about its limitations. Use it to make your own work better.
Q: This makes sense; fundamentals will stand the test of time. Thank you very much for your time, Professor Morffis. Any closing words?
A: The AI revolution is here. We can all be a part of it by learning to use this technology to do good and improve everyone's lives.
If you enjoyed this article, invest in the writer.