Software Development is Dead, Long Live Developers!
“If you don’t learn to code, in ten years it will be like being illiterate!”
That was what someone exclaimed on a panel discussion I was on in 2013. It was a talk about bringing technology and entertainment together held in Beverly Hills and hosted by a coding bootcamp company. Two of the people on the panel were from a different bootcamp company, and then there was me, an actual technologist working in entertainment. Surely I would agree that everyone needs to learn to code, right?
Actually, I’ll disagree a bit with the other panelists on this. Do I think everyone should try coding at some point in their education? Yes. Similar to art, music, and writing, everyone, but especially children, should be exposed to coding to see if they enjoy it or have a talent for it. But do I think that everyone should be coding all the time? No. Would I equate it to illiteracy? No. Why? Because coding is hard! Sure, you have to learn a programming language’s syntax, how to run the code, and all of the other mechanics of development, but what’s most difficult is that you have to think through problems in a very structured, logical way. Not everyone is wired to think like that, in the same way that not everyone can find the beat in a song, or can draw a cat in a way that other people would actually recognize as a cat.
The good news is that you don’t need to learn to code to use technology, in the same way that you don’t need to know how to take apart a carburetor to drive your car. Technology will come to you as it matures and society accepts it. If you have an interest in or a goal of learning to code or anything else technical, I strongly encourage you to do the work and hopefully find success, but you don’t need any of that to use Facebook or make a spreadsheet in Excel, and that’s not going to change.
They didn’t invite me back for any more of their panels.
Here we are now in 2025, over ten years since I sat on that panel, and not only do I not get the sense that anyone feels particularly illiterate if they can’t scrape some Python together, but the narrative has done a full 180! Don’t bother learning to code! AI will code all the things!
In a recent interview with Joe Rogan, Meta CEO Mark Zuckerberg said that AI will replace mid-level engineers in 2025. He believes AI can take over coding tasks, allowing human engineers to focus on higher-level problem-solving and creativity. Other tech giants like Google have also started integrating AI into their coding processes. While the transition to AI-generated code is initially costly, companies hope it will become more efficient over time. This shift may reduce the demand for mid-level coding roles, pushing software engineers toward more strategic responsibilities.
Yet again, I’m about to be the wet blanket. Only the hype has changed.
Just as I did in 2013, I will start with some common ground. Yes, AI/LLMs are helpful for developers and absolutely can make a developer more efficient. At the moment I am coding a lot, as I am in a startup CTO role, and I use GitHub’s Copilot and have experimented with some of the other models. The experience has been largely quite good. I absolutely see a return on the investment of $10/month for Copilot plus a few extra dollars in API requests every month. I will also agree that people who can’t code have been able to use LLMs to generate things they wouldn’t have been able to otherwise. Seeing people dream up and create scripts for document automations, Excel macros, or Google Docs scripts is genuinely exciting and potentially a huge gain in productivity for those people. The problem is that none of the things I just mentioned fully describes what software developers do.
Developers understand and solve problems. The tech changes all the time, but the problem-solving doesn’t. People used to write assembly code, then higher-level languages came that were more approachable and allowed for more efficient development, but the result wasn’t fewer developers, it was more developers. Way more! Software ate the world, and the process of translating a problem into something that can be solved by new code was an increasingly critical skill. Writing a Python script to reformat a CSV, make a blog, or scrape content off the internet is a great demo, but most developers aren’t making little scripts; they are working on huge advanced systems that integrate into other huge advanced systems. Even if LLMs can someday create perfect code, the requirement to understand the problem enough to describe the solution you need remains a steep climb. Then there’s the matter of distributing the resulting code, which leads to potentially big questions about scale! It goes on and on and on…
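To make that contrast concrete, here is a minimal sketch of the kind of one-off script an LLM can generate convincingly today. The file and column names are hypothetical, chosen purely for illustration:

```python
import csv

# Read a CSV whose header uses "E-mail", rename that column to "email",
# and write the cleaned-up result to a new file.
with open("contacts.csv", newline="") as src, \
        open("contacts_clean.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    fieldnames = ["email" if name == "E-mail" else name for name in reader.fieldnames]
    writer = csv.DictWriter(dst, fieldnames=fieldnames)
    writer.writeheader()
    for row in reader:
        row["email"] = row.pop("E-mail")  # carry the value over to the renamed column
        writer.writerow(row)
```

Handy? Sure. But it’s one file, one format, and zero integration with anything else, which is exactly why it demos so well.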
Sexy AI demos from a guy sitting behind a MacBook and talking a little more slowly than you’d prefer make for a fun LinkedIn post filled with emoji, but they’re a far cry from the robot developer army Zuckerberg is pitching to Rogan, and even OpenAI agrees.
In a new paper, the company’s researchers found that even frontier models, or the most advanced and boundary-pushing AI systems, “are still unable to solve the majority” of coding tasks.
The researchers used a newly developed benchmark called SWE-Lancer, built on more than 1,400 software engineering tasks from the freelancer site Upwork. Using the benchmark, OpenAI put three large language models (LLMs) – its own o1 reasoning model and flagship GPT-4o, as well as Anthropic’s Claude 3.5 Sonnet – to the test.
Specifically, the new benchmark evaluated how well the LLMs performed on two types of tasks from Upwork: individual tasks, which involved resolving bugs and implementing fixes, and management tasks, which saw the models zoom out and make higher-level decisions. (The models weren’t allowed to access the internet, meaning they couldn’t just crib similar answers that’d been posted online.)
Though all three LLMs were often able to operate “far faster than a human would,” the paper notes, they also failed to grasp how widespread bugs were or to understand their context, “leading to solutions that are incorrect or insufficiently comprehensive.”
Yes, the models will continue to advance. Claude 3.7 in particular is making some noise as of this writing, but if you can focus your eyes and see through the hype, this will start to look familiar.
What a developer has done throughout a given day has been under constant change since the beginning. We’re not punching little holes in carefully ordered cards anymore. We’re not writing machine code by hand anymore, and soon maybe we won’t need to type quite as much as we do today, but none of that was ever really the job. The job of a developer is to understand how things work, diagnose problems, and use the best tools available to solve those problems. That’s not going away.
Software development as you knew it is dead. It is changing into something else, just like it always has. Long live software development!