I had an experience this weekend that was simultaneously humbling, thought-provoking, and exciting.
The short version:
I completely replaced months of engineering work in literally two hours by swapping from a system that wasn't AI-compatible to one that was.
I was gobsmacked.
Easily over a 100:1 difference in productivity.
Companies, systems, and processes that work well with AI will have a substantial competitive advantage over those that don't.
The longer version:
To set some context, one of my favorite books in recent years is Antifragile by Nassim Taleb.
Antifragile refers to the property of a system or entity to not only withstand stress, shocks, and volatility but actually to benefit from it, becoming more robust and more resilient as a result. In contrast, fragile systems break down or weaken under stress. In the extreme, even the slightest error can cause catastrophic problems.
As a simple example, think of building a toy house out of Legos versus a deck of playing cards. The playing-card version is much more prone to falling apart--and falling apart dramatically! A robust Lego house can withstand lots of torment from kids and pets!
Fragility versus antifragility shows up nearly everywhere. In business, companies with diverse revenue streams (like Microsoft) tend to be more resilient than companies dominated by a single revenue stream (Google and Facebook are current examples). In governmental systems, dictatorships are very fragile, whereas democratic institutions are much more resilient to change. The United States has largely prospered over the last century, even though Republicans lament the years that Democrats are in charge and Democrats lament the years that Republicans are in charge!
This contrast between fragility and antifragility is particularly relevant for technology systems. We have all used software or tech gadgets that are just flaky and unreliable. Every now and then, we come across a product that works even in the presence of a lot of failures and issues--the Internet itself is arguably a fabulous example of an antifragile technology.
Of course, even brand-new technologies can be affected by fragility and antifragility. As many of you know, at my company Polyverse we are gearing up to release version 3 of our technology: Polyverse Boost. Boost provides automatic application modernization--simply take your legacy C/C++/C#/Fortran/COBOL/etc. program and run it through our system. In minutes, we'll enhance that program with modern cybersecurity, IP protection, and application monitoring--all automatically.
Behind the scenes is a lot of sophisticated technology, including the ability to encrypt programs and execute them while they remain encrypted. We also integrate extensively with Web3 technologies such as NFTs and WebAssembly.
For the past few months, I've been working on integrating one of those Web3 technologies into our new product. It was challenging, to say the least. The code base was quite brittle and would frequently break. For my technical readers, it's a system with over 300k lines of Rust code. Normally, a Rust code base is a good thing. Rust, generally speaking, is a brilliant improvement over C/C++ for systems-level code. But Rust has a severe 'dependency hell' problem, particularly around version constraints and automatic type inference. For my non-technical readers, a simple analogy is to think of houses across the country. With houses, if you say you want to remodel a bathroom in Chicago, that remodel is contained to the house in Chicago. The remodel may go well or go poorly, but either way, the impact is localized to that house in Chicago.
But in a fragile, poorly designed software system, remodeling a bathroom in Chicago can cause flooding in every house in Florida and the roofs to collapse for all the California houses. While this may seem extreme, it's unfortunately very true for poorly designed software systems--like the one I was trying to use!
Not surprisingly, I hit another 'flooding in Florida'-style problem this past weekend. After two months of slogging through issue after issue, I decided to swap to an entirely different implementation of that particular Web3 technology--in this case, the Wasmer WebAssembly runtime.
From start to finish, it was two hours of work.
Thus the crazy mix of emotions. I was frustrated at all the time I had "wasted"--in a sense, I would have been a lot further along if I had chosen Wasmer in the first place. I was frustrated with myself for not pulling the plug on the brittle tech earlier. And I was excited to have found a viable and much more durable technology for integration into Polyverse. That's going to be a great win for our customers.
But it also got me thinking. I've been doing software long enough to know that no system is perfect. It's common to see the so-called religious flame wars of technology A versus technology B. Oh, Linux is so much better than Windows. No, Windows is better! No, Mac is better still.
Yawn. Everything has a set of tradeoffs, and there is no such thing (at least for now) as a perfect technology. At least historically, the choice of technology A versus technology B tended not to matter a great deal in the grand scheme of things as long as both A and B solved the problem at hand.
Now I am starting to reconsider that mindset, thanks to the productivity gains of AI tools (see my other articles). The unspoken assumption in that "it doesn't matter" mindset is that complex software systems are just that--complex, and they will take time and effort to work with. The cost driver is complexity and the amount of human time needed, not the specific technical features of A versus B.
But a 100:1 productivity difference! What happened in this case was a combination of two things. First, while also written in Rust, the Wasmer code base is better architected and less fragile than the one I had been working with.
But more importantly, it was AI-friendly. We used AI to write much of the integration code for our Boost features using Wasmer (and the earlier technology). Just as you can sometimes tell that an AI wrote an essay (https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/), code written by AI also has detectable patterns to it. The AI, essentially, prefers technology A to technology B, and so the answers it gives for working with A are better and more accurate than the answers for technology B.
The Wasmer product benefited from this. Integration was literally copy-and-paste from the old code to the Wasmer code. It just worked. The APIs and coding patterns suggested by the AI were more compatible with Wasmer.
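To give a flavor of what that integration surface looks like, here is a minimal sketch of embedding Wasmer from Rust--not our actual Boost code, just a hypothetical example with an illustrative module and function name, written against the shape of the Wasmer 3.x-style embedding API (details vary a bit between releases):

```rust
use wasmer::{imports, Instance, Module, Store, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A tiny WebAssembly module (in text format) exporting an `add_one` function.
    // In a real integration this would be a compiled application, not inline WAT.
    let module_wat = r#"
        (module
          (func $add_one (export "add_one") (param i32) (result i32)
            local.get 0
            i32.const 1
            i32.add))
    "#;

    // Create a store, compile the module, and instantiate it with no imports.
    let mut store = Store::default();
    let module = Module::new(&store, module_wat)?;
    let instance = Instance::new(&mut store, &module, &imports! {})?;

    // Look up the exported function and call it through the runtime.
    let add_one = instance.exports.get_function("add_one")?;
    let result = add_one.call(&mut store, &[Value::I32(41)])?;
    println!("add_one(41) = {:?}", result[0]); // expected: I32(42)

    Ok(())
}
```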
Across the broader Polyverse technology stack, we work with a wide range of technologies--from Rust to Python to Solidity to Swift to low-level assembly. While AI tools work on all of these, Python is the hands-down winner. Python code from the AI tends to be more accurate and more robust than Rust code generated by the AI (for Rust, the AI tends to hallucinate too much and make up stuff that doesn't exist). Similarly, Rust AI code is more accurate and robust than low-level assembly.
Unambiguously, I'm now completely favoring languages and technologies that work well with AI. With a 100:1 productivity improvement, there is just no question.
Suppose we project forward a few years and assume this trend continues. This "AI bias" would have a dramatic impact on the industry.
We could see a pretty rapid consolidation to a smaller set of programming languages and other technologies. It's the argument above--before, the differences between technology A and technology B were relatively minor. Now, however, the difference can be orders of magnitude.
New companies and technologies will struggle if they can't find a way to be included in the AI tools. As a simple example, in another project, I'm trying to integrate Siri into an iPhone app. Unfortunately, Apple substantially updated the Siri APIs after 2021 (sadly, with only minimal documentation). Thus, ChatGPT and tools built on ChatGPT do not understand the new model and will give you bad advice and code suggestions. Anybody introducing something new, be it from a large or small company, will face similar challenges.
Of course, this assumes that the AI does not dramatically increase its capabilities and lose some of its bias. That will undoubtedly happen, but my prediction is that support for popular technologies will still improve faster and more meaningfully than support for less popular ones. At heart, AI gets better with more data and better data. This need for good data effectively yields a bias towards popularity.
Thus far, we've discussed relatively benign topics like technology choices. The problem of bias in AI itself, and similarly the bias introduced by AI-driven productivity gains, could have a much bigger societal impact. This is a topic for another day, but what happens if the AI, say, demonstrates a bias towards one political party versus another? What happens when hiring, loan applications, or insurance decisions are driven by AI tools, and those tools bias towards one social group over another? Conversely, on the optimistic side, AI technologies could potentially be used to help address some of the inequities in the world today.
As a society, we are going to have to grapple with and solve these issues. In the meantime, on the technology front, I've adjusted my own thinking and planning to favor AI tools. The productivity wins are just too great!