AI Will Eat Tech
There is a famous saying that software is eating the world, but in this article I will argue that AI will eat tech first. I am using the broad term “tech” because this covers not just coding but also the design, deployment, scalability and security of enterprise-level systems. Now, some of you will say that AI is software, but it actually isn’t: it is huge sets of data that underpin the capabilities, often in ways that are not fully understood.
The article that prompted me to write this is The End of Programming from CACM. Many people like me, who have been around long enough in the industry, saw this coming before GitHub released Copilot. But seeing the capabilities of ChatGPT recently really emphasised that a big shift is coming a lot faster than many realise. The article does a really good job of articulating this.
But it started me thinking. Yes, of course, AI could be writing individual programs that go into production with minimal supervision (perhaps another AI reviewing pull requests) as part of larger development efforts, and I believe that will become widespread within five to ten years. However, it’s one thing to write a program and quite another to design, build, test, deploy, scale and secure an enterprise system from a couple of sentences in a chatbot window. Is that in our future too? If it is, how far away might it be, and what do the barriers look like?
If we start with the design stage, there are already low-code and no-code platforms that make the design and build of applications easier. Many companies today use Figma, and in 2021 AWS launched Amplify Studio, which can create front-end-ready code from Figma designs. There are now many others in this space, including Zecoda and Teleport. We have also seen that AI can create art, so it’s not unreasonable to anticipate that designing interfaces from descriptive sentences (what sort of UI you want and what the inputs and outputs are) is not far away. It might even be here now and I’ve just not seen it yet.
The build stage is where things are currently moving at pace. Assuming we have designs and front-end code, it should be possible in the future to feed them into another service that builds the backend services needed to receive, process, store and respond to data. (Remember, in this example we are exploring a pure AI solution; in fact, Amplify Studio and a number of other services already provide a full stack where users can “point and click” the pieces together.) In this new world, the lines between build and deployment blur, because this is about designing infrastructure appropriate to the environment. Here, the environments will be very simple: a staging one for humans to review what the AI is going to deploy, and then production. But as we cross over into the realm of architecture and DevOps, we hit our first obstacle, because there could be many different ways to architect those backend services and to integrate glue services such as asynchronous queues and message passing.
Today, microservices and serverless technologies are common design patterns at many tech companies. An AI that can do this effectively would need to understand how other companies have built the same or similar solutions, because this is about designing and deploying the infrastructure that can host the most suitable solution. To learn that, we need a language we can train models on. The good news is that this has existed for well over a decade, thanks to Infrastructure as Code (IaC) tools like CloudFormation, the Serverless Framework and Terraform. So it’s not unreasonable to expect a future where you describe what you want to build and deploy in natural language, and your cloud provider is instructed what to do via AI-written IaC. DevOps won’t exist as its own discipline; it won’t be needed. There might well be distinct components of the AI acting as site reliability “engineers” to ensure the end product meets the SLA level being paid for, but the underlying services deployed to suit the environment will be mostly invisible to the human overseer.
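To make the AI-written IaC idea concrete, here is a minimal Python sketch of the kind of output such a pipeline might produce: a structured “intent” (the sort of thing an AI might extract from a natural-language request) rendered as Terraform-style JSON. The intent schema, resource names and runtime default are all invented for this illustration, not any real product’s API.

```python
import json

def render_iac(intent: dict) -> str:
    """Render a hypothetical deployment intent as Terraform-style JSON."""
    resources = {}
    for svc in intent["services"]:
        # One serverless function per backend service named in the intent.
        resources[f"aws_lambda_function.{svc}"] = {
            "function_name": f"{intent['app']}-{svc}",
            "runtime": intent.get("runtime", "python3.12"),
        }
    if intent.get("queue"):
        # Glue service: an asynchronous queue between the services.
        resources["aws_sqs_queue.glue"] = {"name": f"{intent['app']}-queue"}
    return json.dumps({"resource": resources}, indent=2)

# Hypothetical intent an AI might derive from "build me an order system
# with an ingest service, a billing service, and a queue between them":
intent = {"app": "orders", "services": ["ingest", "billing"], "queue": True}
print(render_iac(intent))
```

The point is only that the target representation is plain, machine-writable text, which is exactly what makes it trainable.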
Some of you might be thinking: why did I put “test” in there? Surely AI-written code will be bug-free? The problem is that, firstly, the AI writing the code may well introduce bugs simply because the model itself has flaws. Even where its own output is clean, it could still incorporate bugs by using a library that was written wholly or partly by humans. There is also the problem of integration testing, that is, ensuring the responses sent and returned between services are what is expected, which might change due to security patches, underlying service updates and so on. There are already products like Functionize that help write end-to-end tests using AI, and Machinet that writes unit tests, so AI generating and running unattended test automation is within reach. The challenge with code has always been that exhaustively testing all possible paths becomes mathematically infeasible once you have thousands of lines of code; however, AI might have the advantage of covering a vastly larger percentage than was previously possible, simply due to the underlying power of the cloud.
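The path-explosion point can be made concrete, along with one technique that unattended test automation could lean on: differential testing, where a candidate implementation is checked against a trusted reference on many random inputs. Everything below is an illustrative Python sketch; the two clamp functions are invented examples, not any product’s code.

```python
import random

# A function with k independent branches has up to 2**k execution paths,
# so exhaustive path testing quickly becomes infeasible.
def path_count(branches: int) -> int:
    return 2 ** branches

assert path_count(10) == 1024       # already a thousand paths
assert path_count(40) > 10 ** 12    # far beyond any hand-written suite

# Differential testing: compare a candidate against a reference.
def reference_clamp(x, lo, hi):
    return max(lo, min(x, hi))

def candidate_clamp(x, lo, hi):
    if x < lo:
        return lo
    return hi if x > hi else x

rng = random.Random(0)  # seeded for reproducibility
for _ in range(10_000):
    lo, hi = sorted(rng.randint(-100, 100) for _ in range(2))
    x = rng.randint(-200, 200)
    assert candidate_clamp(x, lo, hi) == reference_clamp(x, lo, hi)
```

Cloud-scale compute makes running millions of such comparisons cheap, which is where an AI tester could outstrip human-written suites.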
Scaling might well be the easiest of these steps if we decide that the infrastructure we build on will be serverless (if you don’t know what serverless is, you can read my primer here). The promise is that you don’t pay if you don’t use it and you have effectively infinite scalability if you do; capacity planning is no longer a concern. Also, I suspect that all the big cloud providers already use AI services to predict and optimise processor and network capacity within and around their data centres.
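A back-of-the-envelope Python sketch of that pay-per-use promise. The per-request and per-GB-second rates below are purely illustrative, not any provider’s actual pricing; the point is that cost tracks usage linearly, so a quiet month and a viral month run on the same deployment with no capacity re-planning.

```python
# Hypothetical, illustrative rates (NOT real pricing):
PRICE_PER_MILLION_REQUESTS = 0.20   # dollars per million invocations
PRICE_PER_GB_SECOND = 0.0000167     # dollars per GB-second of compute

def monthly_cost(requests: int, avg_seconds: float, memory_gb: float) -> float:
    """Cost of a pay-per-use function: request charge + compute charge."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_seconds * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Same code, wildly different load, zero idle cost in between:
print(f"10k requests:   ${monthly_cost(10_000, 0.1, 0.5):.2f}")
print(f"100M requests:  ${monthly_cost(100_000_000, 0.1, 0.5):,.2f}")
```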
Security is an interesting one. There are already tools that can spot suspicious behaviour, and AI is very well suited to trawling large amounts of data looking for patterns. Since AI has built and deployed the system in our example, I am going to assume it will also monitor resources such as the CVE database and OWASP advisories to automatically and rapidly patch vulnerabilities, probably far faster than a human could. But there will be a dark side, because AI will also allow the bad guys to build better tools for finding vulnerabilities, and we will likely see a security AI arms race. To illustrate the complexity here, a report came out as I wrote this showing that human code written with AI assistance is less secure. Granted, in this article I am proposing little or no human oversight of the code produced, but still.
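A minimal Python sketch of the automated-patching idea: compare deployed package versions against a feed of advisories and flag anything still behind a fix. The advisory format, package names and version tuples here are all invented for illustration; a real system would consume the actual CVE/NVD feeds and parse proper version schemes.

```python
def needs_patch(deployed: dict, advisories: list) -> list:
    """Return (package, advisory_id) pairs where the deployed version
    predates the fixed version. Versions are tuples for the sketch,
    compared lexicographically; real tools parse semver properly."""
    flagged = []
    for adv in advisories:
        version = deployed.get(adv["package"])
        if version is not None and version < adv["fixed_in"]:
            flagged.append((adv["package"], adv["id"]))
    return flagged

# Invented example data:
deployed = {"libfoo": (1, 2, 0), "libbar": (3, 1, 4)}
advisories = [
    {"id": "CVE-2023-0001", "package": "libfoo", "fixed_in": (1, 2, 5)},
    {"id": "CVE-2023-0002", "package": "libbar", "fixed_in": (3, 0, 0)},
]
print(needs_patch(deployed, advisories))  # only libfoo is behind its fix
```

An AI operator running this loop continuously, then regenerating and redeploying the affected IaC, is the rapid-patching scenario described above.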
I believe that, in the next ten years, each of these areas will see significant advances in the application of AI, some more than others. But in this grand vision, what we REALLY want is one chat service where we can describe exactly what we want from a system and have it go away and do everything in one go:
“Build and launch an application for me like Twitter on the domain twatter.com. The application should be available in three languages: English, Mandarin and Spanish. It should be built to scale from thousands to millions of users per day, with a peak of around 100k users per hour. Availability should be designed for three nines, and if my budget for hosting exceeds $500/day it should warn me via SMS.”
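For illustration, here is roughly the structured spec such a chat service would need to extract from that prompt. It is hand-written below; in the grand vision an AI would produce it. As a bonus, “three nines” translates directly into an allowed-downtime budget.

```python
# Hand-written stand-in for what an AI would extract from the prompt above:
spec = {
    "domain": "twatter.com",
    "languages": ["en", "zh", "es"],
    "peak_users_per_hour": 100_000,
    "availability": 0.999,            # "three nines"
    "budget_alert_usd_per_day": 500,
    "alert_channel": "sms",
}

def downtime_hours_per_year(availability: float) -> float:
    """Allowed downtime implied by an availability target."""
    return (1 - availability) * 365 * 24

# Three nines allows roughly 8.8 hours of downtime per year.
print(round(downtime_hours_per_year(spec["availability"]), 1))
```

Everything downstream (which services, which regions, which queues) is the hard part the overarching model would have to fill in.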
For this to become reality, we need some kind of overarching model that understands how enterprise-level systems are built and can learn, from thousands of examples, how the pieces work together in different scenarios. Someone needs to bring the services described above together in one place and let humans bolt them together first. That will happen, and because of the scale and reach required, it will be one of the big cloud providers that does it. My prediction is that it is twenty years away.
In my career, there have been two paradigm shifts. The first was the Web and the second was cloud computing. There have been many other ground-breaking changes, but these two were seismic. I believe AI for tech will be the third, and it will eclipse even them.