Globally, over 1.2 billion people adopted AI tools within three years of launch, making artificial intelligence the fastest-spreading technology in human history. Within three years of ChatGPT's public release in 2022, surveys showed that roughly 45% of Australian adults had used GenAI (40% in a US survey six months earlier), and 58% of adults globally in a KPMG survey use it regularly for work. AI's penetration rate has significantly eclipsed the historical adoption curves of both the personal computer, which took three years to reach 20% adoption, and the internet, which took two years to reach the same milestone.

However, in contrast to this rapid adoption, blog articles on how AI is going to destroy the job market present a starkly different, ominous view. For instance, one post opens with:

“The unemployment rate printed 10.2 per cent this morning, a 0.3 per cent upside surprise. The market sold off 2 per cent on the number, bringing the cumulative drawdown in the S&P to 38 per cent from its October 2026 highs.” (Citrini Research: THE 2028 GLOBAL INTELLIGENCE CRISIS)

This Citrini Research article had a significant impact on the stock market, including a 10% hit to Atlassian, which dropped alongside other SaaS stocks globally.

I believe, however, that the AI doom and gloom is overblown, or at least the timelines are. It is certainly a disruptive technology, just like the steam engine, the car (I am not going to call it “an automobile”), the PC, and Blockbuster Video. We can look to tangible historical examples that illustrate this slower, drawn-out process of change. Google’s Gemini has informed me that this is known as the Solow Paradox (Wikipedia calls it the Productivity Paradox): the Nobel laureate economist Robert Solow famously observed, “You can see the computer age everywhere but in the productivity statistics”. The paradox describes productivity growth lagging behind rapid technological advances.
Aviato Consulting works primarily with large enterprise customers deploying AI, so we have seen first-hand that the Solow Paradox exists. Economists have studied this, and to summarise a large body of research: the productivity gains only materialise when technological innovation is paired with deep institutional reform, training, and a rethinking of business processes. In my experience these take 5-10 years. The shift from on-premise servers, when companies would build or rent datacentres and then fill them with servers, to the cloud, where Google, AWS, and Microsoft rent you compute capacity, is still ongoing, and Google released its cloud in 2008. During this process the technology will often be seen as underperforming; in one study, “adopting banks experience a 428 basis point decline in ROE as they absorb GenAI integration costs” (The Innovation Tax: Generative AI Adoption, Productivity Paradox, and Systemic Risk in the U.S. Banking Sector).

Historical Timelines of Corporate Disruption

The argument that artificial intelligence will require between five and more than ten years to fundamentally replace human labour is heavily supported by historical data, which shows a range from four years to well over a decade before new innovations displace established legacy technology.

The Automation Paradox: Job Growth Preceding Decline

When projecting the timeline for artificial intelligence to replace white-collar workers, we must account for the “Jevons Paradox”, which can be summarised as: a labour-saving technology in a profession leads to an increase in employment within that sector, because the technology lowers the cost of the service and thereby increases demand. The ATM in the 1970s is a great example. At the time, analysts predicted that the ability of machines to autonomously dispense cash and accept deposits at any hour of the day or night would decimate the human bank teller profession. In the 1970s and 1980s, ATMs became a staple service.
The catastrophic job losses predicted, however, did not materialise. Because ATMs significantly reduced the operational cost of running a physical bank branch, financial institutions opened more branches to capture greater market share, and the total number of bank tellers in the US doubled, rising from approximately 300,000 in 1970 to nearly 600,000 by 2010. The nature of the teller’s job changed: the low-value deposit and withdrawal work disappeared, and the role moved up into customer relationships (and, somewhat annoyingly, always trying to upsell me a mortgage or credit card). The decline in teller numbers did eventually arrive once online banking became prominent, with numbers dropping 30% between 2010 and 2024.

Structural Friction in Enterprise AI Adoption

While generative AI can write code, review legal contracts, and generate marketing copy in seconds, integrating these capabilities into the rigid, complex architecture of the ASX300 and Fortune 1000 companies that Aviato Consulting works with introduces immense friction: their governance, their security, and the way these companies are structured. The words “governance” and “security” have stopped many an IT project in its tracks; these teams create a chasm between consumer utility and enterprise scalability. Pushing a project that you can complete on a Mac Mini with Open Claw into an enterprise turns it into a multi-year timeline for the ASX300, subsequently delaying white-collar displacement. According to 2024 research by Accenture on enterprise operations maturity, a staggering 61% of corporate executives reported that their data assets were “not ready for generative AI”. Further, 70% of companies found it exceedingly difficult to scale AI projects that relied on proprietary, unstructured data, which remains largely ungoverned in most organisations.
Establishing a centralised data governance architecture, cleaning decades of historical data, and migrating to cloud systems are absolute prerequisites for deploying autonomous AI agents (see my point about the shift to cloud above). Until that work is done, AI remains trapped in “pilot purgatory” for the foreseeable future. A recent Gartner report showed a very top-heavy AI maturity funnel:

Conclusion

The rapid consumer adoption of generative AI is undeniably unprecedented, but conflating it with immediate wholesale job displacement ignores centuries of technological history. As evidenced by everything from the transition from horses to cars to
The 4-Layer Architecture of AI Systems
The word “agent” gets thrown around a lot right now. If you string two API calls together, someone is going to call it an autonomous AI agent. But if you’ve actually tried to build a system that you can run in production, and get real work done without constant hand-holding, you know this is not going to cut it. Building production-ready agentic workflows requires a specific architecture. Over many customer engagements, Aviato has found it easiest to think about this stack in four distinct layers, plus the underlying plumbing that keeps it all from exploding in production. Here’s a practical look at how the modern agentic stack is actually built, and what you need to productionise AI systems.

Layer 1: Large Language Models (LLMs)

At the absolute bottom of the stack sits your foundation model. This is where you’re dealing with the raw mechanics: pinging APIs, handling tokenisation, tweaking inference parameters, and prompt engineering. You give it instructions, and it responds. On its own, it doesn’t care about your long-term objectives, it forgets what happened five minutes ago, and it definitely can’t orchestrate a complex, multi-step workflow. To get that, you have to move up the stack.

Layer 2: Agents

This is where we take a reactive model and actually turn it into an agent. We’re wrapping the LLM in code that gives it persistence, structure, and a goal. Instead of just answering a question, a Layer 2 agent can actually pursue an objective. To make that happen, you have to bolt on a few things. Unlike a standalone model, a Layer 2 agent acts, looks at the intermediate result of that action, and adapts its next move based on what just happened.

Layer 3: Multi-Agent Systems

Eventually, you’re going to give a single agent a task that’s simply too big. The context window is exhausted, it loses focus, and the whole thing falls apart. That’s when you need to bring in a multi-agent system.
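Before going further, the Layer 2 act-observe-adapt loop is worth seeing in code. This is a minimal sketch, not a production implementation: `fake_llm` is a hypothetical stand-in for a real model call, and the tool names are illustrative.

```python
# Minimal sketch of a Layer 2 agent loop: act, observe, adapt.
# `fake_llm` is a stand-in for a real LLM call (hypothetical, hard-coded policy).

def fake_llm(goal: str, observations: list[str]) -> str:
    """Stand-in policy: a real agent would prompt an LLM with the goal
    and the observations so far, and parse its chosen action."""
    if not observations:
        return "search"          # nothing known yet: gather information first
    if "results found" in observations[-1]:
        return "summarise"       # we have raw material, move to the next step
    return "finish"              # objective met, stop the loop

# Illustrative tools the agent is allowed to call.
TOOLS = {
    "search": lambda: "results found: 3 documents",
    "summarise": lambda: "summary written",
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Pursue an objective: act, record the intermediate result,
    and let the next decision depend on what just happened."""
    observations: list[str] = []
    for _ in range(max_steps):
        action = fake_llm(goal, observations)
        if action == "finish" or action not in TOOLS:
            break
        observations.append(TOOLS[action]())   # act, then observe the result
    return observations

print(run_agent("research GenAI adoption"))
```

The point of the sketch is the shape of the loop, not the stub logic: swap `fake_llm` for a real model call and `TOOLS` for real integrations, and the control flow stays the same.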
Instead of writing one prompt to rule them all, you build a distributed team of specialist sub-agents. This layer handles the collaboration between them. By splitting up the work, the whole system becomes drastically faster, more robust, and far less prone to hallucinating under pressure.

Layer 4: Agentic Ecosystems

When you have a bunch of specialised agents running around asynchronously, things turn into chaos fast. Without structured orchestration, a multi-agent setup is just a cool local demo. With it, you get a scalable, reliable system that can actually survive real-world constraints. The pieces a reliable production system needs are not sexy, but they are how you ensure accountability, mitigate failure modes, and actually preserve trust in the automated decisions your software is making.

Aviato has run a number of PoCs; our current offering is 6 weeks and 80k AUD to prove an agentic system can meet your needs. Moving these to production requires a team (or Aviato SREs) to manage them, and a lot of additional thought.
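The Layer 3 idea of fanning a task out to specialist sub-agents can be sketched in a few lines. Everything here is an illustrative assumption: the agent names, the splitting rule, and the stubbed "specialist" bodies, which in a real system would each be an LLM call with a role-specific prompt.

```python
# Sketch of Layer 3: split one oversized task across specialist sub-agents
# so no single context window has to hold the whole job.

from typing import Callable

def make_specialist(name: str) -> Callable[[str], str]:
    """Factory for a specialist sub-agent. A real one would wrap an LLM
    with its own role prompt, tools, and context."""
    def agent(chunk: str) -> str:
        return f"{name} handled: {chunk}"
    return agent

def orchestrate(task_chunks: list[str]) -> list[str]:
    """Hand each slice of the task to its own specialist and collect results."""
    specialists = [make_specialist(f"agent-{i}") for i in range(len(task_chunks))]
    return [agent(chunk) for agent, chunk in zip(specialists, task_chunks)]

print(orchestrate(["parse contract", "check clauses", "draft summary"]))
```

In production the chunks would usually run concurrently and a final agent would merge the partial results, but the fan-out structure is the part that matters.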
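As a concrete taste of the Layer 4 plumbing, here is a minimal sketch of two of the unglamorous pieces: retries around a flaky agent step, and an audit trail of every attempt. The function names and the in-memory log are illustrative assumptions; production systems would use durable storage and exponential backoff.

```python
# Sketch of Layer 4 reliability plumbing: retries plus an audit trail,
# so every automated decision leaves an accountable record.

import time
from typing import Callable

AUDIT_LOG: list[dict] = []   # in production: durable, queryable storage

def with_retries(step_name: str, fn: Callable[[], str], attempts: int = 3) -> str:
    """Run one agent step, retrying on failure and logging every attempt."""
    for attempt in range(1, attempts + 1):
        try:
            result = fn()
            AUDIT_LOG.append({"step": step_name, "attempt": attempt, "ok": True})
            return result
        except Exception as exc:
            AUDIT_LOG.append({"step": step_name, "attempt": attempt,
                              "ok": False, "error": str(exc)})
            if attempt == attempts:
                raise                # out of retries: surface the failure
            time.sleep(0)            # real code would back off exponentially

# A stand-in for an unreliable agent step that succeeds on the third try.
calls = {"n": 0}
def flaky_step() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

print(with_retries("flaky_step", flaky_step))
print(len(AUDIT_LOG))
```

None of this touches a model; that is the point. The ecosystem layer is mostly ordinary distributed-systems hygiene applied to agents.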