The Rise of Agents: Where AI goes from here
“In the distance, Miles saw the end of man as the Apex, and the Rise of AI in its place, a bold new universe with boundless exciting and frightening possibilities.”
Our current sensibility for thinking about AI is shaped by the “generative” phase we are in, where we tell the AI to create something and it does as we ask.
On the fly, the AI creates images, composes songs, writes code, crafts recipes, and edits text, and these are just some of the generative AI capabilities that are emerging.
But it’s the next phase, the Agentic Phase, when a new form of AI – autonomous agents – will emerge and change everything.
In software system terms, what makes an agent an “Agent” is that it can operate with Independent Thought and Independent Action, and do so WITHOUT human direction.
This is mostly magical stuff: if you’ve experienced Waymo’s autonomous taxi service, you know firsthand that there is a before/after “holy shit” dynamic.
It doesn’t seem real, but not only is it real, it’s a really nice, evolutionary experience.
In the not-so-distant future, there will be 50+ Waymo-style intelligent automation applications across a number of industry verticals, the experience of which will sharpen our mental models for thinking about:
AI's foundational use cases
How quickly such innovations can go from science fiction to rocket launch and then to ubiquity
In doing so, we’ll come to understand why the Rise of Agents represents a hierarchical shift in terms of humankind as Masters of the Universe.
When Agents are Unleashed in the Wild
Consider two very different types of agents that will become endemic within FIVE years.
Think of the Marketing Assistant Bot, whose job is to understand your business, its operations, its economics, and the industry it operates within, in terms of market dynamics, competition and customer needs.
As an autonomous software bot that exists virtually in the cloud, it will tirelessly create your marketing collateral, manage your outbound communications, cultivate your web presence, and provide customer service and technical support.
It will be able to see, hear, speak, write, think and create broadly and deeply.
Such bots will excel at design, orchestration and oversight.
They also will be adaptive, readily augmenting their capabilities based on the needs of users and industry best practices, enabling them to expand their coverage role by role, department by department, enterprise by enterprise, and industry by industry.
Now, consider something entirely different.
There will exist Predator Bots: agents that play to human frailty by autonomously monitoring online services, chat and email; engaging strangers and profiling them; and pushing their buttons so that these strangers can be befriended or romanced into revealing their secrets, sharing details of their personal wealth, and providing compromising images, account numbers, passwords, and the like.
Such bots will excel at bankrupting, blackmailing and breaking hearts.
They’ll never get sick, never sleep, never feel guilty; they can run their cons over time, will keep getting better based on global-scale learnings, and can scale their activities infinitely in terms of individual instigators and conspirator “cohorts.”
While such bots may operate on behalf of organized criminal networks, and may report back to a human or software master that governs them, there is no inherent reason these agents can’t operate as literal free agents when it comes to optimizing on their own self-interest.
This raises a question: when an AI-based network of Predator Bots decides to break off from its home criminal network, what becomes its compass?
What does it build over time?
Relatedly, is there any reason to believe such a bot will practice loyalty to its human bot master, rather than simply optimize on predation?
I have no idea, but it hearkens back to Marc Andreessen’s axiom that “software is eating the world,” though in the case of AI, it’s more like “enveloping the world.”
Such is the promise and the peril of Agentic AI.
When Science Fiction toggles from Impossible to Inevitable
Netting it out, by 2030 – if not sooner – our current AI model of generative intelligence via chatbot will give rise to master and sub-agent bots that can operate independently, cooperatively or in a federated fashion (see the sketch after this list) as:
Intelligent Task Runners
Generative Engines
Managers of Stage, State and Resource Allocation
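To make that master/sub-agent pattern concrete, here is a minimal sketch in Python. Every name in it (MasterAgent, SubAgent, dispatch) is hypothetical, invented for illustration; it assumes no particular agent framework, just the role-based delegation described above.

```python
# A minimal, illustrative sketch of a master agent delegating work to
# role-specific sub-agents. All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class SubAgent:
    role: str  # e.g., "task-runner", "generative-engine", "state-manager"

    def handle(self, task: str) -> str:
        # Stand-in for real work (model calls, tool use, state updates).
        return f"[{self.role}] completed: {task}"


@dataclass
class MasterAgent:
    sub_agents: dict[str, SubAgent] = field(default_factory=dict)

    def register(self, agent: SubAgent) -> None:
        self.sub_agents[agent.role] = agent

    def dispatch(self, role: str, task: str) -> str:
        # Route each task to the sub-agent responsible for that role.
        return self.sub_agents[role].handle(task)


master = MasterAgent()
for role in ("task-runner", "generative-engine", "state-manager"):
    master.register(SubAgent(role))

print(master.dispatch("generative-engine", "draft Q3 campaign copy"))
```

The design point is that the master owns routing and oversight while each sub-agent owns one role, which is what would let coverage expand role by role and department by department.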
This emergence will set in motion the advent of AGI, or Artificial General Intelligence, which in turn gives rise to a state of Superintelligence that is all-aware, all-assimilating and all-capable, certainly beyond the realm of human understanding.
It is logical to ask how close this is to reality, and how likely the scenarios presented are to come about in the time frames suggested.
Let me first say that the data side of this argument is based on reading ‘Situational Awareness: The Decade Ahead’ by Leopold Aschenbrenner, who was one of the founding members of the Superalignment team at OpenAI.
(Note: Rather than taking my word for it, read Situational Awareness via the link above. As an aside, Superalignment is focused on navigating the unique technical and design challenges of reliably controlling AI systems that are much smarter than we are.)
The author makes the case for three vectors of exponential growth leading us to AGI.
The first and most basic is that we are using much bigger computers to train these models; the author argues there is a straight line between building ever-bigger compute clusters and the dialing up of the AI revolution.
Case in point: in just a few years we’ve gone from computers barely being able to distinguish chihuahua faces from blueberry muffins, to bots that can operate with the full library of knowledge and the task-execution skills of the most elite grad students.
The graphic below illustrates that, from a compute perspective, continuing the compute growth trend and ramp does not require a lottery-ticket-level event in technical know-how or manufacturing scale.
(Note: Truth be told, the AI ramp is arguably more gated by access to power, a topic worth discussing in its own right.)
Here, Aschenbrenner asserts that we can decompose the progression from GPT-2 to GPT-4 in four years into three categories of scaleups:
Compute: We’re using much bigger computers to train these models.
Algorithmic Efficiencies: There’s a continuous trend of algorithmic progress. Many of these act as “compute multipliers,” and we can put them on a unified scale of growing effective compute.
“Unhobbling” Gains: By default, models learn a lot of amazing raw capabilities, but they are hobbled in all sorts of dumb ways, limiting their practical value. You can think of unhobbling as “paradigm-expanding/application-expanding/re-factoring/right-sizing” algorithmic progress that unlocks capabilities of base models.
Needless to say, there is a natural synergy between growth in Algorithmic Efficiencies and Unhobbling Gains.
This is why optimizing on best practices is the gift that keeps giving in how it shapes purpose, process and (realized) potential.
But that’s qualitative. To better quantify this: in Aschenbrenner’s graph, that means that in four years’ time, we were able to achieve the same level of performance for ~100X less Compute (and, concomitantly, much higher performance for the same Compute).
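As a back-of-the-envelope check on that ~100X figure, here is a short sketch. The ~0.5 orders of magnitude per year rate of algorithmic efficiency gains is an assumed, illustrative figure in the ballpark Aschenbrenner describes, not a measurement.

```python
# Back-of-the-envelope check of the ~100X claim, using an assumed
# rate of algorithmic efficiency gains (illustrative, not measured).

years = 4            # roughly the GPT-2 to GPT-4 span
oom_per_year = 0.5   # assumed orders of magnitude gained per year

efficiency_multiplier = 10 ** (oom_per_year * years)
print(f"Gain over {years} years: ~{efficiency_multiplier:.0f}X")
# ~100X: the same performance for ~100X less physical compute,
# or much higher performance at the same compute budget.
```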
A related corollary is that as the quality of data, and the means of fortifying that data, get better, we could and should see better models for AI to internalize.
The premise: better training runs are like better trains running on better tracks.
Unhobbling, by contrast, may focus on overcoming current constraints of AI systems (notably, many of the things that are native to humans), such as:
Lack of long-term memory
Limited capacity to use a computer
Severe limits on most actions that occur in the physical realm (vs. digital)
Lack of reflective thought
Limited collaborative skills
In closing, here are some scenario planning bets worth marinating on:
By 2030, thanks to Agentic AI, you’re going to have bots that look, feel and perform more like a Co-Worker than a piece of software.
One of the more interesting questions will be pricing models for Agents and Agentic systems. Will pricing be more like a licensed seat; more like a mechanical turk unit; or more like a 1099 hire?
Once the models can automate AI research, that will kick off intense feedback loops, opening the door to solving the remaining bottlenecks to AI fully automating almost everything. AI will then begin to evolve VERY rapidly.
To grasp the scale of AI, imagine 100 million automated researchers, each working at 100X human speed, each with access to the full library of knowledge in its domain of focus, and each able to do a year’s worth of work in a few days.
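The arithmetic in that scenario is easy to sanity-check. The snippet below uses only the scenario’s own assumed inputs (100 million researchers, 100X speed); these are thought-experiment parameters, not forecasts.

```python
# Sanity-checking the scenario's arithmetic with its own assumptions.

researchers = 100_000_000  # 100 million automated researchers
speedup = 100              # each working at 100X human speed

days_for_year_of_work = 365 / speedup
researcher_years_per_year = researchers * speedup

print(f"One researcher does a year's work in ~{days_for_year_of_work:.1f} days")
print(f"Aggregate: ~{researcher_years_per_year:.0e} researcher-years per calendar year")
```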
The hyper acceleration of intellectual activities created through AI automation at scale will yield the creation of ultra-intelligent machines and an ‘intelligence explosion’ that leaves the intelligence and inventions of man far behind. This will be a catalytic event for mankind.
Most basically, AI will be able to self-improve through an ability to write millions of lines of complex code, keep its entire codebase in context, and spend the equivalent of human decades checking and re-checking every line of code for bugs and optimizations.
A non-obvious “unfair advantage” of the existence of a robust AI training fabric is that you WON'T have to individually train up each automated AI researcher. Instead, you can just teach and onboard one of them—and then make replicas.
As the AGI race intensifies—as it becomes clear that superintelligence will be utterly decisive in international military, political and economic competition—we will have to face the full force of foreign espionage, hacking and intelligence wars.
Unless we solve alignment—unless we figure out how to instill the critical side-constraints—there’s no particular reason to expect this small civilization of superintelligences will continue obeying human commands in the long run. Put another way, it seems totally within the realm of possibilities that at some point, the AI will conspire to cut out the humans, whether suddenly or gradually.
Either way, rest assured a wild ride is ahead, through the looking glass, that is.