04

Digital AI Agents

Investment Perspectives on Autonomous Software Systems

Introduction

Those who have followed us for a few years have heard us talk about AI agents, or the digital janitors. They have also often heard us say "slowly, slowly, suddenly." The reason we're writing this note now is that we're singing the last verses of "slowly." The power that AI agents can bring will create disruptions in most areas.

Our purpose, as always: to present our perspective from where we sit on the edge watching these technologies develop. We are not technical experts or academic minds. We look at technologies and their forces from a financial perspective. If you want absolute truths, stock recommendations, and academic precision, you should stop here. We have nothing to give. But for the rest of you who are curious and seeking perspectives on the future, you get ours here. At least as we see it in the summer of 2025.

We use calculators, Excel, Word, language models, websites, and conversations with companies and analysts as the foundation for our "Disruptive Perspectives" notes.

Understanding the Concept of "Agents" in Philosophy, Economics, and Sociology

Philosophy

In philosophy, an agent often refers to an actor with the ability to act purposefully and make decisions based on intentions, rationality, or free will. Within agency theory, an agent is one who can initiate actions and influence their surroundings. Philosophers like Kant and Sartre have emphasized the agent's autonomy and moral responsibility, while newer perspectives, such as posthumanism, challenge the distinction between human and non-human agents (e.g., machines or AI) by attributing agency to technology.

Economics

In economics, particularly in principal-agent theory, an agent is a party that acts on behalf of a principal, often under asymmetric information. For example, a business executive (agent) acts for shareholders (principal). Economic models often assume that agents are rational and maximize utility or profit, but behavioral economics (e.g., Kahneman and Tversky) challenges this by including irrationality and cognitive biases. In game theory, agents are modeled as strategic actors who make choices based on others' actions.

Sociology

In sociology, agency concerns the individual's or group's ability to act independently and shape social structures - as in Giddens' structuration theory, where agency and structure are mutually dependent. Agents can be individuals, organizations, or technologies that influence social systems. For example, an AI agent in a social network can shape communication or behavioral patterns.

In our world, the concept of "agent" is about the ability to act purposefully, influence surroundings, and make decisions, whether as a human, organization, or technology. Philosophy emphasizes autonomy, economics focuses on rational choices and incentives, and sociology underscores the interplay between agents and social structures.

Key Insight:

There exist, are being created, and will be created armies of agents that will disrupt, tear, and rip apart today's business models. Over the next five years, both physical and digital agents will become part of our everyday lives, both at work and in private. How NVIDIA, as a sort of godfather in AI infrastructure, defines and uses agents is important.

NVIDIA's Use of the Term "Agents"

NVIDIA defines agents as autonomous assistants, either digital (AI) or physical (robots), that can perceive their surroundings, make decisions, and perform actions to achieve goals.

Digital Agents

Software-based entities, such as AI models or virtual assistants, that operate in digital environments (simulations, games, or cloud-based systems). An example is NVIDIA's Omniverse platform, where AI agents simulate complex scenarios before acting, digitally or in the real world.

Physical Agents

Robots or physical systems equipped with AI, such as self-driving cars, drones, or production robots, which use NVIDIA's Jetson platform or Isaac Sim for training (navigation and decision-making) and inference (execution).

NVIDIA's agents build on principles from AI and machine learning, where agents learn through data, sensors, and algorithms to optimize actions. This connects to philosophy's idea of autonomy (agents as independent decision-makers), economics' focus on optimization (maximizing efficiency as a goal), and sociology's perspective on how agents influence systems (e.g., robots in workflows or digital agents on social platforms).

The AI agent is what you and I as users encounter. How we encounter the AI agent depends on what we call the interface, which can be a PC, TV, mobile phone, or car. And soon the AR glasses on your nose.

Historical Development, Current Status, and Future Use

Historical Timeline

  • 1940-50s: The beginning of AI with Turing and early computers, laying the foundation for autonomous agents
  • 1980s: The emergence of multi-agent systems in AI, where agents interact in complex environments (e.g., game theory and simulations)
  • 2000s: NVIDIA's development of GPUs revolutionized parallel data processing, enabling advanced AI agents
  • 2010s: Deep learning and NVIDIA's CUDA platform accelerated the development of digital agents for image processing, language models, and simulations
  • 2020s: NVIDIA's Omniverse and Isaac Sim introduced platforms for developing both digital and physical agents, used in everything from simulating virtual worlds to autonomous robots

Current Status

Digital Agents:

NVIDIA drives the development of AI agents in simulations and AI models for gaming, design, and customer service.

Physical Agents:

The Jetson platform is used in robots for logistics, healthcare, and self-driving. Isaac Sim enables realistic simulations for training physical agents.

Scaling AI agents requires enormous amounts of data and computing power, and ethical questions about autonomy and responsibility are central. Data centers are sprouting up like mushrooms in the USA, and surely eventually in the Middle East and Europe. GPUs have become a scarce resource.

Future Use

Digital agents are expected to dominate in virtual worlds (metaverse), customer service (chatbots with human-like interactions), and simulations for research, such as climate/weather models or health. Google and Meta's strong focus on AR glasses in recent years can be seen in light of the development of digital agents.

When we were first introduced to the internet, it was via a browser. Those who controlled the browser controlled the user's gateway to the web. This time, the battle will move from hands and keyboard up to the nose tip, controlled by your voice commands. This will, unlike the browser, be intelligent. It will remember who you are, it will know you, and you will never replace it. We think. This battle, which we call "The Battle of Metaverse 2.0," probably starts with launches from Google and Meta in 2026-27. And it's a fight against Apple's hegemony over the mobile phone.

Physical agents: Autonomous robots will play a larger role in industry, transport, and healthcare, with improved sensing and decision-making capabilities. In spring 2025, the first humanoids began working on factory floors with repetitive tasks. Companies developing, designing, and selling humanoids will start pilot production during 2025. Our new colleagues and buddies will probably become far more numerous over the next five years. Their goal is to do exactly the same as us, only better, cheaper, and faster.

Categories of Digital Agents

By Technology

Rule-Based Agents

Simple systems that follow predefined rules (e.g., chatbots with fixed responses). Most of us have half-desperately tried to be transferred to a human voice when encountering the first variants of these chatbots. But they're better, and getting even better.

Machine Learning-Based Agents

Use machine learning to adapt to data (e.g., recommendation systems on Netflix).

Generative AI Agents

Advanced models that create content, such as text, images, or code (those we know well: GPT models or DALL-E).

Simulation Agents

Operate in virtual environments to test execution of specific tasks or scenarios before transferring data to robots that perform movements in the real world.

By Use Cases

  • Customer Service: Virtual assistants that handle inquiries
  • Content Production: AI that generates material for marketing, articles, or design
  • Data Analysis: Agents that analyze large amounts of data and assist as decision support
  • Virtual Simulations: Agents used in games, metaverse, or research

AI Agents in Your Glasses

By 2027, glasses will have gained a new function. Not just for vision, but as an interface to a reality where the physical and digital merge. Mixed Reality (MR) glasses, also called AR/VR glasses, have developed into the next big thing for tech giants, startups, and pure AI companies. A big question that arises then is: who owns the user experience and who gets access to it?

The Hardware-Platform Connection

Hardware and the platform (software) are tightly connected. One determines what the user sees and feels, the other determines which digital actors get to contribute functionality and intelligence. For AI agents, it's crucial whether they get access to sensors, processing power, and context, and to what degree they can operate freely.

The Major Players

Apple - The Cathedral Builder

Apple leads in the premium segment. Vision Pro is powered by the new M4 chip (the engine), developed on 3nm technology (energy efficient), with LiDAR (understanding surroundings), sharp 4K micro OLED screens, and tight integration with their own ecosystem.

Today, this is technologically superior hardware, but with a platform that largely shuts out external actors. Access to necessary functions like eye tracking and spatial understanding is largely reserved for Apple's own systems.

Meta - The Town Square

Meta positions itself in the middle, with lower price, semi-open platform, and focus on social interaction. Where Apple builds cathedrals, Meta tries to build town squares.

With Quest 4, Horizon Glasses, and Orion, you get access to AR display and voice control, plus a platform that invites external actors in - albeit under certain conditions. The Orion AR glasses feature advanced specifications including multiple cameras, microphones, and adaptive lenses.

Google - The Open Ecosystem

Google showed their new AR glasses at the "I/O 2025" conference. The glasses are built on the Android XR platform and powered by Gemini, Google's multimodal AI model.

They combine camera-based sensing functions with a discreet in-lens display and sound system, giving users access to real-time information. Features like translation, object recognition, and speech-to-text show how the glasses function as a proactive extension of our abilities.

Xiaomi - The Disruptor

A competitor from China, Xiaomi, showed their first proper AI glasses last week. Note "AI glasses" - meaning smart glasses (like Meta Ray Bans/Oakley), but with some AR functionalities.

They weigh 40 grams, have Sony cameras, five microphones, speakers, and lenses that adapt to light conditions. Powered by Qualcomm's Snapdragon AR2 chip and Xiaomi's own Vela OS, you can use your voice to control real-time translations, object recognition, and QR code payments. Battery lasts up to 1.6 hours and charges in under one hour. With such impressive hardware and a starting price around 250 euros, this could be a serious competitor to American tech companies.

Platform Strategies and Agent Access

It's not enough to look at the glasses' specifications. The platforms behind them determine to what extent AI agents can contribute to the user experience:

  • Apple: Maintains its closed ecosystem, with limited access and strictly curated interfaces
  • Meta: Offers greater flexibility, with open SDK and support for third-party AI but still within the framework of its own priorities
  • Google: Focuses on technological openness, but with binding to its own infrastructure
  • Startups: Open the door completely, but lack depth in both software and distribution

The Agent Landscape in 2027:

By 2027, we can imagine Apple among the best on hardware, Meta on platform accessibility, and Google on AI integration. AI agents increasingly gain entry, but access level and functionality vary.

For actors building applications, Meta is probably the most attractive gateway, given the platform's relative openness and sufficient technical capacity.

Looking Further Ahead

Toward 2029, we expect a more open structure. Standardized protocols and regulatory pressure may force AI agents to move between platforms (called interoperability) without loss of functions. Perhaps agents will create a common language, like HTML is for web pages. Then smart, self-learning assistants can work on all glasses - regardless of brand or platform.

If xAI establishes its own platform and builds from the ground up with AI at the center, they could become a real challenger. The ideal winner has high technical performance on a completely open platform. Maybe that's what xAI is trying. Time will tell. The question is no longer just who builds the best glasses. It's also who opens the door for future AI agents, and who keeps them out.

Leading Companies in the Digital AI Agent Ecosystem

NVIDIA

Focus on GPUs, platforms like Omniverse for simulations and Jetson for robotics. NVIDIA integrates generative AI and simulation agents to support digital twins and metaverse applications. Leading in hardware (GPUs) and software for digital agents.

OpenAI

Develops generative AI agents, such as Operator, with focus on text and image generation. Focuses on scalability and broad application in customer service, content production, and research. Leads in generative AI but faces challenges related to ethics, data quality, and regulation.

Google

Integrates AI agents in the cloud (Google Cloud AI) and products like Google Assistant. Focus on data analysis, personalization, and autonomy in search and ads. Strong position in AI services around cloud with a broad portfolio.

Microsoft

Integrates AI agents in Azure and Office (e.g., Copilot). Collaboration with OpenAI strengthens their position in generative AI. Growing rapidly in the enterprise market, especially for data analysis and productivity services.

Palantir

Develops platforms for advanced data processing and decision support, like Foundry and Gotham, now moving toward operative AI agents - agents that perform tasks independently. Technology used by public and private institutions to analyze large amounts of data and act on insights.

Salesforce

Heavily investing in AI through its Einstein ecosystem, with predictive analysis, customer segmentation, and automation integrated directly into the CRM system. Their AI agents help businesses better understand and serve customers, with process automation at the core.

Meta

Builds both hardware and software for AR/VR and AI agents. Through glasses like Ray-Ban Meta, Orion, and platforms like MX and Llama models, they're trying to place personal agents in our lives. They're betting on making agents an integrated part of social and visual experiences.

Winners and Losers

These are, of course, rough categories; the idea is to sketch possible consequences of the digital agents' advance.

Winners

Industries:

  • Technology companies with exposure to cloud and AI infrastructure
  • Healthcare using AI for more precise diagnostics and tailored treatment methods
  • Finance automating both trading and risk management
  • Gaming and media where metaverse and generative AI open completely new ways to create and consume content

Among companies, it's particularly the tech giants NVIDIA, OpenAI, Google, and Microsoft that lead, while many startups build niche AI agents for various use cases. At the individual level, there's high demand for data scientists, AI developers, and specialists on ethical questions around technology. Roles related to digital innovation and change management are also becoming increasingly important as AI changes how we work and make decisions.

Losers

At Risk:

  • Traditional sectors like physical retail, routine production, and manual administration
  • Organizations that cannot or will not invest in the changes happening
  • Jobs with highly repetitive content - customer service, simple administrative tasks, manual data analysis
  • Middle managers in large organizations along with simple programming jobs

This can provide higher efficiency and lower costs for businesses, but it also creates challenges. Unemployment, the need for retraining, and greater inequality in access to opportunities may be the consequences. Important ethical questions follow as well: surveillance, algorithmic bias (AI making unfair decisions, often driven by "bad data"), and accountability for decisions made by digital systems.

Physical Agents: The Robot Revolution

While digital agents transform our virtual world, physical agents - robots and autonomous systems - are reshaping the physical world. The convergence of AI, advanced sensors, and mechanical engineering is creating a new generation of machines that can work alongside or replace humans in various tasks.

Categories of Physical Agents

Industrial Robots

Specialized machines for manufacturing, assembly, and logistics. These range from traditional robotic arms to advanced collaborative robots (cobots) that work safely alongside humans.

Humanoid Robots

Designed to replicate human form and movement. Companies like Tesla (Optimus), Figure, and Agility Robotics are developing humanoids for general-purpose tasks in factories and homes.

Autonomous Vehicles

Self-driving cars, trucks, and delivery vehicles that navigate without human intervention. Leaders include Waymo, Cruise, and Tesla's Full Self-Driving system.

Service Robots

Robots designed for specific service tasks: surgical robots, cleaning robots, agricultural robots, and hospitality robots.

Leading Companies in Physical Agents

  • Tesla: Developing Optimus humanoid robot alongside their autonomous vehicle technology
  • Boston Dynamics: Advanced robotics with Atlas, Spot, and Stretch robots
  • ABB & FANUC: Industrial robotics leaders with millions of robots deployed globally
  • Intuitive Surgical: Da Vinci surgical robot system used in hospitals worldwide
  • Amazon Robotics: Warehouse automation with over 750,000 robots in operation
  • BYD & CATL: Chinese leaders in manufacturing automation and battery production

Challenges in Collaboration Between Digital and Physical Agents

As digital and physical agents become more prevalent, new challenges emerge in how they collaborate with each other and with humans. These challenges mirror classic organizational theory but with unprecedented complexity.

Key Challenges

Coordination Complexity

Managing multiple agents with different capabilities, protocols, and objectives. How do you ensure a fleet of delivery robots coordinates with traffic management AI and customer service chatbots?

Trust and Verification

How do we verify that agents are acting correctly? When an AI makes a decision, how do we audit it? The "black box" problem becomes critical when agents operate autonomously.

Responsibility and Liability

When an autonomous vehicle causes an accident or an AI agent makes a costly error, who is responsible? The manufacturer, the operator, the programmer, or the agent itself?

Human-Agent Interaction

How do humans maintain meaningful control and understanding when surrounded by increasingly autonomous agents? The risk of skill atrophy and over-reliance on AI is real.

Parallels to Organizational Theory

The challenges of agent coordination mirror classic organizational problems. Just as Max Weber described bureaucratic hierarchies, we're now seeing algorithmic hierarchies emerge. The principal-agent problem from economics takes on new dimensions when both principals and agents can be artificial.

We're moving from "System of Record" (passive data storage) to "System of Engagement" (active interaction) to what might be called "System of Agency" - where systems don't just record or engage but act independently on our behalf.

Digital and Physical Agents in AI Training and Inference

Understanding the distinction between AI training and inference is crucial for grasping how agents operate and evolve.

Training Phase

The learning process where AI models are taught using vast datasets. This requires enormous computational power, typically performed in data centers with thousands of GPUs.

  • High energy consumption
  • Requires massive datasets
  • Time-intensive (weeks to months)
  • Centralized in AI factories

Inference Phase

The application of trained models to make predictions or decisions on new data. This is what happens when you interact with ChatGPT or when a robot recognizes an object.

  • Lower computational requirements
  • Can run on edge devices
  • Real-time processing
  • Distributed deployment

The shift toward edge computing means more inference happens locally - in your phone, car, or AR glasses - rather than in distant data centers. This reduces latency, improves privacy, and enables real-time applications. By 2029, we expect most AI agent interactions to happen at the edge, with only complex tasks requiring cloud connectivity.
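The asymmetry between the two phases can be made concrete with a toy model. This is an illustrative sketch only (not any vendor's API): training even a tiny linear model is an iterative, compute-heavy loop, while inference on the finished weights is a single cheap multiply-add, which is why inference can move to phones, cars, and AR glasses.

```python
def train(xs, ys, steps=5000, lr=0.02):
    """Gradient-descent fit of y = w*x + b -- the expensive, centralized phase."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def infer(w, b, x):
    """Applying the trained model -- cheap enough for an edge device."""
    return w * x + b

# Learn y = 2x + 1 from a handful of samples, then run inference on new input.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = train(xs, ys)
print(round(infer(w, b, 10.0), 1))  # close to 21.0
```

Real models have billions of parameters instead of two, but the shape of the cost is the same: thousands of passes over the data to train, one pass to answer.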

Where Are AI Agents Located?

AI agents exist across a spectrum from cloud to edge, each location offering different capabilities and trade-offs.

The Agent Deployment Spectrum

Cloud Agents

Powerful but latency-dependent. Examples: ChatGPT, Claude, large-scale analytics

Hybrid Agents

Balance between power and responsiveness. Examples: Smartphone assistants, smart home devices

Edge Agents

Fast and private but resource-constrained. Examples: AR glasses AI, autonomous vehicle systems

Embedded Agents

Integrated directly into devices. Examples: Smart sensors, IoT devices, wearables
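The trade-offs across this spectrum often come down to a routing decision. The sketch below is hypothetical (the `Request` fields, thresholds, and function names are illustrative, not a real framework): latency-sensitive or private tasks stay on the local model, while hard tasks fall back to a larger cloud model.

```python
from dataclasses import dataclass

@dataclass
class Request:
    task: str
    complexity: float    # 0.0 (trivial) .. 1.0 (hard), however estimated
    needs_privacy: bool  # sensitive data should stay on-device

def route(req: Request, edge_capacity: float = 0.4) -> str:
    """Decide where a hybrid agent serves the request."""
    if req.needs_privacy:
        return "edge"    # private data never leaves the device
    if req.complexity <= edge_capacity:
        return "edge"    # fast local answer, no network round-trip
    return "cloud"       # fall back to the big model

print(route(Request("translate sign", 0.2, False)))        # edge
print(route(Request("summarize health log", 0.3, True)))   # edge
print(route(Request("plan a week-long trip", 0.9, False))) # cloud
```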

Everyday Tasks AI Agents Can Solve

  • Schedule optimization and calendar management
  • Real-time language translation in conversations
  • Personalized health monitoring and recommendations
  • Automated financial management and investment
  • Content creation and editing
  • Home automation and energy optimization
  • Personal shopping and price comparison
  • Learning assistance and tutoring

From Reactive to Proactive

The evolution of AI agents follows a clear trajectory:

1. Reactive: Responds to direct commands (current Siri, Alexa)
2. Assistive: Provides suggestions and reminders (Google Assistant)
3. Proactive: Anticipates needs and acts independently (future agents)
4. Autonomous: Makes complex decisions with minimal human oversight

By 2029, we expect most personal AI agents to operate at the proactive level, anticipating your needs based on context, history, and patterns. Your AR glasses might book a restaurant when detecting you're discussing dinner plans, or your health agent might schedule a doctor's appointment when detecting concerning symptoms.
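The jump between levels can be sketched in a few lines of code. This is a hypothetical illustration (the `Agent` class and its triggers are ours, not a real product): a reactive agent does nothing without a command, while a proactive one acts on context alone.

```python
REACTIVE, ASSISTIVE, PROACTIVE, AUTONOMOUS = 1, 2, 3, 4

class Agent:
    def __init__(self, level):
        self.level = level

    def handle(self, command=None, context=None):
        actions = []
        if command:                               # every level answers a direct command
            actions.append(f"respond: {command}")
        if self.level >= ASSISTIVE and context:
            actions.append(f"suggest based on: {context}")
        if self.level >= PROACTIVE and context:
            actions.append(f"act on: {context}")  # no command needed
        return actions

siri_like = Agent(REACTIVE)
future_agent = Agent(PROACTIVE)

print(siri_like.handle(context="dinner plans mentioned"))    # []
print(future_agent.handle(context="dinner plans mentioned"))
# ['suggest based on: dinner plans mentioned', 'act on: dinner plans mentioned']
```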

Moore's Law and the Agent Revolution

Moore's Law - the observation that transistor density doubles approximately every two years - has been the drumbeat of the digital revolution. But for AI agents, we're seeing something beyond Moore's Law: the convergence of multiple exponential improvements.

Exponential Improvements Driving AI Agents:

  • Compute: GPU performance doubling every 2.2 years
  • Algorithms: AI efficiency improving 44x faster than Moore's Law
  • Data: Available training data doubling every 12 months
  • Cost: AI training costs falling 10x every 18 months
  • Energy Efficiency: Performance per watt improving exponentially

This compound exponential growth means that capabilities we think are 10 years away might arrive in 3-5 years. The "slowly, slowly, suddenly" pattern we mentioned at the beginning is driven by these overlapping exponentials reaching inflection points simultaneously.
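The timeline compression can be shown with back-of-the-envelope arithmetic using two of the figures above (illustrative only, and the figures themselves are rough): when compute growth and cost decline compound, the effective doubling time of "capability per dollar" is far shorter than either curve alone.

```python
import math

def growth_per_year(factor, period_years):
    """Annual multiplier for something that grows by `factor` every `period_years`."""
    return factor ** (1 / period_years)

compute = growth_per_year(2, 2.2)   # GPU performance doubles every 2.2 years
cost    = growth_per_year(10, 1.5)  # training cost falls 10x every 18 months

combined = compute * cost           # capability per dollar, per year
doubling_time = math.log(2) / math.log(combined)

print(round(combined, 2))       # ~6.4x per year
print(round(doubling_time, 2))  # doubles roughly every 4-5 months
```

Multiplying in the other curves the note lists (algorithmic efficiency, data) would compress the doubling time further; the point is directional, not precise.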

Our Disruptive Investment Strategy in a World Where Agents Are Coming

As agents reshape every industry, our investment strategy focuses on identifying companies positioned to benefit from or enable this transformation. We look for businesses that either build the infrastructure for agents, deploy agents effectively, or solve problems that agents create.

Investment Categories

Infrastructure Builders

Companies providing the foundation for AI agents:

  • Semiconductor manufacturers (NVIDIA, AMD, TSMC)
  • Cloud providers (AWS, Azure, GCP)
  • Data infrastructure (Snowflake, Databricks)
  • Development platforms (GitHub, GitLab)

Agent Creators

Companies building and deploying agents:

  • AI model developers (OpenAI, Anthropic)
  • Robotics companies (Tesla, Boston Dynamics)
  • Enterprise AI (Palantir, C3.ai)
  • Consumer AI (Apple, Google, Meta)

Agent Enablers

Companies that help agents function:

  • Sensor manufacturers (Sony, STMicroelectronics)
  • Communication infrastructure (Qualcomm, Broadcom)
  • Cybersecurity (CrowdStrike, Palo Alto)
  • Edge computing (Fastly, Cloudflare)

Transformation Beneficiaries

Companies leveraging agents for advantage:

  • Digital health (Teladoc, Veeva)
  • Fintech (Square, Stripe)
  • E-commerce (Shopify, MercadoLibre)
  • SaaS platforms (Salesforce, ServiceNow)

Investment Principles for the Agent Era

  • Bet on Platforms, Not Products: Platforms that enable many agents will capture more value than individual agent applications
  • Focus on Data and Compute Bottlenecks: Companies controlling scarce resources (GPUs, quality data) will maintain pricing power
  • Watch for Network Effects: Agent ecosystems with strong network effects will dominate their categories
  • Consider Regulatory Moats: Companies that navigate regulation successfully will have sustainable advantages
  • Embrace Disruption Risk: Be willing to exit positions in companies being disrupted by agents

Timing the Agent Revolution

We're in the "late slowly" phase of agent adoption. Key indicators we're watching for the "suddenly" phase:

  • AR glasses reaching 10 million units sold annually (expected 2026-2027)
  • Humanoid robots in regular production use (starting 2025)
  • Agent-to-agent transactions exceeding $1 billion annually (2026-2027)
  • Regulatory frameworks for autonomous agents established (2025-2026)
  • Consumer AI agents managing >$100 billion in assets (2027-2028)

Conclusion: The Agent-Driven Future

Digital and physical agents represent one of the most significant technological shifts in human history. We're not just automating tasks - we're creating a new form of intelligence that can perceive, decide, and act on our behalf. This isn't science fiction; it's happening now, accelerating from "slowly" to "suddenly."

The investment implications are profound. Traditional business models will be disrupted. New monopolies will emerge. The companies that successfully deploy agents will achieve unprecedented scale and efficiency. Those that don't will become obsolete.

But beyond the financial opportunities, we're facing fundamental questions about the future of work, the nature of intelligence, and the structure of society. How we answer these questions - through our investments, regulations, and choices - will determine whether agents become tools of liberation or instruments of a new digital feudalism.

Key Takeaways for Investors

  • The Transition Is Accelerating: We're moving from "slowly" to "suddenly" in agent adoption
  • Infrastructure Is King: Companies providing picks and shovels for the agent gold rush will prosper
  • Platform Battles Matter: Control of agent platforms (especially AR glasses) will determine market power
  • Disruption Is Universal: No industry is safe from agent-driven transformation
  • Ethics Drive Value: Companies that solve agent ethics and trust will have sustainable advantages

The agent revolution isn't coming - it's here. The armies of digital and physical agents are already being deployed. The question isn't whether they'll transform our world, but how quickly and completely. As investors, we must position ourselves not just for the world that is, but for the world that's rapidly becoming.

Remember: "Slowly, slowly, suddenly." We're approaching the suddenly. Are you ready?

Disclaimer: The content in this article is not intended as investment advice or recommendations. If you have any questions about the funds referenced, you should contact a financial advisor who knows you and your situation. Also remember that historical returns in funds are never a guarantee of future returns. Future returns will depend on, among other things, market developments, the manager's skill, the fund's risk, and costs of purchase, management, and redemption. Returns can also be negative as a result of price losses.

This perspective has been translated from Norwegian to English
