This article is part of a series on Artificial Intelligence.

AI Browsers: More Than Just Search

Who would have imagined that the humble browser, once just a passive window to the web, would evolve into an active, intelligent assistant? Not long ago, we transitioned from desktop applications to web applications. Then came AI search overviews that began summarizing content across the web for us. And now we stand on the brink of a new paradigm: the AI browser.

These new browsers—powered by language models and embedded agents—are not just interfaces; they are collaborators. Projects like Arc Browser’s Arc Search, SigmaOS, and Perplexity.ai are early hints of what’s coming. They can synthesize content, take actions, and even learn from user behavior. Eventually, we may simply speak, type, or gesture, and a conversational interface will orchestrate hundreds of underlying actions.

From Tabs to Agents

I remember when opening a dozen tabs felt like multitasking. Now, AI browsers can autonomously open hundreds in the background to complete tasks like booking reservations, planning agendas, or comparing prices. These are not hypothetical scenarios—they are prototypes already in testing. The agent quietly consults multiple sources, summarizes options, and presents a refined output, mimicking what a human researcher might do manually.

It’s a mirror of our own cognitive workflow. When I write an article or code something new, I often consult many sources. AI browsers are just automating and abstracting this process. They are not replacements for thought, but accelerators of it.

Rethinking Our Assumptions

But this reflection goes far beyond browsers. What we are witnessing is a massive leap in how we interact with machines. I've written before about using natural language as a communication layer, and I continue to believe that natural language is becoming the new programming language.

This transformation has been in the making for years, but only now are we beginning to experience its power. It’s time to reexamine many of the assumptions we made in the early days of computing—assumptions born from constraints of memory, disk space, and processing power. Constraints that no longer hold.

Revisiting Computational Models

Take cellular automata, for example. We often think of them in their classic black-and-white, two-dimensional forms—Conway’s Game of Life being the iconic case. But what happens when we lift the limitations? With modern compute and graphics capabilities, I’ve been experimenting with multidimensional, multicolored cellular automata that express behavior far beyond what was previously imaginable.
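As a minimal illustration of lifting the two-state limit, here is a sketch of a k-color "cyclic" cellular automaton in pure Python. It is a toy stand-in for the multidimensional, multicolored experiments described above, not a reproduction of them: the grid size, von Neumann neighborhood, and color count are all arbitrary choices made for this example.

```python
import random

def step_cyclic(grid, k):
    """One step of a k-color cyclic cellular automaton on a torus.
    A cell in state s advances to (s + 1) % k if any of its four
    von Neumann neighbours is already in that successor state."""
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            nxt = (grid[y][x] + 1) % k
            neighbours = (grid[(y - 1) % h][x], grid[(y + 1) % h][x],
                          grid[y][(x - 1) % w], grid[y][(x + 1) % w])
            if nxt in neighbours:
                new[y][x] = nxt
    return new

# Seed a random 32x32 grid with 8 colors and run it for a few steps.
random.seed(0)
k = 8
grid = [[random.randrange(k) for _ in range(32)] for _ in range(32)]
for _ in range(10):
    grid = step_cyclic(grid, k)
```

With more states and larger neighborhoods, the same handful of lines produces spirals and waves that a two-state rule never could; swapping the nested loops for array operations on a GPU is what makes the multidimensional versions practical.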

This is just one example of an old algorithmic space being reopened, reinterpreted, and reimagined. There are countless others—machine learning models we dismissed due to training time, simulation methods we overlooked due to rendering constraints, or decentralized systems that we deemed unscalable.

The Age of Massive Parallelism

As Sam Altman recently suggested, we’re approaching a world where a single researcher may launch hundreds or thousands of agents to explore different directions in parallel. These agents might optimize algorithms, test hypotheses, simulate economies, or even write software modules. The boundaries between experimentation and deployment are dissolving.

This isn’t automation in the traditional sense. It’s exploratory augmentation. We’re moving from using computers to execute predefined logic to letting them co-navigate ambiguity and discover novelty.

Natural Language as Interface and Substrate

With natural language interfaces, we no longer need to remember the exact syntax of a function or API. We express our intent, and the system generates, adapts, or retrieves code accordingly. While the output of today’s AI systems may still require refinement, the creative potential they unlock is undeniable.
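The routing idea behind such interfaces can be sketched in a few lines. This is a deliberately toy version: a real system would hand the utterance to a language model rather than matching the first word, and every intent name below is invented for illustration.

```python
# Toy intent dispatcher: map the leading verb of an utterance to an
# action. A language model would replace this keyword matching in
# any real natural-language interface.
INTENTS = {
    "open": lambda arg: f"opening {arg}",
    "search": lambda arg: f"searching for {arg}",
    "summarize": lambda arg: f"summarizing {arg}",
}

def handle(utterance):
    """Split off the leading verb and dispatch to the matching action."""
    verb, _, rest = utterance.partition(" ")
    action = INTENTS.get(verb.lower())
    if action is None:
        return f"unrecognized intent: {verb!r}"
    return action(rest)
```

The point is not the matching but the contract: the user states intent in plain words, and the system decides which of its capabilities to invoke.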

If you’re like me—curious, exploratory, and perhaps slightly impatient—this is a golden era. Whether refining old algorithms or inventing new ones, we can now sketch in code, revise in conversation, and scale our thinking through computation.

What Comes Next?

So what does all this mean? It means that we must revisit what we thought we knew. From user interfaces to computation models, from browser tabs to digital assistants, everything is up for reinvention.

AI isn’t just changing how we build software; it’s changing why we build it. We’re moving from tool-centric development to goal-driven design—where outcomes are co-created with systems that learn and evolve.

We must rethink not just our interfaces, but our ideas. And that’s not a challenge—it’s an invitation.