AI-Driven Browser Revolution: The Third Browser War Is Coming
The third browser war is quietly unfolding. From Netscape versus Microsoft's IE in the 1990s, through the open-source spirit of Firefox, to Google's Chrome, browser competition has always been a concentrated reflection of platform control and shifting technological paradigms. Chrome secured its dominance through a rapid update cadence and deep ecosystem integration, while Google closed the loop on information entry with its "dual oligopoly" of search plus browser.
But today, this pattern is being shaken. The rise of large language models (LLMs) means more and more users complete tasks directly on the search results page with "zero clicks," reducing traditional webpage click-throughs. At the same time, rumors that Apple intends to replace Safari's default search engine further threaten Alphabet's profit foundation, and the market has begun to show unease about the "orthodoxy of search."
Browsers themselves also face a redefinition of their role. They are no longer merely tools for displaying web pages but containers that integrate data input, user behavior, and privacy identity. However powerful AI Agents become, performing complex page interactions, invoking local identity data, and controlling web elements still requires the trust boundaries and functional sandboxes of a browser. Browsers are turning from human interfaces into system-call platforms for Agents.
What can truly break the current browser market pattern is not another "better Chrome," but a new interactive structure: not the display of information, but the invocation of tasks. Future browsers need to be designed for AI Agents - not only able to read but also to write and execute. Projects like Browser Use are attempting to semanticize page structures, transforming visual interfaces into structured text that can be called by LLMs, greatly reducing interaction costs.
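The idea of "semanticizing" a page, turning a visual interface into structured text an LLM can act on, can be sketched in a few lines. The snippet below is illustrative only (it is not Browser Use's actual implementation): it flattens a page's interactive elements into a numbered list that a model could reference by index.

```python
from html.parser import HTMLParser

# Interactive tags an agent would care about (illustrative subset).
INTERACTIVE = {"a", "button", "input", "select", "textarea"}

class ElementExtractor(HTMLParser):
    """Collect interactive elements with their attributes and inner text."""
    def __init__(self):
        super().__init__()
        self.elements = []   # (tag, attrs, text) tuples
        self._stack = []     # currently open interactive elements
        self._buffer = []    # text inside the current interactive element

    def handle_starttag(self, tag, attrs):
        if tag not in INTERACTIVE:
            return
        if tag == "input":   # void element: no closing tag will arrive
            self.elements.append((tag, dict(attrs), ""))
        else:
            self._stack.append((tag, dict(attrs)))
            self._buffer = []

    def handle_data(self, data):
        if self._stack:
            self._buffer.append(data.strip())

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1][0] == tag:
            t, attrs = self._stack.pop()
            text = " ".join(w for w in self._buffer if w)
            self.elements.append((t, attrs, text))

def semanticize(html: str) -> str:
    """Render interactive elements as numbered lines an LLM can reference."""
    parser = ElementExtractor()
    parser.feed(html)
    lines = []
    for i, (tag, attrs, text) in enumerate(parser.elements):
        label = text or attrs.get("placeholder") or attrs.get("name") or ""
        lines.append(f"[{i}] <{tag}> {label}".rstrip())
    return "\n".join(lines)

page = '<a href="/login">Sign in</a><input name="q" placeholder="Search">'
print(semanticize(page))
# [0] <a> Sign in
# [1] <input> Search
```

Instead of pixels, the agent receives a short, indexed text description, which is what makes "click element 1, type a query" a cheap, structured instruction rather than a vision problem.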
Mainstream projects have begun to explore this space: Perplexity is building a native browser, Comet, that uses AI to replace traditional search results; Brave combines privacy protection with local inference, enhancing search and blocking with LLMs; and crypto-native projects like Donut aim at a new entry point for AI and on-chain asset interaction. What these projects share is an attempt to reconstruct the browser's input side rather than beautify its output layer.
For entrepreneurs, opportunities lie in the triangular relationship between input, structure, and agency. Browsers, as the interfaces through which future Agents will call upon the world, mean that whoever can provide structured, callable, and trustworthy "capability blocks" can become part of a new generation of platforms. From SEO to AEO (Agent Engine Optimization), from page traffic to task chain calls, product forms and design thinking are being restructured. The third browser war occurs in "input" rather than "display"; the decisive factor is no longer who captures the user's attention, but who earns the trust of the Agent and gains the entry for invocation.
A Brief History of Browser Development
In the early 1990s, when the Internet had not yet become a part of daily life, Netscape Navigator emerged like a sailing ship opening up a new continent, offering millions of users access to the digital world. This browser was not the first, but it was the first product to truly reach the masses and shape the Internet experience. At that time, people could easily browse web pages through a graphical interface for the first time, as if the whole world had suddenly become within reach.
However, brilliance is often short-lived. Microsoft quickly realized the importance of browsers and decided to forcefully bundle Internet Explorer into the Windows operating system, making it the default browser. This strategy can be described as a "platform killer" that directly undermined Netscape's market dominance. Many users did not actively choose IE, but rather accepted it because it was set as the default by the system. With the distribution power of Windows, IE rapidly became the industry leader, while Netscape fell into a decline.
In its moment of crisis, Netscape's engineers chose a radical, idealistic path: they opened the browser's source code and appealed to the open-source community. The decision signaled the end of an era and the rise of new forces. That code later became the foundation of the Mozilla browser project, initially named Phoenix (symbolizing rebirth from the ashes), which after trademark disputes and several renames finally became Firefox.
Firefox was not merely a copy of Netscape; it broke new ground in user experience, its plugin ecosystem, and security. Its birth marked a victory for the open-source spirit and injected new vitality into the industry. Some describe Firefox as Netscape's "spiritual successor," much as the Ottoman Empire inherited the remnants of Byzantium; the metaphor is exaggerated but apt.
However, in the years leading up to the official release of Firefox, Microsoft had already launched six versions of IE. With the advantage of time and a system bundling strategy, Firefox was initially at a disadvantage, making it clear that this competition was not a fair race from the start.
At the same time, another early player quietly made its appearance. The Opera browser was born in Norway in 1994, initially as an experimental project. Starting with version 7.0 in 2003, it introduced its self-developed Presto engine, leading the way in supporting cutting-edge technologies such as CSS, responsive layouts, voice control, and Unicode. Although its user base was limited, Opera remained at the industry's technological forefront, a favorite of geeks.
In the same year, Apple launched the Safari browser, a significant turning point. Years earlier, in 1997, Microsoft had invested $150 million in a financially struggling Apple to preserve the appearance of competition and stave off antitrust scrutiny. Although Google has been Safari's default search engine since its debut, this historical entanglement with Microsoft illustrates the complex, subtle relationships among internet giants: cooperation and competition always go hand in hand.
In 2007, IE7 was launched with Windows Vista, but the market feedback was mediocre. In contrast, Firefox steadily increased its market share to about 20% due to its faster update pace, more user-friendly extension mechanism, and natural appeal to developers. The dominance of IE was gradually loosening, and the tide was changing.
Google took a different approach. Although planning for its own browser began as early as 2001, it took six years to persuade CEO Eric Schmidt to green-light the project. Chrome launched in 2008, built on the open-source Chromium project and the same WebKit engine used by Safari. Early critics joked that it was "bloated," but with Google's deep expertise in advertising and brand building behind it, Chrome rose quickly.
The key weapon of Chrome is not its features, but its frequent version updates (every six weeks) and a unified experience across all platforms. In November 2011, Chrome first surpassed Firefox, reaching a market share of 27%; six months later, it overtook IE, completing the transition from challenger to dominator.
At the same time, China's mobile internet was forming its own ecosystem. A well-known Chinese company's browser rose rapidly in the early 2010s, especially in emerging markets such as India, Indonesia, and China, winning over users on low-end devices with its lightweight design and traffic-saving data compression. In 2015, its share of the global mobile browser market exceeded 17%, at one point reaching 46% in India. The victory did not last: as the Indian government tightened security reviews of Chinese applications, the browser was forced out of key markets and gradually lost its former glory.
Entering the 2020s, Chrome's dominance has been established, with a global market share stabilizing at around 65%. Notably, although the Google search engine and Chrome browser both belong to Alphabet, they represent two independent hegemonic systems from a market perspective - the former controls about 90% of the global search entry points, while the latter holds the majority of users' "first window" into the internet.
To maintain this dual monopoly structure, Google is willing to invest heavily. In 2022, Alphabet paid Apple about $20 billion just to keep Google as the default search engine in Safari. Some analysts pointed out that this expenditure is equivalent to 36% of the search ad revenue Google generates from Safari traffic. In other words, Google is paying a "protection fee" for its moat.
But the winds have shifted again. With the rise of large language models (LLMs), traditional search has come under pressure. In 2024, Google's search market share slipped from 93% to 89%: still dominant, but with cracks showing. More disruptive are rumors that Apple may launch its own AI search engine. If Safari's default search moves in-house, it would not only redraw the ecosystem but could shake Alphabet's profit pillar. The market reacted quickly, with Alphabet's share price falling from $170 to $140, reflecting both investor panic and deeper unease about where the search era is headed.
From Navigator to Chrome, from open-source ideals to advertising commercialization, from lightweight browsers to AI search assistants, the browser wars have always been a battle over technology, platforms, content, and control. The battlefield keeps shifting, but the essence has never changed: whoever controls the entry point defines the future.
In the eyes of VCs, the new demand for search in the era of LLMs and AI means the third browser war is gradually unfolding.
The Outdated Architecture of Modern Browsers
When it comes to the architecture of a browser, the classic traditional architecture is shown in the figure below:
Client - Frontend Entry
The query first reaches the nearest Google Front End over HTTPS, which handles TLS termination, QoS sampling, and geolocation-based routing. If abnormal traffic (DDoS, automated scraping) is detected, this layer can apply rate limiting or challenges.
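The rate limiting applied at this edge layer is commonly implemented as a token bucket. The sketch below is a minimal, illustrative version (not Google's implementation): each client gets a bucket that refills at a fixed rate and allows short bursts up to its capacity.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter, as an edge layer might apply
    per client IP. Illustrative sketch only."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: throttle or issue a challenge

bucket = TokenBucket(rate=5, capacity=10)   # 5 req/s sustained, bursts of 10
results = [bucket.allow() for _ in range(12)]
print(results.count(True))   # the burst of 10 passes; the rest are throttled
```

A request that returns `False` here would, in a real front end, receive an HTTP 429 or a CAPTCHA-style challenge rather than being forwarded downstream.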
Query Understanding
The front end must understand what the user typed, which involves three steps: neural spelling correction (e.g., fixing "recpie" to "recipe"); synonym expansion (e.g., expanding "how to fix bike" to "repair bicycle"); and intent parsing, which determines whether the query is informational, navigational, or transactional and routes it to the appropriate vertical.
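The three steps above can be sketched as a toy pipeline. The dictionaries below are tiny stand-ins for the learned models a real engine uses; every table and rule here is illustrative, not Google's actual logic.

```python
# Toy query-understanding pipeline: correction -> expansion -> intent.
# All lookup tables are illustrative stand-ins for learned models.
CORRECTIONS = {"recpie": "recipe"}
SYNONYMS = {"fix": ["repair"], "bike": ["bicycle"]}
NAV_SITES = {"youtube", "facebook", "gmail"}     # navigational cues
BUY_WORDS = {"buy", "price", "cheap", "order"}   # transactional cues

def understand(query: str) -> dict:
    # 1) spelling correction (a real system uses a neural corrector)
    words = [CORRECTIONS.get(w, w) for w in query.lower().split()]
    # 2) synonym expansion to widen recall
    expanded = set(words)
    for w in words:
        expanded.update(SYNONYMS.get(w, []))
    # 3) intent parsing: navigational / transactional / informational
    if any(w in NAV_SITES for w in words):
        intent = "navigational"
    elif any(w in BUY_WORDS for w in words):
        intent = "transactional"
    else:
        intent = "informational"
    return {"terms": sorted(expanded), "intent": intent}

print(understand("how to fix bike"))
# terms now include 'repair' and 'bicycle'; intent is informational
```

The output of this stage, corrected and expanded terms plus an intent label, is what the recall stage below actually queries with.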
Candidate Recall
The retrieval technology at the heart of a search engine is the inverted index. A forward index maps a document ID to its content, but users rarely know the ID of what they want among billions of documents, so a classic inverted index is used instead: it maps each keyword to the documents that contain it. On top of this, vector indexing enables semantic search, finding content whose meaning is similar to the query. Text, images, and other content are converted into high-dimensional vectors (embeddings), and search proceeds by vector similarity: even if a user searches for "how to make pizza dough," the engine can return results for "pizza dough making guide" because they are semantically close. After inverted and vector indexing, roughly one hundred thousand web pages survive the initial screen.
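Both recall paths can be shown in miniature. The sketch below builds an inverted index over three toy documents, then fakes "embeddings" as bag-of-words vectors so cosine similarity is easy to demonstrate; a real system uses learned high-dimensional embeddings, and the corpus here is purely illustrative.

```python
from collections import defaultdict
from math import sqrt

docs = {
    1: "pizza dough making guide",
    2: "how to repair a bicycle chain",
    3: "best pizza ovens reviewed",
}

# --- Inverted index: term -> set of doc IDs containing it ---
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def keyword_recall(query: str) -> set:
    """Docs containing at least one query term (union of posting lists)."""
    ids = set()
    for term in query.lower().split():
        ids |= index.get(term, set())
    return ids

# --- Vector recall sketch: toy bag-of-words 'embeddings' ---
vocab = sorted({t for text in docs.values() for t in text.lower().split()})

def embed(text: str) -> list:
    words = text.lower().split()
    return [words.count(t) for t in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_recall(query: str, k: int = 2):
    """Top-k docs by vector similarity to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(docs[d])), reverse=True)
    return ranked[:k]

print(keyword_recall("pizza dough"))       # keyword match: docs 1 and 3
print(semantic_recall("make pizza dough")) # doc 1 scores highest
```

Note how "make pizza dough" never appears verbatim in doc 1, yet vector similarity still surfaces it first, which is exactly the semantic-recall behavior described above.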
Multi-stage Ranking
The system then narrows the candidates from hundreds of thousands to about 1,000 using thousands of lightweight features such as BM25, TF-IDF, and page-quality scores, forming a preliminary candidate set. Systems of this kind are collectively called recommendation engines. They rely on a huge number of features derived from many entities: user behavior, page attributes, query intent, and contextual signals. For example, they combine user history, feedback from other users, page semantics, and query meaning, while also weighing context such as time of day, day of the week, and external events like breaking news.
Deep Learning for Preliminary Ranking
During the preliminary ranking phase, technologies such as RankBrain and Neural Matching are used to understand query semantics and filter preliminary relevant results from a massive document set. RankBrain, a machine learning system introduced in 2015, was designed to better interpret the meaning of user queries, especially never-before-seen ones. It converts queries and documents into vector representations and computes their similarity to find the most relevant results. For example, for the query "how to make pizza dough," even if the document does not...