#Gate's Latest Proof of Reserves Reaches $10.453 Billion#
Gate has released its latest Proof of Reserves report! As of June 2025, the total value of Gate's reserves stands at $10.453 billion, covering more than 350 types of user assets, with a total reserve ratio of 123.09% and excess reserves of $1.96 billion.
Currently, BTC, ETH, and USDT are all backed by reserves exceeding 100%. The BTC customer balance is 17,022.60, while Gate's BTC balance is 23,611.00, an excess reserve ratio of 38.70%. The ETH customer balance is 386,645.00, while Gate's ETH balance is 437,127.00, an excess reserve ratio of approximately 13.06%.
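The figures above are internally consistent. A minimal sketch, using only the numbers quoted in the article, shows how the excess reserve ratios and the dollar excess are derived:

```python
def excess_reserve_ratio(exchange_balance: float, customer_balance: float) -> float:
    """Excess reserves expressed as a fraction of customer liabilities."""
    return (exchange_balance - customer_balance) / customer_balance

# Per-asset figures quoted in the report
btc = excess_reserve_ratio(23_611.00, 17_022.60)
eth = excess_reserve_ratio(437_127.00, 386_645.00)
print(f"BTC excess reserve ratio: {btc:.2%}")  # 38.70%
print(f"ETH excess reserve ratio: {eth:.2%}")  # 13.06%

# Platform-wide: reserves of $10.453B at a 123.09% reserve ratio
total_reserves = 10.453e9
reserve_ratio = 1.2309
excess = total_reserves - total_reserves / reserve_ratio
print(f"Excess reserves: ${excess / 1e9:.2f} billion")  # ~$1.96 billion
```

The BTC figure reproduces the 38.70% stated in the report, and the platform-wide calculation matches the reported $1.96 billion excess.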
#GPT-5 Is Launching This Summer! Sam Altman Reveals OpenAI's Blueprint for the Next-Generation Model, Stargate's Ambitions, and Whether Ads Are Coming#
On OpenAI's official podcast, Sam Altman revealed the launch window for GPT-5, the progress of o3 and Deep Research, and the $500 billion "Stargate" infrastructure plan. (Synopsis: Will AI eliminate Hsinchu Science Park engineers? Jensen Huang declares that "robots will replace thousands of employees" as eight Taiwan factories introduce "optimization.") (Background: AI has genuinely begun taking human jobs: global manufacturers are accelerating layoffs, and American college students face unemployment after graduation.)

At midnight today (the 19th), OpenAI launched its first podcast on its official YouTube channel. The first episode featured CEO Sam Altman, who in a 40-minute conversation outlined the company's next steps, revealed that GPT-5 is expected to launch this summer, and discussed deepening reasoning capabilities with the o3 family and the Deep Research tool. This article gives you the highlights.

##GPT-5 Coming Soon? Sam Altman's New Blueprint for Model Evolution##

Altman gave a clear timeline for the next-generation flagship model the market cares about most: "GPT-5 could be available sometime this summer." At the same time, he said the naming and iteration of models may fundamentally change in the future. He explained that in the past, OpenAI's approach was to train a large model and then release it. Now, however, the system has become more complex and can evolve through continuous post-training. This has sparked an internal debate: should a model be continuously updated while keeping the same major version number, or should GPT-5.1, 5.2, and 5.3 be used to make it clear to users that the version has changed? This question reflects a paradigm shift in AI from "discrete releases" to "continuous evolution." Altman admitted that the current naming of models like GPT-4o and GPT-3 is a byproduct of this transition and is admittedly confusing for users. He hopes to move past this situation as soon as possible and into a clearer era of GPT-5 and GPT-6.
He believes users should not have to think about whether to use o4-mini-high or o3; they should simply have one top-tier, most reliable model. This capacity for continuous evolution also blurs the definition of "GPT-5." Altman posed a rhetorical question: "Can users really tell whether this is a top-of-the-line GPT-4.5 or a brand-new GPT-5? Not necessarily." This implies that future model upgrades will be seamless and incremental, and that performance gains will matter more than version-number jumps.

At the same time, Altman is redefining the standard for artificial general intelligence (AGI). He argues that, measured by the standards of five years ago, today's models have long since exceeded the AGI definitions of that era. So he proposed a higher goal: "superintelligence." For him, the hallmark of superintelligence is AI's ability to discover new science on its own, or to greatly enhance human scientists' ability to discover new knowledge. He believes scientific progress is the most important driver of improvements in human life, and AI's potential in this regard is limitless. AI has already shown tremendous value in assisting programmers and scientists, giving him growing confidence in the roadmap to get there.

##Stargate Project: A 100-Billion-Dollar Gamble to Unlock the Future of AI##

In the current AI race, computing power has become the decisive factor. Altman discussed the U.S. Stargate project, an ambitious effort to build hyperscale computing infrastructure. He said bluntly that the world's existing computing power is far from enough: "If people knew what more compute could do, they would want far more." Although the rumored $500 billion scale has not been confirmed, Altman expressed high confidence in the fundraising and future deployment. He stressed that the Stargate project involves not only hardware construction but also international politics and energy distribution.
Meanwhile, Altman criticized Elon Musk for using his influence to obstruct cooperation with the UAE, emphasizing that AI should not be a zero-sum game but, like the transistor, should create entire new industries. Energy is at the heart of the project: in the short term it will rely on a mix of natural gas, solar, and nuclear power, with hopes for nuclear fission and fusion further out. Altman proposed a key shift in thinking: "Turn energy into intelligence, and export intelligence to the world." AI can break the geographic limits of energy distribution and completely reshape global digital infrastructure.

##From Privacy Wars to Advertising Skepticism: OpenAI's Trust Challenge##

As AI becomes deeply integrated into users' private lives, trust and privacy have become core issues OpenAI cannot avoid. In the interview, Altman responded forcefully to the demands The New York Times made in its lawsuit against OpenAI. The newspaper asked the court to force OpenAI to retain user chats beyond the regular 30-day period, a move Altman called "crazy overreach." "We're obviously going to fight to the end, and I think we're going to win," he said. He hopes the incident will raise awareness of the importance of user privacy in the AI era and help establish a solid legal and ethical framework. "People are now having fairly private conversations with ChatGPT, and it's going to be a very sensitive source of information," Altman stressed; privacy protections must be taken seriously.

This raises another sensitive topic: advertising. How will OpenAI handle the monetization potential of its vast user data? Altman's attitude was extremely cautious. He admitted he is not entirely against advertising, and even thinks some ads on Instagram offer a good experience, but he believes that introducing ads into ChatGPT would require great care, and the bar of proof would be "very, very high."
He pointed out that users place a high level of trust in ChatGPT, in part because their experience is not "tainted" by advertising intent the way it is on traditional social media or search engines. So he drew a red line: "If we start modifying the content returned by large language models (LLMs) based on who pays more, that's going to feel very bad. For users, it would be a moment of broken trust." He envisions possible models that don't break trust, such as taking a cut of transactions without affecting the model's output, or placing ads outside the main conversation stream. In any case, the premise must be that ads are genuinely useful to the user and do not interfere with the objectivity of the LLM. By contrast, he believes OpenAI's current model of building a quality service that users pay for is clear and healthy.

##The Ultimate Form of Human-Machine Interaction: Building New Hardware with Jony Ive##

Another highlight of the interview was Altman's confirmation that OpenAI is working with legendary former Apple designer Jony Ive to develop new AI hardware. "We're trying to do something of great quality, and this...