The Manus model leads AI breakthroughs, and fully homomorphic encryption has become the new focus of security.
Manus achieves SOTA results, reigniting discussion of AI development paths and safety issues.
Manus has demonstrated outstanding performance on the GAIA benchmark, surpassing other large models in its class. This means it can independently handle complex tasks such as multinational business negotiations, covering contract analysis, strategy formulation, and proposal generation. Manus's strengths lie in dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning: it can break a complex task into hundreds of executable subtasks, process multiple data types simultaneously, and use reinforcement learning to continuously improve decision-making efficiency and reduce error rates.
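The goal-decomposition idea described above can be pictured as a task tree that a planner expands recursively until only executable leaf tasks remain. The sketch below is purely illustrative: the task names, the static rule table standing in for a learned planner, and all function names are assumptions, not Manus's actual mechanism or API.

```python
# Illustrative sketch of dynamic goal decomposition (all names hypothetical).
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list = field(default_factory=list)

def decompose(task, planner):
    """Recursively expand a task via a planner until only leaves remain."""
    for name in planner(task.name):
        child = Task(name)
        task.subtasks.append(child)
        decompose(child, planner)
    return task

def toy_planner(name):
    # Static rules standing in for a learned planner model.
    rules = {
        "negotiate contract": ["analyze contract", "formulate strategy",
                               "generate proposal"],
        "analyze contract": ["extract clauses", "flag risks"],
    }
    return rules.get(name, [])

def leaves(task):
    """Collect executable leaf subtasks in left-to-right order."""
    if not task.subtasks:
        return [task.name]
    out = []
    for s in task.subtasks:
        out.extend(leaves(s))
    return out

root = decompose(Task("negotiate contract"), toy_planner)
print(leaves(root))
# → ['extract clauses', 'flag risks', 'formulate strategy', 'generate proposal']
```

A real system would replace the rule table with a model call and attach execution state and memory to each node, but the tree-expansion shape of the computation is the same.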
The breakthrough progress of Manus has once again sparked discussions in the industry about the development path of AI: should it move towards a unified model of Artificial General Intelligence (AGI), or a collaborative model of Multi-Agent Systems (MAS)?
This discussion stems from Manus's design philosophy, which implies two possible development directions:
AGI path: continuously strengthen the comprehensive capabilities of a single intelligent system until its decision-making approaches human level.
MAS path: use Manus as a super-coordinator that directs a large number of specialized intelligent agents working in concert.
On the surface, this is a debate about the technological path, but it essentially reflects the balance between efficiency and security in AI development. The closer a single intelligent system is to AGI, the higher the risk of opacity in its decision-making process; while multi-agent systems can disperse risk, they may miss critical decision-making moments due to communication delays.
The progress of Manus also highlights risks inherent in AI development. In medical scenarios, it requires access to sensitive patient data; in financial negotiations, it may handle undisclosed corporate information. There is also the issue of algorithmic bias, which could produce unfair salary recommendations for specific groups during hiring; in legal document review, the misjudgment rate on clauses from emerging industries remains relatively high. More seriously, attackers could mislead Manus into incorrect judgments during negotiations by injecting adversarial voice signals.
These issues highlight a grim reality: the more advanced the intelligent systems, the more potential security vulnerabilities they have.
In blockchain and cryptocurrency, security has always been a core concern. The "impossible triangle" proposed by Ethereum founder Vitalik Buterin (security, decentralization, and scalability cannot all be achieved simultaneously) has inspired a variety of security strategies.
These security strategies provide important insights for addressing the security challenges of the AI era. In particular, fully homomorphic encryption (FHE) is regarded as a powerful tool for tackling AI security issues.
FHE technology can enhance the security of AI systems at the following levels:
Data layer: all information input by users (including biometric features, voice, etc.) is processed in an encrypted state; even the AI system itself cannot decrypt the original data.
Algorithm layer: FHE enables "encrypted model training", so that even developers cannot directly observe the AI's decision-making process.
Collaboration layer: communication among multiple intelligent agents uses threshold encryption, so that compromising a single node does not leak global data.
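The data-layer idea, computing on data that never leaves its encrypted state, can be illustrated with a toy additively homomorphic scheme (Paillier): multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so a server can aggregate values it cannot read. The primes below are deliberately tiny and insecure, for illustration only; real FHE schemes support arbitrary computation, not just addition.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption (INSECURE
# parameters, illustration only -- real deployments use 2048-bit+ moduli).
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 293, 433          # tiny primes, NOT secure
n = p * q
n2 = n * n
g = n + 1                # standard simplified generator choice
lam = lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant (Python 3.8+)

def encrypt(m):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2   # homomorphic addition: done without decrypting
print(decrypt(c_sum))    # → 42
```

The key point is the third-to-last line: the party holding `c1` and `c2` computes on ciphertexts and never sees 20 or 22, which is exactly the property the data layer relies on.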
Although these security technologies may seem distant to the average user, they are closely related to everyone's interests. In the "dark forest" of the digital world, only by continuously strengthening security defenses can one avoid becoming a potential victim.
In decentralized identity, the uPort project launched on the Ethereum mainnet in 2017. In zero-trust security models, the NKN project launched its mainnet in 2019. In fully homomorphic encryption, Mind Network became the first FHE project to launch a mainnet and has collaborated with organizations such as ZAMA, Google, and DeepSeek.
Although past security projects may not have received widespread attention from investors, the importance of security issues has become increasingly prominent with the rapid development of AI technology. Whether projects like Mind Network can become leaders in the security field is worth our continued attention.
As AI technology continues to approach human intelligence levels, we need more advanced defense systems. FHE technology not only addresses the current challenges but also prepares us for a more powerful AI era in the future. On the road to AGI, FHE is no longer an option but a necessary condition for ensuring the safe development of AI.