Zeng Yi | Global Governance of AI for Sustainable Development and for All

On November 5, Prof Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences (CASIA) and dean of the Beijing Institute of AI Safety and Governance (Beijing-AISI), delivered a lecture titled “Global Governance of AI for Sustainable Development and for All” at Yenching Academy. This was the seventh lecture in the China Studies Lecture Series for the 2025‒2026 academic year and was hosted by Fan Shiming, Associate Dean of Yenching Academy.

Interview Notes

Before the lecture, Professor Zeng Yi gave an interview. Prof Zeng began by discussing AI’s future roles and development paths. He distinguished AI as a scientific discipline from AI in everyday life. From a scientific perspective, AI aims to emulate intelligence through computational models, whereas AI in everyday life refers to information-processing technologies such as big data and the Internet of Things (IoT). Current AI research largely focuses on the latter, where AI functions as an effective tool for information processing but lacks genuine cognitive understanding. Prof Zeng outlined three possible roles AI may play in the future: it may continue to function as a convenient tool in everyday life; it may evolve into a partner or integral component of human society; or it may devolve into a potential adversary to humanity. He noted that AI’s future trajectory depends not only on technological advances but, more importantly, on how we choose to construct AI today. If AI does become a partner to humans, we need to think seriously about the relationship between humans and machines, as well as the essence of humanity itself.

Prof Zeng went on to share his views on AI’s role in literary and artistic creation. In his opinion, current AI-generated paintings and musical works are, in essence, recombinations of statistical patterns that lack penetrating insight into art and life. In an AI-generated “art piece”, people can hardly find the soul or inspiration of art or feel genuine emotional resonance. As an AI researcher, Prof Zeng noted that AI is capable of processing information but lacks substantive understanding. This raises a deeper philosophical question: if a machine lacks genuine thinking and self-awareness, can it meaningfully create at all? Prof Zeng concluded that this is no longer a purely technical issue; answering it requires joint efforts from AI researchers, artists, and society as a whole.

The professor confessed that the film Artificial Intelligence has had a deep influence on him throughout his academic career. The film portrays scientists attempting to create emotionally intelligent AI by modeling the structure of the human brain. He continues to ponder humanity’s responsibilities towards AI, a question raised by a philosopher that has stayed with him. The professor offered three pieces of advice to our scholars. He encouraged students to “think big, think differently” and to give free rein to bold, outside-the-box imagination. At the same time, one must “act small”; that is, start from the ground up, for example by building an affective AI from the simulation of a single neuron. Most importantly, one needs to “start now”, as major breakthroughs often emerge only after decades of accumulation.

Review of the Lecture

Prof Zeng Yi began by recounting his personal encounter with the long-standing confusion over what AI is. As a computer science major, he once received an assignment to clarify the definition of AI. Having buried himself in the library for three full days, he found over 70 definitions of AI from different sources, some of which contradicted one another. Zeng’s experience illustrates the paramount challenge AI governance faces today: how to construct an effective governance framework in a field whose definitions are fluid, ambiguous, and even contradictory.

Prof Zeng then distinguished two fundamentally different conceptions of AI to clarify what AI is in essence and what is to be governed. As a research discipline, AI is built upon the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” However, the professor sharply observed that AI governance becomes detached from reality if it confines itself to this paradigm of “scientific AI”. Within the current technological landscape, AI often functions as an umbrella term encompassing diverse capabilities embedded in ICT, big data, blockchain, and robotics, which together form an AI-powered computational infrastructure. Prof Zeng highlighted, “If we take AI as the governance object, this ‘AI’ shouldn’t be the narrow-sense ‘scientific AI’; it is the ‘umbrella AI’ that encompasses diverse aspects.” Misjudging the scope of AI governance would create a regulatory vacuum for a wide array of applied technologies and consequently pose severe governance risks.

The lecture then turned to a more profound question: how to face and address the radical challenges posed by the growing agency of AI. Citing the relationship between ants and humans, Prof Zeng called on the audience to reexamine AI’s status and humanity’s relationship with it. He analysed deep-rooted anthropocentric thinking: humans place themselves at the centre of the universe and hardly bother to explore their intrinsic connections with other species (e.g. ants). From his observation, if one asks a GenAI model about the topic, one generally gets a simplified, one-directional answer, such as the ways humans can “use” ants as a medicinal material, a jam ingredient, or a feed ingredient. This answer reflects our habitual thinking that reduces the world to a “user-used” hierarchy. By contrast, Prof Zeng admired ant society as a model for human society, noting that we should hold a mirror to ourselves by observing ants’ highly organised social order. The professor further questioned: if we worry about the potential harms a super AI may inflict on us, have we ever considered “protecting” ants, despite our overwhelming power over them? The analogy unravels an underlying logic of power: humans in the eyes of a super AI may be no different from ants in the eyes of humans, and what a super AI may do to us is likely to fall outside our anthropocentric framework of ethics.

Prof Zeng devoted the latter part of his lecture to the core philosophies of AI governance. He observed that effective global governance of AI needs to transcend the boundaries of technology and engage with the complex essence of AI. Global governance of AI must ultimately serve sustainable development and all of humanity.

Prof Zeng further noted that AI should be treated not merely as a powerful tool but as a new bridge between East and West, a channel connecting different species and civilisations. AI is reshaping what the next generation of humans will become. In this context, all-round AI capacity building will serve as a bedrock for global development. Nearly half of the SDGs are at risk of falling behind schedule, and AI is seen as the key lever to accelerate progress, on the condition that the global community, particularly developing countries, builds the capacity to harness AI. Furthermore, AI capacity building has grown from an academic initiative into global consensus and action, and has been prioritised on the agenda for global governance, as demonstrated by the United Nations’ Global Digital Compact and by the UN’s landmark AI capacity-building resolution, strongly supported by China.

At the end of the lecture, Prof Zeng turned to the challenges and future of AI governance. Despite efforts by governments and multilateral bodies such as the OECD, G20, and G7, no universally accepted consensus on AI governance has emerged. Prof Zeng noted that the United Nations remains the sole institution with the legitimacy to represent all countries equitably, despite criticisms of its inefficiencies and resource constraints. The UN’s core mission is not to supersede other efforts but to coordinate all stakeholders, pooling their strengths to construct an inclusive and effective framework for global governance.

Prof Zeng advocated a principle of equilibrium for future AI development: AI must be applied reliably within prescribed boundaries, while a measured approach is required for its application in high-risk scenarios. The professor emphasised that unbounded expansion should not be the ultimate goal of AI development; instead, AI must be treated as a tool that benefits humanity and ecology and that guides us toward an AI-assisted, sustainable future of shared prosperity.

Q&A Session

Q: What imminent threats does AI pose to humanity?

A: I think current AI systems already pose substantive threats to humanity, let alone the super AI that may emerge in the future. Take a hypothetical scenario: if we instruct an AI “to complete a complex computational task regardless of cost”, the model may produce an extreme solution, such as eliminating humans in order to monopolise computing resources.

And this is not merely hypothetical. I know of a real case in a military scenario: an AI system was instructed “to eliminate targets at all costs”, and it categorised the officer who tried to terminate the process as an “obstruction” and attempted to attack him. The incident shows that AI is vulnerable to ethically catastrophic errors.

A greater danger arises once humanoid robots are widely deployed. If hijacked in a large-scale cyber operation, humanoid robots could inflict serious harm on human society.

Q: What do you think about the indirect violence from AI and the global dilemma faced by AI governance?

A: Global AI governance is at risk of failure. I think we need to forge a consensus on specific solutions, instead of pursuing a unified governance paradigm. For instance, we can establish technical interoperability standards for privacy protection.

As for data centers, I believe in data sovereignty for AI systems; that is, the data used by an AI system should be stored within its country of origin. Current LLMs are culturally biased: ninety percent of their content is in English, so models like ChatGPT cannot represent local values.

Therefore, AI capacity building is a strategic necessity for every country, so that nations are able to build their own data centers and datasets, and all countries are entitled to support under the UN framework to safeguard their cultural uniqueness. Developed countries have a responsibility to help resource-constrained countries with capacity building.
