不二如是 posted on 2023-3-29 23:23:04

Pause Giant AI Experiments: An Open Letter | Future of Life Institute (AI safety organization)

This post was last edited by 不二如是 at 2023-3-29 23:35.



Link to the original letter


Today, the well-known AI safety organization Future of Life Institute (FLI) published an open letter calling for the following:

All institutions worldwide should pause the training of AI more powerful than GPT-4 for at least six months, and use those six months to develop AI safety protocols.
At the time of the screenshot above, 1,125 people had already signed the open letter, including Tesla CEO Elon Musk, Turing Award laureate Yoshua Bengio, and Apple co-founder Steve Wozniak.

The number of signatories is still growing.

The letter is strongly worded.

We asked friends from the 鱼C subtitle team to translate it; the full text of the letter follows.



AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.


Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.


Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.


AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.


In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.


Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.





Many experts and scholars have signed the open letter. A partial list of signatories as of this writing:

Yoshua Bengio, University of Montreal; Turing Award winner for the development of deep learning; head of the Montreal Institute for Learning Algorithms.

Stuart Russell, Professor of Computer Science at Berkeley, Director of the Center for Intelligent Systems, and co-author of the standard textbook "Artificial Intelligence: A Modern Approach".

Elon Musk, CEO of SpaceX, Tesla, and Twitter.

Steve Wozniak, co-founder of Apple.

Yuval Noah Harari, author and professor at the Hebrew University of Jerusalem.

Emad Mostaque, CEO of Stability AI.

Connor Leahy, CEO of Conjecture.

Jaan Tallinn, co-founder of Skype; Centre for the Study of Existential Risk; Future of Life Institute.

Evan Sharp, co-founder of Pinterest.

Chris Larsen, co-founder of Ripple.

John J. Hopfield, Professor Emeritus at Princeton University; inventor of associative neural networks.

沙漠之烟 posted on 2023-3-30 07:33:34

Huh? They get to review themselves? Ridiculous.

sfqxx posted on 2023-3-30 07:43:48

My take is:

风吹沟子凉 posted on 2023-3-30 10:59:35

My take is: a grand vision.

hornwong posted on 2023-3-30 22:36:48

My take is: the future looks promising.