Analysis of Text 3, 2019 Postgraduate Entrance Exam English (I)
Posted by yc-l
Original Text
Text 3
This year marks exactly two centuries since the publication of Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.
Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is intelligence, identity, or consciousness? What makes humans humans?”
What is being called artificial general intelligence, machines that would imitate the way humans think, continues to evade scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as “Westworld” and “Humans”.
Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. “We are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.”
But that doesn’t mean crucial ethical issues involving AI aren’t at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. AI “vision” today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.
Whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI. Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.
On June 7 Google pledged not to “design or deploy AI” that would cause “overall harm,” or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights.
While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair.
To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s out-of-control monster.
Analysis
Text 3
This year marks exactly two centuries since the publication of Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.
This year marks exactly two hundred years since Mary Shelley published Frankenstein; or, The Modern Prometheus. Even before the electric light bulb was invented, the author had produced a remarkable work of speculative fiction that anticipated many of the ethical questions later technologies would raise.
Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is intelligence, identity, or consciousness? What makes humans humans?”
Today, the rapid growth of artificial intelligence raises fundamental questions: what is intelligence, identity, or consciousness, and what makes humans human?
What is being called artificial general intelligence, machines that would imitate the way humans think, continues to evade scientists.
What is called artificial general intelligence, meaning machines that would imitate the way humans think, continues to elude scientists.
Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as “Westworld” and “Humans”.
Yet people remain fascinated by the idea of robots that would look, move, and respond like humans, much like those recently depicted in popular sci-fi TV series such as “Westworld” and “Humans”.
Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist.
David Eagleman, a neuroscientist at Stanford University, says that how people think is still far too complex to understand, let alone to reproduce.
“We are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.”
In other words, no good theory yet explains what consciousness actually is, or how one could ever build a machine that attains it.
But that doesn’t mean crucial ethical issues involving AI aren’t at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions.
But that does not mean the crucial ethical issues surrounding AI are not already upon us. The coming use of autonomous vehicles, for example, poses thorny ethical questions.
Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment.
Human drivers sometimes must make split-second decisions. In that moment, their reactions may be a complex blend of instant reflexes, input drawn from past driving experience, and what their eyes and ears tell them.
AI “vision” today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.
AI “vision” today is nowhere near as sophisticated as human vision, and anticipating every imaginable driving situation is a difficult programming problem.
Whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI.
Whenever decisions rest on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI.
Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.
Alongside Singapore, other governments and mega-corporations are beginning to establish guidelines of their own. Britain is setting up a data ethics center, and India released its AI ethics strategy this spring.
On June 7 Google pledged not to “design or deploy AI” that would cause “overall harm,” or to develop AI-directed weapons or use AI for surveillance that would violate international norms.
On June 7, Google pledged not to “design or deploy AI” that would cause “overall harm,” not to develop AI-directed weapons, and not to use AI for surveillance that would violate international norms.
It also pledged not to deploy AI whose use would violate international laws or human rights.
It also pledged not to deploy AI whose use would violate international law or human rights.
While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair.
While the statement is vague, it represents a starting point, as does the idea that decisions made by AI systems should be explainable, transparent, and fair.
To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s out-of-control monster.
To put it another way: how can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants rather than Frankenstein’s out-of-control monster.