Liulishuo English, Day 13

Posted by qianyindichang

Listen to the explanation with these questions in mind:

Q1: What new problem has arisen with artificial intelligence?

Q2: How should the verb "deprecate" be understood? (to belittle; to disregard)

Q3: What are your visions for the future of artificial intelligence?

What's wrong with AI? Try asking a human being

Amazon has apparently abandoned an AI system aimed at automating its recruitment process. The system gave job candidates scores ranging from one to five stars, a bit like shoppers rating products on the Amazon website.

The trouble was, the program tended to give five stars to men and one star to women. According to Reuters, it “penalised résumés that included the word ‘women’s’, as in ‘women’s chess club captain’” and marked down applicants who had attended women-only colleges.

It wasn't that the program was malevolently misogynistic. Rather, like all AI programs, it had to be "trained" by being fed data about what constituted good results. Amazon, naturally, fed it with details of its own recruitment programme over the previous 10 years. Most applicants had been men, as had most recruits. What the program learned was that men, not women, were good candidates.
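The mechanism this paragraph describes is easy to reproduce in miniature. Below is a minimal, hypothetical sketch in Python using NumPy and scikit-learn (not Amazon's actual system; the data and feature names are invented for illustration): a classifier is trained on synthetic hiring labels that historically disfavoured a "women's" keyword, and it dutifully learns a negative weight for that feature.

```python
# A minimal sketch, NOT Amazon's system: synthetic data showing how a model
# trained on historically skewed hiring decisions learns to penalise a
# gender-related feature. Feature names and numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 1: résumé contains a "women's" keyword (e.g. "women's chess club").
# Feature 2: a genuine skill score, independent of the keyword.
womens_keyword = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)

# Historical labels: past recruiters favoured men, so hiring outcomes are
# negatively correlated with the keyword even though only skill should matter.
logits = 1.5 * skill - 2.0 * womens_keyword
hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X = np.column_stack([womens_keyword, skill])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the keyword comes out strongly negative: the
# model has absorbed the bias in its training labels, not a fact about ability.
print("weight on 'women's' keyword:", round(model.coef_[0][0], 2))
print("weight on skill:", round(model.coef_[0][1], 2))
```

In this sketch the skewed history is simulated directly; with real records the same effect arrives through the labels. The training step itself is neutral, which is exactly why ten years of male-dominated hiring data could produce a model that marked down the word "women's".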

It's not the first time AI programs have been shown to exhibit bias. Software used in the US justice system to assess a criminal defendant's likelihood of reoffending is more likely to judge black defendants as potential recidivists. Facial recognition software is poor at recognising non-white faces. A Google photo app even labelled African Americans "gorillas".

All this should teach us three things. First, the issue here is not to do with AI itself, but with social practices. The biases are in real life.

Second, the problem with AI arises when we think of machines as being objective. A machine is only as good as the humans programming it.

And third, while there are many circumstances in which machines are better, especially where speed is paramount, we have a sense of right and wrong and social means of challenging bias and injustice. We should never deprecate that.
