TED Talk | Artificial Intelligence Is Building a Dystopia
Speaker: Zeynep Tufekci
Talk: Artificial Intelligence Is Building a Dystopia
Click by click, we are building an artificial-intelligence-powered dystopia (an oppressive, frightening imagined community or society). In this eye-opening talk, Zeynep Tufekci explains how companies like Facebook, Google and Amazon use algorithms to entice us to click on ads and linger on their sites, and how those same algorithms also control and arrange which political and social information we do and do not get to see. The real threat we face is not machines or artificial intelligence itself, but the possibility that those in power will use artificial intelligence to control us. She also proposes how we should respond.
Transcript
So when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that's a distant threat. Or, we fret about digital surveillance with metaphors from the past. "1984," George Orwell's "1984," it's hitting the bestseller lists again. It's a great book, but it's not the correct dystopia for the 21st century. What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways. Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent.
Now, artificial intelligence has started bolstering their business as well. And it may seem like artificial intelligence is just the next thing after online ads. It's not. It's a jump in category. It's a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research. But to paraphrase a famous Hollywood philosopher, "With prodigious potential comes prodigious risk."
Now let's look at a basic fact of our digital lives, online ads. Right? We kind of dismiss them. They seem crude, ineffective. We've all had the experience of being followed on the web by an ad based on something we searched or read. You know, you look up a pair of boots and for a week, those boots are following you around everywhere you go. Even after you succumb and buy them, they're still following you around. We're kind of inured to that kind of basic, cheap manipulation. We roll our eyes and we think, "You know what? These things don't work." Except, online, the digital technologies are not just ads. Now, to understand that, let's think of a physical world example. You know how, at the checkout counters at supermarkets, near the cashier, there's candy and gum at the eye level of kids? That's designed to make them whine at their parents just as the parents are about to sort of check out. Now, that's a persuasion architecture. It's not nice, but it kind of works. That's why you see it in every supermarket. Now, in the physical world, such persuasion architectures are kind of limited, because you can only put so many things by the cashier. Right? And the candy and gum, it's the same for everyone, even though it mostly works only for people who have whiny little humans beside them. In the physical world, we live with those limitations.
In the digital world, though, persuasion architectures can be built at the scale of billions and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone's phone private screen, so it's not visible to us. And that's different. And that's just one of the basic things that artificial intelligence can do.
Now, let's take an example. Let's say you want to sell plane tickets to Vegas. Right? So in the old world, you could think of some demographics to target based on experience and what you can guess. You might try to advertise to, oh, men between the ages of 25 and 35, or people who have a high limit on their credit card, or retired couples. Right? That's what you would do in the past.
With big data and machine learning, that's not how it works anymore. So to imagine that, think of all the data that Facebook has on you: every status update you ever typed, every Messenger conversation, every place you logged in from, all your photographs that you uploaded there. If you start typing something and change your mind and delete it, Facebook keeps those and analyzes them, too. Increasingly, it tries to match you with your offline data. It also purchases a lot of data from data brokers. It could be everything from your financial records to a good chunk of your browsing history. Right? In the US, such data is routinely collected, collated and sold. In Europe, they have tougher rules.
So what happens then is, by churning through all that data, these machine-learning algorithms -- that's why they're called learning algorithms -- they learn to understand the characteristics of people who purchased tickets to Vegas before. When they learn this from existing data, they also learn how to apply this to new people. So if they're presented with a new person, they can classify whether that person is likely to buy a ticket to Vegas or not. Fine. You're thinking, an offer to buy tickets to Vegas. I can ignore that. But the problem isn't that. The problem is, we no longer really understand how these complex algorithms work. We don't understand how they're doing this categorization. It's giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data, understands anymore how exactly it's operating any more than you'd know what I was thinking right now if you were shown a cross section of my brain. It's like we're not programming anymore, we're growing intelligence that we don't truly understand.
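To make the classification step concrete, here is a minimal sketch in Python (an assumed choice, since the talk names no tools). Everything in it is invented for illustration: the synthetic features stand in for behavioral signals, and the small logistic model stands in for the far larger, far more opaque systems the talk describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: each row is a user, each column an arbitrary
# behavioral signal (page views, posting frequency, past purchases, ...).
X_train = rng.normal(size=(1000, 5))
# Label: whether that user previously bought a ticket to Vegas
# (an invented rule, purely so the toy has something to learn).
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3]) > 1.0

model = LogisticRegression().fit(X_train, y_train)

# A "new person" arrives: the model scores how likely they are to buy.
new_user = rng.normal(size=(1, 5))
print("P(buys a Vegas ticket):", model.predict_proba(new_user)[0, 1])
```

Even this fifteen-line toy has coefficients you can inspect; the talk's point is that at the scale of "millions of rows and columns," with nonlinear models, nobody, including the programmers, can read the system that way.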
And these things only work if there's an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine learning algorithms can work. That's why Facebook wants to collect all the data it can about you. The algorithms work better.
So let's push that Vegas example a bit. What if the system that we do not understand was picking up that it's easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become overspenders, compulsive gamblers. They could do this, and you'd have no clue that's what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, "That's why I couldn't publish it." I was like, "Couldn't publish what?" He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.
Now, the problem isn't solved if he doesn't publish it, because there are already companies that are developing this kind of technology, and a lot of the stuff is just off the shelf. This is not very difficult anymore.
Do you ever go on YouTube meaning to watch one video and an hour later you've watched 27? You know how YouTube has this column on the right that says, "Up next" and it autoplays something? It's an algorithm picking what it thinks that you might be interested in and maybe not find on your own. It's not a human editor. It's what algorithms do. It picks up on what you have watched and what people like you have watched, and infers that that must be what you're interested in, what you want more of, and just shows you more. It sounds like a benign and useful feature, except when it isn't.
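A hedged sketch of the "what people like you have watched" logic: a toy co-occurrence count over invented watch histories. Real recommenders optimize predicted watch time with learned models; this only shows the basic inference the talk describes.

```python
from collections import Counter

# Hypothetical watch histories: user -> set of video ids (all invented).
histories = {
    "u1": {"boots_review", "vegas_vlog", "poker_tips"},
    "u2": {"vegas_vlog", "poker_tips", "casino_doc"},
    "u3": {"boots_review", "hiking_101"},
}

def up_next(user, histories, k=2):
    """Recommend the videos that most often co-occur with this user's history."""
    seen = histories[user]
    counts = Counter()
    for other, vids in histories.items():
        if other == user:
            continue
        if seen & vids:                  # "people like you": any overlap
            counts.update(vids - seen)   # credit the videos they saw and you didn't
    return [v for v, _ in counts.most_common(k)]

print(up_next("u1", histories))  # ['casino_doc', 'hiking_101']
```

Note that nothing in this loop knows what a video claims or how extreme it is; it only knows what correlates with continued watching, which is how the escalation described next can emerge.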
So in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me and autoplaying to me white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy left, and it goes downhill from there.
Well, you might be thinking, this is politics, but it's not. This isn't about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube and YouTube recommended and autoplayed a video about being vegan. It's like you're never hardcore enough for YouTube.
(Laughter)
So what's going on? Now, YouTube's algorithm is proprietary, but here's what I think is going on. The algorithm has figured out that if you can entice people into thinking that you can show them something more hardcore, they're more likely to stay on the site watching video after video going down that rabbit hole while Google serves them ads. Now, with nobody minding the ethics of the store, these sites can profile people who are Jew haters, who think that Jews are parasites and who have such explicit anti-Semitic content, and let you target them with ads. They can also mobilize algorithms to find for you look-alike audiences, people who do not have such explicit anti-Semitic content on their profile but who the algorithm detects may be susceptible to such messages, and lets you target them with ads, too. Now, this may sound like an implausible example, but this is real. ProPublica investigated this and found that you can indeed do this on Facebook, and Facebook helpfully offered up suggestions on how to broaden that audience. BuzzFeed tried it for Google, and very quickly they found, yep, you can do it on Google, too. And it wasn't even expensive. The ProPublica reporter spent about 30 dollars to target this category.
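The "look-alike audience" mechanics can be sketched as a similarity search around a seed group. This is an assumption-laden toy: the feature vectors are random stand-ins, and real platforms use proprietary signals and models.

```python
import numpy as np

rng = np.random.default_rng(1)
users = rng.normal(size=(10_000, 16))   # per-user behavioral feature vectors
seed = users[:50]                       # advertiser-supplied seed audience

# Score every user by cosine similarity to the seed audience's centroid.
centroid = seed.mean(axis=0)
sims = users @ centroid / (
    np.linalg.norm(users, axis=1) * np.linalg.norm(centroid) + 1e-12
)

# The "look-alikes": the top-1000 users most similar to the seed group.
lookalikes = np.argsort(sims)[::-1][:1000]
print(lookalikes[:5])
```

The point the talk makes is that the users surfaced this way never posted anything resembling the seed group's content; they are reachable purely because their behavioral vectors sit nearby.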
So last year, Donald Trump's social media manager disclosed that they were using Facebook dark posts to demobilize people, not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia, and I'm going to read exactly what he said. I'm quoting.
They were using "nonpublic posts whose viewership the campaign controls so that only the people we want to see it see it. We modeled this. It will dramatically affect her ability to turn these people out."
What's in those dark posts? We have no idea. Facebook won't tell us.
So Facebook also algorithmically arranges the posts that your friends put on Facebook, or the pages you follow. It doesn't show you everything chronologically. It puts the order in the way that the algorithm thinks will entice you to stay on the site longer.
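The difference between a chronological feed and an engagement-ranked one fits in a few lines. The posts and scores below are invented; in a real system the engagement score would come from a learned model, but the sorting step is the same idea.

```python
from datetime import datetime, timedelta

now = datetime(2017, 1, 1)
posts = [
    {"id": "p1", "posted": now - timedelta(hours=1), "predicted_engagement": 0.02},
    {"id": "p2", "posted": now - timedelta(hours=9), "predicted_engagement": 0.31},
    {"id": "p3", "posted": now - timedelta(hours=3), "predicted_engagement": 0.11},
]

# Chronological: newest first, the ordering the talk says feeds do NOT use.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)
# Engagement-ranked: whatever the model predicts will keep you on the site.
engagement = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["id"] for p in chronological])  # ['p1', 'p3', 'p2']
print([p["id"] for p in engagement])     # ['p2', 'p3', 'p1']
```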
Now, so this has a lot of consequences. You may be thinking somebody is snubbing you on Facebook. The algorithm may never be showing your post to them. The algorithm is prioritizing some of them and burying the others.
Experiments show that what the algorithm picks to show you can affect your emotions. But that's not all. It also affects political behavior. So in 2010, in the midterm elections, Facebook did an experiment on 61 million people in the US that was disclosed after the fact. So some people were shown, "Today is election day," the simpler one, and some people were shown the one with that tiny tweak with those little thumbnails of your friends who clicked on "I voted." This simple tweak. OK? So the pictures were the only change, and that post shown just once turned out an additional 340,000 voters in that election, according to this research as confirmed by the voter rolls. A fluke? No. Because in 2012, they repeated the same experiment. And that time, that civic message shown just once turned out an additional 270,000 voters. For reference, the 2016 US presidential election was decided by about 100,000 votes. Now, Facebook can also very easily infer what your politics are, even if you've never disclosed them on the site. Right? These algorithms can do that quite easily. What if a platform with that kind of power decides to turn out supporters of one candidate over the other? How would we even know about it?
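To see how such an experiment measures its effect, here is an illustrative simulation. Only the experimental logic mirrors the study the talk cites; the baseline turnout rate and the lift are made-up numbers chosen to show the mechanics, not to reproduce the 340,000 figure.

```python
import random

random.seed(42)
N = 1_000_000  # simulated users, split evenly into two arms

def voted(sees_social_version: bool) -> bool:
    base = 0.37                                   # hypothetical baseline turnout
    lift = 0.004 if sees_social_version else 0.0  # hypothetical effect of thumbnails
    return random.random() < base + lift

# Arm A sees the "I voted" friend thumbnails; arm B sees the plain banner.
treatment = sum(voted(True) for _ in range(N // 2))
control = sum(voted(False) for _ in range(N // 2))

effect = (treatment - control) / (N // 2)
print(f"estimated lift: {effect:.2%}")
print(f"scaled to 61M users: ~{effect * 61_000_000:,.0f} extra voters")
```

Even a fraction-of-a-percent lift, applied to tens of millions of people, produces turnout swings on the scale the talk describes, which is exactly why the question of who controls the tweak matters.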
Now, we started from someplace seemingly innocuous -- online ads following us around -- and we've landed someplace else. As a public and as citizens, we no longer know if we're seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we're just at the beginning stages of this. These algorithms can quite easily infer things like your ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender, just from Facebook likes. These algorithms can identify protesters even if their faces are partially concealed. These algorithms may be able to detect people's sexual orientation just from their dating profile pictures.
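The inference-from-likes claim alludes to published like-prediction studies; the underlying technique is, once again, plain supervised classification, sketched here on a synthetic binary like matrix. The pages, the trait and the correlation are all fabricated for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_users, n_pages = 2000, 300

# Sparse binary matrix: likes[u, p] == True if user u liked page p.
likes = rng.random((n_users, n_pages)) < 0.05
# Synthetic personal trait, correlated (by construction) with a few pages.
trait = (likes[:, :10].sum(axis=1) + rng.normal(0, 0.5, n_users)) > 1

# Train on 1500 users, test on the remaining 500.
clf = LogisticRegression(max_iter=1000).fit(likes[:1500], trait[:1500])
print("held-out accuracy:", clf.score(likes[1500:], trait[1500:]))
```

Nothing here requires the trait to be disclosed anywhere; it only requires that liking behavior correlates with it, which is the quiet power the talk is warning about.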
Now, these are probabilistic guesses, so they're not going to be 100 percent right, but I don't see the powerful resisting the temptation to use these technologies just because there are some false positives, which will of course create a whole other layer of problems. Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology to identify and arrest people. And here's the tragedy: we're building this infrastructure of surveillance authoritarianism merely to get people to click on ads. And this won't be Orwell's authoritarianism. This isn't "1984." Now, if authoritarianism is using overt fear to terrorize us, we'll all be scared, but we'll know it, we'll hate it and we'll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they're doing it at scale through our private screens so that we don't even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider's web and we may not even know we're in it.
So Facebook's market capitalization is approaching half a trillion dollars. It's because it works great as a persuasion architecture. But the structure of that architecture is the same whether you're selling shoes or whether you're selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that's what's got to change.
Now, don't get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building. And that's the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don't work on the site, it doesn't work as a persuasion architecture, or its power of influence is of great concern. It's either one or the other. It's similar for Google, too.
So what can we do? This needs to change. Now, I can't offer a simple recipe, because we need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to face and try to deal with the lack of transparency created by the proprietary algorithms, the structural challenge of machine learning's opacity, all this indiscriminate data that's being collected about us. We have a big task in front of us. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. These structures are organizing how we function and they're controlling what we can and we cannot do. And many of these ad-financed platforms, they boast that they're free. In this context, it means that we are the product that's being sold. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.
(Applause)
So to go back to that Hollywood paraphrase, we do want the prodigious potential of artificial intelligence and digital technology to blossom, but for that, we must face this prodigious menace, open-eyed and now. Thank you.