Hello. This is 6 Minute English from BBC Learning English. I’m Sam.

And I’m Neil.

In the autumn of 2021, something strange happened at the Google headquarters in California’s Silicon Valley. A software engineer called Blake Lemoine was working on the artificial intelligence project, ‘Language Models for Dialogue Applications’, or LaMDA for short. LaMDA is a chatbot – a computer programme designed to have conversations with humans over the internet.

After months talking with LaMDA on topics ranging from movies to the meaning of life, Blake came to a surprising conclusion: the chatbot was an intelligent person with wishes and rights that should be respected. For Blake, LaMDA was a Google employee, not a machine. He also called it his ‘friend’.

Google quickly reassigned Blake from the project, announcing that his ideas were not supported by the evidence. But what exactly was going on?

In this programme, we’ll be discussing whether artificial intelligence is capable of consciousness. We’ll hear from one expert who thinks AI is not as intelligent as we sometimes think, and as usual, we’ll be learning some new vocabulary as well.

But before that, I have a question for you, Neil. What happened to Blake Lemoine is strangely similar to the 2013 Hollywood movie, Her, starring Joaquin Phoenix as a lonely writer who talks with his computer, voiced by Scarlett Johansson. But what happens at the end of the movie? Is it:

a) the computer comes to life?
b) the computer dreams about the writer? or,
c) the writer falls in love with the computer?

... c) the writer falls in love with the computer.

OK, Neil, I’ll reveal the answer at the end of the programme.
Although Hollywood is full of movies about robots coming to life, Emily Bender, a professor of linguistics and computing at the University of Washington, thinks AI isn’t that smart. She thinks the words we use to talk about technology, phrases like ‘machine learning’, give a false impression about what computers can and can’t do.

Here is Professor Bender discussing another misleading phrase, ‘speech recognition’, with BBC World Service programme, The Inquiry:

If you talk about ‘automatic speech recognition’, the term ‘recognition’ suggests that there's something cognitive going on, where I think a better term would be automatic transcription. That just describes the input-output relation, and not any theory or wishful thinking about what the computer is doing to be able to achieve that.

Using words like ‘recognition’ in relation to computers gives the idea that something cognitive is happening – something related to the mental processes of thinking, knowing, learning and understanding.

But thinking and knowing are human, not machine, activities. Professor Bender says that talking about them in connection with computers is wishful thinking – something which is unlikely to happen.

The problem with using words in this way is that it reinforces what Professor Bender calls technical bias – the assumption that the computer is always right. When we encounter language that sounds natural, but is coming from a computer, humans can’t help but imagine a mind behind the language, even when there isn’t one.

In other words, we anthropomorphise computers – we treat them as if they were human. Here’s Professor Bender again, discussing this idea with Charmaine Cozier, presenter of BBC World Service’s The Inquiry.

So ‘ism’ means system, ‘anthro’ or ‘anthropo’ means human, and ‘morph’ means shape... And so this is a system that puts the shape of a human on something, and in this case the something is a computer. We anthropomorphise animals all the time, but we also anthropomorphise action figures, or dolls, or companies when we talk about companies having intentions and so on. We very much are in the habit of seeing ourselves in the world around us. And while we’re busy seeing ourselves by assigning human traits to things that are not, we risk being blindsided. The more fluent that text is, the more different topics it can converse on, the more chances there are to get taken in.

If we treat computers as if they could think, we might get blindsided, or unpleasantly surprised. Artificial intelligence works by finding patterns in massive amounts of data, so it can seem like we’re talking with a human, instead of a machine doing data analysis. As a result, we get taken in – we’re tricked or deceived into thinking we’re dealing with a human, or with something intelligent.

Powerful AI can make machines appear conscious, but even tech giants like Google are years away from building computers that can dream or fall in love. Speaking of which, Sam, what was the answer to your question?

I asked what happened in the 2013 movie, Her. Neil thought that the main character falls in love with his computer, which was the correct answer!

OK. Right, it’s time to recap the vocabulary we’ve learned from this programme about AI, including chatbots – computer programmes designed to interact with humans over the internet.

The adjective cognitive describes anything connected with the mental processes of knowing, learning and understanding.

Wishful thinking means thinking that something which is very unlikely to happen might happen one day in the future.

To anthropomorphise an object means to treat it as if it were human, even though it’s not.
When you’re blindsided, you’re surprised in a negative way.

And finally, to get taken in by someone means to be deceived or tricked by them.

My computer tells me that our six minutes are up! Join us again soon, for now it’s goodbye from us.

Bye!