News

Is AI set to destroy trust in elections? Tackling misinformation in politics & Parliament, with top fact checker Full Fact's Chris Morris - Parliament Matters podcast, Episode 32 transcript

30 Apr 2024
©Adobe Stock/Skorzewiak

Artificial Intelligence (AI) has the capacity to make it far easier to generate and spread political misinformation online. But could it actually impact the result of elections? We talk to Chris Morris, head of fact-checking organisation Full Fact, about the scale of misinformation in politics, the potential impact of AI, and what politicians and organisations like Full Fact can do about it.

This transcript is automatically generated. There are consequently minor errors and the text is not formatted according to our style guide. If you wish to reference or cite the transcript please first check against the audio version. Timestamps are provided above each paragraph for ease of reference.

[00:00:02.26] You are listening to "Parliament Matters," a Hansard Society production, supported by the Joseph Rowntree Charitable Trust. Learn more at hansardsociety.org.uk/pm.

[00:00:13.83] Welcome to "Parliament Matters", the podcast about the institution at the heart of our democracy, Parliament itself. I'm Ruth Fox.

[00:00:20.97] And I'm Mark D'Arcy. And coming up in this special edition of the podcast--

[00:00:25.23] We talk to the man trying to guarantee the truth, the whole truth, and nothing but the truth in politics, and particularly in the upcoming general election campaign. That's Chris Morris, Chief Executive of the fact-checking organisation Full Fact.

[00:00:43.85] It is, we're told, going to be an election year. And in that election, there may be a new wild card. Will artificial intelligence be used to produce fake interviews or fake sounds or even fake pictures of candidates doing fake things that turn the voters against them?

[00:01:00.68] Will fake facts creep into political dialogue and possibly turn voters away from one party or another? Well, one of the people who's grappling with the dangerous possibilities raised by all kinds of new technologies, and indeed by fake news on the internet already, is Chris Morris, head of Full Fact, the fact-checking organisation that sinks its teeth into mistakes, misinformation, and outright lies, particularly those that appear online. So, Chris, first of all, you've produced a new report -- Full Fact have produced a new report -- that grapples with some of these possibilities. What's the most dangerous thing, in your view, that might happen at the next election?

[00:01:40.51] A lot of it is about operating in a climate where we know that public trust in politics and politicians is at its lowest ebb for 40 years. And there are things that could happen at this election that could damage that trust almost irretrievably. We know that the information environment we work in has changed remarkably quickly over 10 years.

[00:02:01.81] But I think my main message would be "you ain't seen nothing yet" because generative AI is accelerating the trend. And I think the danger is that we get to a place, whether in elections or more broadly, where many people start to wonder whether they can believe anything that they read or see or hear. And if we get to that point, then you have to start to wonder what's left of liberal democracy.

[00:02:24.94] So that's kind of the high-level challenge, I think. Of course, every election feels like a first. There was the first Twitter election. I guess way back in the States, Kennedy and Nixon was the first TV election, right? The famous debate where Nixon was sweating badly.

[00:02:39.82] And this is going to be the first generative AI election. And when we talk about generative AI, as opposed to just artificial intelligence, the generative bit means tools that can create content just like that, whether it's audio or video or images. And anyone can use them.

[00:02:53.74] So stuff that maybe a few years ago would have taken a lot of computing power and a team of boffins, you and I can do when we go home and sit on our couch. And I think the danger is that we get overwhelmed with so much bad information, misinformation, disinformation, that people start to doubt, A, the result of elections in the future, and B, anything they read, see or hear. And I think that's the danger to democracy overall.

[00:03:18.88] So in terms of this upcoming election, you've just-- Full Fact has just published a new report about this. What do you want the political parties, the politicians, to do?

[00:03:27.01] We'd like them to promise publicly-- we know they're slightly allergic to the word pledge because that reminds them of Nick Clegg and tuition fees and things like that. But we'd like them to say very publicly that they will use AI responsibly, whether that be in their campaign materials, whether that be in the way they communicate within the party. I think it's really important that things are done transparently. If people are distrusting the political system, then it is up to the politicians to make sure that they are the people setting the highest standards.

[00:03:59.62] So what does responsible use of AI look like? Is this just labelling when you've created something via artificial intelligence -- a picture of a politician doing something, as a sort of cartoon?

[00:04:09.49] Yeah, and a cartoon is an interesting example because we do have to take account of things like satire. And you don't want to be the pedant sitting in the corner saying that anything satirical that you've made up with artificial intelligence will not be tolerated. The problem is when it's a deliberate attempt to deceive, where you are creating some audio or video content in which you are saying something which has proper political import, and people are meant to believe that it is real.

[00:04:36.25] I think satire is a slightly different thing. People can usually tell that this is satire. That's allowed. It's OK. Full Fact is not trying to stop people having a laugh, but it is trying to stop politicians--

[00:04:46.08] That's a relief.

[00:04:46.38] --deliberately misleading people. I know. We're very kind.

[00:04:49.65] But there are two elements here, then, aren't there? Because I was thinking about the risk to the election of, say, hostile foreign states using AI to import misinformation and lies into the campaign. What you're also talking about here is that there's actually a risk of the political parties themselves using generative AI to alter the terms of trade of the campaign itself.

[00:05:11.41] Yeah, it's both. Obviously, we have to be aware of the possibility of hostile foreign states trying to carry out some big disinformation campaign. Could there be an information event just a few days before polling?

[00:05:22.80] Now, in our system, particularly with the way the opinion polls are looking at the moment -- and they could change -- do I think there could be an information event just before the election that would change the outcome? It seems unlikely. Could there be an AI-generated event in a specific constituency which is very close, which might cause people to change their vote? That's more possible.

[00:05:45.47] That wouldn't change the outcome of the entire election, but it seems to me that if any seat has been decided by deceptive means, that's a very bad precedent to set. So, yes, it's the responsibility of the political parties themselves to show leadership. But also, within that, when you talk about hostile states and foreign powers, do we have a system which tells people clearly and transparently what would happen if there was a big information incident?

[00:06:10.24] In Canada, for example, they have a protocol which is very clear. This is what will happen. These are the people who will be involved. This is what we will tell you, and here's how we'll deal with it.

[00:06:19.72] So would that mean, for example, that if there was something in an individual parliamentary seat, and this was found to have happened, we would rerun the election in that seat?

[00:06:29.00] That, I think, is up to the politicians to decide. But it's good to say, this is what we would do, so people are clear. And I think, again, transparency is the word I keep coming back to: so people have confidence that if there is an attempt to manipulate the system, there is a process in place whereby that can be dealt with. If there was clear evidence in a particular seat that there had been manipulation, I think-- you know, I want our elected representatives to say, this is how we would deal with that.

[00:06:55.48] And that you know beforehand so that it doesn't look like you're trying to manipulate the result afterwards if you've not done so well--

[00:07:00.55] Hence the focus on transparency, yes. So you have-- it's very clearly set out. It's a publicly available document.

[00:07:05.69] I mean, clearly, there are people in the darker corners of Whitehall, or even maybe some of the brighter corners of Whitehall, who are thinking about these things very hard. But I think there's a feeling that it's all being done a bit behind the scenes. And one of the reasons we at Full Fact like the Canadian model is it's very open, it's very transparent, and it tells people very clearly: we want to give you confidence in our electoral process. So we're telling you what will happen if there is a major attempt by a hostile state or by others to manipulate the outcome of an election.

[00:07:33.80] And are we anywhere near getting to a set of rules like that for this country? Are the parties talking about this between themselves, to your organisation, to anybody else?

[00:07:43.76] I don't think we are close to having a public protocol about how to deal with a major election incident, no, although it's something Full Fact has talked about and tried to persuade people about, obviously unsuccessfully. In terms of what the parties are going to do, this week, there will be an open letter which will ask parties to commit to a variety of things, some of which are set out in our report, about the way they use generative AI, essentially asking them to make public commitments to their voters, really, that we will use generative AI responsibly in a way that doesn't either mislead or deceive you.

[00:08:18.71] So that's a starting point on all this. But what I'm working around to here is you could get to a position where election results have to be overturned because something untoward has happened that has clearly influenced the result. Or you get to a position where people are in power who the public think shouldn't be in power because they cheated.

[00:08:37.18] Yeah, well, look at the United States in the last four years. Millions of people still think-- erroneously, in my view, and I'm sure in yours-- that the election was stolen from them. But--

[00:08:46.48] But it's incredibly corrosive that they think that.

[00:08:48.46] It's incredibly corrosive that they think that. And we may think, oh, that couldn't happen here, but I don't think we should be quite so sure. I'm not saying that we're going to have a Donald Trump type figure in the United Kingdom, and we have a different political system.

[00:09:01.43] But the fact is that many people get their information now in sort of closed online groups, a lot of people are forming very strong beliefs around what you might call conspiracy theories, and there is a danger, I think, that that kind of feeling morphs into something which says we simply don't trust the results of elections anymore. Which is why, again, it comes back to the words transparency, honesty, openness, and to politicians taking the lead on that. I mean, the idea that with public office comes public responsibility is something I think they should take more seriously. It's very easy for politicians, if they want to be elected and they want to take over the next government, to say, we're going to be different. We're going to do things differently.

[00:09:41.18] But they need to actually commit to specific standards, I think, whether it be through the ministerial code, whether it be through adherence to the code of practice for statistics. There's all sorts of good documents out there. But I think politicians need to do more to persuade people they actually mean to uphold what's said there.

[00:09:58.94] There are examples of this kind of misinformation, disinformation, AI-generated content already being used to disrupt, or at least attempt to disrupt, election processes. A little while ago, there was a clip floating around supposedly of Keir Starmer swearing viciously at an aide that turned out to have been AI-generated. And people across parties did start calling that out and issuing warnings.

[00:10:19.01] There was a faked phone call of Joe Biden which was used in an attempt to suppress voter turnout in one of the US primaries. So this is something that's already going on. And the vibe I get from you is that our parties are nowhere near alarmed enough about this.

[00:10:32.89] I think they are alarmed about it. I just don't think they're doing quite enough about it yet. And I understand the difficulty.

[00:10:38.92] Last week's information is out of date next week. This is an area of policy and an area of technology which is moving incredibly quickly, which is one of the reasons why it's so hard to legislate or regulate on it: it's out of date almost before the ink is dry on the paper. But you're right. There have been examples.

[00:10:54.28] And you mention the fake audio of Keir Starmer. One of the worrying things about audio in particular is that if you go and talk to the audio experts on this, they say it's already almost impossible to tell whether something has been generated by artificial intelligence.

[00:11:09.77] So the Starmer example you give, alleging that he was shouting at his staff -- I'm convinced it was fake. But if you go to an expert and say, was this generated by artificial intelligence, they will tell you it's almost impossible to tell. And in fact, one of the experts we consulted about this did put it through a couple of the tools that are available, and they told them, oh, this might be real.

[00:11:29.81] And so this is the problem: there's also, if you like, a liar's dividend, because not only can you not tell whether something is fake, but if you do say something -- I'm thinking of one prime ministerial candidate you may remember, Gordon Brown, caught in his car with a TV microphone on, saying something he shouldn't about a voter he'd just met -- it's much easier now to say, oh, I didn't say that. I didn't say that. That's been artificially generated. And if you can't prove that it's fake, you also can't prove that it's real.

[00:11:55.91] Yeah. So, as an organisation, you're going to have an important role, I think, in the election campaign coming up, as one of the main, if not the main, fact-checking organisations. So how, organisationally, are you thinking about how you need to gear up for the election campaign whenever it comes?

[00:12:11.00] We don't know whether we're going to get, for example, TV party leaders' debates. We may, we may not. But if we do, then your organisation is going to have a role, I think, in checking what the party leaders say.

[00:12:22.74] Yeah, I guess like almost everyone else involved in any way in politics, we just have to be ready for an election when it comes. It may be November, it may be before that. It might even be January next year, God forbid.

[00:12:33.69] But yes, we're getting ready. We're preparing. We will be taking on secondees to try and increase the size of our fact-checking teams.

[00:12:40.98] We'll be working with big media houses, whether in broadcast or in print. It's a time when we can create impact. We're also trying to build advanced chatbots so people can ask questions -- a walled garden in which you can actually ask questions safely. I can't tell you too much about the technical details--

[00:12:57.71] Watch this space.

[00:12:57.92] --how that's being done. But watch this space. And obviously, there's always a risk as soon as you put out something like that because lots of people will try and make it say something stupid, as we've seen with gen AI tools like ChatGPT and so forth.

[00:13:10.71] But yeah, the election's a big moment, and we know that many people really pay very little attention to politics, unlike people in this area of London, who are obsessed with it every day. But during the last few weeks of an election campaign, they do. And I think our overall point is that every voter deserves access to good information so they can make informed choices on the things that matter to them. We don't care which way they vote, but we would like them to have access to the information which allows them to make an informed choice.

[00:13:38.19] And you do have some specific proposals about one small part of, if you like, the false fact ecosystem, which is correcting stuff that's been said in Parliament that turns out to be inaccurate or turns out to be a flat lie or turns out to be a misreading of a statistical table or whatever it is. So you're looking at proposals so that if MPs say something in Parliament that turns out to be wrong, they have a strong mechanism for correcting it because otherwise, you can get to a situation where someone footnotes a false fact to something that's said in Parliament. It looks a lot more convincing because it's been said in Parliament.

[00:14:12.75] Exactly. So until earlier this month, only ministers could formally correct the official record, i.e. Hansard. From the 15th of April, all MPs have the ability to do that, and that's as a result of a Full Fact campaign.

[00:14:26.79] And you're right. The reason it's important is that there were some arguments saying, well, you don't need to do this because an MP can stand up and make a point of order in the House of Commons and say, the thing I said two weeks ago is incorrect.

[00:14:37.89] That would not correct the record, the written record, in Hansard. And of course, if you're trying to train a large language model which is used to build Gemini or ChatGPT or something, and you think, well, OK, British politics, what's going to be the most authoritative source? Well, we'll certainly put Hansard in there.

[00:14:54.39] And if there are claims in there which essentially stay there -- albeit corrected two or three weeks down the line, those uncorrected claims remain on the books, if you like -- that's a problem, because they will spread, and they will morph into other things. What happens now, and we're just beginning to see the first one or two corrections, is that MPs will be able to go in, and a correction will appear in the place where the error was made. And you're right that sometimes it's something which is said deliberately.

[00:15:21.11] But sometimes, people make honest mistakes. We make mistakes, right? It's--

[00:15:24.54] Oh, surely not.

[00:15:25.64] It's hard to believe. The important thing is, how do you respond when you make a mistake? And again, it comes back to transparency and saying, we made a mistake. This is what we said. This is what we should have said.

[00:15:35.34] And initially, when Hansard put out information about the new correction system, they said that they would like MPs to inform them that they want to make a correction within five days of the initial statement being made. And we went back to them and said, well, hang on. Sometimes, you know, it might be a constituent who emails them two weeks later and says, this is wrong.

[00:15:54.99] So we've had a correspondence with Hansard, and they've agreed that it should be five days where possible. But they've also agreed that retrospective corrections can be made. So if MPs realise they said something wrong before the new system came into place this month, they can go back and correct it in the record.

[00:16:13.14] So we believe it's a positive because it shows that Parliament is trying to get its house in order. We would have liked the system to have gone one stage further, which is: how does the House, how does Parliament, deal with MPs who refuse to correct things which are clearly wrong? And we're not talking about opinions here. It's really important to make that distinction.

[00:16:30.93] Everyone's got opinions, right? But we think there should be a bedrock of shared facts on which you then have robust political debate. And if there are things which people have said which can be shown to be factually incorrect, as an elected representative, I think they have a responsibility to the people who elected them to make that correction. So there's another stage of the campaign to come, which is, how do you deal with people who refuse to correct the record?

[00:16:54.55] How would you like them to deal with people who refuse to correct the record?

[00:16:57.28] Well, there should be some sort of sanction. One thing I don't think it should be: there are those who say it should be a criminal offence for a Member of Parliament to lie. I don't believe that should be the case.

[00:17:06.97] But I think if Parliament wants to get its own house in order, there should be some form of sanction for people who have said things which are verifiably incorrect and who refuse to correct them. And I think it's up to MPs to decide what kind of sanction that should be. And you both know there are all sorts of sanctions that can be brought in.

[00:17:24.13] I suppose one possibility is that, just as they have community notes on Twitter, maybe the Commons could have a kind of community note under someone's claim, saying everyone else thinks this is twaddle.

[00:17:34.57] Yeah, you--

[00:17:35.62] I can't see the House of Commons saying that.

[00:17:38.04] You don't think the House of Commons would use the word twaddle.

[00:17:40.37] I'm sure they'd come up with a better phrase.

[00:17:42.24] I don't think you'll get the officials--

[00:17:43.55] I mean, community notes is an interesting one. The problem with community notes, for me -- it's great that it's there, but it's sort of crowdsourced fact checking. A bit like, you know, if enough people pile in, you can manipulate what a Wikipedia page says. If enough people say, oh no, this is actually what should have been said-- you've got to make sure that the sources are there. One thing about what we do -- and there are 400 fact-checking organisations around the world now -- is that one of the questions we often get is, who fact-checks the fact-checkers?

[00:18:11.60] Yeah.

[00:18:11.93] And the answer is everyone does because whenever we do a fact check, all the sources are there. And so in effect, you can reverse engineer what we've done. And you might decide, OK, we don't agree with you, but we will show you why we've come to the decision we have, and we will show you the source of where we think what we would call the good information is.

[00:18:29.76] And that gets us onto another interesting point about fact-checking organisations. Fact-checking organisations are wonderful things; they provide a baseline of truth for political debate, if they're getting it right. But who owns the fact-checkers?

[00:18:41.35] I mean, in your case, your organisation is funded by people like Facebook and Google and so forth. So would you be able to call them out? Would you be able to say to Google, your algorithm is constantly leading people to this misleading or inaccurate information?

[00:18:55.30] Well, if you read our annual report, which has just come out, we are very critical of the tech companies and the search engines. It's all disclosed on our website. Google funds the research we do and the tools we build for automated fact checking.

[00:19:06.97] At the moment, we're trying to build a new tool that will mean we can rank health misinformation in online videos by the harm it causes. There's a heck of a lot of health misinformation out there on YouTube and other platforms. Google are helping us fund something which may be very critical of YouTube.

[00:19:22.54] Google and YouTube are owned by the same people. So we are critical of the search engines and the tech companies where we need to be. But yes, we do get funding from them. We're very open about it. It doesn't mean we won't criticise them.

[00:19:33.04] And is there the danger that one day, someone might think to found, if you like, a kind of Potemkin fact checker, a fact checking organisation that claims to be searching objectively for truth but is actually out there quietly to promote a particular viewpoint? How would you catch that?

[00:19:47.23] They're there already. I'm sure-- I'm fairly certain there's a Russian state fact-checker. So one of the things we have internationally: there's an international fact-checking network, and there's a European fact-checking network, whereby you have to abide by certain standards, and you can only gain membership of that organisation if you can show--

[00:20:05.07] So you kind of have to look for a fact-checking kitemark, as it were--

[00:20:09.04] Yeah.

[00:20:09.36] --on each of these organisations.

[00:20:10.98] And of course, it's always going to be abused. You may remember at the last election, there was a CCHQ Twitter account which renamed itself -- I think it was Fact Check UK -- for the night of one of the debates. I wasn't at Full Fact at the time, but you can imagine Full Fact didn't react well to that.

[00:20:25.89] CCHQ being Conservative Campaign Headquarters.

[00:20:28.27] So yes, of course, there are going to be people like that, just as there are people who try and pretend something is a BBC News website page when actually it's completely fake. So that's going to happen all the time.

[00:20:36.99] I suppose it takes us into another area of real importance, which is media literacy, because one of the things I've always thought about fact checking is that, if you think of the flow of information from its creation to production to distribution, fact checking sort of waits until the very end of the process. Oh, the bad stuff's come out. That's wrong. We're going to fact check it.

[00:20:54.49] But there are plenty of ways that you can actually intervene much earlier in that flow of information. One of them is what we call prebunking instead of debunking: making sure good information is out there in the first place. I think media and information literacy is really important. I'm not just talking about in schools. I'm talking about lifelong learning, so that people feel comfortable with technology and feel they know how to navigate their way around this, for many people, slightly bewildering new world of information which dominates so much of our lives.

[00:21:21.36] And then the other thing, even further back, is what's called annotation, which is basically organisations like ours trying to get involved in training the models, the large language models, which are essentially behind the creation of publicly available tools like ChatGPT, because somebody's got to train what goes into ChatGPT to shape the things that come out. So if you can make sure that really reliable, good information goes in, there's a better chance that better stuff comes out at the other end. So it's about intervening slightly earlier in the process.

[00:21:51.49] Just to take this back to the conversation about MPs, Full Fact has a sort of list on its website of all the claims made by MPs that you think are inaccurate and where the MPs have not retracted or corrected them. You contact the MPs individually to point this out and urge them to make the correction.

[00:22:10.56] What kind of responses do you get back? I mean, why are they not doing this? Is it--

[00:22:14.22] I think I'll politely say it's varied. Some of them are very, very thin-skinned, even when we go to them and say, look, it's actually not a criticism. It comes back to this idea that we all make mistakes.

[00:22:24.39] We're all human, right? I think your constituents don't expect you to be superhuman and never err. But just, yeah, say I got this wrong.

[00:22:31.36] I think people will respect you more if you can do that. But there is a culture in Westminster of, whatever you do, don't U-turn. But if you've said something wrong, actually, correcting it, for me, is a strength, not a weakness.

[00:22:42.52] Say, I got that wrong. I should have said this. And it's partly about having a conversation with politicians to say, look, it's in your long-term interest to prove you're one of those people who want to be honest and want to be accurate.

[00:22:54.99] And to come back to where I started: trust in politics and politicians is at such a low at the moment, it does need to be rebuilt, and it needs to be rebuilt by the people who are brave enough -- and I use that word advisedly -- who are brave enough to go into elected politics, because, you know, you go on social media and you can see it's not an easy world to be in. But they do need to take the lead on all of these things.

[00:23:16.03] So yeah, we try and have good conversations with MPs. And I think it's one of the things that sets Full Fact apart from organisations like BBC Verify or Channel Four Factcheck. We don't just publish fact checks. We then go and ask for corrections.

[00:23:31.48] And if the corrections aren't there, we keep asking for the corrections. And we also look at, if you like, the sort of trends that the fact checks reveal and think, are there things in the system we need to change? That was part of the genesis of changing the correction system.

[00:23:46.84] We thought, hang on. There should be a better way for MPs to do this much more simply. And so it was through our fact checking that we came to that understanding: let's try and make sure we get a parliamentary committee to take this up and do it. It took a while, starting before I was at Full Fact, but it happened.

[00:24:01.48] Yeah, I mean, to the credit of the Procedure Committee, they did take it up. They did the inquiry and produced their report. It might not have gone quite as far as you would have liked, but compared to a lot of Procedure Committee reports in recent Parliaments, where the government has not wanted to engage and take up the recommendations -- and they're ultimately the ones that have to lay the motions before the House of Commons to get recommendations implemented -- I think you made quite good progress.

[00:24:23.68] Agreed. And as I said, there are other things we'd love to happen now. For example, you know, I think numbers are really important. A lot of erroneous claims are about numbers. With great respect to Mark and my former journalistic colleagues, I always thought, why is everyone so literate, but no one seems to be very numerate around here?

[00:24:38.89] But, you know, using numbers and using statistics well is really important. The same applies in politics. And at the moment, the Ministerial Code, I think, says Ministers should take account of the Code of Practice for Statistics. The Code of Practice for Statistics is very clear about best practice in using numbers and not misleading people. So why does the Ministerial Code say "take account" of it?

[00:24:58.68] As opposed to "follow".

[00:24:59.67] Yeah, follow it. Just strengthen the language a little bit to make sure that it's incumbent upon people to do these things. It shouldn't be too much to ask, I think, for us to expect that, when they're in high public office, they do follow the highest standards.

[00:25:13.38] So, just as a final thought, really, looking ahead to the election that is yet to come, how nervous are you about it? How worried are you that it might be swung by fake news, false facts?

[00:25:25.69] You know, I'm probably less worried than some. I think we need to be on our guard. There are clearly challenges with the technology. But I find it hard to believe there would be an information incident, using fake news of some sort, of such import that it would genuinely sway an election.

[00:25:41.54] We know that an awful lot of people have made up their minds way before the last two or three weeks. One of the things I am concerned about, though, is that we spend so much time talking about the impact of generative AI -- is this fake, is it not -- that we forget to focus on the things that politicians actually do say. Because when we focus on the things politicians actually do say, we do find that sometimes they're telling porky pies, and we need to make sure that we keep holding people to account rather than waiting for the big spectre that never quite arrives.

[00:26:10.28] I'm more worried, as I say, about the more general picture: are we losing trust in all sorts of information, and what longer-term implications does that have? I think that this election will be a real challenge, but I'm not convinced that there will be a moment where we look back and think, oh, my goodness me, the 2024 election was stolen by technology. I think we need to be on our guard, but at the moment, we're not quite at that point. We could be by maybe '29 or '34, because the technology is moving so quickly. So we need to have the conversation now to make sure we don't get there.

[00:26:49.27] Chris Morris, thanks very much for joining us on the pod.

[00:26:52.15] Thank you. Thanks for having me.

[00:26:53.60] Fascinating. Thanks, Chris.

Parliament Matters is supported by the Joseph Rowntree Charitable Trust

Parliament Matters is supported by a grant from the Joseph Rowntree Charitable Trust, a Quaker trust which engages in philanthropy and supports work on democratic accountability.

