There was a time when journalists knew their sources personally. A typical day at a local newspaper would begin with a visit to the police station to look, with the chief inspector, through the list of crimes. That might be followed by a trip to the nearby fire and ambulance stations to do something similar. Then the magistrates' court and the municipal council offices, not to forget chats with pub landlords, religious and community leaders, business owners and general gossips. Real-life members of the public came into the office and, if all else failed, there was the telephone.
If the journalist didn’t know them personally, he or she spoke to someone who did. Sources were everything, reliable sources. And they still are.
The difference now is that with the advent of Artificial Intelligence (AI) and myriad social media platforms, there are many more sources. How they are constructed is more complicated, the journalist doesn't know them personally, and it is much harder to verify any information that comes their way.
However, it is still the journalist's job to check that those sources are telling the truth.
But true? What do we mean by true? All the information delivered by an algorithm, gleaned from reliable big data, may well be accurate. But filter bubbles, built from the recipient's previous online activity, leave out whatever the user is not interested in, or may find offensive. They work on a 'need to know' basis. But who decides who needs to know what? And is a partial truth still 'true'?
But hold on! Haven’t news users always worked with partial information? The journalist, operating under time and space constraints, has always decided what is put in and what is left out, to then see their copy altered by sub-editors and the editor. An algorithm is simply taking that process one step further. All information is selected, partial, incomplete.
The onus, as it always has been, is on the journalist to provide as much balanced, accurate, well-written material as conceivably possible. And on the reader or viewer to pick and choose, then question the integrity of what they are seeing or hearing.
But what if a machine is now doing that journalism? Last year, Digital First, the owners of the Denver Post, began talks with the unions about using artificial intelligence to cover high school sports events. They hope eventually to have computers also gather and publish reports from municipal councils and community groups (The Intercept, 11/10/2019).
Digital First is owned by a New York hedge fund called Alden Global Capital, whose principal aim in introducing the technology is simple cost-cutting, even though the Denver paper is currently profitable. Journalists will be replaced by computers.
It is not AI itself that is laying off the journalists. Most journalists welcome the technology as a means of making their jobs more comprehensive and efficient. Ken Doctor, a media analyst with Nieman Lab, said (The Intercept, 10 October 2019): "The problem is the tools are being used by those who are primarily looking at cost-cutting. Actual journalism requires judgement."
That judgement, from both the journalist and the consumer of news, is under greater strain with the threat of doctored photographs and deep fakes, videos changed using sophisticated editing and content management tools.
Deep fakes showed the two main candidates in the 2019 British general election, Boris Johnson and Jeremy Corbyn, apparently telling voters to back the other man. Facebook boss Mark Zuckerberg seemingly admitted he was stealing users' personal data, and we've seen visual 'proof' that innocent parties have committed atrocities – all recent examples of deep fakes. They all looked 'real', but their unlikely content should have, and usually did, raise suspicions.
Bill Posters from the Spectre Project, which works to highlight such misuses, said: "Democracy just doesn't work if people don't believe in it." The danger is likely to increase as long as politicians and tech companies remain unsure of how to deal with it.
And Aviv Ovadya, from the Thoughtful Technology Project, another organisation working in the field, added: "Politicians escape scrutiny by saying 'that deep fake video was not me.'" He called the effect 'reality apathy', where people opt out of politics, saying they don't believe in it.
They are just some of those battling against the insidious consequences of fake news. Another is Dr Sander van der Linden at Cambridge University, who is working on a Fake News game.
It challenges players to attract as many followers as possible without losing credibility, so at first the news they publish can't be too ridiculous. Users who have been duped then learn to question the information. Van der Linden says participants are fed what he calls small doses of mental antibodies to build up resistance to fake news and become their own "bullshit detectors."
There are several other organisations working along similar lines. InVid says: "the ease with which fake information spreads in electronic networks requires reputable news outlets to carefully verify third-party content before publishing it."
It offers what it calls "a knowledge verification platform to detect emerging stories and assess the reliability of newsworthy video files and content spread via social media." This entails networks of media outlets, academics and others checking and sharing the validity of material, as well as tools for processes such as reverse image searches, which allow the user to check the source and reliability of suspect photo and video material.
First Draft was founded in 2015 and helps implement measures to counter fake news. Its webpage lays out its basic plan:
- Core: newsrooms that have staff dedicated to social monitoring and verification and have publicly listed standards policies and corrections protocols.
- Academic: journalism schools and researchers in a variety of disciplines that work to understand and explain information disorder.
- Technology: organizations that help to bring insights to the reporting and understanding of how information travels online.
First Draft’s advisory board includes representatives from human rights organisations, journalism, law, copyright law, cyber-security and politics, trying to fill what Bill Posters calls a “regulatory black hole” – created by the rapid development of technology used by the media.
So, while journalists have swapped their manual typewriters for computers, their function remains much the same – to check their sources, to verify the validity of the information they’re using and to present as balanced an account as possible of the story they are covering.
After a period of (justified) panic over the damage fake news could do to democracy, civil society, many in the industry, journalists, politicians and others are beginning to fight back, developing their own tools and taking a more robust, questioning approach to where our information comes from, who produces it and how.
Journalists must be better if they are to win and maintain the trust of an increasingly cynical public. But the public too, faced with the possibility of trusting no-one and believing nothing, must also be more rigorous in assessing the validity of what they read and watch.
By Daniel Schweimler
First Draft: https://firstdraftnews.org/about/
First Draft Field Guide to fake news: https://firstdraftnews.org/project/field-guide-fake-news/