Censored: “Online Harms”
By UK Column News
Global Research, April 12, 2019
UK Column
URL of this article:
https://www.globalresearch.ca/censored-online-harms/5674219

On Monday, 8 April 2019, the UK’s Home Office and the Department for Digital, Culture, Media and Sport jointly released a white paper on what they describe as ‘online harms’.

The white paper will lead, they say, to the “first online safety laws of their kind”, legally requiring social media companies and “tech firms” to “protect their users and face tough penalties if they do not comply.”

A twelve-week consultation period began in parallel with the launch of the white paper. We would encourage everyone to submit their views.

The release of the white paper was covered by Mike Robinson and Patrick Henningsen on UK Column News.

In parallel with publication of the white paper, the Cabinet Office announced the ‘RESIST’ toolkit, which “enables organisations to develop a strategic counter-disinformation capability.”

Also announced was a behaviour change campaign aimed at the public to tackle “disinformation”. A pilot campaign, they said, “has launched and aims to increase audience resilience to disinformation, by educating and empowering those who see, inadvertently share and are affected by false and misleading information. The campaign will increase the audience’s ability to spot disinformation by providing them with straightforward advice to help them check whether content is likely to be false or intentionally misleading.”

How did we get here? Let’s find out:

2014

David Cameron tells the United Nations General Assembly that something must be done to prevent extremism – not just violent extremism – from appearing online.

He makes it clear that he considers ‘extremism’ to include narratives on global events which are counter to his own.

2017 

Amber Rudd meets with representatives of Google, Microsoft, Twitter and Facebook. Following the meeting she says:

My starting point is pretty straightforward. I don’t think that people who want to do us harm should be able to use the internet or social media to do so. I want to make sure we are doing everything we can to stop this.

It was a useful discussion and I’m glad to see that progress has been made.

We focused on the issue of access to terrorist propaganda online and the very real and evolving threat it poses.

I said I wanted to see this tackled head-on and I welcome the commitment from the key players to set up a cross-industry forum that will help to do this.

In taking forward this work I’d like to see the industry go further and faster in not only removing online terrorist content but stopping it going up in the first place. I’d also like to see more support for smaller and emerging platforms to do this as well, so they can no longer be seen as an alternative shop floor by those who want to do us harm.

Following the meeting, the four companies respond with a joint letter to the Home Secretary. The letter is signed by Hugh Milward, senior director, corporate, external and legal affairs, Microsoft UK; Nick Pickles, UK head of public policy and government, Twitter; Richard Allan, VP public policy EMEA, Facebook; and Nicklas Lundblad, VP public policy Europe, Middle East, Russia and Africa, Google.

It says:

Thank you for the constructive discussion today on the challenges that terrorism poses to us all.

We welcome the opportunity to share with you details of the progress already made in this area and to hear how the UK government is developing its approach in both the online and offline space. Our companies are committed to making our platforms a hostile space for those who seek to do harm and we have been working on this issue for several years. We share the government’s commitment to ensuring terrorists do not have a voice online.

We believe that companies, academics, civil society, and government all have an interest and responsibility to respond to the danger of terrorist propaganda online—and as an industry we are committed to doing more.

The German government approves a bill that punishes social networking sites if they fail to swiftly remove ‘illegal’ content such as ‘hate speech’ or defamatory ‘fake news’.

German Justice Minister Heiko Maas says that companies providing online platforms are responsible for removing hateful content, and that the new bill will not restrict freedom of speech. He says:

Just like on the streets, there is also no room for criminal incitement on social networks … The internet affects the culture of debate and the atmosphere in our society. Verbal radicalization is often a preliminary stage to physical violence.

Europol’s European Counter Terrorism Centre (ECTC) hosts its first high-level Conference on Online Terrorist Propaganda. Over 150 participants gather at Europol’s headquarters in The Hague to discuss a wide variety of related topics.

Participants include members of the ECTC Advisory Group on Terrorist Propaganda, representatives of the EU Commission and EU Council, academics, and law enforcement practitioners from Europe and the US.

‘Experts’ from Google, Facebook and Twitter appear before Yvette Cooper’s Home Affairs Select Committee. The main concern of the inquiry seems to be online abuse of Members of Parliament.

In the course of giving evidence, all three platforms admit to having added staff to their censorship teams. Facebook, for example, admits to having added 3,000 staff to its ‘community operations’ team in the previous six months, plus 20,000 staff to its ‘safety and security’ team. Twitter and Google have recruited similar numbers into their equivalent teams.

Damian Collins MP, chair of the Digital, Culture, Media and Sport Select Committee, asks Facebook to tackle fake news in the run-up to the UK general election on 8 June. Shortly afterwards, Facebook takes out adverts in the national press warning readers about fake news.

The ads are published in The Times, The Guardian and the Daily Telegraph, amongst others, and list ten “things to look out for” when deciding if a story is genuine, including checking the article date and website address, as well as making sure it isn’t intended to be satire.

Facebook says it has already removed “tens of thousands” of fake accounts and that it has set up systems to monitor the repeated posting of the same content.

Theresa May leads calls at the G7 meeting in Sicily to set up an industry-led forum to deal with ‘extremist’ content online. The official statement following the meeting says:

The G7 calls for Communication Service Providers and social media companies to substantially increase their efforts to address terrorist content … We encourage industry to act urgently in developing and sharing new technology and tools to improve the automatic detection of content promoting incitement to violence, and we commit to supporting industry efforts in this vein including the proposed industry-led forum for combating online extremism.

Theresa May travels to Paris to meet French President Emmanuel Macron. There she continues to press for an industry-led ‘forum’ to deal with ‘unacceptable’ content online. She says:

The counter-terrorism cooperation between British and French intelligence agencies is already strong, but President Macron and I agree that more should be done to tackle the terrorist threat online.

In the UK we are already working with social media companies to halt the spread of extremist material and poisonous propaganda that is warping young minds.

And today I can announce that the UK and France will work together to encourage corporations to do more and abide by their social responsibility to step up their efforts to remove harmful content from their networks, including exploring the possibility of creating a new legal liability for tech companies if they fail to remove unacceptable content.

We are united in our total condemnation of terrorism and our commitment to stamp out this evil.

Facebook, YouTube, Twitter and Microsoft release a joint statement which says:

Today, Facebook, Microsoft, Twitter and YouTube are announcing the formation of the Global Internet Forum to Counter Terrorism, which will help us continue to make our hosted consumer services hostile to terrorists and violent extremists.

… The new forum builds on initiatives including the EU Internet Forum and the Shared Industry Hash Database; discussions with the U.K. and other governments; and the conclusions of the recent G7 and European Council meetings.  It will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the U.N.

The statement highlights cooperation with ‘partners’ such as the Center for Strategic and International Studies, Anti-Defamation League and Global Network Initiative “to identify how best to counter extremism and online hate, while respecting freedom of expression and privacy.”

The Global Internet Forum holds its first meeting in San Francisco, where “representatives from the tech industry, government and non-governmental organisations are coming together to share information and best practices about how to counter the threat of terrorism online.”

In her comments about the meeting, British Home Secretary Amber Rudd once again reminds us that ‘terrorism’ includes ‘extremism’, echoing David Cameron’s words of three years previously.

The World Socialist Web Site reports that:

New data compiled by the World Socialist Web Site, with the assistance of other Internet-based news outlets and search technology experts, proves that a massive loss of readership observed by socialist, anti-war and progressive web sites over the past three months has been caused by a cumulative 45 percent decrease in traffic from Google searches.

The World Socialist Web Site has obtained statistical data from SEMrush estimating the decline of traffic generated by Google searches for 13 sites with substantial readerships. The results are as follows:

* wsws.org fell by 67 percent
* alternet.org fell by 63 percent
* globalresearch.ca fell by 62 percent
* consortiumnews.com fell by 47 percent
* socialistworker.org fell by 47 percent
* mediamatters.org fell by 42 percent
* commondreams.org fell by 37 percent
* internationalviewpoint.org fell by 36 percent
* democracynow.org fell by 36 percent
* wikileaks.org fell by 30 percent
* truth-out.org fell by 25 percent
* counterpunch.org fell by 21 percent
* theintercept.com fell by 19 percent

Reports of censorship by social media companies become a regular occurrence:

Erica Anderson, Partnerships Manager at Google News Lab, announces that Google will partner with the International Fact-Checking Network (IFCN) at the Poynter Institute to “fact-check” news stories that appear in search results.

Anderson says:

With so much information available around the clock and across devices, being able to understand at a glance what’s true and what’s false online is increasingly important.

The Poynter Institute for Media Studies is openly funded by Soros’ Open Society Foundations.

Twitter announces that it will no longer accept advertising from RT and Sputnik. In a statement, it says:

This decision was based on the retrospective work we’ve been doing around the 2016 U.S. election and the U.S. intelligence community’s conclusion that both RT and Sputnik attempted to interfere with the election on behalf of the Russian government. We did not come to this decision lightly, and are taking this step now as part of our ongoing commitment to help protect the integrity of the user experience on Twitter.

Eric Schmidt, the Executive Chairman of Google’s parent company Alphabet, says during a Q&A session at the Halifax International Security Forum in Canada, that the company will “engineer” specific algorithms for RT and Sputnik to make their articles less prominent in search results.

He says:

We are working on detecting and de-ranking those kinds of sites – it’s basically RT and Sputnik … We are well aware of it [Russian ‘propaganda’], and we are trying to engineer the systems to prevent that [their content appearing high up in search results]. But we don’t want to ban the sites – that’s not how we operate.

Schmidt claims that he is “very strongly not in favour of censorship,” but says that he has faith in “ranking”.

The Trust Project describes itself as “a consortium of top news companies” including the dpa news agency, The Economist, The Globe and Mail, Hearst Television, the Independent Journal Review, Haymarket Media, Institute for Nonprofit News, Italy’s La Repubblica and La Stampa, Mic, Reach Plc and The Washington Post.

Search engines and social media companies are described as “external partners”.

They say they aim to produce “trust indicators”: standardised disclosures “that provide clarity on a news organization’s ethics and other standards for fairness and accuracy, a journalist’s background, and the work behind a news story”, and which can be fed into search engines so that “quality news” can be brought to the top of search results.

2018

“Technologies like the internet were developed with a philosophy that connecting us together would improve people’s lives,” says Theresa May. “And in many ways they have. But so far, that hasn’t been completely true for everyone.”

She continues:

Just this week, a survey in the UK has found that 7 in 10 people believe social media companies do not do enough to stop illegal or unethical behaviour on their platforms, prevent the sharing of extremist content or do enough to prevent bullying.

The loss of trust is hugely damaging. And it is in all our interests to address it.

… And underpinning all of this is our determination to make the UK a world leader in innovation-friendly regulation.

Regulation that will make the UK the best place to start and grow a digital business – but also the safest place to be online.

In a speech to the House of Commons, Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, announces government support for mainstream media. He says:

Today in a world of the Internet and clickbait, our press face critical challenges that threaten their livelihood and sustainability – with declining circulations and a changing media landscape.

… In 2015, for every 100 pounds newspapers lost in print revenue they gained only 3 pounds in digital revenue.

… Action is needed. Not based on what might have been needed years ago – but action now to address today’s problems.

… Our new Digital Charter sets out the overarching programme of work to agree norms and rules for the online world and put them into practice.

… And our review into the sustainability of high quality journalism will address concerns about the impact of the Internet on our news and media.

Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, announces that Dame Frances Cairncross will lead the government review into the sustainability of the mainstream media.

Speaking at the Oxford Media Convention, Matt Hancock says:

There are a multitude of challenges facing our media today. Falling newspaper circulations, declining advertising revenues, changing consumption and wholesale disinformation.

Trusted, sustainable, high quality media is needed now more than ever.

Dame Frances Cairncross will bring her experience in journalism and academia to tackle these issues with a view to examine the press and protect the future of high quality journalism.

The Cabinet Office announces the Rapid Response Unit, also known as the ‘fake news unit’, with an initial six months’ funding. It brings together a “team of analysts, data scientists and media and digital experts,” armed with cutting-edge software to “work round the clock to monitor online breaking news stories and social media discussion.”

According to the RRU’s head, Alex Aiken:

The unit’s round the clock monitoring service has identified several stories of concern during the pilot, ranging from the chemical weapons attack in Syria to domestic stories relating to the NHS and crime.

For example, following the Syria airstrikes, the unit identified that a number of false narratives from alternative news sources were gaining traction online. These “alt-news” sources are biased and rely on sensationalism rather than facts to pique readers’ interest.

Due to the way that search engine algorithms work, when people searched for information on the strikes, these unreliable sources were appearing above official UK government information. In fact, no government information was appearing on the first 15 pages of Google results. We know that search is an excellent indicator of intention. It can reflect bias in information received from elsewhere.

The unit therefore ensured those using search terms that indicated bias – such as ‘false flag’ – were presented with factual information on the UK’s response. The RRU improved the ranking from below 200 to number 1 within a matter of hours.

Facebook reports that it has taken down 32 ‘suspicious’ pages and accounts that appear to have been run by ‘leftists’ and ‘minority activists’.

Some within the US Administration claim the pages were probably run by ‘Russian agents’. Facebook says it does not know for sure.

MI5 Director General Andrew Parker says in a speech:

Age-old attempts at covert influence and propaganda have been supercharged in online disinformation, which can be churned out at massive scale and little cost. The aim is to sow doubt by flat denials of the truth, to dilute truth with falsehood, divert attention to fake stories, and do all they can to divide alliances.

Bare-faced lying seems to be the default mode, coupled with ridicule of critics.

The Russian state’s now well-practised doctrine of blending media manipulation, social media disinformation and distortion with new and old forms of espionage, high levels of cyber attacks, military force and criminal thuggery is what is meant these days by the label ‘‘hybrid threats’’.

… We are committed to working with them [social media companies] as they look to fulfil their ethical responsibility to prevent terrorist, hostile state and criminal exploitation of internet carried services: shining a light on terrorists and paedophiles; taking down bomb making instructions; warning the authorities about attempts to acquire explosives precursors.

This matters and there is much more to do.

Facebook announces that it is “partnering” with the Atlantic Council “to combat election-related propaganda and misinformation from proliferating on its service.”

Facebook becomes a top donor to the Atlantic Council, alongside Western governments, NATO, various branches of the US military, and a number of major defense contractors and corporations.

Theresa May announces the establishment of “a new Rapid Response Mechanism (RRM)”, following Britain’s proposal for “a new, more formalised approach to tackling foreign interference across the G7” at the G7 Foreign Ministers’ meeting the previous month.

This agreement sends “a strong message that interference by Russia and other foreign states would not be tolerated,” she says.

The Rapid Response Mechanism “will support preventative and protective cooperation between G7 countries, as well as post-incident responses”, including:

  • co-ordinated attribution of hostile activity
  • joint work to assert a common narrative and response

In a press release, Facebook says:

Today we removed 32 Pages and accounts from Facebook and Instagram because they were involved in coordinated inauthentic behavior. This kind of behavior is not allowed on Facebook because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing.

We’re still in the very early stages of our investigation and don’t have all the facts — including who may be behind this.

Telesur English, a multi-state-funded Latin American news network, says that Facebook has removed its page for the second time this year “without any specific reason being provided.”

“This is an alarming development in light of the recent shutting down of pages that don’t fit a mainstream narrative,” they say.

The UK Council for Internet Safety (UKCIS) is launched as the successor to the UK Council for Child Internet Safety (UKCCIS), with an ‘expanded scope’ to improve online safety for everyone in the UK.

The Executive Board brings together expertise from a range of organisations in the tech industry, civil society and public sector, including:

  • Apple
  • BBC
  • Childnet
  • Children’s Commissioner
  • Commission for Countering Extremism
  • End Violence Against Women Coalition
  • Facebook
  • GCHQ
  • Google
  • ICO
  • Independent Advisory Group on Hate Crime
  • Internet Matters
  • Internet Watch Foundation
  • Internet Service Providers and Mobile Operators (rotating between BT, Sky, TalkTalk, Three, Virgin Media, Vodafone)
  • Microsoft
  • National Police Chiefs’ Council
  • National Crime Agency – CEOP Command
  • Northern Ireland Executive
  • NSPCC
  • Ofcom
  • Parentzone
  • Scottish Government
  • TechUK
  • Twitter
  • UKCIS Evidence Group Chair
  • UKIE
  • Welsh Assembly

The government says UKCIS will contribute to its commitment to make the UK the safest place in the world to be online, and will help to inform the development of the forthcoming Online Harms White Paper.

UK Column News asks, Is this the embryonic regulator of the internet?

Facebook, Apple, Spotify, YouTube and Pinterest remove Alex Jones and Infowars from their platforms. Twitter initially refuses to do so, but follows suit shortly afterwards under pressure from mainstream media campaigns.

2019

“From January 2019,” Full Fact announces, “Full Fact will begin reviewing images, videos and articles on Facebook, as the third-party factchecking initiative comes to the UK for the first time.”

Full Fact says it wants to tackle disinformation at its source and give people the tools to spot it for themselves.

They say they will be checking photos, videos and articles, and giving them a rating. Content with a lower rating will appear lower in Facebook news feeds, thereby reaching fewer people.

WhatsApp announces it will limit all its members to forwarding any single message up to five times, in an effort to tackle the spread of false information. The previous limit was twenty times.

They say they made their decision having “carefully” evaluated the results of a half-year-long pilot.

“The forward limit significantly reduced forwarded messages around the world.”

Dame Frances Cairncross publishes her review into the sustainability of high-quality journalism. She proposes:

  • New codes of conduct to rebalance the relationship between publishers and online platforms
  • The Competition & Markets Authority to investigate the online advertising market to ensure fair competition
  • Online platforms’ efforts to improve their users’ news experience should be placed under regulatory supervision
  • Ofcom should explore the market impact of BBC News, and whether it inappropriately steps into areas better served by commercial news providers
  • The BBC should do more to help local publishers and think further about how its news provision can act as a complement to commercial news
  • A new independent Institute should be created to ensure the future provision of public interest news
  • A new Innovation Fund should be launched, aiming to improve the supply of public interest news
  • New forms of tax reliefs to encourage payments for online news content and support local and investigative journalism

The Rapid Response Unit, the Cabinet Office ‘fake news unit’ established in April 2018, is given permanent funding to continue its work monitoring social media and making sure the government narrative appears at the top of search rankings.

On a visit to Dublin, Facebook CEO Mark Zuckerberg admits that there is a lot more the social network can do to regulate social media content.

He tells RTE News:

I think these days a lot of people don’t want tech companies or any private companies to be making so many decisions about what speech is acceptable and what is harmful content that needs to be taken down.

So I think there is a role for a broader public debate here and I think some of these things would benefit from a more democratic process and a more active government role.

The white paper is a joint Home Office and DCMS initiative. Its proposals include:

  • A new statutory ‘duty of care’ to make companies take “more responsibility” for the content or activity on their services. This will apply to all platforms of whatever size which permit user interaction such as forums or comments, and carries the potential for massive fines and imprisonment.
  • Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address this.
  • Codes of practice, issued by the regulator, which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.
  • A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, including catfishing, grooming and extremism.

A twelve-week consultation period begins from the date of publication.

