Digital Safety, Censorship and Payment Processors

Disclaimer: I am not a legal expert and have tried my best to compile the information in this document from various internet sources. I have linked these sources where appropriate and given my own commentary on the situation. Although best efforts have been made to fact-check myself, I cannot guarantee with absolute certainty that all the information is correct. I encourage you to look into this further yourself.

Legislation arrives - Recap

It's been a little over a week since the UK's Online Safety Act 2023 came into effect, met with both domestic and international criticism. At the time of writing the Repeal the Online Safety Act petition has reached over 490k signatures, with the government's initial response being "The Government has no plans to repeal the Online Safety Act".

The government's Secretary of State for Science, Innovation and Technology dismissed the backlash on X, saying "If you want to overturn the Online Safety Act you are on the side of predators. It is as simple as that.", going on to further address the government's position on BBC Breakfast.

"And for everybody who's out there thinking of using VPNs, let me just say to you directly, verifying your age keeps a child safe. Keeps children safe in our country. So let's just not try to find a way around. Just prove your age. Make the internet safer for children."

I have a major problem with this statement. If you are an adult, why does it matter if you use a VPN to get around the age verification? Well, besides the fact that the government isn't worried about VPNs, age verification could be used now or in the future to track users.

Melanie Dawes, the head of Ofcom, certainly isn't concerned about VPNs, telling MPs in May:

“A very concerted 17-year-old who really wants to use a VPN to access a site they shouldn’t may well be able to,” she said. “Individual users can use VPNs. Nothing in the Act blocks it.”

Is this really all about protecting kids?

VPNs

So, what's the problem? Just use a VPN to get around it! The government already said there are no plans to ban VPNs. Well they don't need to, for the following reasons:

  1. With the recent AI boom, most major platforms like Reddit and Instagram have already blocked or severely rate-limited traffic from data centres and VPN services. Want to scrape our data to train your AI bot? You'll have to pay us for that. (A rough sketch of how this blocking works follows this list.)
  2. The government's position that "platforms have a clear responsibility to prevent children from bypassing safety protections" moves the obligation (as do many parts of the OSA) from the government to the corporations, loosely implying it may be in their interest to block VPNs.
  3. Buy-in: they want everyone to complete the age verification checks to show that the public supports the government's measures. What about other countries? They'll come around eventually, we'll make sure of it.
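
To make point 1 concrete, here is a minimal sketch (my own illustration, not any platform's actual implementation) of how a service can cheaply refuse or rate-limit VPN traffic without any law banning VPNs: it simply checks the client IP against published data-centre/hosting CIDR ranges, which is where most commercial VPN exit nodes live. The ranges and responses below are placeholder assumptions.

```python
# Sketch: treat requests from known data-centre (hosting) IP ranges -
# where most commercial VPN exits live - differently from residential traffic.
# The ranges below are placeholders; real deployments pull much larger
# published cloud/ASN datasets.
import ipaddress

DATACENTRE_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # placeholder hosting provider range
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder VPN provider range
]

def is_datacentre_ip(client_ip: str) -> bool:
    """Return True if the client IP falls inside any listed hosting range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in DATACENTRE_RANGES)

def handle_request(client_ip: str) -> str:
    # A platform might block outright, rate-limit, or demand a login/age check.
    if is_datacentre_ip(client_ip):
        return "429: too many requests - sign in or verify to continue"
    return "200: content served"

if __name__ == "__main__":
    print(handle_request("203.0.113.42"))  # treated as VPN/data-centre traffic
    print(handle_request("81.2.69.142"))   # treated as residential traffic
```

The point is that no VPN ban is needed: a cheap lookup plus an account or verification requirement already makes casual circumvention unattractive.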

I don't think the recent reports of MPs charging VPN usage to expenses are the 'gotcha' people think they are. Not when the narrative from suppliers and YouTubers has been that using a VPN somehow magically makes you more secure online.

Davos and the Global Online Safety Regulators Network

Ofcom has been gearing up for the last couple of years to take on the role of the UK's online safety regulator, through the much-delayed Online Safety Bill. Both the Digital Economy Act 2017 and the Online Safety Act 2023 have significantly expanded the remit and powers of the UK's communications regulator.

So what, I live in <insert country>, it doesn't affect me? Even the US has states pushing age verification bills.

Following the COVID pandemic (2020), Russia's invasion of Ukraine (2022) and multiple government elections, there has been a major push towards both cyber security and digital safety. In 2021 and 2022 the World Economic Forum began addressing digital safety with a heavy focus on disinformation/misinformation, terrorism and child abuse/safety.

In November 2022 Ofcom founded The Global Online Safety Regulators Network (GOSRN). It was established as "a forum for independent online safety regulators to share experiences, expertise, and evidence, fostering a more coherent international approach to online safety regulation." The founding members were the safety regulators from Australia, Fiji and the UK, with others joining later.

Between the WEF's Coalition for Digital Safety (Est. 2021) and the newly established GOSRN there has been a multilateral push towards regulating the internet. At this point I can't see a world in which it doesn't happen unless there is major pushback from countries like the US or those affected by such policies.

While global safety regulators push for control, rather than consulting their citizens they will instead look towards research and advocacy groups like the notorious Collective Shout, who work closely with the eSafety Commissioner through submissions and are often referenced by other safety regulators. Recently they featured in the news after lobbying payment processors like Visa and Mastercard to censor video games. Some may look at this action and the Online Safety Act and see them as unrelated ("the world is just changing"); I don't, and I will come to that.

A point of note: there have also been multiple instances of censorship faced by Japanese media. Over the last five years anime has seen a massive rise in popularity. Could safety regulators such as Ofcom be having conversations with payment processors about their concerns with this type of media? I can't find any evidence, so... inconclusive, but I will say Ofcom are certainly aware of manga and of children consuming this type of content.

The Online Safety Act - Protecting the kids narrative

[Image: "Won't somebody *please* think of the children?!" Simpsons meme]

We're just trying to protect the kids, they say; you're not on the side of Jimmy Savile, are you? The optics the government has been pushing are that 'this is all about stopping kids from accessing pornography and adjacent harmful content such as suicide and self-harm'. The Online Safety Act introduces the premise of risks, with a register of risks spanning 480 pages.

But as has come to light, we know it's not just about the kids. A day after the OSA came into effect, an elite police squad was reported in the news. Their remit? To monitor anti-migrant sentiment. This squad, like all police, falls under the Secretary of State.

sources: Elite police squad / Exposed: Labour's plot

This is an act which at different times had support from both major parties, drafted into existence by the Conservative government and voted through by the successive Labour government. Both parties are authoritarian in their grab for new powers, marching the UK towards an unequivocal police state.

Secretary of State

If you look at the OSA in detail you'll find how it could be abused by the government of the day. Ofcom, in their own words, express how they are "an independent regulator and we make evidence-based decisions without fear or favour. Although Ofcom is accountable to Parliament, we are independent of government and the companies we regulate."

The Secretary of State's powers under OSA are as follows:

  • Directing Ofcom on Strategic Priorities - The Secretary of State can issue statements outlining the government's strategic priorities for online safety, which Ofcom must consider when carrying out its functions. These priorities can cover areas like safety by design, transparency, accountability, agile regulation, inclusivity, resilience, technology, and innovation.
  • Influencing Ofcom's Codes of Practice - The Secretary of State has the power to direct Ofcom to modify a draft code of practice if deemed necessary due to reasons like public policy, national security, or public safety.
  • Issuing Guidance - The Secretary of State provides guidance to Ofcom on how it should exercise its online safety functions and powers.
  • Setting Threshold Conditions - The Secretary of State is responsible for issuing regulations that define the conditions for categorizing online services (e.g., Category 1, 2A, 2B).
  • Oversight and Accountability - The Secretary of State plays a role in the oversight of Ofcom's work, including receiving reports and statements from the regulator.

While these powers are designed to ensure online safety, critics have raised concerns about the extent of the Secretary of State's authority under the Act:

Section 44 of the Act empowers the Secretary of State to direct Ofcom to modify a draft code of practice if the minister believes it is necessary for reasons of public policy, national security, or public safety. The Act also allows them to designate what constitutes "priority illegal content" through additional secondary legislation, further solidifying state control. The ability to issue guidance has been described as "closer to authoritarian than to liberal democratic standards even with the safeguards".

Free speech is important to any democracy; does the UK have free speech? Debatable. The Home Secretary, Yvette Cooper, wishes to strengthen hate-crime laws, particularly around non-crime hate incidents (NCHIs).

Non-crime? Why should you be worried if a crime hasn't been committed? NCHIs are a form of pre-crime: an incident in which the 'victim' perceives that the 'perpetrator' is motivated wholly or partly by hostility towards them based on a protected characteristic. In other words, it counts if the victim perceives it that way and has taken offence. Although NCHIs do not meet the legal threshold for a criminal offence, they can be recorded by police against you, and if your employer is required to run an enhanced DBS check they may show up on your report and harm your future prospects, despite, in their own words, a crime never having been committed.

So? Just don't be a horrible person! That logic follows the same premise as 'if you have anything negative to say about the OSA you must be a paedophile'. Anti-immigration posts could land you with a knock on your door; granted, some can be quite colourful, and any calls for violence should be dealt with regardless of someone's ethnicity. Should this type of definition be applied to the wider internet, you can see how websites might come under fire for what is deemed offensive speech or crass jokes.

Harms

So what is OSA meant to protect us from?

Illegal content and activity: Child Sexual Exploitation and Abuse (CSEA) and related content. Terrorism content. Hate offences. Harassment, stalking, threats, and abuse. Intimate image abuse. Fraud. Promotion or facilitation of suicide. Illegal immigration and people smuggling. Illegal drugs and weapons.

All bad things that I'm sure most people wouldn't argue against, other than hate offences related to speech, as mentioned above. There is the continual argument that free speech is not absolute, versus how hate speech laws can be misused by authorities to silence dissent and suppress unpopular views.

Content harmful to children: Pornography. Self-harm and eating disorders. Serious violence. Bullying. Dangerous stunts and challenges.

Again, I'm sure a lot of people would agree these are things that children would be better off not accessing.

Other harms: Misinformation and disinformation, harmful algorithms.

Therein lies another problem: who decides if the information is misinformation or disinformation? Well, Ofcom does, of course! That independent regulator directly influenced by the Secretary of State. They are now required to establish and maintain an advisory committee, consisting of "a chairman appointed by OFCOM" and "such number of other members appointed by OFCOM as OFCOM consider appropriate."

What does this committee do? "The function of the Committee is to provide advice to Ofcom about specific areas of our work relevant to disinformation and misinformation" - so it provides advice around regulation, not anything related to what constitutes misinformation.

Censorship

To reiterate, the Act places a "duty of care" on platforms to remove or restrict access to content deemed "harmful". Everyone knows Ofcom isn't saying "take down this one specific post" (though they may use one as evidence). It's about the systemic pressures which the Online Safety Act creates. Large platforms like Facebook, X or Reddit aren't realistically going to block the UK market.

And in terms of content, what is more likely: that a company will support multiple experiences across their platform based on geographical location, or that they update their policies and ToS to age-restrict or block any content which loosely aligns as being in scope, even if it's perfectly legal in other countries?
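
As a rough illustration of that trade-off (my own sketch, not any platform's real code; the country codes and content categories are assumptions), the per-jurisdiction route branches on the requesting country for every content decision, while the single-global-policy route collapses to one rule that applies the strictest regulator's standard everywhere:

```python
# Two compliance strategies for the same "in-scope" content categories.
# Option A: jurisdiction-aware gating - more engineering and legal upkeep.
# Option B: one global policy - cheaper, but UK-style rules apply to everyone.

RESTRICTED_UNDER_OSA = {"pornography", "self-harm", "violent-protest-footage"}  # illustrative

def serve_option_a(country: str, category: str, age_verified: bool) -> bool:
    """Gate in-scope content only for UK users."""
    if country == "GB" and category in RESTRICTED_UNDER_OSA:
        return age_verified          # UK users must pass an age check
    return True                      # everyone else sees the content unchanged

def serve_option_b(category: str, age_verified: bool) -> bool:
    """One worldwide policy written to satisfy the strictest regulator."""
    if category in RESTRICTED_UNDER_OSA:
        return age_verified          # the UK rule now applies globally
    return True

if __name__ == "__main__":
    # A German user, not age-verified, viewing protest footage:
    print(serve_option_a("DE", "violent-protest-footage", age_verified=False))  # True
    print(serve_option_b("violent-protest-footage", age_verified=False))        # False
```

Option A has to be maintained for every jurisdiction that passes its own regime; Option B is a single ToS update, so the global policy tends to win even though it exports UK restrictions to everyone.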

This is why it's being criticized as a threat to freedom of expression, leading to accusations of potential censorship. We've already seen some examples of platforms over-censoring content to avoid hefty penalties, potentially restricting access to legal content for adults.

This age-gated censorship has hit media around protests in the UK and anything reporting on the lack of government investigation into "mass rape gangs". The Labour government plans to allow 16- and 17-year-olds to vote in the next election, benefitting them as young voters tend to swing more left. However, due to their age they won't be able to consume this type of political media, leaving them going to the polls not fully informed about the political landscape in the UK.

Ofcom and their powers

As mentioned, Ofcom have been pushing for more powers for quite a few years; their public stance has been 'we're not trying to censor the internet, but rather work with companies'.

"Our role is to make sure that regulated services take appropriate steps to protect their users. We don’t require companies to remove particular posts, images or videos, or to remove particular accounts. Our job is to build a safer life online by improving the systems companies use to prevent harm."

But if you don't take action... what we do have

[Image: "What I do have are a very particular set of skills" - Taken meme]

"We will have a range of tools to make sure services follow the rules. After consulting on them, we will set codes of practice and give guidance on how services can comply with their duties."

If they don't want to censor the internet, why do they need these powers that allow them to censor the internet?

  • Impose substantial fines: Ofcom has the authority to levy fines of up to £18 million or 10% of a company's qualifying global annual turnover, whichever is greater. This power is designed to ensure that even the largest tech giants face meaningful financial consequences for failing to protect their users.
  • Demand information and conduct audits: The regulator can compel companies to provide information about their services and the measures they have in place to comply with the Act. Ofcom can also conduct audits of companies' risk assessments and safety processes to ensure they are robust and effective.
  • Service Restriction and Access Restriction Orders: In cases of serious or repeated non-compliance, Ofcom can apply to the court for a service restriction order. This could involve requiring a platform to make specific changes to its service. In the most extreme cases, where there is a serious threat to public safety, Ofcom can seek an access restriction order, which could lead to an app being removed from app stores or an entire website being blocked in the UK.
  • Hold senior managers liable: The Act introduces provisions that can hold senior managers of non-compliant companies criminally liable. This "named manager" liability is intended to ensure that accountability for online safety rests at the highest levels of a company.
  • Issue notices and guidance: Ofcom can issue notices to companies requiring them to take specific actions to address failings. The regulator is also responsible for producing codes of practice and guidance to help companies understand their obligations under the Act.

To comply with Ofcom, what seems more likely: implementing age verification and adjusting the experience for UK users only, or applying these measures to their whole userbase to reduce overheads? Let's look into the financial impact on these companies...

Payment Processors

So how does all of this affect payment processors? Well besides them coming right out and telling you.

People have focused on the hefty £18 million fine, or 10% of global revenue. Well, what about 100%? Because that's what control over payment processors means.

Even as far back as the Digital Economy Act 2017, banks and payment processors would have been aware of the government's attempts to influence their business operations: Part 3 talks about giving notice to payment-service providers and other ancillary service providers which do not comply with the regulator. Ultimately this was abandoned in favour of the OSA.

So what does the legislation look like now?

Service restriction orders
OFCOM may apply to the court for an order under this section (a “service restriction order”) in relation to a regulated service
A service restriction order is an order imposing requirements on one or more persons who provide an ancillary service (whether from within or outside the United Kingdom) in relation to a regulated service
Examples of ancillary services include—
(a) services, provided (directly or indirectly) in the course of a business, which enable funds to be transferred in relation to a regulated service,
(b) search engines which generate search results displaying or promoting content relating to a regulated service,
(c) user-to-user services which make content relating to a regulated service available to users, and
(d) services which use technology to facilitate the display of advertising on a regulated service (for example, an ad server or an ad network).

So not only could a payment processor be court-mandated to restrict or stop service, but this could also apply to search engines, app stores, content delivery and ad networks. One of these alone could kill a website, which leads onto the next topic.

Jurisdiction

"The Act gives Ofcom the powers they need to take appropriate action against all companies in scope, no matter where they are based"

There is something to be said if the UK government has the jurisdiction to compel a payment processor to remove service from a US-hosted website of a US company, solely on Ofcom's discretion that there are a "significant number of UK users" on their site. Fines and internet blocks I could understand, but this comes across as major overreach.

When it comes to Visa and Mastercard, although they are US companies there is most certainly a case for UK law to apply to them: they are global payment processors with a physical presence in the UK, Europe and other parts of the world.

Now, the Act does stipulate, in terms of service restriction orders, that the steps that may be specified or the arrangements that may be required to be put in place:

are limited, so far as that is possible, to steps or arrangements relating to the operation of the relevant service as it affects United Kingdom users.

This could mean that restrictions on search results, CDN traffic, ad networks or payments only affect users coming from the UK. But as I said, the Act creates systemic pressures. Companies are risk-averse, especially those in any way related to the financial markets, due to their financial and legal obligations. Do they want the bad PR from being in the news, or the cost of being dragged through the courts? Or is it much simpler to cut ties with companies, likely smaller ones, and update their rules to limit the types of product that can be sold?

Visa and Mastercard

Following the recent video game censorship and looking into the online safety regulators and their push for global adoption, I can't help but feel this is more a symptom of UK legislation: putting rules and ToS in place that align with the laws of the largest economies. In the financial sector, where risk and stability are everything, minimising risk could include complying with regulators like Ofcom to avoid negative press and legal fees.

Yes, they most likely have a predisposition towards not financing adult content, but even if more payment processors were to rise up it would be hard not to see them falling under intense pressure to align with the other major brands or face restrictions.

This is likely not the answer people want to hear, and certainly not the one I was looking for, but Visa and Mastercard are not acknowledging or walking back the enforcement of their rules. It's likely they can't even if they wanted to, and certainly not for a financially small number of indie games compared to global financial markets.

The only way to combat this would, of course, be legislation obligating payment processors to facilitate payments for legal content. But with censorship-happy governments like the UK and others, it's unlikely we'll see that happen.

Age assurance and Government Tracking

How is age verification being implemented? Well, like many parts of the OSA, it's being outsourced to third-party businesses with expensive price tags. Companies like Yoti and Persona have stepped in to offer their services.
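
In practice the integration usually looks something like the hedged sketch below: the website never inspects your ID itself; it hands you off to a verification provider and only consumes a pass/fail result, while the provider holds the document or face scan (and bills per check). Everything here, the provider URL, endpoints and field names, is hypothetical rather than any real vendor's API.

```python
# Hypothetical third-party age-check flow - illustrative only.
# Provider URL, endpoints and fields are made up; real vendors differ.
import secrets
import requests  # assumed available; any HTTP client would do

PROVIDER_URL = "https://age-check.example.com"   # hypothetical verifier
API_KEY = "sk_test_placeholder"                  # per-check billing hangs off this

def start_check(user_id: str) -> str:
    """Create a verification session and return the URL to send the user to."""
    resp = requests.post(
        f"{PROVIDER_URL}/v1/sessions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "reference": user_id,
            "nonce": secrets.token_urlsafe(16),
            "minimum_age": 18,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # The user uploads an ID or face scan on the provider's site, not ours.
    return resp.json()["verification_url"]

def handle_callback(payload: dict) -> bool:
    """The provider calls back with a result; the site only sees pass/fail."""
    # The ID document and any biometrics stay with the provider - which is
    # exactly the retention and tracking concern discussed below.
    return payload.get("result") == "over_18"
```

Every check is billed, which is why the cost lands hardest on small or unfunded sites.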

Companies such as YouTube and Spotify, likely under pressure or seeing where the industry is going, have started putting measures in place to age-verify users around the world.

We've also seen the impact on smaller or unfunded sites that can't afford these measures, leading them to either block UK traffic or shut down completely. As I've highlighted, one of the things both safety regulators and the WEF want to tackle is fraud, and both of these services offer fraud prevention; if checks aren't already being done, secondary legislation could easily mandate them.

People are understandably worried about how their biometric data and ID could be used to track them across the internet. Anonymity is already hard enough to achieve; try signing up for a Google account or any social media without handing over your phone number or ID. But luckily your data only stays with the third party for a week, right?

Enter BritCard, the likely next step in government tracking, coming straight from Labour's think tank Labour Together. It would not only track people's right to work in the country but allow the government greater oversight over its citizens.

"Digital identity can help tackle some of the most difficult challenges facing government, such as online fraud, harmful online content, benefit fraud and waste, as well as making healthcare more effective and efficient."

Harmful online content? So presumably this could act as a drop-in replacement for those third parties, or even restrict your access to the internet. If you're providing your BritCard to your employer and landlord, what's to stop it being required to purchase internet access or log on? There is talk of device-based authentication, which could be integrated into your phone or operating system. I'm not sure I'd personally trust Microsoft with this, knowing how much telemetry is sent back to base and their requirement for a Microsoft Account to log in locally.

To support better awareness and uptake of the new credentials, the Gov.UK App and Gov.UK Wallet could be relaunched as the “BritCard App”. This would create an eye-catching, memorable brand for the new Government-issued credentials and wallet. The same app and wallet would be used for accessing the full range of digital credentials currently planned for delivery by 2027, with identity verification fulfilled by One Login.

Australia's Digital ID, which commenced in 2024, has been reported to fall short of global privacy standards.

End-to-end Encryption

We've already seen the open secret of the UK government taking steps to break encryption by issuing a technical capability notice (TCN) to Apple under the Investigatory Powers Act 2016, to compel Apple to provide access to user data. The major problem with this, besides jurisdiction, is that under these powers Apple is legally prevented from even confirming the existence of the TCN; it only came to light after a whistle-blower leaked the information. Apple instead came to a compromise: they would withdraw Advanced Data Protection from the UK.

What about all the other TCNs we don't hear about?

Now, for anyone who isn't very tech-savvy or doesn't understand what E2E encryption is and its importance: in layman's terms, it's a security method that ensures only the sender and recipient of a message can read it, with no one else, not the operator, your ISP or the government, able to peer into your conversation. E2E encryption protects privacy, prevents data breaches, secures communication and builds trust, which is why large tech companies have introduced it into their products. This includes, but is not limited to, Apple iMessage, Facebook Messenger, WhatsApp, Telegram (in secret chats) and Signal.
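
For anyone curious what that looks like in practice, here is a minimal sketch using the PyNaCl library (the same NaCl/Curve25519 building blocks this class of apps broadly relies on; it illustrates the concept, not any vendor's actual protocol): the message is encrypted on the sender's device with the recipient's public key, so whoever relays it only ever handles ciphertext.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Keys are generated on each user's device; the server only relays ciphertext.
from nacl.public import PrivateKey, Box

# Each party generates a keypair locally; only public keys are ever shared.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at the pub at 8")

# This is all the operator, your ISP or the government sees in transit:
print(ciphertext.hex())

# Only Bob, holding his private key, can decrypt it.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the pub at 8'
```

Real messengers layer key exchange, authentication and forward secrecy on top, but the core property is the same: without the recipient's private key, the relay has nothing readable to hand over.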

For well over a decade governments, especially the UK government, have wanted to be able to peer into conversations and to decrypt devices and any and all SSL traffic on demand, arguing these are places where predators and terrorists fester, without any consideration for the privacy of their users.

Bootlickers and advocacy groups may argue, "You've got nothing to hide, right? I know I don't!" If people knew how much their elected officials and military personnel use these services to transfer information securely, I'm sure they would be very surprised. Breaking these types of encryption can expose bad actors, but it would also harm the privacy of ordinary citizens and has the potential to leak sensitive information related to any and all governments.

To their credit, they have supposedly walked back the idea of breaking E2E encryption, instead opting for so-called client-side scanning. What I would say to that is: we wouldn't stand for a bobby sitting across the table while we're down the pub with our mates, policing our conversation, so why should we stand for the same in the digital space?

Client-side scanning completely undermines E2E encryption: users have limited control over or visibility into how their data is scanned, how the scanning technology operates, how flagged data is handled and who gets access to it, not to mention how it could be abused as a means of surveillance. Client-side scanning could also flag innocent communications, such as a parent sharing a picture of their child with a doctor or a grandparent, leading to unwarranted scrutiny.
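
To see why, consider this simplified sketch (hash-matching against a blocklist is just one proposed mechanism, and real designs use perceptual hashing, so treat this as an assumption-laden illustration): the inspection happens on your device before anything is encrypted, so whatever the scanner flags can be reported in the clear no matter how strong the encryption applied afterwards is.

```python
# Simplified client-side scanning sketch: content is inspected on the device
# BEFORE encryption, so E2E encryption no longer limits what the operator
# (or whoever controls the blocklist) can learn.
import hashlib

# An opaque, updatable blocklist pushed to the device - the user cannot see
# what is on it or who added entries.
BLOCKLIST_HASHES = {
    hashlib.sha256(b"example flagged image bytes").hexdigest(),
}

def scan_before_send(message: bytes) -> bool:
    """Return True if the message matches the blocklist (and would be reported)."""
    return hashlib.sha256(message).hexdigest() in BLOCKLIST_HASHES

def report_to_operator(message: bytes) -> None:
    # Flagged content (or metadata about it) leaves the device unencrypted.
    print("reported:", hashlib.sha256(message).hexdigest())

def transmit_encrypted(message: bytes) -> None:
    print("sent ciphertext of", len(message), "bytes")

def send(message: bytes) -> None:
    if scan_before_send(message):
        report_to_operator(message)
    transmit_encrypted(message)  # E2E encryption only happens after the scan

if __name__ == "__main__":
    send(b"example flagged image bytes")   # scanned, reported, then sent
    send(b"photo of my kid for the GP")    # exact hashing passes this, but the
                                           # perceptual hashes real systems use
                                           # can wrongly flag near-matches
```

Whoever controls the blocklist controls what gets reported, which is where both the surveillance risk and the false-positive risk come from.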