The world of information is having a moment
I first published this on September 17th - and it's already dated..
How we create, verify, share and compare information is one of the superpowers of the human species. But just as cave paintings gave way to the town crier, the printing press has over time given way to the modern combination of personal smartphones and unfathomable computing power.
Each successive Great Leap Forward in our information landscape requires a societal adjustment, as knowledge is created by more people and distributed faster and wider. These impacts may be economic, religious, political; they might improve education but also increase conflict as things known only to the few become known to many. They take time to emerge and even longer to understand or manage.
We are living through a period in which a generational great leap has taken place and the rules that govern the new adjustment are gradually being formed.
This is happening on many fronts and without an obvious overarching design - the items below are all examples of activity that is gradually setting principles and precedents for some of the biggest questions we face.
Key Question: How will we keep news and information trusted and safe from foreign state actors? (governments of all types want more control of online information)
In its report on the state of communications in 2024, Ofcom noted that, for the first time, more adults got their news online (71%) than from any other media channel. Social media is a significant component of that, with more than half of adults using it as a news source.
88% of 16-24 year olds use online sources for news. The top individual sources are Instagram (41%), YouTube (37%), Facebook (35%), TikTok (33%) and ‘X’ (27%). Yet only 37% of this group believe news from social media is trustworthy, and the same proportion (37%) believe it is accurate.
The UK government passed legislation preventing UK newspapers from being owned by 'foreign state actors', in direct response to RedBird IMI's takeover bid for the Telegraph.
Russia began throttling YouTube to degrade the service in the country, hoping to push Russians towards RuTube and other services owned and controlled by Gazprom and other state-affiliated companies, and declaring that YouTube is "not a neutral platform; it operates at the political directives of Washington".
Brazilian judges upheld the decision to ban X from the country after X's management refused to block accounts that the government had designated as sources of disinformation. The ruling also made it illegal to use VPNs to circumvent the ban.
The European Commission opened a formal investigation into Meta over political content on Facebook and Instagram, including suspected Russian influence campaigns running as advertising on the platforms. YouTube was similarly accused by Access Now of not doing enough to prevent disinformation advertising during the Indian election campaign.
Two employees of Russia Today were charged with covertly funding and directing US right-wing commentators "in furtherance of Russian interests". An alleged $10m was spent to target "millions of Americans as unwitting victims of Russia’s psychological warfare".
Meta subsequently announced that it had banned Russia Today and other Russian outlets from Facebook, Instagram, WhatsApp and Threads for "deceptive practices".
Telegram's CEO Pavel Durov was arrested in France in August and will be investigated for enabling criminal acts on the platform, linked to terrorism and human trafficking. Durov has argued that, as a tech provider, he is not responsible for the activities that happen on the platform.
Following national riots in the UK, the Technology Secretary met representatives from TikTok, Meta, Google and X "to make clear their responsibility to continue to work with us to stop the spread of hateful misinformation and incitement". Elon Musk argued that this amounted to suppression of free speech.
The US passed the Protecting Americans From Foreign Adversary Controlled Applications Act, requiring TikTok to be sold by its Chinese owner ByteDance, and published allegations that data within the app could be used as a threat to national security. TikTok creators sued the government for being denied 'this distinctive means of expression', in a lawsuit funded by TikTok. TikTok itself is currently arguing in the appeals court that the Act violates First Amendment free speech rights and sets an unconstitutional precedent.
Key Question: How will we keep children safe from the impacts of harmful or overwhelming information? (there is general acceptance that smartphones and social media have reached children too early and that platforms have not delivered solutions)
Jonathan Haidt's book "The Anxious Generation" sold 15 million copies, focusing on the impact that early smartphone adoption can have on childhood and capturing the sense of helplessness in the face of technology felt by many parents.
Parental groups like Smartphone Free Childhood mobilised parents to support collective action in this area. The group now has chapters in the US and Canada.
Many schools in the UK returned in September with full or partial bans on smartphones in place, having received government permission to do so.
In July the US Senate passed the Kids Online Safety and Privacy Act (KOSPA) by 91 votes to 3, creating new US laws to protect children online, with many parallels to the UK's Online Safety Act, which is moving through implementation and is likely to take effect in 2025.
Key Question: How will national governments regulate global platforms and restore competitive digital markets? (more co-ordinated activity from regulatory bodies in the UK / US / AUS / EU is turning the tide as global platforms now face similar interventions on many fronts)
Google lost a competition case brought by the Department of Justice (DOJ), with a judge ruling in September that it ran an illegal monopoly in search. A second competition case is now at trial, with Google accused of running an illegal monopoly in the digital advertising market.
Google also lost an appeal against a $2.6bn EU fine for using its monopoly in search to favour its own shopping services. Apple similarly lost its appeal against a $14bn EU ruling that tax breaks it received in Ireland amounted to illegal state aid. Both outcomes were early victories for the EU's antitrust unit.
The UK's equivalent regime arrived with the Digital Markets, Competition and Consumers Act, which received Royal Assent and is now moving to implementation; the CMA's Digital Markets Unit will begin by designating companies with Strategic Market Status, which will then fall under the jurisdiction of a full-time, active regulator.
Key Question: How will we ensure artificial intelligence is used wisely in the information sphere? (AI is a superpower but comes with risks, unknown consequences and few legislative guardrails)
The European Union formally adopted the EU Artificial Intelligence Act, which will determine the rules for products and services using AI across its member states.
President Biden agreed a voluntary code of conduct with seven US AI leaders (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) regarding safety, security and trust. This included sharing information on emerging risks and committing to external testing processes.
Colorado enacted the first state-level AI bill, which includes guidance on managing bias and discrimination in AI models. The bill won't come into full effect until 2026.
Samsung predicted that by the end of 2024 there would be 200m Galaxy AI-enabled devices in use.
Apple announced the new iPhone 16, "the first Apple iPhone designed for Apple Intelligence". But Apple Intelligence will not initially be available on phones in the EU or China.
Meta has gained approval in the UK to train its AI systems using public content from Facebook and Instagram, so that they "reflect British culture, history, and idiom, and (so) UK companies and institutions will be able to utilise the latest technology.”
All of the above has happened since March 2024. So what should we learn from it?
1) The internet of information is being carved up into three main branches
The autocratic web wants to control access, content and platforms in order to maintain control of populations (e.g. Russia). The libertarian web wants unfettered free speech at all costs and wants to remove most external controls.
The legalistic web has to pick its way through the spectrum between those two poles, creating a country- or bloc-level view of how information should be managed and imposing new laws on platforms and the populations that use them in a given jurisdiction.
Most nation states are ending up in the third form, and right now the lead is being taken by national security concerns.
2) Platform and AI companies will act rationally in the face of legal and political pressure
The era where platforms made the rules is certainly over. This creates new challenges for tech platforms, for whom the most efficient and profitable model is a single global platform with the lowest possible costs of compliance and moderation. Rationally, we should expect companies in this position to adapt where they have to, but only where the costs can be justified.
We should expect delays, both from lobbying and in product delivery, more cases where the US market is prioritised, and more cases where trading in non-English-language countries with high barriers to compliance is no longer supported.
3) The UK has a significant role to play in shaping the rules, but the new government has not yet shown its hand
The Conservative government had split responsibility for AI and media (previously held together under DCMS) between two separate ministerial departments: the Department for Science, Innovation and Technology, which took the AI brief, and the Department for Culture, Media and Sport.
It had laid out an aggressive AI development strategy, maximising the advantage of being the original English-language market and sitting somewhere between the EU and the US in its legal approach.
The new government committed in its manifesto to supporting the AI industry, but said little else of substance on the areas described above. Whether it will be able to position the UK as a meaningful broker on these types of issues remains to be seen.
4) Laws will emerge slowly, AI will evolve fast
This is all relatively new territory for politicians, lawmakers and tech boards. There is little steer for them from businesses or citizens about what kind of information infrastructure we want as a nation; the subject is not a burning issue for most compared with the cost of living or the NHS. We should expect regulatory development to become effective at some point in the 2030s.
By comparison, the lightning speed of AI development and deployment will continue to affect the way that we create and trust information. Between now and 2030 we should expect processing power to increase roughly sixteenfold if Moore's Law holds. Most of this deployment will be embedded in the technologies that we use in our smartphones, unseen but impactful.
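(As a rough back-of-the-envelope check, the sixteenfold figure assumes processing power doubles about every 18 months: the six years to 2030 give 6 / 1.5 = 4 doublings, and 2^4 = 16.)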
How these four factors play out is likely to set the terms of the next generation of information on the internet. I believe this is critical for how our societies develop, because humans rely on trusted information to build successful communities and to co-exist. History shows that good laws often take time to catch up with the impact of innovative technologies, but I am optimistic that they mostly do in the end.
We are living through a Great Leap Forward - let's see where we land..
(Image: Chris Duncan in collaboration with ChatGPT)