The running of the AI bulls
The US has published a muscular AI Action Plan. What does it mean for media?
Back in January the UK released the AI Opportunities Action Plan. This week the US published its own national rallying cry, America’s AI Action Plan. Let’s examine what implications the combined offerings might have for media organisations, particularly those outside of the US of A.
The first thing to note is the tone of the US plan - sub-heading “Winning the Race” - which has a foreword by the President himself. He makes it clear that:
“It is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance”. (AAAP)
There are many similarities with the UK plan. High ambition for AI as a revolutionary technology. Much investment in infrastructure, energy, workforce reskilling and government adoption. Much urgency to get on with it before our adversaries do. One clear difference is that the UK is only aiming to come in third behind the US and China (a strategy championed by our Olympic Games teams).
Personally I’m much more interested in how this all plays into the development of the media landscape as it moves from a dominant digital platform model to a generative AI world.
What are the implications of these plans for the balance of trade between AI companies and media creators? Does this plan support or prevent sustainable direct-to-consumer media distribution? Let’s pick out a few strands on that.
Copyright and IP
The Clifford report in the UK expended one of its fifty recommendations on the legal frameworks surrounding AI, using it to hand the problem off to the government to provide a more secure and certain legal footing that would be attractive to AI developers and global talent.
The US plan takes a different tack; it actively ignores the question. It does sneak in some federal control over state spending on regulation (as kicked out of the budget bill), but after that the only recommendation that speaks to copyright says:
“Review all Federal Trade Commission (FTC) investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation” (AAAP)
It’s easy to think that this is a bad thing, given how important the issue is to publishers across the globe. But on reflection it is also notable - given the impatient and bold nature of the rest of the plan - that waiving copyright protections is not explicitly mandated, as AI companies would have wanted.
Wired reported comments from Trump at a launch event later the same day that did address it, saying that the US approach would be a “commonsense application”, but:
“you can't be expected to have a successful AI program when every single article, book, or anything else that you've read or studied, you're supposed to pay for. We appreciate that, but just can't do it— because it's not doable.” (Donald Trump)
So the courts will continue to build up case law, as I’ve written about here. It likely heightens the importance of the UK government’s ongoing consultation process, as this will likely provide a set of principles on copyright faster than the US court process.
Information integrity / Disinformation
Early on in the introduction, AI is described as creating a revolution for information, and also a cultural renaissance where we will be “unravelling ancient scrolls once thought unreadable”, which is a use case I maybe hadn’t spent enough time on to date.
Sadly, after that the impacts on trusted information are reduced to three points: a) stopping deepfake nudes, as per the bill sponsored by the First Lady; b) removing ideological bias in models to protect “free speech and American values”; c) dealing with synthetic evidence in the legal system.
We don’t have time here to get into the whole free speech debate. Safe to say the position established here is that big government contracts will go only to developers who replicate the preferred cultural norms of the US administration of the day.
Cultural sovereignty
It is one thing to dictate a domestic set of values, another to export them to the world. And this is clearly part of the plan - the world will be divided into spheres of influence in which America and China compete in a Cold Chip War. As they put it:
“It is imperative that the United States leverage this advantage into an enduring global alliance… Exporting its full AI stack - hardware, models, software, applications and standards - to all countries willing to join America’s AI alliance… the distribution and diffusion of American technology will stop our strategic rivals from making our allies dependent on foreign adversary technology” (AAAP)
The ambition in plain English is to create an American operating system for AI, and to make its adoption a requirement for America’s allies. This will prevent adoption of Chinese technology, but it is in conflict with any allied aims to build their own sovereign capability, as the UK plan so desperately wants to do.
The ‘standards’ part here could also extend to American values and free speech ideologies being exported, and to sovereign regulation being challenged as a result - it is very easy to see a Presidential conversation which seeks to reduce independent EU or UK AI regulatory standards in return for access to US technologies. Meta is already beating this drum hard.
Summary - one to watch, both in the actions of the US administration and in the response of UK and EU politicians and lawmakers.
Competition and Big Tech
There is some ambition to encourage innovation through Open Source and Open Weight models, and a “try-first” culture for AI in industry. But the overall sense from the plan is that this is a green light for those who command the most capital to invest and reap the rewards, which favours the incumbent digital platform players and a handful of new entrants they are integrated with (e.g. OpenAI, Perplexity, Anthropic).
For media companies who are hoping that regulators will seek to rein in the big players, this signposts a different outcome: deep partnership between a handful of technology partners and the US government and military, forcibly exported to the world. The UK government deal with OpenAI shows how this B2G strategy rolls out.
Trust and Transparency
Media companies want more transparency on data sources and attribution, rather than accepting a black box. AI developers often claim the inner workings of models are necessarily unknowable.
There are some intriguing passages in the plan that recognise that this is limiting, particularly in the military context:
“This lack of predictability, in turn, can make it challenging to use advanced AI in defence, national security or other applications where lives are at stake” (AAAP)
This is a line of argument worth pursuing for media organisations concerned with the lack of transparency on how AI models interpret source information in LLMs and RAG systems. I would argue lives are at stake if elections are compromised by malign actors or model collapse - the pen needs protection as well as the sword.
So what?
From a media perspective there isn’t much new news here: a restating of a grand ambition for AI, no meaningful pronouncements on copyright, and almost nothing on how trusted information will be protected in a world of AI discovery tools.
There will be great encouragement for Big Tech and the America First acolytes as AI and national security are tied ever closer. The message for domestic regulators and diplomats is pretty clear. There is little encouragement for Little Tech or for non-US technology companies.
Media, as ever, lives to fight another day. It is important though to understand the context in which that fight occurs. Our arguments must be adapted to the narrative in these plans if they are ever to be heard over the sound of charging AI bulls in Big Tech and Government.
America’s AI Action Plan is here