British fashion models and the threats posed by indiscriminate use of Artificial Intelligence

This paper is a call to action for a change to legislation. More broadly, AI technology is moving very fast and poses substantial risks to fashion models' careers, of whom there are some 10,000 in the UK. The industry requires that retailers and brands agree a set of principles governing the use of AI, covering the collection, ownership and use of personal data, the terms of that use, and the exclusion of personal data from large training datasets and similar technological developments.

The removal of fashion models from photoshoots also removes photographers, stylists, hair and make-up artists, studios and many other associated human resources.

Image Rights and AI in the UK: A Legal Gap in Urgent Need of Reform

In the United Kingdom, image rights do not exist as a standalone legal concept.

Instead, individuals must rely on a fragmented legal framework: a patchwork of passing off, copyright, trade mark, breach of confidence, misuse of private information, data protection, and defamation laws. While these laws can be effective in some contexts, the current system is outdated and incomplete. It was not designed for a digital world, let alone the complexities of an AI-powered one.

Historically, this gap in protection was navigable. Talent rights were controlled through licensing agreements. Contracts gave individuals a degree of control over how their image was used, especially in creative industries. But with the rise of AI, the old playbook has to some extent been rendered obsolete.

The Data Dilemma

Artificial intelligence relies on enormous volumes of data. This data is acquired in one of two ways: with authorised permission, or through indiscriminate scraping of publicly available content. The latter is at the heart of over 40 ongoing legal disputes worldwide. It raises critical questions about consent, transparency, and exploitation.

Meanwhile, many users unknowingly consent to the use of their likeness, voice, image, and personal data when they sign up for online platforms. Some companies are transparent about using user-generated content to train AI models. Others are more opaque, burying sweeping rights within dense, legalistic terms of service.

The familiar adage “nothing in life is free” applies powerfully here. “Free” platforms may come at the cost of your identity.

The Deepfake Dilemma

But what about unauthorised use cases that fall outside the bounds of contracts or terms of service?

The UK is facing what some are calling a deepfake epidemic. AI-generated fakes are appearing in everything from fraudulent ads and pornographic content to scam campaigns and political misinformation. The implications for individual privacy, public trust, and societal stability are profound.

The Advertising Standards Authority reported that over 50% of scam ad reports in 2024 involved impersonation or deepfakes. These technologies have become tools for white-collar crime, misinformation, and reputational sabotage.

There is no standalone UK regulation covering the use of AI in advertising. This absence of oversight is fuelling the problem.

From Commercial Clones to Surveillance Culture

Major fashion brands and media outlets are now creating AI models to replace humans. While this raises concerns around inclusivity, given the underrepresentation of diverse imagery in AI datasets, it also opens the door to widespread commercial exploitation.

But where do we draw the line with AI models? Are they simply the next evolution of digital tools, like Photoshop and CGI, or are they becoming an exploitative shortcut that displaces real people from real jobs?

Actors, influencers, and models may begin licensing digital replicas of themselves (digital twins) for endorsements, advertisements, and even editorials: a customisable model that can be delivered at scale and speed to market. We are already seeing this in e-commerce fashion.

But the issues go beyond commercial rights in the creative industries. We are witnessing a fundamental transformation in the role of the human face in society.

Faces are becoming biometric IDs, marketing assets, and targets for digital manipulation. We now live in a surveillance society where facial recognition and AI-powered glasses capture and process imagery at all times. Identity is becoming data.

The Bigger Picture: A Social Inflection Point

This is not an isolated legal issue. It is a constellation of cultural, technological, and ethical challenges. Consider:

  • The UK’s new Online Safety Act
  • Dating app hacks involving stolen photos
  • AI-generated influencers in global events (Wimbledon, anyone?)
  • A recent push to normalize face-based AI companions

We are moving into an era where the face is verification and where deepfakes can easily undermine that verification.

The questions we must ask now are existential, not just legal:

  • Should we ban certain AI capabilities? (Unlikely.)
  • Should we fundamentally change digital behavior?
  • Or do we need a new ethical framework and regulation around identity, privacy, and control?

The Legal Landscape: Global Momentum, Local Inertia

The UK government has acknowledged the urgency of this issue. In its “Copyright and AI” consultation, a dedicated chapter was reserved for digital replicas. But action remains slow.

Internationally, momentum is building.

  • Tennessee and California have enacted legislation requiring consent for the use of a celebrity's digital likeness, living or deceased.
  • The U.S. Copyright Office has recommended a Federal Digital Replica Law.
  • Denmark has proposed extending copyright to cover likeness rights.

These moves are promising but imperfect. Copyright is transferable, which could lead to exploitation if not paired with data protection principles. A hybrid legal solution, merging copyright and data rights, may offer a more secure foundation.

What Models, Creators, and Individuals Can Do Now

At the British Fashion Model Agents Association (BFMA), we continue to advocate for stronger protections, and against unauthorised use of likeness in AI.

Our petition calling for legislative reform has been signed by some 2,500 models.

But until laws catch up, individuals must take proactive steps:

  1. Own your digital likeness. If you commission a replica, retain full rights over its use.
  2. Scrutinise every release form. Beware of vague or sweeping language.
  3. Refuse AI training clauses. Ensure your likeness is excluded from large training datasets that are not ringfenced.
  4. Use tech to track misuse. Emerging platforms offer watermarking, tagging, and anti-scraping tools.
  5. Trademark yourself. Explore the use of trade marks to protect visual identity, silhouette, or even facial features. This is an emerging trend that leans on the stronger protection afforded by monopoly rights compared with copyright.

Final Thought

Human creativity must remain central to policy. We need clear-eyed conversations around IP, ethics, accountability, and the economic value of human creative work. Creativity built our culture. AI shouldn’t erase it.

The current legal vacuum in the UK around image rights is unsustainable in an AI-dominated world. As technology evolves, the law and society at large must evolve with it. It must be stressed that image rights are no longer a niche concern for the creative industries. They are becoming a foundational element of identity protection in the digital age. The law must catch up before the consequences become irreversible.
