Artificial intelligence has made it possible to create images and videos that appear strikingly real, even when they are entirely fabricated. Among the most troubling developments is the rise of “deepfakes,” digitally generated media that can replicate a person’s face, voice, or likeness with remarkable accuracy. When such technology is used to create intimate or sexually explicit material without a person’s consent, the consequences can be devastating.
In response to these emerging risks, the United Kingdom has taken a decisive step. New legal provisions, building on the Online Safety Act 2023, now criminalise both the creation and the sharing of intimate deepfake images where the individual depicted has not given consent. The reform represents a meaningful expansion of earlier legal approaches, under which enforcement focused mainly on the distribution of such material. The law now recognises that the harm begins much earlier, at the moment the synthetic image or video is generated.
Importantly, the legislation is not intended to prohibit artificial intelligence technologies broadly. Deepfakes used in film production, satire, artistic expression, or legitimate technological applications are not automatically unlawful. The focus of the offence is narrow and deliberate: it targets the misuse of digital tools to create sexually explicit material involving identifiable individuals without their permission.
The United Kingdom’s response offers valuable insight for countries that have yet to address deepfake misuse through dedicated legislation. For Lesotho, where regulatory frameworks governing artificial intelligence are still developing, several policy lessons can be drawn from this approach.
First, modern legal systems must recognise that an individual’s identity extends beyond their physical presence. A person’s likeness, voice, and biometric characteristics can now be reproduced digitally with little difficulty. Without clear legal protections, these elements of identity can easily be exploited in ways that damage dignity, reputation, and personal safety. Future legislation may therefore need to recognise explicit consent as a cornerstone of digital identity protection.
Second, the UK model demonstrates the effectiveness of regulating harm rather than attempting to prohibit technology itself. Artificial intelligence is rapidly evolving and serves countless legitimate purposes, and attempting to outlaw the tools entirely would be impractical and potentially harmful to innovation. By focusing on abusive conduct, in this case the creation of non-consensual intimate imagery, the law addresses the real social harm while allowing responsible technological development to continue.
Third, the pace of technological advancement suggests that governments must act proactively rather than reactively. Once deepfake tools become widely accessible, misuse can spread quickly through social media and online platforms. A legal framework that anticipates these risks can provide protection before widespread harm occurs.
The growing sophistication of artificial intelligence raises important questions about privacy, identity, and personal autonomy in the digital age. For policymakers in Lesotho, the conversation around deepfakes is no longer a distant or hypothetical concern. As AI tools become more accessible globally, the need for thoughtful governance becomes increasingly urgent.
Developing legal safeguards now may help ensure that technological innovation strengthens society without undermining the dignity and rights of individuals.