The rise and risks of artificial content
- danielweis
- Oct 3
- 2 min read
It's sad that we are now at the point where everything online, videos, content and of course adverts, should be assumed to be fake. Sora, OpenAI's social media app, is the latest example: everything on it is fake and AI generated. All you need to do is upload a face video or image, and it can be used to create all sorts of fake and reputation-damaging videos. The Washington Post has a good writeup here: https://www.washingtonpost.com/technology/2025/10/02/sora-openai-video-face-fake/?utm_campaign=wp_the_technology_202&utm_medium=email&utm_source=newsletter
So how does this affect your organisation? Firstly, users need to be trained to trust but verify every request they receive. If they get a call from the CEO or someone senior asking for something to be actioned, finish the call, then reach out to that person directly via a different channel (or, failing that, to a designated backup person) to verify the request. We use deepfakes often on engagements, joining or making Teams/Zoom calls while masquerading as someone else. It's very easy to orchestrate but hard for most users to detect, and we often find organisations lack policies and guidance around AI, specifically around deepfakes and artificial content.
Train your users to look for physical inconsistencies in content, such as incorrect finger counts, unnatural movements, or warped lighting. Check for mismatched audio-to-lip movement, unnatural blinking, or unusual speech patterns in audio and video. Also, use reverse image searches (Google, TinEye etc.) to see if the same image is being used in lots of places, verify content with trusted sources, and examine the context for logical or physical impossibilities.
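On the reverse-image-search point: one common building block behind "have I seen this image before?" checks is a perceptual hash, which stays stable under small edits such as recompression or resizing. Below is a minimal sketch of the classic average-hash technique in plain Python. It assumes the image has already been decoded and downscaled to an 8x8 grayscale grid (in practice you would do that with a library such as Pillow, or just use a dedicated library like imagehash); the function names here are illustrative, not from any particular tool.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    `pixels` is a flat list of 64 brightness values (0-255). Each bit of
    the hash records whether that pixel is brighter than the grid average,
    so minor recompression or colour tweaks barely change the result.
    """
    assert len(pixels) == 64, "expected an 8x8 downscaled grayscale grid"
    avg = sum(pixels) / 64
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def hamming_distance(h1, h2):
    """Count differing bits between two hashes.

    A small distance (commonly <= 10 of 64 bits) suggests the images are
    near-duplicates; 0 means perceptually identical under this hash.
    """
    return bin(h1 ^ h2).count("1")
```

This is only a local duplicate check, not a substitute for the hosted reverse image search services mentioned above, but it illustrates why lightly edited copies of the same fake image can still be matched across many sites.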
MIT has an awesome resource on detecting deepfakes and AI-generated content here: https://www.media.mit.edu/projects/detect-fakes/overview/
We need to treat this new era of AI-generated content not as a futuristic threat, but as a present and urgent risk. As leaders, it's our responsibility to adapt and implement robust strategies that protect our people and our organisations. This includes not only technological solutions but also fostering a culture of critical thinking and healthy skepticism. The rise of AI deepfakes and generated content is no longer just a technology problem; it's a human one that requires human-centered solutions.
If you would like to see how your organisation stacks up against these types of attacks, please reach out!