Every few months, platforms announce new safety features designed to catch fake accounts and AI-generated content. We’re constantly hearing about breakthroughs: liveness checks, ID verification, behavioral analysis, and biometric matching.

And every few months, the tools used to defeat those features advance even further. So much so that a finance worker was recently tricked into wiring $25 million during a deepfake video call.

The cycle has become predictable, and waiting for platforms to catch up is no longer a viable strategy. The verification model itself needs to change, because the tools meant to protect users were built for an earlier generation of threats, not the ones they face today.

Verification systems are a mismatch for today’s tech

Traditional verification was designed for an internet where fake documents took real skill to produce and identity was something you confirmed once at sign-up. Neither assumption holds anymore.

AI-generated fake IDs can now be made for as little as $15 in under an hour. A fraud prevention researcher recently demonstrated how a fully working synthetic identity could clear standard verification in seven minutes.

Synthetic identity fraud now costs US businesses an estimated $30 to $35 billion annually, and the number keeps climbing.

CAPTCHAs were once considered a reliable way to separate humans from bots, but that barrier has collapsed. Researchers at ETH Zurich showed that AI models can now solve Google’s reCAPTCHA v2 image challenges with 100 percent accuracy.

OpenAI’s ChatGPT Agent bypassed Cloudflare’s verification check without detection. The barriers meant to keep automated accounts out are now being defeated by software that anyone can download for free.

We bypassed Tinder’s entire security stack

Earlier this year, my team ran an experiment to test how well dating platforms can verify that their users are real. A group of six people used publicly available AI tools to build four fake profiles from scratch, complete with generated photos, synthetic voice messages, and chatbot-driven conversations.

They deployed these profiles on Tinder and matched with 296 real users across multiple countries. Forty of those users agreed to meet in person for a date.

And keep in mind that we’re talking about the security measures of a company that regularly generates over $2 billion in annual revenue.

Before anyone got hurt, the team came clean at a restaurant in Lisbon and paid for dinner. But the fact that it worked at all should worry anyone responsible for platform safety.

Using nothing but publicly available tools, the team cleared every security check the platform offered, including the Face Check feature meant to confirm users are human.

The scale of the problem is no longer theoretical

We also ran a global survey of more than 11,000 people, and over 42 percent said they have been personally affected by a scam involving AI-generated personas.

More than two-thirds reported contact from fake or bot identities at least once a month, and nearly a third said it happens daily. The majority of these encounters involved financial fraud, often disguised as customer support agents or authority figures.

These numbers align with what researchers are seeing across the industry. AI-enabled fraud surged 1,210 percent in 2025, and the FBI’s Internet Crime Complaint Center recorded $16.6 billion in cybercrime losses in one year alone.

Social media has become the most financially devastating channel. A Consumer Federation of America analysis estimates that Americans now lose $119 billion each year to online scams, and most of that fraud originates on social media.

Perhaps most striking is how difficult these fakes have become to detect. According to a 2025 study, only 0.1 percent of participants correctly identified all fake and real media shown to them.

More verification is not the answer

The instinct after reading these numbers is to call for stricter checks. More biometrics, more document scans, more liveness detection. But more layers won’t fix what’s fundamentally wrong.

Every new layer of verification creates a new surface for attackers to exploit. Biometric databases become high-value targets, document scans get fed into models that produce better fakes, and liveness detection only sets the benchmark that the next generation of deepfake tools will be trained to clear.

We have already seen this cycle play out. Platforms introduced selfie verification, and attackers responded with real-time face-swapping software. Companies added voice authentication, and criminals built tools that clone a voice from a few seconds of audio.

The verification industry is running an arms race it cannot win because every defense becomes a roadmap for the next attack.

The real issue is how these systems are built. Traditional verification requires users to hand over sensitive data to a central authority that stores it, processes it, and takes on the burden of protecting it. That concentrates risk in exactly the places attackers are most motivated to hit.

Cryptographic identity offers a way forward

There is another way to do this, and it rests on a completely different logic. Instead of proving identity by handing documents to a central database, cryptographic systems let users hold their own credentials and share only what a platform actually needs to know.

The core technology behind this is called zero-knowledge proofs. These allow someone to prove a fact about themselves without revealing the underlying data.

A user can confirm they are over 18 without sharing a birthdate, or confirm they are a unique human without exposing biometric information to any third party.
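To make the idea concrete, here is a minimal sketch in Python of the principle at work, using a textbook Schnorr-style proof of knowledge with deliberately tiny toy parameters (an illustration of the general technique, not any production age-proof system). The prover convinces a verifier that they know a secret without ever transmitting it; real deployments use far richer proof systems such as zk-SNARKs, but the underlying logic is the same.

```python
# Minimal Schnorr-style zero-knowledge proof (non-interactive, Fiat-Shamir).
# The prover demonstrates knowledge of a secret x satisfying y = g^x mod p
# without ever revealing x. Parameters are tiny toy values chosen for
# readability; real systems use 256-bit elliptic-curve groups.
import hashlib
import secrets

p, g, q = 23, 2, 11  # toy group: 2 generates a subgroup of order 11 mod 23

def challenge(y: int, t: int) -> int:
    # Fiat-Shamir: derive the challenge by hashing the public values.
    return int.from_bytes(hashlib.sha256(f"{g},{y},{t}".encode()).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    """Prover: publish y = g^x plus a proof (t, s) that reveals nothing about x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)      # one-time random nonce
    t = pow(g, r, p)              # commitment
    s = (r + challenge(y, t) * x) % q
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: accept iff g^s == t * y^c, learning nothing about x itself."""
    return pow(g, s, p) == (t * pow(y, challenge(y, t), p)) % p

secret = secrets.randbelow(q)     # stands in for private credential data
y, t, s = prove(secret)
print(verify(y, t, s))            # True: the claim checks out, x stays private
```

Scaling this from “I know a secret” to “my credential implies I am over 18” is exactly what general-purpose zero-knowledge proof systems are built to do.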

The security implications are significant. There is no centralized honeypot for attackers to target because sensitive data never leaves the user’s control. Platforms can verify claims without storing the information that makes identity theft possible in the first place.

Verification also becomes simpler and more trustworthy. Instead of asking whether a document looks legitimate, platforms can check whether a credential was cryptographically issued by a trusted source. And that is a much harder thing to forge.
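As a rough sketch of what that check looks like in practice (an illustration using the open-source Python cryptography library, not any particular vendor’s flow): the issuer signs a minimal claim, and the platform verifies the signature against the issuer’s published public key. No document image ever changes hands.

```python
# Sketch of issuance-based verification with Ed25519 signatures
# (pip install cryptography). The issuer signs a minimal claim; the
# platform checks the signature instead of inspecting a document.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Issuer side (e.g., a government agency or bank acting as trusted source).
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"over_18": True}, sort_keys=True).encode()
credential = {"claim": claim, "signature": issuer_key.sign(claim)}

# Platform side: verify against the issuer's published public key.
def validly_issued(cred: dict, issuer_pub: Ed25519PublicKey) -> bool:
    try:
        issuer_pub.verify(cred["signature"], cred["claim"])
        return True
    except InvalidSignature:
        return False

print(validly_issued(credential, issuer_key.public_key()))  # True

# Any tampering with the claim invalidates the credential instantly.
forged = dict(credential, claim=b'{"over_18": true, "age": 17}')
print(validly_issued(forged, issuer_key.public_key()))      # False
```

A fake ID that fools a human reviewer is cheap to produce; forging a signature that verifies against a key the issuer never used is computationally out of reach.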

Several organizations are building this kind of infrastructure now, and early adoption is underway in financial services, healthcare, and government programs.

The longer platforms wait, the worse it gets

Most platforms still treat verification as a box to check during onboarding and forget about it. That made sense when fake accounts took effort to create, but it makes no sense when anyone can generate a convincing synthetic identity in under an hour.

The tools to build fake identities are improving faster than the tools to detect them, and that gap is only going to widen. Every month that passes, the synthetic personas get more convincing, the scams get harder to spot, and the losses get larger.

Regulators will eventually step in. But by the time legislation catches up, the damage will already be measured in the hundreds of billions.

Luckily, we don’t need a technological miracle; cryptography already offers a proven solution to this problem. What we need is pressure on platforms to actually implement it.


Terence Kwok is the Founder of Humanity, an organization dedicated to rebuilding trust on the internet. He is a visionary technology entrepreneur from Hong Kong and the founder of one of Asia’s first unicorns. With expertise in blockchain, Web3, and technology integration, Kwok’s mission goes beyond technological advancement. Recognized for his insightful vision, Kwok advocates for change that challenges conventions. 

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

Featured image: Edgar Nunley on Unsplash
