Years ago, when I was still practicing full-time, I came across a case that has stayed with me. An elderly woman with chronic lung disease had been unwell for several days. Her notes, test results, and history were all there, but no one recognized how serious her condition had become until it was too late.
It wasn’t neglect; everyone involved was doing their best. What failed her was a system that couldn’t connect simple clinical facts into meaningful action. The information existed, yet it sat in isolation, unlinked, unprioritized, unrecognized. That experience has colored how I see technology in medicine. Data alone doesn’t save lives. Understanding context does.
When technology meets the complexity of care
Artificial intelligence has entered healthcare with remarkable speed and promise. Algorithms now listen, transcribe, and summarize at a scale we couldn’t have imagined a decade ago. But technology often misunderstands what happens inside a consultation room.
Medicine isn’t neat or predictable. Patients speak in fragments, mix languages, and change their minds halfway through a sentence. The background hum of a busy clinic, or a patient’s hesitation before disclosing something sensitive, carries clinical weight that can’t always be parsed by pattern recognition. I’ve seen AI tools that perform beautifully in controlled pilots yet stumble the moment they encounter real-world noise, accents, or unstructured dialogue.
That doesn’t mean these tools have no place in healthcare. It means they must be built with a better grasp of the environment they’re entering. Accuracy isn’t only a technical metric; it’s a critical clinical metric when decisions downstream depend on it.
Why clinicians need to help shape the tools they use
For years, I’ve watched doctors spend more time documenting care than delivering it. Every system meant to “streamline” practice seemed to add another login screen, another checklist, another chance to misclick. It’s no wonder doctor burnout has become endemic.
AI offers a way forward not because it’s clever, but because it can finally handle the routine, repetitive work that clogs our days. Yet for it to work safely, clinicians must be involved from the start. They are the only ones who can tell whether an insight is useful or merely decorative, whether a recommendation makes sense or simply distracts.
In my own experience helping to build an AI-assisted clinical scribe, I’ve seen how valuable that dialogue can be. The goal was never to build something flashy; it was to relieve cognitive and administrative load so doctors could refocus on the patient in front of them. What I learned through that process was how many assumptions developers bring to healthcare — and how quickly those assumptions fall away when you put them in a real consultation room. The best engineers I’ve met welcome that feedback. It’s how reliable systems are made.
Lessons from different health systems
Working across regions has shown me that every health system faces the same question: how do we integrate technology without losing the human core of care?
In Southeast Asia, where clinics are multilingual and fast-paced, localization is more than translation. A system that can interpret Malay, English, Mandarin, and Singlish accurately is inherently safer and more inclusive.
In Australia, the long journey towards telehealth adoption is a reminder that trust builds slowly. When the national telehealth platform first launched, many of my colleagues dismissed it as ineffective and impractical. Within a decade, it became indispensable. The shift happened because doctors were eventually brought into the design and governance of the service.
And in Europe, particularly in the UK and Nordic countries, rigorous regulation has helped AI developers understand that compliance is not a constraint; it’s a framework for credibility. Singapore’s Health Sciences Authority has begun taking a similar approach with its Software as a Medical Device guidance — a positive sign that our region is thinking seriously about the lifecycle of AI in medicine, not just its launch.
Across these examples, the pattern is consistent: when clinicians, engineers, and regulators work in isolation, progress stalls. When they collaborate early, systems evolve that both innovate and protect.
Building technology that earns trust
The next generation of AI in healthcare will not be defined by faster models or bigger datasets, but by the trust they can earn. That trust depends on a few simple, enduring principles:
First, co-design. Clinicians must be at the table from day one, defining the problems, shaping workflows, and testing outputs against real clinical reasoning.
Second, transparency and explainability. If an algorithm suggests a diagnosis or prompts a follow-up, the clinician should be able to see how that conclusion was reached. Hidden logic in a black box is unacceptable when patient safety is on the line.
Third, interoperability. Tools must talk to one another, integrating smoothly into existing systems instead of multiplying administrative silos.
And finally, humility. Technology should aim to assist, not assert. Medicine will always involve uncertainty, judgment, and empathy: things that resist automation. The value of AI lies in freeing clinicians to spend more of their limited time exercising those human strengths.
As a profession, we have a rare opportunity to guide how AI evolves before it defines how we work. That requires advocating for systems that mirror the way medicine unfolds in real clinics: dynamic, nuanced, and human, rather than the tidy, oversimplified pathways you find in a textbook diagram. If we stay anchored in clinical reality, informed by evidence, and guided by experience, we can build technology that genuinely supports both doctors and patients.

Dr. Nelson Lau is a medical doctor with more than 30 years of clinical experience and the co-founder of HealthBridge AI. A former practising physician and early adopter of telehealth across Southeast Asia and Australia, he is an alumnus of the BLOCK71 AI Accelerate program. Dr. Lau continues to advocate for responsible, clinician-led innovation in healthcare technology.
TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.
Featured image: National Cancer Institute on Unsplash