Over the past two years, a flood of startups and incumbents has raced to build “AI copilots” for finance. Almost every demo shows a chatbot answering analyst questions or summarising a report. Yet despite billions in investment, adoption across financial institutions remains slow and productivity gains modest.

The reason is not a lack of ambition or data. It’s that most companies, founders, and technologists fundamentally misunderstand what it takes to turn AI into business value, particularly in a domain that prizes trust, precision, and accountability above all else.

The missing equation: Value and feasibility

Successful technology adoption depends on finding where business value meets real-world feasibility. Feasibility does not stop at algorithms; it lives in people, processes, and governance.

In banking and asset management, that balance is especially delicate. According to the Evident AI Index 2025, the banks with the highest AI maturity, such as JPMorgan Chase, Capital One, and RBC, share one key trait: they invest as much in organisational enablement as in model development. These leaders report more production use cases than their peers because employees trust their systems and actually use them.

Contrast that with the many failed pilots elsewhere: a 2025 MIT study found that over 95 per cent of generative AI pilots fail to scale because teams “avoid friction,” chasing flashy prototypes that collapse in production. Much of that friction stems from a lack of user trust and limited control over outputs.

Why finance resists the hype

Finance’s slower adoption of AI stems not from conservatism but from accountability. Every output, whether a risk score or a research summary, must be explainable, auditable, and defensible. That accountability clashes with the automation-first mindset many startups adopt. Replacing an analyst or risk officer with an opaque model erodes trust and invites regulatory risk.

As Evident Insights notes, only a few major banks, such as BNP Paribas, DBS, and JPMorgan, report both realised and projected ROI from AI projects. They succeed because they have the governance and transparency frameworks that others lack. Oversight is not a bottleneck but the foundation of adoption: the goal is not to replace human decision-making but to reinforce it through systems that enhance judgement and accountability.

Automation is easy, augmentation is hard

The default format of GenAI applications, the chatbot, reflects this misunderstanding. It promises frictionless automation but often creates new friction because users do not trust the answers, cannot audit the reasoning, and find the interface detached from their actual workflow.

Real progress lies in workflow-aware systems that amplify human expertise rather than replicate it. JPMorgan’s internal LLM Suite illustrates this well: it did not begin as a single grand platform but as a collection of focused, high-value tools for developers, researchers, and compliance officers. Each tool demonstrated its worth before being integrated into a secure workbench that now serves more than 200,000 employees and saves analysts and developers several hours each week.

The lesson is simple: the future belongs to systems that scale human insight, not those that try to substitute it.

The false promise of platforms

When startups pitch “AI platforms” for finance, they often repeat the same mistake that weakened earlier enterprise software. Platforms may look scalable and visionary, but they often turn into complex, cumbersome systems that users tolerate rather than appreciate.

History makes this clear. In the 2010s, tools such as Salesforce and Workday succeeded by solving one pressing problem deeply before expanding outward. Yet as they evolved into sprawling platforms, usability declined. Layers of plugins and integrations turned once-simple workflows into endless clicking and reconciliation, making them less effective the more they tried to do.

The same fatigue is now emerging in financial AI. Many products start and remain generic, from document summarisers to universal copilots and so-called AI operating systems that claim to serve every department but serve none well. The next generation of leaders will move in the opposite direction, building deep, vertical, and trust-focused systems that create real value in areas such as investment research, credit adjudication, and financial-crime detection.

Why startups keep missing the mark

Many so-called finance AI startups are led by former bankers, but most of those founders come from back-office or auxiliary roles rather than the front lines of research, trading, or client-facing decision-making. That gap in operational empathy shows: they build tools that over-automate processes, undermine trust, and overlook the reasoning that drives real decision conviction.

Each time an AI system produces an unexplainable result, it erodes credibility. In finance, credibility is currency; once it is lost, adoption disappears. Human-in-the-loop design is therefore not a philosophical preference but a commercial necessity. Systems that let users trace reasoning, correct mistakes, and feed improvements back into models create feedback loops that build trust and long-term data advantages grounded in real use, not scraped content.
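
To make that concrete, here is a minimal Python sketch of such a feedback loop, in which every output carries traceable sources and every analyst correction becomes future training signal. All names and fields here (ReviewedOutput, FeedbackLog, the verdict labels) are illustrative assumptions, not a description of any institution’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewedOutput:
    """One model output plus the analyst's verdict, retained for audit."""
    prompt: str
    model_answer: str
    sources: list[str]                  # citations the user can trace
    analyst_verdict: str = "pending"    # "pending" | "accepted" | "corrected"
    correction: str | None = None
    reviewed_at: datetime | None = None

class FeedbackLog:
    """Append-only record of reviews; corrections become training signal."""

    def __init__(self) -> None:
        self._records: list[ReviewedOutput] = []

    def record(self, item: ReviewedOutput) -> None:
        self._records.append(item)

    def corrections(self) -> list[ReviewedOutput]:
        # The (model_answer -> correction) pairs are the data advantage:
        # supervision grounded in real analyst use, not scraped content.
        return [r for r in self._records if r.analyst_verdict == "corrected"]

# Usage: an analyst rejects an unsupported figure and supplies the fix.
log = FeedbackLog()
item = ReviewedOutput(
    prompt="Summarise Q3 credit exposure.",
    model_answer="Exposure fell 12%.",
    sources=["internal: Q3 risk report"],  # hypothetical source label
)
item.analyst_verdict = "corrected"
item.correction = "Exposure fell 8%, per the Q3 risk report, table 4."
item.reviewed_at = datetime.now(timezone.utc)
log.record(item)
print(len(log.corrections()))  # 1
```

The design point is the corrections() view: it is precisely the proprietary, usage-grounded dataset described above, and it exists only because the interface invites the analyst to intervene.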

Augmenting judgement: The middle ground

Between full automation and manual work lies a wide, largely unexplored space where AI can enhance human judgement and creativity. In investment research, this means helping analysts link cause and effect, such as how a policy change in Washington might influence earnings in Shenzhen, rather than merely summarising data. In portfolio construction, it means simulating alternative narratives, while in risk management, it means contextualising anomalies instead of simply flagging them.

These are challenges of reasoning and workflow, not of chatbots. Solving them requires systems that understand how analysts think and how hypotheses, evidence, and implications interrelate. That is the true frontier of progress: AI as collaborator rather than correspondent.
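
To illustrate what it could mean for hypotheses, evidence, and implications to interrelate in software, the Python sketch below models research as a small causal graph rather than a chat transcript. Every name here (Hypothesis, Evidence, implies) is a hypothetical illustration under that assumption, not a description of any shipping product.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A traceable claim an analyst can inspect and override."""
    claim: str
    source: str      # origin, e.g. a filing or call transcript
    supports: bool   # True if it backs the hypothesis, False if it cuts against it

@dataclass
class Hypothesis:
    """A node in the analyst's reasoning graph."""
    statement: str
    evidence: list[Evidence] = field(default_factory=list)
    implications: list["Hypothesis"] = field(default_factory=list)

    def add_evidence(self, e: Evidence) -> None:
        self.evidence.append(e)

    def implies(self, downstream: "Hypothesis") -> None:
        # Chain cause to effect: a policy shift in Washington
        # linked explicitly to earnings in Shenzhen.
        self.implications.append(downstream)

# Usage: a two-step causal chain that can be audited node by node.
tariffs = Hypothesis("New US tariffs raise component costs.")
tariffs.add_evidence(Evidence(
    claim="Tariff schedule covers the relevant components.",
    source="hypothetical: trade notice",
    supports=True,
))
margins = Hypothesis("Shenzhen suppliers' margins compress next quarter.")
tariffs.implies(margins)

for node in (tariffs, *tariffs.implications):
    print(node.statement)
```

The value of such a structure is auditability: an analyst can walk the chain link by link, inspect each source, and correct any node, which is exactly the control a chat window hides.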

The way forward

The next wave of financial AI will not emerge from chatbots or generic copilots. It will come from innovators who build workflow-specific products that respect trust, auditability, and regulation. These systems will turn analysts into super-analysts, not by automating their judgement but by strengthening it.

For innovators, the challenge is to design for credibility rather than convenience. For established institutions, it is to invest in what is feasible today rather than chase distant visions. Finance will be reshaped not by replacing people but by changing how good decisions are made and scaled. Those who recognise this will define the next decade of innovation. Those who do not will continue building tools for problems that never mattered.


Zhuang Qiang Bok is the Founder of Deep Insight Labs.

I’ve spent my career at the intersection of AI, quantitative finance, and enterprise innovation — building models that move markets and advising C-suites on how to harness them.

Long before “AI agents” were a buzzword, I bet my career on NLP — turning down safer paths in consulting and big tech to work with frontier ML teams at Primer.ai, Aaqua, and Dell. Each step was preparation: understanding how language models reason, how enterprises adopt them, and how financial systems could benefit if they truly did.

At Deep Insight Labs, we’re bringing that belief full circle — building AI agents for investment research that reason across data like world-class analysts, only faster and deeper.

Our mission: augment financial judgement with machine intelligence, helping capital flow with greater clarity, speed, and wisdom. If you’re a professional investor in hedge funds, asset management, or advisory and live inside research workflows, we are building for you.


Featured image: Glen Michaelsen on Unsplash
