In September, the board added "AI ROI" as a standing item on the agenda. The implicit question was threefold: what is our plan, is it enough, and can we measure whether it is working? My team spent one weekend writing down what we believe, then the next three months translating it into something we could ship. This is the shortest version of both.
Board members asking about AI are not usually asking about models. They are asking something that sounds like "are we keeping up" and something that sounds like "can you show me returns". Those are two different questions and they deserve two different answers.
Our position at Accrual is that we have been doing AI-adjacent work since 2016, when we stood up the automation COE. Our 420 bots are, in a precise sense, automation that benefits from AI where AI is useful. Our code-generating agent for bot authoring is clearly AI. Our hybrid integration pattern uses machine learning to decide when to fall back from API to UI. None of that counted as AI when we built it. All of it counts as AI now.
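To make the hybrid integration pattern concrete, here is a minimal sketch of what "machine learning decides when to fall back from API to UI" could look like. The signal names, weights, and threshold are illustrative assumptions, not Accrual's actual model, which would be a trained classifier rather than hand-tuned weights.

```python
# Hypothetical sketch of the hybrid fallback pattern: a learned risk score
# decides whether a bot calls the API or falls back to driving the UI.
# All names and weights here are illustrative, not production code.

from dataclasses import dataclass


@dataclass
class EndpointHealth:
    """Recent observations about an API endpoint."""
    error_rate: float      # fraction of calls failing in the last window
    p95_latency_ms: float  # 95th percentile latency
    schema_drift: bool     # response no longer matches the expected schema


def fallback_score(h: EndpointHealth) -> float:
    """Toy stand-in for the model: combine signals into a risk score in [0, 1]."""
    score = 0.6 * h.error_rate + 0.4 * min(h.p95_latency_ms / 5000.0, 1.0)
    if h.schema_drift:
        score = max(score, 0.9)  # drift almost always forces the fallback
    return min(score, 1.0)


def choose_channel(h: EndpointHealth, threshold: float = 0.5) -> str:
    """Prefer the API; fall back to UI automation when risk is high."""
    return "ui" if fallback_score(h) >= threshold else "api"
```

A healthy endpoint (low error rate, fast responses, stable schema) stays on the API; a degraded or drifting one routes the bot to the UI. The point of the pattern is that the decision is observed and learned, not hard-coded per bot.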
The first thing we told the board: "keeping up" is not a good framing. We are not behind. We have been incrementally investing for nearly a decade. The relevant question is what we do next.
We organized the roadmap into three horizons, because engineers love three horizons and boards love bullet points and the format serves both.
The coding agent already helps our COE author bots. We are extending it to help our platform engineers author orchestration flows. Same pattern. The agent drafts, the engineer reviews, the pipeline gates. This is the lowest-risk, highest-near-term-return investment on the list. It is also the least novel, which matters because boards sometimes conflate novelty with value. This is not a novel investment. It is a compounding one.
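The draft, review, gate pattern can be sketched as a small state machine. The stage names and checks below are illustrative assumptions, not our actual pipeline; the invariant they encode is the one that matters: nothing ships without both a human review and a passing gate.

```python
# Illustrative sketch of the draft -> review -> gate -> ship pattern for
# agent-authored artifacts. Stages and checks are hypothetical.

from enum import Enum


class Stage(Enum):
    DRAFTED = "drafted"    # the agent produced the artifact
    REVIEWED = "reviewed"  # a human engineer approved it
    GATED = "gated"        # automated pipeline checks passed
    SHIPPED = "shipped"


def advance(stage: Stage, human_approved: bool, checks_passed: bool) -> Stage:
    """Advance one step; stay put if the required approval or check is missing."""
    if stage is Stage.DRAFTED and human_approved:
        return Stage.REVIEWED
    if stage is Stage.REVIEWED and checks_passed:
        return Stage.GATED
    if stage is Stage.GATED:
        return Stage.SHIPPED
    return stage
```

The same function works whether the artifact is a bot or an orchestration flow, which is the sense in which the investment compounds rather than multiplies surface area.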
Our commercial lending case study showed that there are whole categories of work that were not automated because they did not fit a linear diagram. Case management plus structured AI-assisted decision support changes what we can cover. We are applying the same pattern to M&A advisory, internal investigations, complex claims, and a subset of relationship banking workflows that today are almost entirely manual.
This is the hardest horizon and the one the board cares about most. We are experimenting with AI-assisted customer interfaces in two specific places: retail self-service for complex questions, and commercial relationship banker co-pilots. Neither is in production at scale. Both are running with a small set of customers and bankers who are willing to give us honest feedback. We are measuring whether they reduce time-to-resolution, improve customer satisfaction, and do not erode the trust that has made Accrual what it is for forty years.
The board asked for a measurement framework. We gave them four numbers we commit to reporting quarterly.
Labour hours returned. How much human effort our automation and AI investments return to the bank each quarter. This is our north star metric and has been since 2016. It is up and to the right and we want to keep it that way.
Cycle time on the five most important processes. Customer onboarding. KYC refresh. Commercial lending. Claims. HR onboarding. If AI is doing anything useful, these numbers should improve. If they do not, we are not getting the return.
Platform uptime. Because the best AI in the world does not matter if the platform it runs on is flaky. This is a defensive metric that catches when we have prioritized novelty over reliability.
Customer trust, measured by net promoter score. Because if we reshape the customer experience and customers hate it, we are making the bank worse. This metric is lagging and noisy. It is still the right metric.
If we invest in AI for a year and our NPS goes down, we were wrong. No clever second-order argument makes that okay. We would rather know.
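The four numbers above can be sketched as a simple quarterly scorecard with a quarter-over-quarter check. The field names and comparison logic are illustrative assumptions, not the bank's actual reporting schema.

```python
# Minimal sketch of the quarterly scorecard built from the four metrics
# described above. Fields and thresholds are illustrative, not real.

from dataclasses import dataclass


@dataclass
class QuarterlyScorecard:
    labour_hours_returned: float       # north-star metric, hours per quarter
    cycle_time_days: dict[str, float]  # days per key process, e.g. "kyc_refresh"
    platform_uptime_pct: float         # defensive metric, e.g. 99.95
    nps: float                         # lagging and noisy, still the right metric


def flags(current: QuarterlyScorecard, previous: QuarterlyScorecard) -> list[str]:
    """Return the metrics that moved the wrong way quarter over quarter."""
    out = []
    if current.labour_hours_returned < previous.labour_hours_returned:
        out.append("labour_hours_returned")
    for process, days in current.cycle_time_days.items():
        if days > previous.cycle_time_days.get(process, days):
            out.append(f"cycle_time:{process}")
    if current.platform_uptime_pct < previous.platform_uptime_pct:
        out.append("platform_uptime")
    if current.nps < previous.nps:
        out.append("nps")
    return out
```

The useful property is asymmetry: a quarter can look great on labour hours and still raise a flag on NPS, which is exactly the failure mode the fourth metric exists to catch.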
The honest answer on ROI is that our 2016-to-now investment has returned well over a hundred million Singapore dollars in labour hours reclaimed, error reduction, and cycle-time improvements. The board knew this because our CFO puts it in the deck every quarter. What they wanted to know about was the incremental 2026 investment.
Our 2026 investment is roughly thirty-five million Singapore dollars, split across infrastructure, licensing, and a small set of hires. We modeled the expected return conservatively at 1.8x over three years and optimistically at 3.2x. We will report actuals quarterly. We will miss. We will adjust.
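For concreteness, the arithmetic behind those multiples: a S$35M outlay at 1.8x and 3.2x implies roughly S$63M and S$112M in total three-year returns. The outlay and multiples come from the figures above; everything else in this sketch is a simplifying assumption for illustration.

```python
# Quick arithmetic on the 2026 investment figures quoted above.
# The S$35M outlay and the 1.8x / 3.2x multiples are from the text.

INVESTMENT_SGD_M = 35.0  # 2026 investment, millions of Singapore dollars


def three_year_return(multiple: float, investment: float = INVESTMENT_SGD_M) -> float:
    """Total expected return over three years at a given multiple, in S$M."""
    return investment * multiple


conservative = three_year_return(1.8)  # about S$63M total
optimistic = three_year_return(3.2)    # about S$112M total
net_conservative = conservative - INVESTMENT_SGD_M  # about S$28M over the outlay
```

The net figure is the one worth watching: at the conservative multiple, the investment clears its cost with room to spare, but not by so much that a bad year of actuals could hide inside it.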
The board accepted this. They accepted it in part because we have done this before and our previous projections have tracked reasonably well. This is an unfair advantage that institutions new to AI do not have. We have a track record. We have a measurement culture. We have a COE with lower turnover than the bank average. None of that is AI. All of it matters for doing AI well.
Three things that keep me up.
Regulatory movement. The EU AI Act is settled. Singapore's Model AI Governance Framework is evolving. The US regulatory landscape is still unclear. We are a global bank. We have to comply with all of it. We are spending real time on responsible AI principles, governance, and auditability. This is not marketing. If we get this wrong we face structural consequences.
Vendor concentration. Our orchestration platform is central to everything we do. Our AI tooling, including our coding agent, mostly sits above it. If the vendor changes direction in a way that diverges from where we want to go, we have to know what our options are. We run an annual exercise to refresh that answer. Last exercise: options exist, the current vendor remains our best pick, we stay.
The question the board asks next year that we have not anticipated. They will ask one. It will be a good question. We are already trying to figure out what it is so we can have an answer ready.