Forecasting & Liquidity Management: From Prediction to Prescription
- Emmanuel de Rességuier

- 6 days ago
AI in Treasury Series — From Fear to Strategic Liquidity Operating System
The Allure of the Forecast
Every treasurer dreams of visibility. Cash coming in, cash going out, liquidity buffers holding steady. Forecasts that are not just precise but dependable.
AI promises exactly that. By crunching payment histories, working capital flows, supplier terms, even macroeconomic indicators, AI claims it can forecast liquidity more accurately than humans ever could.
And yes, the pitch is tempting: better forecasts mean better decisions. Should we pay down debt, roll over short-term funding, or invest excess cash? If we can see further ahead, we can steer better.
But here’s the uncomfortable truth: forecasting is not the hard part. Trusting the forecast is.

The Forecasting Paradox
Treasurers know forecasts are wrong the minute they’re printed. That’s why we spend more time explaining variances than building models.
AI doesn’t magically solve this. It improves accuracy, but not perfection. Worse, AI has its own quirks: hallucinations, unexplained outliers, and results that sometimes can’t be reproduced. For auditors, that’s a nightmare. For regulators, it’s a red flag.
And then there’s the liability problem. Imagine standing before the CFO saying: “Liquidity fell short because the model said we’d be fine.” That’s not a conversation anyone wants to have.
So adoption stalls. Not because AI can’t forecast, but because treasurers don’t trust that they can defend it.
From Prediction to Prescription
The real breakthrough is not just predicting cash positions. It’s linking forecasts to actionable moves.
If AI predicts a liquidity dip in Europe, should we accelerate receivables in Asia?
If surplus cash builds up in the US, should we deploy it into ESG-linked deposits?
If working capital tightening is detected in one business line, can payables be stretched elsewhere?
This is where AI shifts from prediction to prescription. Not just telling you what might happen, but proposing what you might do about it.
But again, prescriptions are only useful if they’re explainable and reversible. Otherwise, you’ve traded variance reports for model excuses.
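The if/then pairings above can be sketched as a simple rule layer sitting on top of the forecast. Everything here is illustrative: the `Forecast` fields, the thresholds, and the proposed actions are placeholders, not a real treasury management system schema. Note that the function only ever *proposes*; execution stays with the treasurer.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    region: str
    horizon_days: int
    net_position: float  # projected net cash, in millions (illustrative unit)

def prescribe(forecast: Forecast, buffer: float = 0.0) -> str:
    """Map a liquidity forecast to a proposed (never auto-executed) action.

    Thresholds and action wording are hypothetical placeholders.
    """
    gap = forecast.net_position - buffer
    if gap < 0:
        # Projected shortfall: propose pulling cash forward.
        return (f"PROPOSE: accelerate receivables to cover "
                f"{-gap:.1f}m shortfall in {forecast.region}")
    if gap > 50:
        # Large projected surplus: propose deploying it.
        return (f"PROPOSE: deploy {gap:.1f}m surplus in "
                f"{forecast.region} into short-term deposits")
    return "PROPOSE: no action; position within tolerance"
```

The point of the sketch is the shape, not the rules: each prescription is a reviewable string tied to a named forecast, which is what makes it explainable and reversible.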
The Guardrails for Trust
How do we make AI forecasting safe enough to use—and credible enough to defend?
1. Grounded Forecasting.
Train AI on treasury’s actual ERP, bank feeds, and historical data. No black-box external sources. Every forecast must cite its drivers.
2. Uncertainty Disclosure.
Forecasts should come with confidence bands, not false precision. Boards prefer “80% probability of X” over “trust me, it’s Y.”
3. Challenger Models.
Run multiple algorithms side by side. If one diverges wildly, that’s a signal to investigate—not to act.
4. Human Override.
The AI proposes. The treasurer decides. Every override is logged.
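"Every override is logged" is the easiest guardrail to make concrete. A minimal sketch of an audit record, assuming an in-memory list stands in for what would really be a tamper-evident store:

```python
import datetime

def log_decision(proposal: str, decision: str, reason: str, log: list) -> None:
    """Record the AI's proposal and the treasurer's decision.

    `log` is an in-memory stand-in for an append-only audit store.
    """
    log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "proposal": proposal,
        "decision": decision,
        "override": decision != "accept",  # any non-acceptance is an override
        "reason": reason,
    })
```

The record captures exactly what an auditor asks for: what the model proposed, what the human decided, when, and why.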
This structure aligns with audit standards and satisfies regulators who fear “AI autopilot finance.”
The Cultural Shift
Here’s the twist: forecasting with AI doesn’t reduce work. It changes it. Instead of reconciling variances, treasury teams will need to curate models, monitor drift, and test scenarios.
That means new roles:
Model Portfolio Managers who track the health of the algorithms.
Liquidity Strategists who interpret AI proposals into real-world moves.
The skillset is shifting from spreadsheet mechanics to financial systems design.
Closing Thought: AI won’t make forecasting perfect
It will make it different. The winners will be those who stop treating forecasts as gospel and start treating them as decision engines with guardrails.
Treasury doesn’t need AI that predicts the future flawlessly. It needs AI that helps you act better when the future refuses to cooperate.
And yes, variance reports may finally become less of a Monday morning ritual. Unless, of course, you enjoy explaining to auditors why the model was “almost right.”