We implemented an AI forecasting tool six months ago and the results are more complicated than the vendor promised
I manage supply chain operations for a mid-size manufacturer. Six months ago we implemented an AI demand forecasting tool that was supposed to reduce our inventory carrying costs and improve our fill rates. The vendor showed us case studies with impressive numbers. We signed a significant contract.
Six months in, the picture is mixed. Fill rates have improved modestly, but by less than projected. Inventory carrying costs have not moved meaningfully. The tool is genuinely better than our previous spreadsheet-based forecasting in some categories and inexplicably worse in others, specifically seasonal products whose historical demand patterns changed after COVID.
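To make the seasonal failure mode concrete, here is a toy sketch (all numbers synthetic, nothing from our actual data) of why any model anchored to pre-shift history does badly once the seasonal pattern moves: a simple month-by-month average forecast fit on three "pre-COVID" years is exact on a repeat of the old pattern, but far off once the peak shifts to a different part of the year.

```python
# Hypothetical monthly demand: three pre-shift years with a summer peak,
# then one post-shift year where the peak moves to winter. Synthetic data.
pre_shift = [100, 100, 110, 120, 150, 200,
             220, 210, 150, 120, 110, 100] * 3
post_shift = [200, 210, 180, 140, 110, 100,
              100, 100, 120, 150, 180, 200]

def seasonal_average_forecast(history, months=12):
    """Forecast each month as the average of that month across past years."""
    years = len(history) // months
    return [sum(history[y * months + m] for y in range(years)) / years
            for m in range(months)]

def mean_abs_pct_error(forecast, actual):
    """Mean absolute percentage error between forecast and actuals."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

forecast = seasonal_average_forecast(pre_shift)

# On a repeat of the old pattern the toy forecast is perfect (MAPE = 0,
# because the synthetic years are identical); on the shifted year the
# error jumps to roughly 50%.
print(mean_abs_pct_error(forecast, pre_shift[:12]))   # 0.0
print(mean_abs_pct_error(forecast, post_shift))       # ~0.55
```

The vendor's model is obviously more sophisticated than a seasonal average, but the same logic applies: more pre-shift data does not help if the pattern itself changed, which is roughly my team's argument.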
The vendor says we need more data. My team says the model's assumptions do not fit our product mix. I suspect both are partly right. What I want to know is whether this kind of mixed outcome is typical for AI implementations in operational settings, and whether the organisations that report success got there through the initial deployment or through significant iteration afterwards.