Static Cling

In order to reach gigaton decarbonization goals, we need to make better decisions around product development, manufacturing, shipping, and disposal. Better decisions come from better pattern matching of objectives, rewards, observations, and actions against the variability introduced into the system, such as demand changes and supply chain disruptions. Decarbonization would benefit from decision agents that keep learning in dynamic environments, not from static rules. Static cling is not our friend.
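To make that framing concrete, here is a minimal, hypothetical sketch of a single decision step expressed as observation, action, and reward. None of this comes from the original project; the function names, inventory figure, and emissions weighting are illustrative assumptions only.

```python
import random

# Illustrative only: one decision step framed as observation -> action -> reward,
# where the observation captures variability such as fluctuating demand.

def observe(demand_history):
    """Build a toy observation from recent demand (names are hypothetical)."""
    return {"recent_demand": demand_history[-3:], "inventory": 120}

def choose_action(observation):
    """Placeholder policy: order roughly enough to cover average recent demand."""
    avg = sum(observation["recent_demand"]) / len(observation["recent_demand"])
    return max(0, int(avg) - observation["inventory"] // 10)

def reward(units_shipped, emissions_kg):
    """Toy reward that trades off fulfillment against carbon emitted."""
    return units_shipped - 0.05 * emissions_kg

demand = [random.randint(80, 140) for _ in range(10)]
obs = observe(demand)
action = choose_action(obs)
print("order quantity:", action, "reward:", reward(action, emissions_kg=2.0 * action))
```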

Many supply chain, logistics, and manufacturing decisions today are not dynamic; they still rely on static rules deemed best practices and implemented in enterprise resource planning (ERP) systems. Some refer to these rules as heuristics: mental shortcuts that enable people to make judgments efficiently and shorten decision-making time. Four of the most used decision heuristics are shortest queue, shortest path, first in first out (FIFO), and last in first out (LIFO), sketched below.
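As a rough illustration, each of these four heuristics fits in a few lines of code. The queue, route, and backlog data below are made up for the example.

```python
from collections import deque

def shortest_queue(queues):
    """Send the next job to the workstation with the fewest jobs waiting."""
    return min(queues, key=lambda name: len(queues[name]))

def shortest_path(routes):
    """Pick the route with the smallest total distance."""
    return min(routes, key=lambda name: sum(routes[name]))

def first_in_first_out(backlog):
    """Process the oldest order first."""
    return backlog.popleft()

def last_in_first_out(backlog):
    """Process the newest order first."""
    return backlog.pop()

queues = {"line_a": [1, 2], "line_b": [1]}
routes = {"north": [12, 30], "south": [10, 25]}
backlog = deque(["order-1", "order-2", "order-3"])

print(shortest_queue(queues), shortest_path(routes),
      first_in_first_out(backlog), last_in_first_out(backlog))
```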

Image: Different decision making options

Since many ERP implementations are tied to large system integrator projects, many companies simply set their heuristics and forget them. In one project discovery session with a Fortune 1000 company, our system integrator partner Accenture confirmed that the client had not revised their ERP static rules in 18 to 24 months. Those static rules controlled billions of dollars of product flow a year.

Image: Product delivery example using Reinforcement Learning (RL) decision agent

We proved that artificial intelligence (AI)-trained decision agents could accommodate demand variability and increase fulfillment rates at a lower production cost. The hurdle was implementing AI-trained decision agents into the legacy workflows controlled by the existing manufacturing execution system (MES). Optimizers are used throughout these workflows to calculate everything from order quantities to manufacturing batch sizes to shipping routes. Optimizers work well for deterministic problems (i.e., the decision parameters are known, think Goal Seek in Excel), but not so well for stochastic problems (i.e., the decision parameters vary, so the optimization has to work over a probability distribution).
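A hedged sketch of that difference: the deterministic version below picks a batch size against a single known demand figure (Goal Seek style), while the stochastic version minimizes the average cost over sampled demand scenarios. The holding and shortage costs are assumed values for illustration, not figures from the project.

```python
import random

HOLDING_COST = 2.0    # assumed cost per unsold unit
SHORTAGE_COST = 10.0  # assumed cost per unit of unmet demand

def cost(batch_size, demand):
    over = max(0, batch_size - demand)    # units left over
    short = max(0, demand - batch_size)   # demand left unmet
    return HOLDING_COST * over + SHORTAGE_COST * short

def deterministic_batch(known_demand):
    """With demand known exactly, the cost-minimizing batch just matches it."""
    return min(range(0, 300), key=lambda b: cost(b, known_demand))

def stochastic_batch(demand_samples):
    """With variable demand, minimize the average cost over sampled scenarios."""
    return min(range(0, 300),
               key=lambda b: sum(cost(b, d) for d in demand_samples) / len(demand_samples))

random.seed(0)
samples = [random.gauss(100, 25) for _ in range(1000)]
print("deterministic:", deterministic_batch(100))  # matches the point forecast
print("stochastic:", stochastic_batch(samples))    # hedges above it, since shortages cost more
```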

AI is slowly being integrated into existing MES-driven workflows because computers can pattern match more efficiently against company goals, product requirements, and timelines. Running AI-trained decision agents in parallel with existing workflows continues to be validated as a path to better outcomes.
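One hypothetical way to run an agent in parallel is shadow mode: the candidate agent sees the same orders as the incumbent static rule, only the incumbent's decision is executed, and both outcomes are logged for comparison. The policies and order stream below are invented for illustration; a real deployment would read orders from the MES.

```python
import random

def static_rule(order):
    """Incumbent heuristic: always produce a fixed batch size."""
    return 100

def candidate_agent(order):
    """Stand-in for a trained agent: scale the batch to the forecast demand."""
    return int(order["forecast_demand"] * 1.1)

def fulfillment(batch, order):
    """Fraction of actual demand covered by the chosen batch."""
    return min(batch, order["actual_demand"]) / order["actual_demand"]

random.seed(1)
orders = [{"forecast_demand": random.randint(70, 130),
           "actual_demand": random.randint(60, 140)} for _ in range(500)]

incumbent = sum(fulfillment(static_rule(o), o) for o in orders) / len(orders)
shadow = sum(fulfillment(candidate_agent(o), o) for o in orders) / len(orders)
print(f"incumbent fulfillment: {incumbent:.2%}, shadow agent: {shadow:.2%}")
```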

Image: Parallel AI-trained decision agents scenario using reinforcement learning

Perhaps a DeepMind for sustainability decisions will be mainstream one day to liberate decisions from static cling.
