Static control policies limited efficiency gains
Rule-based energy optimization struggled to adapt to dynamic occupancy, environmental conditions, and system interactions across complex building environments.
We reframed energy optimization as a learning system
Energy control was redesigned as a continuous learning problem, allowing policies to improve through interaction rather than manual tuning.
Multi-agent reinforcement learning for distributed control
Independent agents were trained with reinforcement learning over graph-based state representations, in which each agent observes its own subsystem together with its neighbours in the building's connectivity graph. Simulation environments modeled coordination across these interconnected systems, allowing control strategies to scale to large buildings before deployment.
32% reduction in energy consumption
Adaptive control delivered sustained efficiency improvements across real-world building environments.