Demonstrating AI hardware performance felt abstract
Customers needed a tangible way to understand the speed and flexibility of reconfigurable dataflow units when running LLM workloads.
We built an interactive AI starter experience
We delivered an open, configurable application that lets users explore LLM performance across different parameters in real time.
Public-facing AI demo infrastructure
We implemented and published an AI Starter Kit in a public repository; it integrates with hosted LLMs and enables interactive configuration through a lightweight application layer.
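A lightweight application layer of this kind can be sketched as a thin wrapper that assembles user-chosen parameters into a request for a hosted LLM. The endpoint URL, parameter names, and function names below are illustrative assumptions, not the kit's actual API:

```python
import json
from urllib import request

# Hypothetical endpoint -- the real Starter Kit's API surface is not
# specified in this summary.
ENDPOINT = "https://example.com/v1/completions"

def build_payload(prompt: str, max_tokens: int = 256,
                  temperature: float = 0.7,
                  model: str = "hosted-llm") -> dict:
    """Assemble a request body from the user-tunable demo parameters."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def query_llm(payload: dict) -> dict:
    """POST the payload to the hosted LLM and return the parsed JSON reply."""
    req = request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# The interactive UI would call build_payload with slider/form values,
# then pass the result to query_llm and render the response.
payload = build_payload("Summarize dataflow architectures.", max_tokens=128)
```

Keeping payload construction separate from transport makes the parameter surface easy to expose in a UI and easy to test without network access.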
High engagement from customers and internal teams
The demo became a focal point for customer conversations and a steady source of product feedback.