llm-d in Action: Scaling Your Inference Performance

An overview of llm-d, a Kubernetes-native distributed inference serving stack that addresses LLM deployment challenges.

May 25, 2025