
Streaming

Every provider implements ILanguageModel.StreamAsync(LlmRequest) and emits LlmStreamChunk items.

Use this when you want raw provider streaming without the higher-level agent loop.
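A minimal sketch of consuming the raw stream, assuming StreamAsync returns an IAsyncEnumerable of LlmStreamChunk; the LlmRequest constructor arguments and the chunk's Text property are assumptions, not the library's documented surface:

    // Obtain a provider-specific model instance however your app wires it up.
    ILanguageModel model = /* provider instance */;
    var request = new LlmRequest(/* prompt, options */);

    await foreach (LlmStreamChunk chunk in model.StreamAsync(request))
    {
        // Assumed member: each chunk carries a partial text delta.
        Console.Write(chunk.Text);
    }

This bypasses the agent loop entirely: you get each chunk as the provider emits it, with no tool execution or event translation in between.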

Set UseStreaming = true on Agent when you want streamed runtime events:

    agent.UseStreaming = true;

    await agent.SendAsync("Say hello in 10 words");
    await foreach (var evt in agent.ReceiveAsync())
    {
        if (evt is TextEvent text && text.Partial)
        {
            Console.Write(text.Content);
        }
    }

Streaming runs through the same event model as non-streaming execution. Depending on the provider and the task, you may receive:

  • partial TextEvent
  • ThinkingEvent
  • ToolUseEvent
  • ToolResultEvent
  • ImageEvent
  • ResultEvent
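Because all of these share the same event stream, a single pattern-matching switch can dispatch them; the sketch below assumes the event types derive from a common base returned by ReceiveAsync, and any members beyond TextEvent.Partial and TextEvent.Content are assumptions:

    await foreach (var evt in agent.ReceiveAsync())
    {
        switch (evt)
        {
            case TextEvent text when text.Partial:
                Console.Write(text.Content);        // incremental text deltas
                break;
            case ThinkingEvent:
                Console.Write(".");                 // show progress while the model reasons
                break;
            case ToolUseEvent toolUse:
                Console.WriteLine($"[tool call] {toolUse}");
                break;
            case ResultEvent:
                Console.WriteLine();                // final result; the stream ends here
                break;
        }
    }

Handling only the cases you care about is fine; unmatched events simply fall through the switch.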

Use streaming for:

  • interactive chat
  • long-running analysis
  • UI latency reduction
  • portal or SignalR-backed execution views