reference
An LLM trace is a concept in LLM observability: a record of the sequence of events and decisions involved in a single AI task, including the original user prompt, system or context prompts, the raw model output, and, when tools or agent frameworks are used, the step-by-step reasoning in between.
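Such a record can be sketched as a simple data structure: an ordered list of typed events attached to one task. This is a minimal illustration, not the API of any specific observability tool; all names (`LLMTrace`, `TraceEvent`, the event kinds) are illustrative.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class TraceEvent:
    # Event kinds are illustrative: "system_prompt", "user_prompt",
    # "tool_call", "model_output", etc.
    kind: str
    payload: Any

@dataclass
class LLMTrace:
    # One trace corresponds to one AI task.
    task_id: str
    events: list[TraceEvent] = field(default_factory=list)

    def record(self, kind: str, payload: Any) -> None:
        # Events are appended in the order they happen, preserving
        # the step-by-step sequence of the task.
        self.events.append(TraceEvent(kind, payload))

trace = LLMTrace(task_id="task-001")
trace.record("system_prompt", "You are a helpful assistant.")
trace.record("user_prompt", "What is 2 + 2?")
trace.record("tool_call", {"tool": "calculator", "input": "2 + 2"})
trace.record("model_output", "4")

kinds = [e.kind for e in trace.events]
```

Replaying `kinds` yields the ordered sequence `["system_prompt", "user_prompt", "tool_call", "model_output"]`, which is exactly what an observability backend inspects when debugging a task.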
Authors
Sources
- LLM Observability: How to Monitor AI When It Thinks in Tokens | TTMS (ttms.com)
Referenced by nodes (3)
- reasoning concept
- LLM observability concept
- tools concept