Install

dotnet add package Veval.Sdk

Get your API key

Go to dashboard.veval.dev → Settings → API Keys.

Part 1: Minimal trace (no LLM required)

The simplest possible integration: wrap any async operation and it appears in your dashboard.

using Veval.Sdk;

var veval = new VevalSdk(new VevalOptions
{
    ApiKey   = "YOUR_API_KEY", // prefer reading this from an environment variable in real code
    Endpoint = "https://api.veval.dev",
});

var result = await veval.RunAsync("my-agent", async ctx =>
{
    var output = await ctx.TrackStepAsync("summarize", input: "hello world", async () =>
    {
        // replace with your real LLM call
        return "Hello, world!";
    });

    return output;
});

veval.Dispose();

That’s it. Open your dashboard and you’ll see a trace with one step.

For real agents, pass the execution context into your service class so every LLM call is tracked automatically.

// Your agent service accepts VevalExecutionContext
public class MyAgentService(IVevalSdk veval, AnthropicClient claude)
{
    public async Task<string> ExecuteAsync(VevalExecutionContext ctx)
    {
        var answer = await ctx.TrackStepAsync("answer", ctx.Input, async handle =>
        {
            var response = await claude.Messages.CreateAsync(...);

            // attach LLM metadata to the step
            handle.SetMeta("model",      response.Model);
            handle.SetMeta("tokens_in",  response.Usage.InputTokens);
            handle.SetMeta("tokens_out", response.Usage.OutputTokens);
            handle.SetMeta("cost_usd",   ComputeCost(response.Usage));

            return response.Content[0].Text;
        });

        return answer;
    }
}
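
The snippet above calls a ComputeCost helper that you supply yourself. A minimal sketch, assuming Claude-style per-token billing; the per-million-token rates below are placeholders, not real prices, and the Usage type is whatever your Anthropic client returns:

// Hypothetical helper: converts token usage into an estimated dollar cost.
// Substitute your model's actual pricing for the placeholder rates.
private static decimal ComputeCost(Usage usage)
{
    const decimal InputPerMillion  = 3.00m;  // $ per 1M input tokens (placeholder)
    const decimal OutputPerMillion = 15.00m; // $ per 1M output tokens (placeholder)

    return usage.InputTokens  * InputPerMillion  / 1_000_000m
         + usage.OutputTokens * OutputPerMillion / 1_000_000m;
}

Recording cost as step metadata like this lets the dashboard aggregate spend per agent run.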

// Wire it up
var veval   = new VevalSdk(new VevalOptions { ApiKey = "YOUR_API_KEY" });
var service = new MyAgentService(veval, claude);

var result = await veval.RunAsync("my-agent", service.ExecuteAsync, input: userMessage);

Injecting IVevalSdk (the interface) lets you swap in VevalTestSdk in tests: no live LLM calls, no API cost. See Replay.
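
A sketch of what such a test might look like. This assumes VevalTestSdk implements IVevalSdk with in-memory tracing; FakeAnthropicClient is a hypothetical stub you would write yourself, and the xUnit usage is illustrative:

// Illustrative test sketch: VevalTestSdk stands in for the live SDK,
// so no traces are sent and no real LLM is called.
public class MyAgentServiceTests
{
    [Fact]
    public async Task ExecuteAsync_ReturnsStubbedAnswer()
    {
        IVevalSdk veval = new VevalTestSdk();
        var service = new MyAgentService(veval, new FakeAnthropicClient("Hello, world!"));

        var result = await veval.RunAsync("my-agent", service.ExecuteAsync, input: "hi");

        Assert.Equal("Hello, world!", result);
    }
}

Because MyAgentService depends only on the interface, nothing in the service changes between production and test wiring.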