langfuse-core-workflow-a


Execute the primary Langfuse workflow: tracing LLM calls and spans. Use this skill when implementing LLM tracing, building traced AI features, or adding observability to existing LLM applications. Trigger phrases include "langfuse tracing", "trace LLM calls", "add langfuse to openai", "langfuse spans", and "track llm requests".

Install

mkdir -p .claude/skills/langfuse-core-workflow-a && curl -L -o skill.zip "https://mcp.directory/api/skills/download/8569" && unzip -o skill.zip -d .claude/skills/langfuse-core-workflow-a && rm skill.zip

Installs to .claude/skills/langfuse-core-workflow-a

About this skill

Langfuse Core Workflow A: Tracing LLM Calls

Overview

End-to-end tracing of LLM calls, chains, and agents. Covers the OpenAI drop-in wrapper, manual tracing with startActiveObservation, RAG pipeline instrumentation, streaming response tracking, and LangChain integration.

Prerequisites

  • Completed langfuse-install-auth setup
  • OpenAI SDK installed (npm install openai)
  • For v4+: @langfuse/openai, @langfuse/tracing, @langfuse/otel, @opentelemetry/sdk-node (OTel setup sketch below)
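
With the v4 SDK, spans are exported through OpenTelemetry, so the OTel NodeSDK must be started with Langfuse's span processor before any traced code runs. A minimal setup sketch, assuming credentials come from the standard LANGFUSE_* environment variables configured in langfuse-install-auth:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { LangfuseSpanProcessor } from "@langfuse/otel";

// Exported so short-lived processes can flush it manually (see Step 1)
export const langfuseSpanProcessor = new LangfuseSpanProcessor();

const sdk = new NodeSDK({
  spanProcessors: [langfuseSpanProcessor],
});

// Start OTel once, at process startup, before any traced code executes
sdk.start();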

Instructions

Step 1: OpenAI Drop-In Wrapper (Zero-Code Tracing)

import OpenAI from "openai";
import { observeOpenAI } from "@langfuse/openai";

// Wrap the OpenAI client -- all calls are now traced automatically
const openai = observeOpenAI(new OpenAI());

// Every call captures: model, input, output, tokens, latency, cost
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is Langfuse?" },
  ],
});

// Add metadata to traces
const res = await observeOpenAI(new OpenAI(), {
  generationName: "product-description",
  generationMetadata: { feature: "onboarding" },
  sessionId: "session-abc",
  userId: "user-123",
  tags: ["production", "onboarding"],
}).chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Describe this product" }],
});
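
Note for short-lived processes (scripts, serverless functions): spans are exported in the background, so flush before the process exits. A sketch, assuming the langfuseSpanProcessor exported from the setup file in Prerequisites (the "./instrumentation" path is illustrative):

import { langfuseSpanProcessor } from "./instrumentation";

// Ensure buffered observations reach Langfuse before the process exits or freezes
await langfuseSpanProcessor.forceFlush();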

Step 2: Manual Tracing -- RAG Pipeline (v4+ SDK)

import { startActiveObservation, updateActiveObservation } from "@langfuse/tracing";

async function ragPipeline(query: string) {
  return await startActiveObservation("rag-pipeline", async () => {
    updateActiveObservation({ input: { query }, metadata: { pipeline: "rag-v2" } });

    // Span: Query embedding
    const embedding = await startActiveObservation("embed-query", async () => {
      updateActiveObservation({ input: { text: query } });
      const vector = await embedText(query);
      updateActiveObservation({
        output: { dimensions: vector.length },
        metadata: { model: "text-embedding-3-small" },
      });
      return vector;
    });

    // Span: Vector search
    const documents = await startActiveObservation("vector-search", async () => {
      updateActiveObservation({ input: { dimensions: embedding.length } });
      const docs = await searchVectorDB(embedding);
      updateActiveObservation({
        output: { documentCount: docs.length, topScore: docs[0]?.score },
      });
      return docs;
    });

    // Generation: LLM call with context
    const answer = await startActiveObservation(
      "generate-answer",
      async (generation) => {
        generation.update({
          model: "gpt-4o",
          input: { query, context: documents.map((d) => d.content) },
        });

        const result = await generateAnswer(query, documents);

        generation.update({
          output: result.content,
          // v4 reports token counts via usageDetails (v3 used `usage`)
          usageDetails: {
            input: result.usage.prompt_tokens,
            output: result.usage.completion_tokens,
          },
        });
        return result.content;
      },
      // asType is passed as the third (options) argument
      { asType: "generation" }
    );

    updateActiveObservation({ output: { answer } });
    return answer;
  });
}
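
A usage sketch -- note that embedText, searchVectorDB, and generateAnswer above are placeholders for your own embedding, retrieval, and LLM-call helpers:

const answer = await ragPipeline("What is Langfuse?");
console.log(answer);
// The three child observations nest under "rag-pipeline" automatically
// because startActiveObservation propagates the OTel context.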

Step 3: Manual Tracing -- RAG Pipeline (v3 Legacy)

import { Langfuse } from "langfuse";

const langfuse = new Langfuse();

async function ragPipeline(query: string) {
  const trace = langfuse.trace({
    name: "rag-pipeline",
    input: { query },
    metadata: { pipeline: "rag-v1" },
  });

  const embedSpan = trace.span({ name: "embed-query", input: { text: query } });
  const embedding = await embedText(query);
  embedSpan.end({ output: { dimensions: embedding.length } });

  const searchSpan = trace.span({ name: "vector-search" });
  const documents = await searchVectorDB(embedding);
  searchSpan.end({ output: { count: documents.length, topScore: documents[0]?.score } });

  const generation = trace.generation({
    name: "generate-answer",
    model: "gpt-4o",
    modelParameters: { temperature: 0.7, maxTokens: 500 },
    input: { query, context: documents.map((d) => d.content) },
  });

  const answer = await generateAnswer(query, documents);

  generation.end({
    output: answer.content,
    usage: {
      promptTokens: answer.usage.prompt_tokens,
      completionTokens: answer.usage.completion_tokens,
      totalTokens: answer.usage.total_tokens,
    },
  });

  trace.update({ output: { answer: answer.content } });
  await langfuse.flushAsync();
  return answer.content;
}

Step 4: Streaming Response Tracking

import OpenAI from "openai";
import { observeOpenAI } from "@langfuse/openai";

// The wrapper handles streaming automatically
const openai = observeOpenAI(new OpenAI());

const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Tell me a story" }],
  stream: true,
  stream_options: { include_usage: true }, // Required for token tracking
});

let fullContent = "";
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  fullContent += content;
  process.stdout.write(content);
}
// Token usage and latency are captured automatically by the wrapper

Step 5: Anthropic Claude Tracing (Manual)

import Anthropic from "@anthropic-ai/sdk";
import { startActiveObservation } from "@langfuse/tracing";

const anthropic = new Anthropic();

async function callClaude(prompt: string) {
  return await startActiveObservation(
    "claude-call",
    async (generation) => {
      generation.update({
        model: "claude-sonnet-4-20250514",
        input: [{ role: "user", content: prompt }],
      });

      const response = await anthropic.messages.create({
        model: "claude-sonnet-4-20250514",
        max_tokens: 1024,
        messages: [{ role: "user", content: prompt }],
      });

      // The first content block may be a tool-use block, so guard for text
      const block = response.content[0];
      const text = block.type === "text" ? block.text : "";

      generation.update({
        output: text,
        usageDetails: {
          input: response.usage.input_tokens,
          output: response.usage.output_tokens,
        },
      });

      return text;
    },
    { asType: "generation" }
  );
}

Step 6: LangChain Integration (Python)

from langfuse.callback import CallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

langfuse_handler = CallbackHandler()

llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
])

chain = prompt | llm

# All LangChain operations are automatically traced
result = chain.invoke(
    {"input": "What is Langfuse?"},
    config={"callbacks": [langfuse_handler]},
)
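
Note: the import path above is for the Langfuse Python SDK v2. On SDK v3 the handler moved to from langfuse.langchain import CallbackHandler; the invocation via config={"callbacks": [...]} is otherwise unchanged.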

Error Handling

  • Missing generations -- cause: the OpenAI wrapper was not applied. Fix: wrap the client with observeOpenAI() from @langfuse/openai.
  • Orphaned spans -- cause: a span was never ended. Fix: prefer startActiveObservation (ends automatically) or call .end() in a finally block.
  • No token usage on streams -- cause: usage was not requested on the stream. Fix: add stream_options: { include_usage: true }.
  • Flat trace (no nesting) -- cause: missing OTel context. Fix: ensure the NodeSDK is started with a LangfuseSpanProcessor (see Prerequisites).
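
When a callback wrapper is awkward (for example, a span that starts and ends in different places), the v4 SDK's startObservation returns a handle you end yourself; wrap the work in try/finally so the span ends even on error. A sketch -- runJob is a hypothetical stand-in for your own work function:

import { startObservation } from "@langfuse/tracing";

async function processJob(jobId: string) {
  const span = startObservation("process-job", { input: { jobId } });
  try {
    const result = await runJob(jobId); // hypothetical worker
    span.update({ output: { status: result.status } });
    return result;
  } finally {
    span.end(); // always end, or the span shows up as orphaned
  }
}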

Next Steps

For evaluation and scoring workflows, see langfuse-core-workflow-b.
