
feat(examples): migrate AI examples to OpenTelemetry instrumentation #482

Open
richardsolomou wants to merge 7 commits into master from feat/otel-examples

Conversation


@richardsolomou richardsolomou commented Apr 7, 2026

Problem

Our AI examples use PostHog's direct SDK wrappers (posthog.ai.openai, posthog.ai.anthropic, etc.) for tracking LLM calls. We want to silently deprecate these in favor of standard OpenTelemetry instrumentation, which is more portable and follows industry conventions.

Changes

Migrates 28 Python AI examples from PostHog direct wrappers to OpenTelemetry auto-instrumentation:

  • 15 OpenAI-compatible providers (Groq, DeepSeek, Mistral, xAI, Together AI, Ollama, Cohere, Hugging Face, Perplexity, Cerebras, Fireworks AI, OpenRouter, Helicone, Vercel AI Gateway, Portkey) → opentelemetry-instrumentation-openai-v2
  • OpenAI (all 6 files), Azure OpenAI, Instructor, Mirascope, Smolagents, AutoGen, Semantic Kernel → opentelemetry-instrumentation-openai-v2
  • Anthropic (chat, streaming, extended thinking) → opentelemetry-instrumentation-anthropic
  • Gemini (chat, streaming, image generation) → opentelemetry-instrumentation-google-generativeai
  • LangChain, LangGraph → opentelemetry-instrumentation-langchain
  • LlamaIndex → opentelemetry-instrumentation-llamaindex

Kept as-is: CrewAI (manages its own TracerProvider, uses LiteLLM callbacks), AWS Bedrock/Pydantic AI (already OTel), LiteLLM/DSPy (LiteLLM callbacks), OpenAI Agents/Claude Agent SDK (native PostHog integration).

Key implementation details:

  • Uses SimpleSpanProcessor instead of BatchSpanProcessor so spans export immediately without needing provider.shutdown()
  • Imports after Instrumentor().instrument() are marked with # noqa: E402
  • Provider SDK imports are intentionally placed after instrumentation setup

Switch from PostHog direct SDK wrappers to OpenTelemetry auto-instrumentation
for all AI provider examples where OTel instrumentations are available.

Uses opentelemetry-instrumentation-openai-v2 for OpenAI-compatible providers,
opentelemetry-instrumentation-anthropic for Anthropic, opentelemetry-instrumentation-google-generativeai
for Gemini, opentelemetry-instrumentation-langchain for LangChain/LangGraph,
opentelemetry-instrumentation-llamaindex for LlamaIndex, and opentelemetry-instrumentation-crewai for CrewAI.

github-actions bot commented Apr 7, 2026

posthog-python Compliance Report

Date: 2026-04-08 10:06:46 UTC
Duration: 197ms

✅ All Tests Passed!

0/0 tests passed



socket-security bot commented Apr 7, 2026

Switch from BatchSpanProcessor to SimpleSpanProcessor so spans are
exported immediately. This removes the need for provider.shutdown()
which could throw export errors on exit.
CrewAI manages its own TracerProvider internally, which conflicts
with setting one externally. LiteLLM callbacks remain the correct
integration approach for CrewAI.

greptile-apps bot commented Apr 8, 2026

Vulnerabilities

No security concerns identified. API keys are sourced exclusively from environment variables (os.environ[...]) and are not hardcoded. The OTLP exporter sends keys in an HTTP Authorization header over HTTPS to the PostHog endpoint, which is the expected pattern.

Prompt To Fix All With AI
This is a comment left during a code review.
Path: examples/example-ai-azure-openai/chat.py
Line: 28

Comment:
**Unrecognized Azure model name**

`grok-4-20-non-reasoning` does not correspond to any model ID in Azure AI Foundry's catalog. The available xAI Grok names there are `grok-4`, `grok-4-fast-non-reasoning`, `grok-4-fast-reasoning`, `grok-4.1-fast-non-reasoning`, etc. This looks like a typo or an accidental change — using this value in an example will mislead anyone who tries to replicate it and will fail against any standard deployment. This change also appears unrelated to the OTel migration that is the purpose of this PR.

```suggestion
    model="gpt-4o",
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: examples/example-ai-openai/transcription.py
Line: 31-34

Comment:
**Inconsistent `provider.shutdown()` on early exit path**

This `provider.shutdown()` before `sys.exit(0)` is the only remaining shutdown call across all 28 migrated examples. The PR description explicitly states that `SimpleSpanProcessor` is used so spans export immediately without needing `provider.shutdown()`. Remove it for consistency.

```suggestion
if not os.path.exists(audio_path):
    print(f"Skipping: audio file not found at '{audio_path}'")
    print("Set AUDIO_PATH to a valid audio file (mp3, wav, m4a, etc.)")
    sys.exit(0)
```

How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "style(examples): add noqa: E402 for inte..."

```python
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    posthog_distinct_id="example-user",
```

does the otel example show this?


@andrewm4894 andrewm4894 left a comment


i think the otel examples should be as full featured as possible and show stuff like setting distinct id and any custom foo-bar properties - am i missing that or something?

