@@ -104,5 +104,8 @@ jobs:
sleep 10
ollama pull llama3.2:latest
- name: Check examples
run: |
tox -e examples
- name: Run integration tests
run: |
tox -e integration
33 changes: 21 additions & 12 deletions AGENTS.md
@@ -33,7 +33,9 @@ ext/ # Extension packages (each is a separate PyPI packa
└── flask_dapr/ # Flask integration ← see ext/flask_dapr/AGENTS.md

tests/ # Unit tests (mirrors dapr/ package structure)
examples/ # Integration test suite ← see examples/AGENTS.md
├── examples/ # Output-based tests that run examples and check stdout
└── integration/ # Programmatic SDK tests using DaprClient directly
examples/ # User-facing example applications ← see examples/AGENTS.md
docs/ # Sphinx documentation source
tools/ # Build and release scripts
```
@@ -59,16 +61,19 @@ Each extension is a **separate PyPI package** with its own `setup.cfg`, `setup.p
| `dapr-ext-langgraph` | `dapr.ext.langgraph` | LangGraph checkpoint persistence to Dapr state store | Moderate |
| `dapr-ext-strands` | `dapr.ext.strands` | Strands agent session management via Dapr state store | New |

## Examples (integration test suite)
## Examples and testing

The `examples/` directory serves as both user-facing documentation and the project's integration test suite. Examples are validated by pytest-based integration tests in `tests/integration/`.
The `examples/` directory contains user-facing example applications. These are validated by two test suites:

**See `examples/AGENTS.md`** for the full guide on example structure and how to add new examples.
- **`tests/examples/`** — Output-based tests that run examples via `dapr run` and check stdout for expected strings. Uses a `DaprRunner` helper to manage process lifecycle. See `examples/AGENTS.md`.
- **`tests/integration/`** — Programmatic SDK tests that call `DaprClient` methods directly and assert on return values, gRPC status codes, and SDK types. More reliable than output-based tests since they don't depend on print statement formatting. See `tests/integration/AGENTS.md`.

Quick reference:
```bash
tox -e integration # Run all examples (needs Dapr runtime)
tox -e integration -- test_state_store.py # Run a single example
tox -e examples # Run output-based example tests
tox -e examples -- test_state_store.py # Run a single example test
tox -e integration # Run programmatic SDK tests
tox -e integration -- test_state_store.py # Run a single integration test
```

## Python version support
@@ -106,7 +111,10 @@ tox -e ruff
# Run type checking
tox -e type

# Run integration tests / validate examples (requires Dapr runtime)
# Run output-based example tests (requires Dapr runtime)
tox -e examples

# Run programmatic integration tests (requires Dapr runtime)
tox -e integration
```

@@ -189,8 +197,8 @@ When completing any task on this project, work through this checklist. Not every
### Examples (integration tests)

- [ ] If you added a new user-facing feature or building block, add or update an example in `examples/`
- [ ] Add a corresponding pytest integration test in `tests/integration/`
- [ ] If you changed output format of existing functionality, update expected output in the affected integration tests
- [ ] Add a corresponding pytest test in `tests/examples/` (output-based) and/or `tests/integration/` (programmatic)
- [ ] If you changed output format of existing functionality, update expected output in `tests/examples/`
- [ ] See `examples/AGENTS.md` for full details on writing examples

### Documentation
@@ -202,7 +210,7 @@ When completing any task on this project, work through this checklist. Not every

- [ ] Run `tox -e ruff` — linting must be clean
- [ ] Run `tox -e py311` (or your Python version) — all unit tests must pass
- [ ] If you touched examples: `tox -e integration -- test_<example-name>.py` to validate locally
- [ ] If you touched examples: `tox -e examples -- test_<example-name>.py` to validate locally
- [ ] Commits must be signed off for DCO: `git commit -s`

## Important files
@@ -217,7 +225,8 @@ When completing any task on this project, work through this checklist. Not every
| `dev-requirements.txt` | Development/test dependencies |
| `dapr/version/__init__.py` | SDK version string |
| `ext/*/setup.cfg` | Extension package metadata and dependencies |
| `tests/integration/` | Pytest-based integration tests that validate examples |
| `tests/examples/` | Output-based tests that validate examples by checking stdout |
| `tests/integration/` | Programmatic SDK tests using DaprClient directly |

## Gotchas

@@ -226,6 +235,6 @@ When completing any task on this project, work through this checklist. Not every
- **Extension independence**: Each extension is a separate PyPI package. Core SDK changes should not break extensions; extension changes should not require core SDK changes unless intentional.
- **DCO signoff**: PRs will be blocked by the DCO bot if commits lack `Signed-off-by`. Always use `git commit -s`.
- **Ruff version pinned**: Dev requirements pin `ruff === 0.14.1`. Use this exact version to match CI.
- **Examples are integration tests**: Changing output format (log messages, print statements) can break integration tests. Always check expected output in `tests/integration/` when modifying user-visible output.
- **Examples are tested by output matching**: Changing output format (log messages, print statements) can break `tests/examples/`. Always check expected output there when modifying user-visible output.
- **Background processes in examples**: Examples that start background services (servers, subscribers) must include a cleanup step to stop them, or CI will hang.
- **Workflow is the most active area**: See `ext/dapr-ext-workflow/AGENTS.md` for workflow-specific architecture and constraints.
12 changes: 9 additions & 3 deletions README.md
@@ -121,17 +121,23 @@ tox -e py311
tox -e type
```

8. Run integration tests (validates the examples)
8. Run integration tests

```bash
tox -e integration
```

If you need to run the examples against a pre-released version of the runtime, you can use the following command:
9. Validate the examples

```bash
tox -e examples
```

If you need to run the examples or integration tests against a pre-released version of the runtime, you can use the following command:
- Get your daprd runtime binary from [here](https://github.com/dapr/dapr/releases) for your platform.
Copy the binary to your Dapr home folder at `$HOME/.dapr/bin/daprd`.
Or use the Dapr CLI directly: `dapr init --runtime-version <release version>`
- Now you can run the examples with `tox -e integration`.
- Now you can run the examples with `tox -e examples` or the integration tests with `tox -e integration`.


## Documentation
18 changes: 9 additions & 9 deletions examples/AGENTS.md
@@ -1,22 +1,22 @@
# AGENTS.md — Dapr Python SDK Examples

The `examples/` directory serves as both **user-facing documentation** and the project's **integration test suite**. Each example is a self-contained application validated by pytest-based integration tests in `tests/integration/`.
The `examples/` directory serves as the **user-facing documentation**. Each example is a self-contained application validated by pytest-based tests in `tests/examples/`.

## How validation works

1. Each example has a corresponding test file in `tests/integration/` (e.g., `test_state_store.py`)
2. Tests use a `DaprRunner` helper (defined in `conftest.py`) that wraps `dapr run` commands
1. Each example has a corresponding test file in `tests/examples/` (e.g., `test_state_store.py`)
2. Tests use a `DaprRunner` helper (defined in `tests/examples/conftest.py`) that wraps `dapr run` commands
3. `DaprRunner.run()` executes a command and captures stdout; `DaprRunner.start()`/`stop()` manage background services
4. Tests assert that expected output lines appear in the captured output
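The timeout-protected capture in steps 3–4 boils down to a standard subprocess pattern. Here is a minimal self-contained sketch of that technique, using a throwaway Python child process in place of `dapr run`; the helper name `run_and_capture` is illustrative, not the real `conftest.py` API:

```python
import subprocess
import sys
import threading


def run_and_capture(cmd: list[str], timeout: float = 10.0) -> str:
    """Run a command, stream its stdout, and force-kill it after `timeout` seconds."""
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    assert proc.stdout is not None
    # A background timer is needed because iterating over proc.stdout
    # blocks indefinitely if the child never exits on its own.
    timer = threading.Timer(timeout, proc.kill)
    timer.start()
    lines: list[str] = []
    try:
        for line in proc.stdout:
            lines.append(line)
    finally:
        timer.cancel()
        if proc.poll() is None:
            proc.kill()
        proc.wait()
    return ''.join(lines)


output = run_and_capture([sys.executable, '-c', "print('== APP == hello')"])
assert '== APP == hello' in output
```

The real `DaprRunner.run()` builds on the same pattern and additionally supports early termination via its `until` parameter, breaking out of the read loop once every expected string has appeared in the accumulated output.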

Run examples locally (requires a running Dapr runtime via `dapr init`):

```bash
# All examples
tox -e integration
tox -e examples

# Single example
tox -e integration -- test_state_store.py
tox -e examples -- test_state_store.py
```

In CI (`validate_examples.yaml`), examples run on all supported Python versions (3.10-3.14) on Ubuntu with a full Dapr runtime including Docker, Redis, and (for LLM examples) Ollama.
@@ -132,17 +132,17 @@ The `workflow` example includes: `simple.py`, `task_chaining.py`, `fan_out_fan_i
2. Add Python source files and a `requirements.txt` referencing the needed SDK packages
3. Add Dapr component YAMLs in a `components/` subdirectory if the example uses state, pubsub, etc.
4. Write a `README.md` with introduction, pre-requisites, install instructions, and running instructions
5. Add a corresponding test in `tests/integration/test_<example_name>.py`:
5. Add a corresponding test in `tests/examples/test_<example_name>.py`:
- Use the `@pytest.mark.example_dir('<example-name>')` marker to set the working directory
- Use `dapr.run()` for scripts that exit on their own, `dapr.start()`/`dapr.stop()` for long-running services
- Assert expected output lines appear in the captured output
6. Test locally: `tox -e integration -- test_<example_name>.py`
6. Test locally: `tox -e examples -- test_<example_name>.py`

## Gotchas

- **Output format changes break tests**: If you modify print statements or log output in SDK code, check whether any integration test's expected lines depend on that output.
- **Output format changes break tests**: If you modify print statements or log output in SDK code, check whether any test's expected lines in `tests/examples/` depend on that output.
- **Background processes must be cleaned up**: The `DaprRunner` fixture handles cleanup on teardown, but tests should still call `dapr.stop()` to capture output.
- **Dapr prefixes output**: Application stdout appears as `== APP == <line>` when run via `dapr run`.
- **Redis is available in CI**: The CI environment has Redis running on `localhost:6379` — most component YAMLs use this.
- **Some examples need special setup**: `crypto` generates keys, `configuration` seeds Redis, `conversation` needs LLM config — check individual READMEs.
- **Infinite-loop example scripts**: Some example scripts (e.g., `invoke-caller.py`) have `while True` loops for demo purposes. Integration tests must either bypass these with HTTP API calls or use `dapr.run(until=...)` for early termination.
- **Infinite-loop example scripts**: Some example scripts (e.g., `invoke-caller.py`) have `while True` loops for demo purposes. Tests must either bypass these with HTTP API calls or use `dapr.run(until=...)` for early termination.
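The `== APP == ` prefix noted in the gotchas above makes it easy to separate application output from sidecar noise. A hypothetical helper (not part of the actual `conftest.py`) could filter for it like this; the sample `captured` text is invented for illustration:

```python
DAPR_APP_PREFIX = '== APP == '


def app_lines(captured: str) -> list[str]:
    """Return only the application's own stdout lines from `dapr run` output."""
    return [
        line[len(DAPR_APP_PREFIX):]
        for line in captured.splitlines()
        if line.startswith(DAPR_APP_PREFIX)
    ]


captured = (
    'Starting Dapr with id locksapp\n'
    '== APP == Lock acquired successfully!!!\n'
    'Exited App successfully\n'
)
assert app_lines(captured) == ['Lock acquired successfully!!!']
```

The existing tests assert on the full captured output with a simple `in` check instead, which also works because the prefix never alters the application's own text.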
140 changes: 140 additions & 0 deletions tests/examples/conftest.py
@@ -0,0 +1,140 @@
import shlex
import subprocess
import tempfile
import threading
import time
from pathlib import Path
from typing import IO, Any, Generator

import pytest

from tests._process_utils import get_kwargs_for_process_group, terminate_process_group

REPO_ROOT = Path(__file__).resolve().parent.parent.parent
EXAMPLES_DIR = REPO_ROOT / 'examples'


def pytest_configure(config: pytest.Config) -> None:
config.addinivalue_line('markers', 'example_dir(name): set the example directory for a test')


class DaprRunner:
"""Helper to run `dapr run` commands and capture output."""

def __init__(self, cwd: Path) -> None:
self._cwd = cwd
self._bg_process: subprocess.Popen[str] | None = None
self._bg_output_file: IO[str] | None = None

@staticmethod
def _terminate(proc: subprocess.Popen[str]) -> None:
if proc.poll() is not None:
return

terminate_process_group(proc)
try:
proc.wait(timeout=10)
except subprocess.TimeoutExpired:
terminate_process_group(proc, force=True)
proc.wait()

def run(self, args: str, *, timeout: int = 30, until: list[str] | None = None) -> str:
"""Run a foreground command, block until it finishes, and return output.

Use this for short-lived processes (e.g. a publisher that exits on its
own). For long-lived background services, use ``start()``/``stop()``.

Args:
args: Arguments passed to ``dapr run``.
timeout: Maximum seconds to wait before killing the process.
until: If provided, the process is terminated as soon as every
string in this list has appeared in the accumulated output.
"""
proc = subprocess.Popen(
args=('dapr', 'run', *shlex.split(args)),
cwd=self._cwd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
**get_kwargs_for_process_group(),
)
lines: list[str] = []
assert proc.stdout is not None

# Kill the process if it exceeds the timeout. A background timer is
# needed because `for line in proc.stdout` blocks indefinitely when
# the child never exits.
timer = threading.Timer(
interval=timeout, function=lambda: terminate_process_group(proc, force=True)
)
timer.start()

try:
for line in proc.stdout:
print(line, end='', flush=True)
lines.append(line)
if until and all(exp in ''.join(lines) for exp in until):
break
finally:
timer.cancel()
self._terminate(proc)

return ''.join(lines)

def start(self, args: str, *, wait: int = 5) -> None:
"""Start a long-lived background service.

Use this for servers/subscribers that must stay alive while a second
process runs via ``run()``. Call ``stop()`` to terminate and collect
output. Stdout is written to a temp file to avoid pipe-buffer deadlocks.
"""
output_file = tempfile.NamedTemporaryFile(mode='w+', suffix='.log')
proc = subprocess.Popen(
args=('dapr', 'run', *shlex.split(args)),
cwd=self._cwd,
stdout=output_file,
stderr=subprocess.STDOUT,
text=True,
**get_kwargs_for_process_group(),
)
self._bg_process = proc
self._bg_output_file = output_file
time.sleep(wait)

def stop(self) -> str:
"""Stop the background service and return its captured output."""
if self._bg_process is None:
return ''
self._terminate(self._bg_process)
self._bg_process = None
return self._read_and_close_output()

def _read_and_close_output(self) -> str:
if self._bg_output_file is None:
return ''
self._bg_output_file.seek(0)
output = self._bg_output_file.read()
self._bg_output_file.close()
self._bg_output_file = None
print(output, end='', flush=True)
return output


@pytest.fixture
def dapr(request: pytest.FixtureRequest) -> Generator[DaprRunner, Any, None]:
"""Provides a DaprRunner scoped to an example directory.

Use the ``example_dir`` marker to select which example:

@pytest.mark.example_dir('state_store')
def test_something(dapr):
...

Defaults to the examples root if no marker is set.
"""
marker = request.node.get_closest_marker('example_dir')
cwd = EXAMPLES_DIR / marker.args[0] if marker else EXAMPLES_DIR

runner = DaprRunner(cwd)
yield runner
runner.stop()
49 changes: 49 additions & 0 deletions tests/examples/test_configuration.py
@@ -0,0 +1,49 @@
import subprocess
import time

import pytest

REDIS_CONTAINER = 'dapr_redis'

EXPECTED_LINES = [
'Got key=orderId1 value=100 version=1 metadata={}',
'Got key=orderId2 value=200 version=1 metadata={}',
'Subscribe key=orderId2 value=210 version=2 metadata={}',
'Unsubscribed successfully? True',
]


@pytest.fixture()
def redis_config():
"""Seed configuration values in Redis before the test."""
subprocess.run(
        ('docker', 'exec', REDIS_CONTAINER, 'redis-cli', 'SET', 'orderId1', '100||1'),
check=True,
capture_output=True,
)
subprocess.run(
        ('docker', 'exec', REDIS_CONTAINER, 'redis-cli', 'SET', 'orderId2', '200||1'),
check=True,
capture_output=True,
)


@pytest.mark.example_dir('configuration')
def test_configuration(dapr, redis_config):
dapr.start(
'--app-id configexample --resources-path components/ -- python3 configuration.py',
wait=5,
)
# Update Redis to trigger the subscription notification
subprocess.run(
        ('docker', 'exec', REDIS_CONTAINER, 'redis-cli', 'SET', 'orderId2', '210||2'),
check=True,
capture_output=True,
)
# configuration.py sleeps 10s after subscribing before it unsubscribes.
# Wait long enough for the full script to finish.
time.sleep(10)

output = dapr.stop()
for line in EXPECTED_LINES:
assert line in output, f'Missing in output: {line}'
File renamed without changes.
21 changes: 21 additions & 0 deletions tests/examples/test_distributed_lock.py
@@ -0,0 +1,21 @@
import pytest

EXPECTED_LINES = [
'Will try to acquire a lock from lock store named [lockstore]',
'The lock is for a resource named [example-lock-resource]',
'The client identifier is [example-client-id]',
'The lock will expire in 60 seconds.',
'Lock acquired successfully!!!',
'We already released the lock so unlocking will not work.',
'We tried to unlock it anyway and got back [UnlockResponseStatus.lock_does_not_exist]',
]


@pytest.mark.example_dir('distributed_lock')
def test_distributed_lock(dapr):
output = dapr.run(
'--app-id=locksapp --app-protocol grpc --resources-path components/ -- python3 lock.py',
timeout=10,
)
for line in EXPECTED_LINES:
assert line in output, f'Missing in output: {line}'
File renamed without changes.