Merged
12 changes: 6 additions & 6 deletions AGENTS.md
@@ -4,9 +4,9 @@ This document provides comprehensive instructions for coding agents working on t

## Overview

-This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API (and compatible APIs like Azure OpenAI and Ollama) to generate chat completions. The repository includes examples of:
+This repository contains a collection of Python scripts that demonstrate how to use the OpenAI Responses API (and compatible APIs like Azure OpenAI and Ollama). The repository includes examples of:

-- Basic chat completions (streaming, async, history)
+- Basic responses (streaming, async, history)
- Function calling (basic to advanced multi-function scenarios)
- Structured outputs using Pydantic models
- Retrieval-Augmented Generation (RAG) with various complexity levels
@@ -20,10 +20,10 @@ The scripts are designed to be educational and can run with multiple LLM provide

All example scripts are located in the root directory. They follow a consistent pattern of setting up an OpenAI client based on environment variables, then demonstrating specific API features.
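The shared provider-selection pattern can be sketched roughly like this (a minimal sketch; the environment variable names below are illustrative assumptions, so check each script for the exact variables it reads):

```python
import os


def client_kwargs(api_host: str) -> dict:
    """Return keyword arguments for an OpenAI-compatible client.

    The environment variable names here are illustrative assumptions;
    each script documents the exact variables it reads.
    """
    if api_host == "ollama":
        # Ollama exposes an OpenAI-compatible endpoint; the key is unused.
        return {
            "base_url": os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
            "api_key": "none",
        }
    if api_host == "azure":
        return {
            "base_url": os.environ["AZURE_OPENAI_ENDPOINT"],
            "api_key": os.environ["AZURE_OPENAI_KEY"],
        }
    return {"api_key": os.environ.get("OPENAI_API_KEY", "")}


# Usage (requires the openai package and valid credentials):
# client = openai.OpenAI(**client_kwargs(os.environ.get("API_HOST", "openai")))
```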

-**Chat Completion Scripts:**
-- `chat.py` - Simple chat completion example
-- `chat_stream.py` - Streaming chat completions
-- `chat_async.py` - Async chat completions with `asyncio.gather` examples
+**Chat Scripts:**
+- `chat.py` - Simple response example
+- `chat_stream.py` - Streaming responses
+- `chat_async.py` - Async responses with `asyncio.gather` examples
- `chat_history.py` - Multi-turn chat with message history
- `chat_history_stream.py` - Multi-turn chat with streaming
- `chat_safety.py` - Content safety filter exception handling
20 changes: 10 additions & 10 deletions README.md
@@ -1,10 +1,10 @@
# Python OpenAI demos

-This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API to generate chat completions.
+This repository contains a collection of Python scripts that demonstrate how to use the OpenAI Responses API.
[📺 Watch this video walkthrough of running these demos in GitHub Codespaces](https://www.youtube.com/watch?v=_daw48A-RZI)

* [Examples](#examples)
-  * [OpenAI Chat Completions](#openai-chat-completions)
+  * [OpenAI Responses](#openai-responses)
* [Function calling](#function-calling)
* [Structured outputs](#structured-outputs)
* [Retrieval-Augmented Generation (RAG)](#retrieval-augmented-generation-rag)
@@ -17,14 +17,14 @@ This repository contains a collection of Python scripts that demonstrate how to

## Examples

-### OpenAI Chat Completions
+### OpenAI Responses

-These scripts use the openai Python package to demonstrate how to use the OpenAI Chat Completions API.
+These scripts use the openai Python package to demonstrate how to use the OpenAI Responses API.
In increasing order of complexity, the scripts are:

-1. [`chat.py`](./chat.py): A simple script that demonstrates how to use the OpenAI API to generate chat completions.
-2. [`chat_stream.py`](./chat_stream.py): Adds `stream=True` to the API call to return a generator that streams the completion as it is being generated.
-3. [`chat_history.py`](./chat_history.py): Adds a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each chat completion call.
+1. [`chat.py`](./chat.py): A simple script that demonstrates how to use the OpenAI Responses API to generate a response.
+2. [`chat_stream.py`](./chat_stream.py): Adds `stream=True` to the API call to return a generator that streams the response text as it is being generated.
+3. [`chat_history.py`](./chat_history.py): Adds a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each API call.
4. [`chat_history_stream.py`](./chat_history_stream.py): The same idea, but with `stream=True` enabled.
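The basic and streaming variants described above can be sketched together like this (a minimal sketch, not the scripts themselves; the model name is an example, and the streaming event type reflects the Responses API's typed events):

```python
import os


def collect_stream_text(events) -> str:
    """Join the text deltas from a Responses streaming event iterator."""
    parts = []
    for event in events:
        # Streaming yields typed events; only the text-delta events carry output.
        if getattr(event, "type", "") == "response.output_text.delta":
            parts.append(event.delta)
    return "".join(parts)


if __name__ == "__main__":
    # Requires the openai package and a valid API key.
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    events = client.responses.create(
        model="gpt-4o-mini",  # example model name
        input="Say hello in one sentence.",
        stream=True,
    )
    print(collect_stream_text(events))
```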

Plus these scripts to demonstrate additional features:
@@ -34,9 +34,9 @@ Plus these scripts to demonstrate additional features:

### Function calling

-These scripts demonstrate using the Chat Completions API "tools" (a.k.a. function calling) feature, which lets the model decide when to call developer-defined functions and return structured arguments instead of (or before) a natural language answer.
+These scripts demonstrate using the Responses API "tools" (a.k.a. function calling) feature, which lets the model decide when to call developer-defined functions and return structured arguments instead of (or before) a natural language answer.

-In all of these examples, a list of functions is declared in the `tools` parameter. The model may respond with `message.tool_calls` containing one or more tool calls. Each tool call includes the function `name` and a JSON string of `arguments` that match the declared schema. Your application is responsible for: (1) detecting tool calls, (2) executing the corresponding local / external logic, and (3) (optionally) sending the tool result back to the model for a final answer.
+In all of these examples, a list of functions is declared in the `tools` parameter. The model may respond with one or more tool calls as items in `response.output` (for example, items where `type == "function_call"`). Each tool call item includes the function `name` and a JSON string of `arguments` that match the declared schema. Your application is responsible for: (1) detecting tool calls, (2) executing the corresponding local / external logic, and (3) (optionally) sending the tool result back to the model for a final answer.
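The detect-execute-reply loop can be sketched like this (a minimal sketch under the assumptions above; the `get_weather` registry entry in the usage comment is hypothetical):

```python
import json


def run_tool_calls(output_items, registry):
    """Execute function_call items and build function_call_output items."""
    follow_ups = []
    for item in output_items:
        if getattr(item, "type", "") != "function_call":
            continue
        fn = registry[item.name]
        args = json.loads(item.arguments)  # arguments arrive as a JSON string
        follow_ups.append(
            {
                "type": "function_call_output",
                "call_id": item.call_id,
                "output": json.dumps(fn(**args)),
            }
        )
    return follow_ups


# Sketch of the round trip (requires a client and a declared `tools` list):
# response = client.responses.create(model=..., input=messages, tools=tools)
# messages += response.output
# messages += run_tool_calls(response.output, {"get_weather": get_weather})
# final = client.responses.create(model=..., input=messages, tools=tools)
```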

Scripts (in increasing order of capability):

@@ -62,7 +62,7 @@ python -m pip install -r requirements-rag.txt
Then run the scripts (in order of increasing complexity):

* [`rag_csv.py`](./rag_csv.py): Retrieves matching results from a CSV file and uses them to answer user's question.
-* [`rag_multiturn.py`](./rag_multiturn.py): The same idea, but with a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each chat completion call.
+* [`rag_multiturn.py`](./rag_multiturn.py): The same idea, but with a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each API call.
* [`rag_queryrewrite.py`](./rag_queryrewrite.py): Adds a query rewriting step to the RAG process, where the user's question is rewritten to improve the retrieval results.
* [`rag_documents_ingestion.py`](./rag_documents_ingestion.py): Ingests PDFs by using pymupdf to convert to markdown, then using Langchain to split into chunks, then using OpenAI to embed the chunks, and finally storing in a local JSON file.
* [`rag_documents_flow.py`](./rag_documents_flow.py): A RAG flow that retrieves matching results from the local JSON file created by `rag_documents_ingestion.py`.
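The ingestion pipeline described above can be sketched roughly as follows (a simplified stand-in: a plain character splitter replaces Langchain's splitter, and the file names and embedding model in the comments are assumptions):

```python
import json


def split_text(text: str, size: int = 1000, overlap: int = 100) -> list[str]:
    """Greedy sliding-window splitter (a stand-in for Langchain's splitters)."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start : start + size])
        start += size - overlap
    return chunks


# Ingestion sketch (requires pymupdf4llm and openai; names are illustrative):
# markdown = pymupdf4llm.to_markdown("manual.pdf")
# chunks = split_text(markdown)
# embedded = client.embeddings.create(model="text-embedding-3-small", input=chunks)
# records = [{"text": c, "embedding": e.embedding} for c, e in zip(chunks, embedded.data)]
# with open("rag_ingested_chunks.json", "w") as f:
#     json.dump(records, f)
```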
16 changes: 8 additions & 8 deletions spanish/README.md
@@ -1,9 +1,9 @@
# Demos de Python con OpenAI

-Este repositorio contiene una colección de scripts en Python que demuestran cómo usar la API de OpenAI (y modelos compatibles) para generar completados de chat. 📺 [Video tutorial de como usar este repositorio](https://youtu.be/0WwpMFMHEOo?si=9K4jFdBYdj-kb_GL)
+Este repositorio contiene una colección de scripts en Python que demuestran cómo usar la API de Responses de OpenAI (y modelos compatibles). 📺 [Video tutorial de cómo usar este repositorio](https://youtu.be/0WwpMFMHEOo?si=9K4jFdBYdj-kb_GL)

* [Ejemplos](#ejemplos)
-  * [Completados de chat de OpenAI](#completados-de-chat-de-openai)
+  * [Responses de OpenAI](#responses-de-openai)
* [Llamadas a funciones (Function calling)](#llamadas-a-funciones-function-calling)
* [Generación aumentada con recuperación (RAG)](#generación-aumentada-con-recuperación-rag)
* [Salidas estructuradas](#salidas-estructuradas)
@@ -16,11 +16,11 @@ Este repositorio contiene una colección de scripts en Python que demuestran có

## Ejemplos

-### Completados de chat de OpenAI
+### Responses de OpenAI

-Estos scripts usan el paquete `openai` de Python para demostrar cómo utilizar la API de Chat Completions. En orden creciente de complejidad:
-1. [`chat.py`](chat.py): Script simple que muestra cómo generar un completado de chat.
-2. [`chat_stream.py`](chat_stream.py): Añade `stream=True` para recibir el completado progresivamente.
+Estos scripts usan el paquete `openai` de Python para demostrar cómo utilizar la API de Responses. En orden creciente de complejidad:
+1. [`chat.py`](chat.py): Script simple que muestra cómo generar una respuesta.
+2. [`chat_stream.py`](chat_stream.py): Añade `stream=True` para recibir la respuesta progresivamente.
3. [`chat_history.py`](chat_history.py): Añade un chat bidireccional que conserva el historial y lo reenvía en cada llamada.
4. [`chat_history_stream.py`](chat_history_stream.py): Igual que el anterior pero además con `stream=True`.

@@ -32,9 +32,9 @@ Scripts adicionales de características:

### Llamadas a funciones (Function calling)

-Estos scripts muestran cómo usar la característica "tools" (function calling) de la API de Chat Completions. Permite que el modelo decida si invoca funciones definidas por el desarrollador y devolver argumentos estructurados en lugar (o antes) de una respuesta en lenguaje natural.
+Estos scripts muestran cómo usar la característica "tools" (function calling) de la API de Responses. Permite que el modelo decida si invoca funciones definidas por el desarrollador y devolver argumentos estructurados en lugar (o antes) de una respuesta en lenguaje natural.

-En todos los ejemplos se declara una lista de funciones en el parámetro `tools`. El modelo puede responder con `message.tool_calls` que contiene una o más llamadas. Cada llamada incluye el `name` de la función y una cadena JSON con `arguments` que respetan el esquema declarado. Tu aplicación debe: (1) detectar las llamadas, (2) ejecutar la lógica local/externa correspondiente y (3) (opcionalmente) enviar el resultado de la herramienta de vuelta al modelo para una respuesta final.
+En todos los ejemplos se declara una lista de funciones en el parámetro `tools`. En estos demos con Responses, las llamadas a herramientas aparecen en `response.output`, por ejemplo como elementos con `type == "function_call"`. Cada una de esas llamadas incluye el `name` de la función y una cadena JSON con `arguments` que respetan el esquema declarado. Tu aplicación debe: (1) detectar las llamadas, (2) ejecutar la lógica local/externa correspondiente y (3) (opcionalmente) enviar el resultado de la herramienta de vuelta al modelo para una respuesta final.

Scripts (en orden de capacidad):
