From 63b34e2f9d3f22710410399f0d865392855f0de7 Mon Sep 17 00:00:00 2001 From: Pamela Fox Date: Mon, 6 Apr 2026 15:23:27 -0700 Subject: [PATCH 1/3] Update docs to reference Responses API instead of Chat Completions --- AGENTS.md | 12 ++++++------ README.md | 16 ++++++++-------- spanish/README.md | 10 +++++----- 3 files changed, 19 insertions(+), 19 deletions(-) diff --git a/AGENTS.md b/AGENTS.md index 95f2856..2bff102 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -4,9 +4,9 @@ This document provides comprehensive instructions for coding agents working on t ## Overview -This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API (and compatible APIs like Azure OpenAI and Ollama) to generate chat completions. The repository includes examples of: +This repository contains a collection of Python scripts that demonstrate how to use the OpenAI Responses API (and compatible APIs like Azure OpenAI and Ollama). The repository includes examples of: -- Basic chat completions (streaming, async, history) +- Basic responses (streaming, async, history) - Function calling (basic to advanced multi-function scenarios) - Structured outputs using Pydantic models - Retrieval-Augmented Generation (RAG) with various complexity levels @@ -20,10 +20,10 @@ The scripts are designed to be educational and can run with multiple LLM provide All example scripts are located in the root directory. They follow a consistent pattern of setting up an OpenAI client based on environment variables, then demonstrating specific API features. 
-**Chat Completion Scripts:** -- `chat.py` - Simple chat completion example -- `chat_stream.py` - Streaming chat completions -- `chat_async.py` - Async chat completions with `asyncio.gather` examples +**Chat Scripts:** +- `chat.py` - Simple response example +- `chat_stream.py` - Streaming responses +- `chat_async.py` - Async responses with `asyncio.gather` examples - `chat_history.py` - Multi-turn chat with message history - `chat_history_stream.py` - Multi-turn chat with streaming - `chat_safety.py` - Content safety filter exception handling diff --git a/README.md b/README.md index 01cb4c4..31dad1d 100644 --- a/README.md +++ b/README.md @@ -1,10 +1,10 @@ # Python OpenAI demos -This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API to generate chat completions. +This repository contains a collection of Python scripts that demonstrate how to use the OpenAI Responses API. [📺 Watch this video walkthrough of running these demos in GitHub Codespaces](https://www.youtube.com/watch?v=_daw48A-RZI) * [Examples](#examples) - * [OpenAI Chat Completions](#openai-chat-completions) + * [OpenAI Responses](#openai-responses) * [Function calling](#function-calling) * [Structured outputs](#structured-outputs) * [Retrieval-Augmented Generation (RAG)](#retrieval-augmented-generation-rag) @@ -17,14 +17,14 @@ This repository contains a collection of Python scripts that demonstrate how to ## Examples -### OpenAI Chat Completions +### OpenAI Responses -These scripts use the openai Python package to demonstrate how to use the OpenAI Chat Completions API. +These scripts use the openai Python package to demonstrate how to use the OpenAI Responses API. In increasing order of complexity, the scripts are: -1. [`chat.py`](./chat.py): A simple script that demonstrates how to use the OpenAI API to generate chat completions. +1. [`chat.py`](./chat.py): A simple script that demonstrates how to use the OpenAI Responses API to generate a response. 2. 
[`chat_stream.py`](./chat_stream.py): Adds `stream=True` to the API call to return a generator that streams the completion as it is being generated. -3. [`chat_history.py`](./chat_history.py): Adds a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each chat completion call. +3. [`chat_history.py`](./chat_history.py): Adds a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each API call. 4. [`chat_history_stream.py`](./chat_history_stream.py): The same idea, but with `stream=True` enabled. Plus these scripts to demonstrate additional features: @@ -34,7 +34,7 @@ Plus these scripts to demonstrate additional features: ### Function calling -These scripts demonstrate using the Chat Completions API "tools" (a.k.a. function calling) feature, which lets the model decide when to call developer-defined functions and return structured arguments instead of (or before) a natural language answer. +These scripts demonstrate using the Responses API "tools" (a.k.a. function calling) feature, which lets the model decide when to call developer-defined functions and return structured arguments instead of (or before) a natural language answer. In all of these examples, a list of functions is declared in the `tools` parameter. The model may respond with `message.tool_calls` containing one or more tool calls. Each tool call includes the function `name` and a JSON string of `arguments` that match the declared schema. Your application is responsible for: (1) detecting tool calls, (2) executing the corresponding local / external logic, and (3) (optionally) sending the tool result back to the model for a final answer. @@ -62,7 +62,7 @@ python -m pip install -r requirements-rag.txt Then run the scripts (in order of increasing complexity): * [`rag_csv.py`](./rag_csv.py): Retrieves matching results from a CSV file and uses them to answer the user's question. 
-* [`rag_multiturn.py`](./rag_multiturn.py): The same idea, but with a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each chat completion call. +* [`rag_multiturn.py`](./rag_multiturn.py): The same idea, but with a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each API call. * [`rag_queryrewrite.py`](./rag_queryrewrite.py): Adds a query rewriting step to the RAG process, where the user's question is rewritten to improve the retrieval results. * [`rag_documents_ingestion.py`](./rag_documents_ingestion.py): Ingests PDFs by using pymupdf to convert to markdown, then using Langchain to split into chunks, then using OpenAI to embed the chunks, and finally storing in a local JSON file. * [`rag_documents_flow.py`](./rag_documents_flow.py): A RAG flow that retrieves matching results from the local JSON file created by `rag_documents_ingestion.py`. diff --git a/spanish/README.md b/spanish/README.md index 8a60259..8237fb6 100644 --- a/spanish/README.md +++ b/spanish/README.md @@ -1,9 +1,9 @@ # Demos de Python con OpenAI -Este repositorio contiene una colección de scripts en Python que demuestran cómo usar la API de OpenAI (y modelos compatibles) para generar completados de chat. 📺 [Video tutorial de como usar este repositorio](https://youtu.be/0WwpMFMHEOo?si=9K4jFdBYdj-kb_GL) +Este repositorio contiene una colección de scripts en Python que demuestran cómo usar la API de Responses de OpenAI (y modelos compatibles). 
📺 [Video tutorial de como usar este repositorio](https://youtu.be/0WwpMFMHEOo?si=9K4jFdBYdj-kb_GL) * [Ejemplos](#ejemplos) - * [Completados de chat de OpenAI](#completados-de-chat-de-openai) + * [Responses de OpenAI](#responses-de-openai) * [Llamadas a funciones (Function calling)](#llamadas-a-funciones-function-calling) * [Generación aumentada con recuperación (RAG)](#generación-aumentada-con-recuperación-rag) * [Salidas estructuradas](#salidas-estructuradas) @@ -16,9 +16,9 @@ Este repositorio contiene una colección de scripts en Python que demuestran có ## Ejemplos -### Completados de chat de OpenAI +### Responses de OpenAI -Estos scripts usan el paquete `openai` de Python para demostrar cómo utilizar la API de Chat Completions. En orden creciente de complejidad: +Estos scripts usan el paquete `openai` de Python para demostrar cómo utilizar la API de Responses. En orden creciente de complejidad: 1. [`chat.py`](chat.py): Script simple que muestra cómo generar un completado de chat. 2. [`chat_stream.py`](chat_stream.py): Añade `stream=True` para recibir el completado progresivamente. 3. [`chat_history.py`](chat_history.py): Añade un chat bidireccional que conserva el historial y lo reenvía en cada llamada. 4. [`chat_history_stream.py`](chat_history_stream.py): Igual que el anterior pero además con `stream=True`. Scripts adicionales de características: ### Llamadas a funciones (Function calling) -Estos scripts muestran cómo usar la característica "tools" (function calling) de la API de Chat Completions. Permite que el modelo decida si invoca funciones definidas por el desarrollador y devolver argumentos estructurados en lugar (o antes) de una respuesta en lenguaje natural. +Estos scripts muestran cómo usar la característica "tools" (function calling) de la API de Responses. Permite que el modelo decida si invoca funciones definidas por el desarrollador y devolver argumentos estructurados en lugar (o antes) de una respuesta en lenguaje natural. En todos los ejemplos se declara una lista de funciones en el parámetro `tools`. 
El modelo puede responder con `message.tool_calls` que contiene una o más llamadas. Cada llamada incluye el `name` de la función y una cadena JSON con `arguments` que respetan el esquema declarado. Tu aplicación debe: (1) detectar las llamadas, (2) ejecutar la lógica local/externa correspondiente y (3) (opcionalmente) enviar el resultado de la herramienta de vuelta al modelo para una respuesta final. From 123d4e7c90acb23a936c09c5d02a4d33615b27df Mon Sep 17 00:00:00 2001 From: Pamela Fox Date: Mon, 6 Apr 2026 21:42:16 -0700 Subject: [PATCH 2/3] Address PR review: fix remaining completion wording and Spanish typo --- README.md | 2 +- spanish/README.md | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 31dad1d..bbf1eef 100644 --- a/README.md +++ b/README.md @@ -23,7 +23,7 @@ These scripts use the openai Python package to demonstrate how to use the OpenAI In increasing order of complexity, the scripts are: 1. [`chat.py`](./chat.py): A simple script that demonstrates how to use the OpenAI Responses API to generate a response. -2. [`chat_stream.py`](./chat_stream.py): Adds `stream=True` to the API call to return a generator that streams the completion as it is being generated. +2. [`chat_stream.py`](./chat_stream.py): Adds `stream=True` to the API call to return a generator that streams the response text as it is being generated. 3. [`chat_history.py`](./chat_history.py): Adds a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each API call. 4. [`chat_history_stream.py`](./chat_history_stream.py): The same idea, but with `stream=True` enabled. diff --git a/spanish/README.md b/spanish/README.md index 8237fb6..c4684f3 100644 --- a/spanish/README.md +++ b/spanish/README.md @@ -1,6 +1,6 @@ # Demos de Python con OpenAI -Este repositorio contiene una colección de scripts en Python que demuestran cómo usar la API de Responses de OpenAI (y modelos compatibles). 
📺 [Video tutorial de como usar este repositorio](https://youtu.be/0WwpMFMHEOo?si=9K4jFdBYdj-kb_GL) +Este repositorio contiene una colección de scripts en Python que demuestran cómo usar la API de Responses de OpenAI (y modelos compatibles). 📺 [Video tutorial de cómo usar este repositorio](https://youtu.be/0WwpMFMHEOo?si=9K4jFdBYdj-kb_GL) * [Ejemplos](#ejemplos) * [Responses de OpenAI](#responses-de-openai) @@ -19,8 +19,8 @@ Este repositorio contiene una colección de scripts en Python que demuestran có ### Responses de OpenAI Estos scripts usan el paquete `openai` de Python para demostrar cómo utilizar la API de Responses. En orden creciente de complejidad: -1. [`chat.py`](chat.py): Script simple que muestra cómo generar un completado de chat. -2. [`chat_stream.py`](chat_stream.py): Añade `stream=True` para recibir el completado progresivamente. +1. [`chat.py`](chat.py): Script simple que muestra cómo generar una respuesta. +2. [`chat_stream.py`](chat_stream.py): Añade `stream=True` para recibir la respuesta progresivamente. 3. [`chat_history.py`](chat_history.py): Añade un chat bidireccional que conserva el historial y lo reenvía en cada llamada. 4. [`chat_history_stream.py`](chat_history_stream.py): Igual que el anterior pero además con `stream=True`. From bf92dd000de41fd89e9cc0a9762fb67e8e83b61a Mon Sep 17 00:00:00 2001 From: Pamela Fox Date: Mon, 6 Apr 2026 21:53:56 -0700 Subject: [PATCH 3/3] Fix function calling docs: message.tool_calls -> response.output --- README.md | 2 +- spanish/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index bbf1eef..51f115d 100644 --- a/README.md +++ b/README.md @@ -36,7 +36,7 @@ Plus these scripts to demonstrate additional features: These scripts demonstrate using the Responses API "tools" (a.k.a. function calling) feature, which lets the model decide when to call developer-defined functions and return structured arguments instead of (or before) a natural language answer. 
-In all of these examples, a list of functions is declared in the `tools` parameter. The model may respond with `message.tool_calls` containing one or more tool calls. Each tool call includes the function `name` and a JSON string of `arguments` that match the declared schema. Your application is responsible for: (1) detecting tool calls, (2) executing the corresponding local / external logic, and (3) (optionally) sending the tool result back to the model for a final answer. +In all of these examples, a list of functions is declared in the `tools` parameter. The model may respond with one or more tool calls as items in `response.output` (for example, items where `type == "function_call"`). Each tool call item includes the function `name` and a JSON string of `arguments` that match the declared schema. Your application is responsible for: (1) detecting tool calls, (2) executing the corresponding local / external logic, and (3) (optionally) sending the tool result back to the model for a final answer. Scripts (in increasing order of capability): diff --git a/spanish/README.md b/spanish/README.md index c4684f3..b7c67ea 100644 --- a/spanish/README.md +++ b/spanish/README.md @@ -34,7 +34,7 @@ Scripts adicionales de características: ### Llamadas a funciones (Function calling) Estos scripts muestran cómo usar la característica "tools" (function calling) de la API de Responses. Permite que el modelo decida si invoca funciones definidas por el desarrollador y devolver argumentos estructurados en lugar (o antes) de una respuesta en lenguaje natural. -En todos los ejemplos se declara una lista de funciones en el parámetro `tools`. El modelo puede responder con `message.tool_calls` que contiene una o más llamadas. Cada llamada incluye el `name` de la función y una cadena JSON con `arguments` que respetan el esquema declarado. 
Tu aplicación debe: (1) detectar las llamadas, (2) ejecutar la lógica local/externa correspondiente y (3) (opcionalmente) enviar el resultado de la herramienta de vuelta al modelo para una respuesta final. +En todos los ejemplos se declara una lista de funciones en el parámetro `tools`. En estas demos con Responses, las llamadas a herramientas aparecen en `response.output`, por ejemplo como elementos con `type == "function_call"`. Cada una de esas llamadas incluye el `name` de la función y una cadena JSON con `arguments` que respetan el esquema declarado. Tu aplicación debe: (1) detectar las llamadas, (2) ejecutar la lógica local/externa correspondiente y (3) (opcionalmente) enviar el resultado de la herramienta de vuelta al modelo para una respuesta final. Scripts (en orden de capacidad):
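The tool-call flow that PATCH 3/3 documents (detect `function_call` items in `response.output`, then parse each item's `arguments` JSON before running local logic) can be sketched as below. This is a minimal illustration, not code from the repo: the `extract_tool_calls` helper and the simulated output items are hypothetical, and with the real SDK you would pass `response.output` from `client.responses.create(...)`.

```python
import json
from types import SimpleNamespace


def extract_tool_calls(output_items):
    """Collect (name, parsed_arguments) pairs from Responses API output items.

    Function calls appear as output items whose type is "function_call";
    their `arguments` field is a JSON string the caller must parse.
    """
    calls = []
    for item in output_items:
        if item.type == "function_call":
            calls.append((item.name, json.loads(item.arguments)))
    return calls


# Simulated output items standing in for response.output (illustrative only):
fake_output = [
    SimpleNamespace(type="reasoning"),
    SimpleNamespace(type="function_call", name="get_weather",
                    arguments='{"city": "Lisbon"}'),
]
print(extract_tool_calls(fake_output))  # [('get_weather', {'city': 'Lisbon'})]
```

After extracting the calls, the application would run the matching local function for each name and, optionally, send the result back to the model for a final answer, as the README text describes.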