This repository contains a Python CLI chatbot that uses OpenAI models and a weather tool integration.
The project is structured with clear layers:
- `src/app`: startup and configuration loading.
- `src/api`: CLI interaction loop.
- `src/core`: chatbot orchestration and tool definitions.
- `src/infra`: external clients (OpenAI and Open-Meteo).
- `src/services`: application services used by the API layer.
The application entry point is `python -m src.app.main`.
The assistant is designed to:
- hold conversational interactions through the terminal,
- answer in Spanish with a poetic style,
- call a weather tool when weather-related requests appear,
- switch between OpenAI Responses API and Chat Completions API with an environment flag.
In short, this is a practical baseline for a tool-enabled conversational agent with clean separation between domain, services, and infrastructure.
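As an illustration, a weather tool exposed to the model might be declared in the Chat Completions `tools` format like this. The name `get_weather` and the schema below are assumptions for the sketch; the actual definition lives in `src/core` and may differ:

```python
# Hypothetical tool definition in the OpenAI Chat Completions "tools" format.
# The real schema in src/core may use different names and parameters.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city via Open-Meteo.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```

A definition like this is passed to the model on each request; when the model decides a weather question needs live data, it returns a tool call that the orchestration layer resolves against the Open-Meteo client.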
- Python 3.12+ (the Docker image uses `python:3.12-slim`)
- Internet access (OpenAI and Open-Meteo APIs)
- OpenAI API key
Dependencies are pinned in `requirements.txt`.
Install locally:

```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Use `.env.example` as a template.
Required:
`OPENAI_API_KEY`
Optional (with defaults):
- `OPENAI_MODEL` (default: `gpt-5.4-mini`)
- `OPENAI_MAX_OUTPUT_TOKENS` (default: `500`)
- `OPENAI_TEMPERATURE` (default: `0.7`)
- `OPENAI_MODE` (`responses` or `chat_completions`, default: `responses`)
- `LOG_LEVEL` (default: `INFO`)
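For reference, a populated `.env` could look like the fragment below (values are illustrative placeholders; only `OPENAI_API_KEY` is required):

```
OPENAI_API_KEY=sk-placeholder
OPENAI_MODEL=gpt-5.4-mini
OPENAI_MODE=responses
LOG_LEVEL=INFO
```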
Create your local environment file:
```bash
cp .env.example .env
# then edit .env and set OPENAI_API_KEY
```

Run the app:

```bash
python -m src.app.main
```

You will get an interactive prompt:

- type a message and press Enter,
- type `exit`, `quit`, or `salir` to stop.
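The interactive loop described above can be sketched roughly as follows. This is a hypothetical outline, not the actual code in `src/api`; `run_cli` and `handle_message` are illustrative names:

```python
# Minimal sketch of a CLI read-eval-print loop (hypothetical; the real
# implementation lives in src/api and may differ).
EXIT_COMMANDS = {"exit", "quit", "salir"}

def run_cli(handle_message):
    """Read user input until an exit command or EOF is received."""
    while True:
        try:
            user_input = input("> ").strip()
        except (EOFError, KeyboardInterrupt):
            break
        if user_input.lower() in EXIT_COMMANDS:
            break
        if user_input:
            # Delegate to the chatbot service and echo its reply.
            print(handle_message(user_input))
```

Accepting `salir` alongside `exit` and `quit` matches the assistant's Spanish-speaking persona.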
Build and run:
```bash
make build
make up
```

Or run one-off container sessions:

```bash
make run
```

Useful commands:

- `make down`: stop and remove containers
- `make logs`: follow app logs
- `make shell`: open a shell in the app container
Set `OPENAI_MODE` in `.env`:
- `responses`: uses the Responses API client.
- `chat_completions`: uses the Chat Completions API client.
No code change is needed; restart the app after updating the variable.
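A factory along these lines is one way such a switch is commonly implemented. The class and function names below are assumptions for the sketch; the real client classes live in `src/infra`:

```python
import os

# Hypothetical stand-ins for the clients in src/infra; real names may differ.
class ResponsesClient:
    name = "responses"

class ChatCompletionsClient:
    name = "chat_completions"

def make_client():
    """Pick the OpenAI client implementation from the OPENAI_MODE env var."""
    mode = os.getenv("OPENAI_MODE", "responses")
    clients = {
        "responses": ResponsesClient,
        "chat_completions": ChatCompletionsClient,
    }
    try:
        return clients[mode]()
    except KeyError:
        raise ValueError(f"Unknown OPENAI_MODE: {mode!r}")
```

Because the mode is read from the environment at startup, changing `.env` and restarting the app is enough to swap clients, with no code change.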