# AI Backend
A simple interface to easily swap between AI providers and specific models.
Documentation: https://jcmullwh.github.io/ai_backend/
Source Code: https://github.com/jcmullwh/ai_backend
## Purpose

AI Backend provides a generic API for accessing AI models that is intuitive and easy to use, build on, maintain, and extend. It offers complete abstraction when desired, combined with fine-grained control when needed.
Kind of like a slice of LangChain but not absolutely terrible.
## Examples
```python
from ai_backend import TextAI

text_ai = TextAI(backend="openai")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "This is a test message. Please respond with 'Test response'."},
]

default_model_response = text_ai.text_chat(messages)

gpt_5_1_response = text_ai.text_chat(messages, model="gpt-5.1")

text_ai.set_default("chat", model="gpt-5.1")
also_gpt_5_1_response = text_ai.text_chat(messages)

# At any point, parameters can be passed as kwargs.
# If it's a call to the model, it will use these parameters.
response_with_a_bunch_of_parameters = text_ai.text_chat(
    messages,
    frequency_penalty=1.1,
    max_tokens=42,
    presence_penalty=1.2,
    n=4,
)

# Parameters can be set as default at initialization and using set_default()
new_text_ai = TextAI(frequency_penalty=1.1)
response_with_modified_frequency_penalty = new_text_ai.text_chat(messages)

new_text_ai.set_default(frequency_penalty=1)
response_with_frequency_penalty_1 = new_text_ai.text_chat(messages)

# Future:
text_ai.set_backend("google")
default_google_response = text_ai.text_chat(messages)
```
## OpenAI text calls
For modern OpenAI text models (e.g., GPT-4.x / GPT-5.x), TextAI.text_chat now uses the Responses API under the hood. Older models still fall back to Chat Completions. The public text_chat interface is unchanged: keep passing a messages list and receive a string (or pass response_type="full" to receive the full response object).
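As a rough mental model (not the library's actual implementation), the routing between the two APIs can be sketched as a check on the model name. The prefix list below is an assumption for illustration, based on the models mentioned in this document:

```python
def uses_responses_api(model: str) -> bool:
    # Assumed routing rule, for illustration only: modern GPT-4.x / GPT-5.x
    # models go through the Responses API; anything else falls back to
    # Chat Completions, matching the behavior described above.
    return model.startswith(("gpt-4", "gpt-5"))

print(uses_responses_api("gpt-5.1"))        # True  -> Responses API
print(uses_responses_api("gpt-3.5-turbo"))  # False -> Chat Completions
```

Regardless of the default routing, the Chat Completions path can be forced explicitly with `use_responses=False`, as shown in the tools example below.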
## Default OpenAI models
- Text chat: gpt-5.1 (temperature 0.2)
- Embeddings: text-embedding-3-large
- Image generation: dall-e-3 (1024x1024, standard quality)
- Audio: gpt-4o-transcribe (verbose_json, segment timestamps) and gpt-4o-mini-tts (voice alloy)
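These defaults can be overridden per call or replaced via set_default(), as in the Examples section. Conceptually, per-call kwargs win over set_default() values, which win over defaults set at initialization. A pure-Python sketch of that merge (not the library's code; all values here are illustrative) looks like:

```python
# Hypothetical illustration of parameter precedence; values are examples only.
init_defaults = {"model": "gpt-5.1", "temperature": 0.2}  # set at TextAI(...)
set_default_overrides = {"temperature": 0.7}              # set via set_default()
call_kwargs = {"max_tokens": 42}                          # passed to text_chat()

# Later sources override earlier ones.
effective = {**init_defaults, **set_default_overrides, **call_kwargs}
print(effective)  # {'model': 'gpt-5.1', 'temperature': 0.7, 'max_tokens': 42}
```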
## OpenAI tools (Chat Completions)
You can prototype OpenAI tool/function calling without changing ai_backend by driving the Chat Completions path and doing a small loop in your app:
```python
import json

from ai_backend import TextAI

def add(a: int, b: int) -> int:
    return a + b

tools = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two integers.",
            "parameters": {
                "type": "object",
                "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
                "required": ["a", "b"],
            },
        },
    }
]

text_ai = TextAI()
messages = [
    {"role": "system", "content": "Use the 'add' tool when adding numbers."},
    {"role": "user", "content": "What is 2 + 3?"},
]

# 1) Ask the model for a tool call (force Chat Completions)
choice = text_ai.text_chat(
    messages,
    tools=tools,
    tool_choice="auto",
    response_type="full",
    use_responses=False,  # required: the Responses API does not surface tool_calls yet
    temperature=0,
)
tool_call = choice.message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# 2) Run the Python function locally
result = add(**args)

# 3) Feed the tool result back and get the final answer
messages.extend(
    [
        {
            "role": "assistant",
            "content": choice.message.content or "",
            "tool_calls": [
                {
                    "id": tool_call.id,
                    "type": "function",
                    "function": {
                        "name": tool_call.function.name,
                        "arguments": tool_call.function.arguments,
                    },
                }
            ],
        },
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "name": "add",
            "content": json.dumps({"result": result}),
        },
    ]
)

final = text_ai.text_chat(
    messages,
    tools=tools,
    tool_choice="auto",
    response_type="full",
    use_responses=False,
    temperature=0,
)
print(final.message.content)
```
Limitations (current state):
- OpenAI-only; no tool support wired for other backends yet.
- Must force Chat Completions (use_responses=False or a non–gpt-4/5 model); the Responses branch currently strips tool metadata.
- Orchestration loop lives in your code; TextAI does not yet manage tool execution or message augmentation for you.
- Live tests are marked @pytest.mark.live_api and expect a working OpenAI key.
## PDM scripts
Defined under `[tool.pdm.scripts]` in `pyproject.toml` (run with `pdm run <name>`):

- `test`: run pytest while skipping `live_api` tests.
- `test-live`: run only the `live_api`-marked suite.
- `test-cov-xml`: same as `test` but emits an XML coverage report.
- `lint`: execute the project lint script (`scripts/lint.py`).
- `lint-check`: lint in check/CI mode (`scripts/lint-check.py`).
- `docs-serve`: serve docs locally via `mkdocs serve`.
- `docs-build`: build static docs via `mkdocs build`.
```toml
[tool.pdm.scripts]
test = "pytest -m 'not live_api'"
test-live = "pytest -m live_api"
test-cov-xml = "pytest -m 'not live_api' --cov-report=xml"
lint = "scripts/lint.py"
lint-check = "scripts/lint-check.py"
docs-serve = "mkdocs serve"
docs-build = "mkdocs build"
```
## Status
Basic Functionality:

- [x] OpenAI Backend
- [x] Capability API
- [x] Text
- [x] Image
- [ ] Audio
- [ ] 90% Test Coverage
- [ ] Clear Logging of all changes to model parameters
- [ ] Message/Memory Handling
Additional Backends:

- [ ] Anthropic
- [ ] Meta
- [ ] Stability
- [ ] Midjourney
Additional Capabilities:

- [ ] Functions
- [ ] Embeddings