Switch to Responses API for OpenAI #325
Conversation
Some models (incl. `o4-mini-deep-research`) aren't compatible with the Chat Completions API. This PR introduces a new `Response` class which, similarly to `Chat` (and inheriting from the same base `Conversation` class), allows a user to target the `responses` endpoint.
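For a rough picture of the structure being described, here is a minimal sketch; everything beyond the `Conversation`/`Chat`/`Response` split named above (the constructor, comments, and so on) is an assumption for illustration, not the PR's exact code:

```ruby
module RubyLLM
  # Shared conversation behaviour (message history, options, streaming, ...).
  class Conversation
    def initialize(model:)
      @model = model
      @messages = []
    end
  end

  # Existing class: targets POST /v1/chat/completions.
  class Chat < Conversation
  end

  # New class from this PR: targets POST /v1/responses, which is what
  # models like o4-mini-deep-research require.
  class Response < Conversation
  end
end
```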
…e had

Getting this:
ruby(19317,0x206ace0c0) malloc: Double free of object 0x10afc39e0
ruby(19317,0x206ace0c0) malloc: *** set a breakpoint in malloc_error_break to debug
Hey Paul, thanks for your work so far! Just to be totally transparent: this is a big change. I feel like this needs a bit more time in the oven, and I want to think thoroughly about how to have one provider that uses two different APIs, as RubyLLM was not designed with that in mind. The Responses API not supporting audio is a big no-no. Backwards compatibility is king. Not being able to use a new shiny model that's only supported on the Responses API is a lot less inconvenient than suddenly pulling the rug out from under audio support. I'll take a look at this when it hurts.
This change would be backward compatible, as it falls back to the Chat Completions API when you provide audio.
Just a heads up, support for audio in the Responses API might be coming; I don't see it in the docs yet.
I'm interested in Responses API support so we can more easily connect to remote MCP servers.
What this does
As explained here, there are numerous reasons to use the newer Responses API instead of the Chat Completions API. Features we get by switching include:
- access to models such as `o4-mini-deep-research`, which aren't compatible with the Chat Completions API
There is one feature not yet available: audio inputs are not supported by the Responses API. The library therefore detects any audio inputs and falls back to the Chat Completions API when they are present.
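As a sketch of that fallback rule only (the method name and the attachment shape are illustrative assumptions, not the PR's actual code):

```ruby
# Pick the OpenAI endpoint for a request: any audio attachment forces the
# Chat Completions API, since the Responses API doesn't accept audio yet.
def openai_endpoint(messages)
  audio_present = messages.any? do |message|
    Array(message[:attachments]).include?(:audio)
  end

  audio_present ? "/v1/chat/completions" : "/v1/responses"
end

openai_endpoint([{ role: :user, content: "Hi there" }])
# => "/v1/responses"

openai_endpoint([{ role: :user, content: "Transcribe this", attachments: [:audio] }])
# => "/v1/chat/completions"
```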
Type of change
Scope check
Quality check
- I ran `overcommit --install` and all hooks pass
- I didn't modify auto-generated files (`models.json`, `aliases.json`)

API changes
Related issues
Replaces #290
Should enable resolution of #213