Oracle Cloud Infrastructure Data Science
Oracle Cloud Infrastructure (OCI) Data Science is a fully managed, serverless platform for data science teams to build, train, and manage machine learning models in Oracle Cloud Infrastructure.
It offers AI Quick Actions, which can be used to deploy, evaluate, and fine-tune foundation models in OCI Data Science. AI Quick Actions target users who want to quickly leverage the capabilities of AI. They aim to expand the reach of foundation models to a broader set of users by providing a streamlined, code-free, and efficient environment for working with foundation models. AI Quick Actions can be accessed from the Data Science Notebook.
Detailed documentation on how to deploy LLMs in OCI Data Science using AI Quick Actions is available here and here.
This notebook explains how to use OCI's Data Science models with LlamaIndex.
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
```
%pip install llama-index-llms-oci-data-science
!pip install llama-index
```

You will also need to install the oracle-ads SDK.

```
!pip install -U oracle-ads
```

Authentication
The authentication methods supported for LlamaIndex are equivalent to those used with other OCI services and follow the standard SDK authentication methods, specifically API key, session token, instance principal, and resource principal. More details can be found here. Make sure to have the required policies to access the OCI Data Science Model Deployment endpoint. The oracle-ads SDK helps simplify authentication within OCI Data Science.
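The authentication method is set once with `ads.set_auth` before the LLM interface is created. The sketch below is only an illustration of the four standard options mentioned above; the profile names are placeholders, and you should call just the one that matches your environment.

```python
import ads

# API key from ~/.oci/config (default profile shown; adjust as needed)
ads.set_auth(auth="api_key", profile="DEFAULT")

# Session token created with `oci session authenticate`
ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

# Resource principal, e.g. inside an OCI Data Science notebook session or job
ads.set_auth(auth="resource_principal")

# Instance principal, e.g. on an OCI compute instance covered by a dynamic group policy
ads.set_auth(auth="instance_principal")
```

The examples that follow use the session (security) token method; swap in whichever option fits your setup.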
Basic Usage
Using LLMs offered by OCI Data Science AI with LlamaIndex only requires you to initialize the OCIDataScience interface with your Data Science Model Deployment endpoint and model ID. By default, all models deployed with AI Quick Actions get the odsc-model ID. However, this ID can be changed during deployment.
Call complete with a prompt
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

response = llm.complete("Tell me a joke")
print(response)
```

Call chat with a list of messages
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.base.llms.types import ChatMessage

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

response = llm.chat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
        ChatMessage(
            role="assistant", content="Why did the chicken cross the road?"
        ),
        ChatMessage(role="user", content="I don't know, why?"),
    ]
)
print(response)
```

Streaming
Using stream_complete endpoint
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

for chunk in llm.stream_complete("Tell me a joke"):
    print(chunk.delta, end="")
```

Using stream_chat endpoint
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.base.llms.types import ChatMessage

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

response = llm.stream_chat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
        ChatMessage(
            role="assistant", content="Why did the chicken cross the road?"
        ),
        ChatMessage(role="user", content="I don't know, why?"),
    ]
)

for chunk in response:
    print(chunk.delta, end="")
```

Call acomplete with a prompt
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

response = await llm.acomplete("Tell me a joke")
print(response)
```

Call achat with a list of messages
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.base.llms.types import ChatMessage

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

response = await llm.achat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
        ChatMessage(
            role="assistant", content="Why did the chicken cross the road?"
        ),
        ChatMessage(role="user", content="I don't know, why?"),
    ]
)
print(response)
```

Using astream_complete endpoint
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

async for chunk in await llm.astream_complete("Tell me a joke"):
    print(chunk.delta, end="")
```

Using astream_chat endpoint
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.base.llms.types import ChatMessage

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

response = await llm.astream_chat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
        ChatMessage(
            role="assistant", content="Why did the chicken cross the road?"
        ),
        ChatMessage(role="user", content="I don't know, why?"),
    ]
)

async for chunk in response:
    print(chunk.delta, end="")
```

Configure Model
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.base.llms.types import ChatMessage

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
    temperature=0.2,
    max_tokens=500,
    timeout=120,
    context_window=2500,
    additional_kwargs={
        "top_p": 0.75,
        "logprobs": True,
        "top_logprobs": 3,
    },
)

response = llm.chat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
    ]
)
print(response)
```

Function Calling
AI Quick Actions offers prebuilt service containers that make deploying and serving a large language model very easy. The service container hosts the model with either vLLM (a high-throughput and memory-efficient inference and serving engine for LLMs) or TGI (a high-performance text generation server for popular open-source LLMs), and the endpoint it creates supports the OpenAI API protocol. This allows the model deployment to be used as a drop-in replacement for applications that use the OpenAI API. If the deployed model supports function calling, integration with LlamaIndex tools through the predict_and_call function on the llm lets you attach any tools and let the LLM decide which tools to call (if any), as shown in the example below.
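Because the endpoint speaks the OpenAI protocol, it can also be called directly over HTTP with an OCI-signed request. The sketch below is only an illustration and not part of the original example: the payload fields mirror the OpenAI chat format described above, and the exact route and body may need adjusting to match your deployed container.

```python
import requests

import ads
from ads.common.auth import default_signer

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

# oracle-ads returns an OCI request signer that requests can use as its auth callable
signer = default_signer()["signer"]

# OpenAI-style chat payload (assumed shape; adjust to your container's expected body)
payload = {
    "model": "odsc-llm",
    "messages": [{"role": "user", "content": "Tell me a joke"}],
    "max_tokens": 128,
}

response = requests.post(
    "https://<MD_OCID>/predict",  # the same Model Deployment endpoint used above
    json=payload,
    auth=signer,
)
print(response.json())
```

The example that follows stays within the LlamaIndex interface and uses predict_and_call to attach tools to the llm.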
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.tools import FunctionTool

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
    temperature=0.2,
    max_tokens=500,
    timeout=120,
    context_window=2500,
    additional_kwargs={
        "top_p": 0.75,
        "logprobs": True,
        "top_logprobs": 3,
    },
)


def multiply(a: float, b: float) -> float:
    print(f"---> {a} * {b}")
    return a * b


def add(a: float, b: float) -> float:
    print(f"---> {a} + {b}")
    return a + b


def subtract(a: float, b: float) -> float:
    print(f"---> {a} - {b}")
    return a - b


def divide(a: float, b: float) -> float:
    print(f"---> {a} / {b}")
    return a / b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)
sub_tool = FunctionTool.from_defaults(fn=subtract)
divide_tool = FunctionTool.from_defaults(fn=divide)

response = llm.predict_and_call(
    [multiply_tool, add_tool, sub_tool, divide_tool],
    user_msg="Calculate the result of `8 + 2 - 6`.",
    verbose=True,
)
print(response)
```

Using FunctionAgent
```python
import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.tools import FunctionTool
from llama_index.core.agent.workflow import FunctionAgent

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
    temperature=0.2,
    max_tokens=500,
    timeout=120,
    context_window=2500,
    additional_kwargs={
        "top_p": 0.75,
        "logprobs": True,
        "top_logprobs": 3,
    },
)


def multiply(a: float, b: float) -> float:
    print(f"---> {a} * {b}")
    return a * b


def add(a: float, b: float) -> float:
    print(f"---> {a} + {b}")
    return a + b


def subtract(a: float, b: float) -> float:
    print(f"---> {a} - {b}")
    return a - b


def divide(a: float, b: float) -> float:
    print(f"---> {a} / {b}")
    return a / b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)
sub_tool = FunctionTool.from_defaults(fn=subtract)
divide_tool = FunctionTool.from_defaults(fn=divide)

agent = FunctionAgent(
    tools=[multiply_tool, add_tool, sub_tool, divide_tool],
    llm=llm,
)
response = await agent.run(
    "Calculate the result of `8 + 2 - 6`. Use tools. Return the calculated result."
)
print(response)
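```

The async examples above (acomplete, achat, astream_complete, astream_chat, and agent.run) use top-level await, which works in a notebook. In a plain Python script you would wrap the coroutine in asyncio.run; a minimal sketch reusing the agent defined above:

```python
import asyncio


async def main() -> None:
    # Same call as above, but driven by an explicit event loop
    response = await agent.run(
        "Calculate the result of `8 + 2 - 6`. Use tools. Return the calculated result."
    )
    print(response)


asyncio.run(main())
```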