Caches

caches

Optional caching layer for language models.

Distinct from provider-based prompt caching.

Beta feature

This is a beta feature. Please be wary of deploying experimental code to production unless you've taken appropriate precautions.

A cache is useful for two reasons:

  1. It can save you money by reducing the number of API calls you make to the LLM provider when you request the same completion multiple times.
  2. It can speed up your application by serving repeated requests from the cache instead of waiting on the provider (a minimal wiring sketch follows this list).
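For example, a cache can be installed process-wide so that cache-aware models consult it automatically. This is a minimal sketch, assuming the set_llm_cache helper exported by langchain_core.globals:

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Install a single in-memory cache for the whole process.
# Cache-aware LLMs and chat models can then reuse stored generations for
# repeated (prompt, llm_string) pairs instead of calling the provider again.
set_llm_cache(InMemoryCache())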

InMemoryCache

Bases: BaseCache

Cache that stores LLM generations in memory.

Example
from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

# Initialize cache
cache = InMemoryCache()

# Update cache
cache.update(
    prompt="What is the capital of France?",
    llm_string="model='gpt-3.5-turbo', temperature=0.1",
    return_val=[Generation(text="Paris")],
)

# Lookup cache
result = cache.lookup(
    prompt="What is the capital of France?",
    llm_string="model='gpt-3.5-turbo', temperature=0.1",
)
# result is [Generation(text="Paris")]

METHOD    DESCRIPTION
__init__  Initialize with empty cache.
lookup    Look up based on prompt and llm_string.
update    Update cache based on prompt and llm_string.
clear     Clear cache.
alookup   Async look up based on prompt and llm_string.
aupdate   Async update cache based on prompt and llm_string.
aclear    Async clear cache.

__init__

__init__(*, maxsize: int | None = None) -> None

Initialize with empty cache.

PARAMETER DESCRIPTION
maxsize

The maximum number of items to store in the cache.

If None, the cache has no maximum size.

If the cache exceeds the maximum size, the oldest items are removed.

TYPE: int | None DEFAULT: None

RAISES DESCRIPTION
ValueError

If maxsize is less than or equal to 0.
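
A small sketch of the documented eviction behavior; the prompts and llm_string values below are purely illustrative:

from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

# maxsize is keyword-only; this cache holds at most two entries.
cache = InMemoryCache(maxsize=2)

cache.update("prompt-1", "model='example'", [Generation(text="a")])
cache.update("prompt-2", "model='example'", [Generation(text="b")])
cache.update("prompt-3", "model='example'", [Generation(text="c")])  # evicts the oldest entry

assert cache.lookup("prompt-1", "model='example'") is None  # evicted
hit = cache.lookup("prompt-3", "model='example'")
assert hit is not None and hit[0].text == "c"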

lookup

lookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None

Look up based on prompt and llm_string.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt.

In the case of a chat model, the prompt is a non-trivial serialization of the chat messages sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

TYPE: str

RETURNS DESCRIPTION
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value.
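
A minimal sketch of the miss/hit contract; the prompt and llm_string values here are illustrative only:

from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

cache = InMemoryCache()

# Miss: nothing has been stored for this (prompt, llm_string) pair yet.
assert cache.lookup("What is 2 + 2?", "model='example'") is None

# Hit: after an update with the same pair, lookup returns the cached generations.
cache.update("What is 2 + 2?", "model='example'", [Generation(text="4")])
hit = cache.lookup("What is 2 + 2?", "model='example'")
assert hit is not None and hit[0].text == "4"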

update

update(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None

Update cache based on prompt and llm_string.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt.

In the case of a chat model, the prompt is a non-trivial serialization of the chat messages sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

TYPE: str

return_val

The value to be cached.

The value is a list of Generation (or subclasses).

TYPE: RETURN_VAL_TYPE

clear

clear(**kwargs: Any) -> None

Clear cache.

alookup async

alookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None

Async look up based on prompt and llm_string.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt.

In the case of a chat model, the prompt is a non-trivial serialization of the chat messages sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

TYPE: str

RETURNS DESCRIPTION
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value.

aupdate async

aupdate(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None

Async update cache based on prompt and llm_string.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt.

In the case of a chat model, the prompt is a non-trivial serialization of the chat messages sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

TYPE: str

return_val

The value to be cached. The value is a list of Generation (or subclasses).

TYPE: RETURN_VAL_TYPE

aclear async

aclear(**kwargs: Any) -> None

Async clear cache.
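
Putting the async variants together: a short sketch that drives InMemoryCache directly, with illustrative prompt and llm_string values:

import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation


async def main() -> None:
    cache = InMemoryCache()

    # Store a generation with the async update.
    await cache.aupdate(
        prompt="What is the capital of France?",
        llm_string="model='gpt-3.5-turbo', temperature=0.1",
        return_val=[Generation(text="Paris")],
    )

    # Async lookup returns the cached generations on a hit.
    hit = await cache.alookup(
        prompt="What is the capital of France?",
        llm_string="model='gpt-3.5-turbo', temperature=0.1",
    )
    assert hit is not None and hit[0].text == "Paris"

    # After clearing, the same lookup is a miss.
    await cache.aclear()
    miss = await cache.alookup(
        prompt="What is the capital of France?",
        llm_string="model='gpt-3.5-turbo', temperature=0.1",
    )
    assert miss is None


asyncio.run(main())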

BaseCache

Bases: ABC

Interface for a caching layer for LLMs and Chat models.

The cache interface consists of the following methods:

  • lookup: Look up a value based on a prompt and llm_string.
  • update: Update the cache based on a prompt and llm_string.
  • clear: Clear the cache.

In addition, the cache interface provides an async version of each method.

The default implementations of the async methods run the synchronous methods in an executor. It's recommended to override them with native async implementations to avoid unnecessary overhead.
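
As a concrete illustration, here is a minimal sketch of a custom implementation: a hypothetical dict-backed DictCache, assuming the RETURN_VAL_TYPE alias is importable from langchain_core.caches (otherwise substitute Sequence[Generation]). Because the backing store is in-process, the async methods simply delegate to the sync ones rather than relying on the executor defaults:

from __future__ import annotations

from typing import Any

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache


class DictCache(BaseCache):
    """Hypothetical dict-backed cache keyed on the (prompt, llm_string) pair."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], RETURN_VAL_TYPE] = {}

    def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        # Return the cached generations on a hit, None on a miss.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()

    # The store is in-process, so delegating avoids the executor round-trip.
    async def alookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        return self.lookup(prompt, llm_string)

    async def aupdate(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self.update(prompt, llm_string, return_val)

    async def aclear(self, **kwargs: Any) -> None:
        self.clear(**kwargs)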

METHOD   DESCRIPTION
lookup   Look up based on prompt and llm_string.
update   Update cache based on prompt and llm_string.
clear    Clear the cache; may accept additional keyword arguments.
alookup  Async look up based on prompt and llm_string.
aupdate  Async update cache based on prompt and llm_string.
aclear   Async clear the cache; may accept additional keyword arguments.

lookup abstractmethod

lookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

PARAMETER DESCRIPTION
prompt

A string representation of the prompt.

In the case of a chat model, the prompt is a non-trivial serialization of the chat messages sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

RETURNS DESCRIPTION
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generation (or subclasses).
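
A sketch of one way an implementation might derive that key; make_cache_key is a hypothetical helper, and hashing is just one option alongside plain delimiter concatenation:

import hashlib


def make_cache_key(prompt: str, llm_string: str) -> str:
    # Join the 2-tuple with a delimiter unlikely to occur in either part,
    # then hash so the key stays bounded even for very long prompts.
    raw = prompt + "\x1d" + llm_string
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()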

update abstractmethod

update(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the lookup method.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt.

In the case of a chat model, the prompt is a non-trivial serialization of the chat messages sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

return_val

The value to be cached.

The value is a list of Generation (or subclasses).

TYPE: RETURN_VAL_TYPE

clear abstractmethod

clear(**kwargs: Any) -> None

Clear the cache. Implementations may accept additional keyword arguments.

alookup async

alookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None

Async look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

PARAMETER DESCRIPTION
prompt

A string representation of the prompt.

In the case of a chat model, the prompt is a non-trivial serialization of the chat messages sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

RETURNS DESCRIPTION
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generation (or subclasses).

aupdate async

aupdate(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None

Async update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the lookup method.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt.

In the case of a chat model, the prompt is a non-trivial serialization of the chat messages sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

return_val

The value to be cached. The value is a list of Generation (or subclasses).

TYPE: RETURN_VAL_TYPE

aclear async

aclear(**kwargs: Any) -> None

Async clear the cache. Implementations may accept additional keyword arguments.