SambaNova
SambaNova's SambaStudio is a platform for running your own open-source models.
This example goes over how to use LangChain to interact with SambaNova models.
SambaStudio
SambaStudio allows you to train, run batch inference jobs, and deploy online inference endpoints to run open-source models that you fine-tuned yourself.
A SambaStudio environment is required to deploy a model. Get more information at sambanova.ai/products/enterprise-ai-platform-sambanova-suite.
The sseclient-py package is required to run streaming predictions:
%pip install --quiet sseclient-py==1.8.0
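The langchain-community package provides the SambaStudio integration imported below; install it as well if you have not already:
%pip install --quiet langchain-community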
Register your environment variables:
import os
sambastudio_base_url = "<Your SambaStudio environment URL>"
sambastudio_base_uri = "<Your SambaStudio endpoint base URI>"  # optional; defaults to "api/predict/generic"
sambastudio_project_id = "<Your SambaStudio project id>"
sambastudio_endpoint_id = "<Your SambaStudio endpoint id>"
sambastudio_api_key = "<Your SambaStudio endpoint API key>"
# Set the environment variables
os.environ["SAMBASTUDIO_BASE_URL"] = sambastudio_base_url
os.environ["SAMBASTUDIO_BASE_URI"] = sambastudio_base_uri
os.environ["SAMBASTUDIO_PROJECT_ID"] = sambastudio_project_id
os.environ["SAMBASTUDIO_ENDPOINT_ID"] = sambastudio_endpoint_id
os.environ["SAMBASTUDIO_API_KEY"] = sambastudio_api_key
Call SambaStudio models directly from LangChain!
from langchain_community.llms.sambanova import SambaStudio
llm = SambaStudio(
    streaming=False,
    model_kwargs={
        "do_sample": True,
        "max_tokens_to_generate": 1000,
        "temperature": 0.01,
        # "repetition_penalty": 1.0,
        # "top_k": 50,
        # "top_logprobs": 0,
        # "top_p": 1.0
    },
)
print(llm.invoke("Why should I use open source models?"))
API Reference: SambaStudio
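Because SambaStudio is a standard LangChain LLM, it composes with the rest of the framework; here is a minimal sketch chaining it to a prompt template (the prompt wording is illustrative):
from langchain_core.prompts import PromptTemplate

# Build a simple prompt and pipe it into the LLM defined above
prompt = PromptTemplate.from_template("Answer in one sentence: {question}")
chain = prompt | llm
print(chain.invoke({"question": "Why should I use open source models?"}))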
# Streaming response
from langchain_community.llms.sambanova import SambaStudio
llm = SambaStudio(
    streaming=True,
    model_kwargs={
        "do_sample": True,
        "max_tokens_to_generate": 1000,
        "temperature": 0.01,
        # "repetition_penalty": 1.0,
        # "top_k": 50,
        # "top_logprobs": 0,
        # "top_p": 1.0
    },
)

for chunk in llm.stream("Why should I use open source models?"):
    print(chunk, end="", flush=True)
API Reference: SambaStudio
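The same instance also inherits the generic Runnable helpers, so several prompts can be sent in one call; a small sketch using batch (the prompts are illustrative):
# Send multiple prompts in a single call
responses = llm.batch(
    [
        "Why should I use open source models?",
        "Name one benefit of fine-tuning a model.",
    ]
)
for response in responses:
    print(response)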
You can also call an expert model from a CoE (Composition of Experts) endpoint:
# Using a CoE endpoint
from langchain_community.llms.sambanova import SambaStudio
llm = SambaStudio(
    streaming=False,
    model_kwargs={
        "do_sample": True,
        "max_tokens_to_generate": 1000,
        "temperature": 0.01,
        "process_prompt": False,
        "select_expert": "Meta-Llama-3-8B-Instruct",
        # "repetition_penalty": 1.0,
        # "top_k": 50,
        # "top_logprobs": 0,
        # "top_p": 1.0
    },
)
print(llm.invoke("Why should I use open source models?"))
API Reference: SambaStudio
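The select_expert value routes each request to one of the expert models hosted on the CoE endpoint, so switching experts only requires changing that field; a sketch with a different expert (the expert name below is hypothetical and must match one deployed on your endpoint):
llm = SambaStudio(
    streaming=False,
    model_kwargs={
        "do_sample": True,
        "max_tokens_to_generate": 1000,
        "temperature": 0.01,
        "process_prompt": False,
        "select_expert": "Mistral-7B-Instruct-v0.2",  # hypothetical expert name
    },
)
print(llm.invoke("Summarize the benefits of open models in two sentences."))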
Related
- LLM conceptual guide
- LLM how-to guides