Google Cloud SQL for PostgreSQL
Cloud SQL is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers MySQL, PostgreSQL, and SQL Server database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's LangChain integrations.
This notebook goes over how to use Cloud SQL for PostgreSQL to store vector embeddings with the PostgresVectorStore class.
Learn more about the package on GitHub.
Before you begin
To run this notebook, you will need to do the following:
- Create a Google Cloud Project
- Enable the Cloud SQL Admin API.
- Create a Cloud SQL instance.
- Create a Cloud SQL database.
- Add a User to the database.
🦜🔗 Library Installation
Install the integration library, langchain-google-cloud-sql-pg, and the library for the embedding service, langchain-google-vertexai.
%pip install --upgrade --quiet langchain-google-cloud-sql-pg langchain-google-vertexai
Colab only: Uncomment the following cell to restart the kernel, or use the button to restart it. For Vertex AI Workbench, you can restart the terminal using the button at the top.
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython
# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
🔐 Authentication
Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.
- If you are using Colab to run this notebook, use the cell below and continue.
- If you are using Vertex AI Workbench, check out the setup instructions here.
from google.colab import auth
auth.authenticate_user()
☁ Set Your Google Cloud Project
Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.
If you don't know your project ID, try the following:
- Run gcloud config list.
- Run gcloud projects list.
- See the support page: Locate the project ID.
# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "my-project-id" # @param {type:"string"}
# Set the project id
!gcloud config set project {PROJECT_ID}
Basic Usage
Set Cloud SQL database values
Find your database values in the Cloud SQL Instances page.
# @title Set Your Values Here { display-mode: "form" }
REGION = "us-central1" # @param {type: "string"}
INSTANCE = "my-pg-instance" # @param {type: "string"}
DATABASE = "my-database" # @param {type: "string"}
TABLE_NAME = "vector_store" # @param {type: "string"}
PostgresEngine Connection Pool
One of the requirements and arguments to establish Cloud SQL as a vector store is a PostgresEngine object. The PostgresEngine configures a connection pool to your Cloud SQL database, enabling successful connections from your application and following industry best practices.
To create a PostgresEngine using PostgresEngine.from_instance() you need to provide only 4 things:
- project_id: Project ID of the Google Cloud Project where the Cloud SQL instance is located.
- region: Region where the Cloud SQL instance is located.
- instance: The name of the Cloud SQL instance.
- database: The name of the database to connect to on the Cloud SQL instance.
By default, IAM database authentication is used as the method of database authentication. This library uses the IAM principal belonging to the Application Default Credentials (ADC) sourced from the environment.
For more information on IAM database authentication, see the Cloud SQL IAM database authentication documentation.
Optionally, built-in database authentication using a username and password can also be used to access the Cloud SQL database. Just provide the optional user and password arguments to PostgresEngine.from_instance():
- user: Database user to use for built-in database authentication and login.
- password: Database password to use for built-in database authentication and login.
"Note: This tutorial demonstrates the async interface. All async methods have corresponding sync methods."
from langchain_google_cloud_sql_pg import PostgresEngine
engine = await PostgresEngine.afrom_instance(
project_id=PROJECT_ID, region=REGION, instance=INSTANCE, database=DATABASE
)
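If you prefer built-in database authentication instead of IAM, pass the optional user and password arguments described above. A minimal sketch; the credential values below are placeholders, not real accounts:
# Built-in database authentication (optional alternative to IAM)
engine_builtin = await PostgresEngine.afrom_instance(
    project_id=PROJECT_ID,
    region=REGION,
    instance=INSTANCE,
    database=DATABASE,
    user="my-db-user",  # placeholder database user
    password="my-db-password",  # placeholder database password
)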
Initialize a table
The PostgresVectorStore class requires a database table. The PostgresEngine engine has a helper method init_vectorstore_table() that can be used to create a table with the proper schema for you.
from langchain_google_cloud_sql_pg import PostgresEngine
await engine.ainit_vectorstore_table(
table_name=TABLE_NAME,
vector_size=768, # Vector size for VertexAI model(textembedding-gecko@latest)
)
Create an embedding class instance
You can use any LangChain embeddings model.
You may need to enable the Vertex AI API to use VertexAIEmbeddings. We recommend setting the embedding model's version for production; learn more about the Text embeddings models.
# enable Vertex AI API
!gcloud services enable aiplatform.googleapis.com
from langchain_google_vertexai import VertexAIEmbeddings
embedding = VertexAIEmbeddings(
model_name="textembedding-gecko@latest", project=PROJECT_ID
)
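The table you initialized must use a vector_size that matches the dimensionality of this embedding model (768 for textembedding-gecko@latest). A quick sanity check, using an arbitrary test string:
# Confirm the embedding dimensionality matches the table's vector_size
test_vector = embedding.embed_query("test")
print(len(test_vector))  # expected: 768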
Initialize a default PostgresVectorStore
from langchain_google_cloud_sql_pg import PostgresVectorStore
store = await PostgresVectorStore.create( # Use .create() to initialize an async vector store
engine=engine,
table_name=TABLE_NAME,
embedding_service=embedding,
)
Add texts
import uuid
all_texts = ["Apples and oranges", "Cars and airplanes", "Pineapple", "Train", "Banana"]
metadatas = [{"len": len(t)} for t in all_texts]
ids = [str(uuid.uuid4()) for _ in all_texts]
await store.aadd_texts(all_texts, metadatas=metadatas, ids=ids)
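LangChain Document objects can also be added directly. A short sketch using the standard aadd_documents method from the base VectorStore interface; the document contents here are illustrative:
from langchain_core.documents import Document

new_docs = [
    Document(page_content="Grapes and melons", metadata={"len": len("Grapes and melons")}),
    Document(page_content="Bicycles", metadata={"len": len("Bicycles")}),
]
await store.aadd_documents(new_docs, ids=[str(uuid.uuid4()) for _ in new_docs])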
Delete texts
await store.adelete([ids[1]])
Search for documents
query = "I'd like a fruit."
docs = await store.asimilarity_search(query)
print(docs)
Search for documents by vector
query_vector = embedding.embed_query(query)
docs = await store.asimilarity_search_by_vector(query_vector, k=2)
print(docs)
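To inspect how close each match is, the similarity scores can be returned alongside the documents. A brief sketch, assuming the standard asimilarity_search_with_score method is available on the store:
docs_with_scores = await store.asimilarity_search_with_score(query, k=2)
for doc, score in docs_with_scores:
    print(doc.page_content, score)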
Add an Index
Speed up vector search queries by applying a vector index. Learn more about vector indexes.
from langchain_google_cloud_sql_pg.indexes import IVFFlatIndex
index = IVFFlatIndex()
await store.aapply_vector_index(index)
Re-index
await store.areindex() # Re-index using default index name
Remove an index
await store.adrop_vector_index()  # Delete index using default name
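With the default index removed, a different index type can be applied instead. A sketch using an HNSW index, assuming the indexes module exposes HNSWIndex and your pgvector version supports HNSW; the tuning parameters are illustrative:
from langchain_google_cloud_sql_pg.indexes import HNSWIndex

hnsw_index = HNSWIndex(m=16, ef_construction=64)  # illustrative HNSW tuning values
await store.aapply_vector_index(hnsw_index)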
Create a custom Vector Store
A Vector Store can take advantage of relational data to filter similarity searches.
Create a table with custom metadata columns.
from langchain_google_cloud_sql_pg import Column
# Set table name
TABLE_NAME = "vectorstore_custom"
await engine.ainit_vectorstore_table(
table_name=TABLE_NAME,
vector_size=768, # VertexAI model: textembedding-gecko@latest
metadata_columns=[Column("len", "INTEGER")],
)
# Initialize PostgresVectorStore
custom_store = await PostgresVectorStore.create(
engine=engine,
table_name=TABLE_NAME,
embedding_service=embedding,
metadata_columns=["len"],
# Connect to an existing VectorStore by customizing the table schema:
# id_column="uuid",
# content_column="documents",
# embedding_column="vectors",
)
Search for documents with metadata filter
import uuid
# Add texts to the Vector Store
all_texts = ["Apples and oranges", "Cars and airplanes", "Pineapple", "Train", "Banana"]
metadatas = [{"len": len(t)} for t in all_texts]
ids = [str(uuid.uuid4()) for _ in all_texts]
await custom_store.aadd_texts(all_texts, metadatas=metadatas, ids=ids)
# Use filter on search
docs = await custom_store.asimilarity_search_by_vector(query_vector, filter="len >= 6")
print(docs)
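The vector store can also be exposed as a LangChain retriever for use in chains. A minimal sketch using the standard as_retriever method; the search_kwargs value is illustrative:
retriever = custom_store.as_retriever(search_kwargs={"k": 2})
retrieved_docs = await retriever.ainvoke("I'd like a fruit.")
print(retrieved_docs)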
Related
- Vector store conceptual guide
- Vector store how-to guides