OceanBase offers features such as vector storage, vector indexing, and embedding-based vector search. You can store vectorized data in OceanBase Database, making it available for fast and efficient search.
Google Gemini AI is a series of multimodal large language models (LLMs) developed by Google. It is designed to understand and process various types of data, including text, code, images, audio, and video.
Prerequisites
You have deployed OceanBase Database V4.4.0 or later and created a MySQL-compatible tenant. Once the tenant is ready, continue with the steps below.
Your environment includes an active MySQL-compatible tenant, a database in that tenant, and a user account with read and write privileges on the database.
Python 3.11 or above is installed.
Required dependencies are installed:
python3 -m pip install pyobvector requests google-genai tqdm
Make sure you have set the ob_vector_memory_limit_percentage parameter in your tenant to enable vector search. A recommended value is 30. For details on configuring this parameter, refer to ob_vector_memory_limit_percentage.
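For example, the parameter can be set with a SQL statement run in the tenant. The following is a minimal sketch, assuming a connection account that is allowed to modify tenant parameters; the connection values are placeholders, and the statement is issued through pyobvector's perform_raw_text_sql helper:
from pyobvector import ObVecClient

# Placeholder connection values for illustration; replace them with your own.
admin_client = ObVecClient(
    uri="127.0.0.1:2881",
    user="root@mysql_tenant",
    password="******",
    db_name="test",
)

# Allocate 30% of the tenant's memory to vector indexes (the recommended value above).
admin_client.perform_raw_text_sql("ALTER SYSTEM SET ob_vector_memory_limit_percentage = 30")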
Step 1: Get your database connection information
Reach out to your OceanBase administrator or deployment team to obtain the database connection string, for example:
obclient -h$host -P$port -u$user_name -p$password -D$database_name
Parameters:
$host: The IP address for connecting to OceanBase Database. If you are using OceanBase Database Proxy (ODP), use the ODP address. For direct connections, use the IP address of an OBServer node.
$port: The port number for connecting to OceanBase Database. The default for ODP is 2883, which can be customized during ODP deployment. For direct connections, the default is 2881, which can be customized during OceanBase Database deployment.
$database_name: The name of the database you want to access.
Notice: The user connecting to the tenant must have the CREATE, INSERT, DROP, and SELECT privileges on the database. For more details on user privileges, see privilege types in MySQL-compatible mode.
$user_name: The user account for connecting to the tenant. For ODP connections, common formats are username@tenant_name#cluster_name or cluster_name:tenant_name:username; for direct connections, use username@tenant_name.
$password: The password for the account.
For more details about connection strings, see Connect to an OceanBase tenant using OBClient.
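For reference, these values map directly onto the arguments that the Python client in the next step expects. The snippet below is a hypothetical example with placeholder values, where uri is $host:$port and user follows one of the $user_name formats described above:
from pyobvector import ObVecClient

# Hypothetical placeholder values; substitute your own connection information.
client = ObVecClient(
    uri="127.0.0.1:2881",            # $host:$port (direct connection)
    user="test_user@mysql_tenant",   # $user_name in username@tenant_name format
    password="******",               # $password
    db_name="test",                  # $database_name
)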
Step 2: Build your AI assistant
Set environment variables
Obtain your Google Gemini API key and OceanBase Database connection information, and set the following environment variables.
export OCEANBASE_DATABASE_URL=YOUR_OCEANBASE_DATABASE_URL
export OCEANBASE_DATABASE_USER=YOUR_OCEANBASE_DATABASE_USER
export OCEANBASE_DATABASE_DB_NAME=YOUR_OCEANBASE_DATABASE_DB_NAME
export OCEANBASE_DATABASE_PASSWORD=YOUR_OCEANBASE_DATABASE_PASSWORD
export GEMINI_API_KEY=YOUR_GEMINI_API_KEY
Sample code snippets
Load data
Here is an example of using the Google Gemini AI embedding API to generate vector data with the text-embedding-004 model:
from google import genai
from google.genai import types
from tqdm import tqdm
import os
from sqlalchemy import Column, Integer, String
from pyobvector import ObVecClient, VECTOR, IndexParam, l2_distance

documents = [
    "Artificial intelligence was founded as an academic discipline in 1956.",
    "Alan Turing was the first person to conduct substantial research in AI.",
    "Born in Maida Vale, London, Turing was raised in southern England.",
]

genai_client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

def embed_documents_in_batches(documents, batch_size=100):
    # Embed the documents in batches to keep each request within API limits.
    all_embeddings = []
    for i in range(0, len(documents), batch_size):
        batch = documents[i:i + batch_size]
        response = genai_client.models.embed_content(
            model="text-embedding-004",
            contents=batch,
            config=types.EmbedContentConfig(output_dimensionality=768),
        )
        all_embeddings.extend(response.embeddings)
    return all_embeddings

embeddings = embed_documents_in_batches(documents)

# Pair each document with its 768-dimensional embedding vector.
data = []
for i, line in enumerate(tqdm(documents, desc="Creating embeddings")):
    data.append({"text": line, "embedding": embeddings[i].values})
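Before loading the rows into the database, you can optionally verify that every document received a 768-dimensional embedding. This is a small sketch based on the data list built above:
# Optional sanity check: one embedding per document, each 768-dimensional.
assert len(data) == len(documents)
assert all(len(row["embedding"]) == 768 for row in data)
print(f"Prepared {len(data)} rows with embedding dimension {len(data[0]['embedding'])}")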
Define the vector table structure and store the vector data in OceanBase
Create a table named gemini_oceanbase_demo_documents that contains a text column for storing text, an embedding column for storing the embedding vectors, and a vector index on the embedding column. Then store the vector data in OceanBase Database:
OCEANBASE_DATABASE_URL = os.getenv('OCEANBASE_DATABASE_URL')
OCEANBASE_DATABASE_USER = os.getenv('OCEANBASE_DATABASE_USER')
OCEANBASE_DATABASE_DB_NAME = os.getenv('OCEANBASE_DATABASE_DB_NAME')
OCEANBASE_DATABASE_PASSWORD = os.getenv('OCEANBASE_DATABASE_PASSWORD')

client = ObVecClient(
    uri=OCEANBASE_DATABASE_URL,
    user=OCEANBASE_DATABASE_USER,
    password=OCEANBASE_DATABASE_PASSWORD,
    db_name=OCEANBASE_DATABASE_DB_NAME,
)

table_name = "gemini_oceanbase_demo_documents"
client.drop_table_if_exist(table_name)

# Table schema: auto-increment primary key, document text, and its embedding vector.
cols = [
    Column("id", Integer, primary_key=True, autoincrement=True),
    Column("text", String(10000), nullable=False),
    Column("embedding", VECTOR(768)),
]

# Create an HNSW vector index on the embedding column, using L2 distance.
vector_index_params = IndexParam(
    index_name="idx_question_embedding",
    field_name="embedding",
    index_type="HNSW",
    distance_metric="l2",
)

client.create_table_with_index_params(
    table_name=table_name,
    columns=cols,
    vidxs=[vector_index_params],
)

print('- Inserting Data to OceanBase...')
client.insert(table_name, data=data)
Perform semantic search
Generate an embedding vector for the query text using the text-embedding-004 embedding model. Then search for the most relevant document based on the L2 distance between the query's embedding vector and each embedding vector in the vector table:
question = "When was artificial intelligence founded?"

# Embed the query with the same model used for the documents.
quest_embed = genai_client.models.embed_content(model="text-embedding-004", contents=question)

search_res = client.ann_search(
    table_name,
    vec_data=quest_embed.embeddings[0].values,
    vec_column_name="embedding",
    distance_func=l2_distance,
    with_dist=True,
    topk=1,
    output_column_names=["id", "text"],
)

print('- The Most Relevant Document and Its Distance to the Query:')
for row in search_res.fetchall():
    print(f' - ID: {row[0]}\n'
          f' content: {row[1]}\n'
          f' distance: {row[2]}')
Expected result
- ID: 1
content: Artificial intelligence was founded as an academic discipline in 1956.
distance: 0.6019276093082409