The OceanBase Spark Connector has supported the Spark catalog since version 1.1. With the Spark catalog, you can access and operate on OceanBase Database in a more concise and consistent manner.
This topic describes how to configure and use the OceanBase Spark catalog. After you complete this tutorial, you will be able to access and operate on OceanBase Database through the Spark catalog.
Prerequisites
You have deployed OceanBase Database and created a MySQL-compatible user tenant. For more information about how to create a user tenant, see Create a user tenant.
Procedure
Step 1: Obtain the database connection information
Contact the deployment personnel or administrator of OceanBase Database to obtain the database connection string. For example:
obclient -h$host -P$port -u$user_name -p$password -D$database_name
Parameter description:
$host: the IP address for connecting to OceanBase Database. For a connection through OceanBase Database Proxy (ODP), use the IP address of an ODP. For a direct connection, use the IP address of an OBServer node.

$port: the port for connecting to OceanBase Database. For a connection through ODP, the default port is 2883, which can be customized during ODP deployment. For a direct connection, the default port is 2881, which can be customized during OceanBase Database deployment.

$database_name: the name of the database to be accessed.

Notice: The user for connecting to the tenant must have the CREATE, INSERT, DROP, and SELECT privileges on the database. For more information about user privileges, see Privilege types in MySQL-compatible mode.

$user_name: the tenant connection account. For a connection through ODP, the common formats are username@tenant name#cluster name and cluster name:tenant name:username. For a direct connection, the format is username@tenant name.

$password: the password of the account.
For more information about the connection string, see Connect to an OceanBase tenant by using OBClient.
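The same connection parameters are reused later when you configure the Spark catalog, where they become a JDBC URL rather than an obclient command line. As a quick illustration, here is a small Python sketch (the host and port are the placeholder values used throughout this topic, not real endpoints):

```python
# Sketch: assemble the JDBC URL that the Spark catalog configuration will
# use from the same $host and $port parameters as the obclient command.
def build_jdbc_url(host: str, port: int) -> str:
    # The OceanBase Spark catalog connects over the MySQL protocol,
    # so the URL uses the jdbc:mysql:// scheme.
    return f"jdbc:mysql://{host}:{port}"

print(build_jdbc_url("10.10.10.1", 2881))  # jdbc:mysql://10.10.10.1:2881
```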
Step 2: Prepare the Spark environment
Download Spark 3.4.4, extract it to a directory of your choice, and set the SPARK_HOME environment variable to the extracted directory. Follow these steps:

1. Download Spark 3.4.4.

wget https://dlcdn.apache.org/spark/spark-3.4.4/spark-3.4.4-bin-hadoop3.tgz

2. Extract spark-3.4.4-bin-hadoop3.tgz to the specified directory.

tar -zxvf spark-3.4.4-bin-hadoop3.tgz -C $SPARK_HOME

Example: extract spark-3.4.4-bin-hadoop3.tgz to the directory /home/admin/test_spark_catalog.

tar -zxvf spark-3.4.4-bin-hadoop3.tgz -C /home/admin/test_spark_catalog

3. Navigate to the extracted directory and set the SPARK_HOME environment variable to it.

export SPARK_HOME=$(pwd)

Example:

export SPARK_HOME=/home/admin/test_spark_catalog/spark-3.4.4-bin-hadoop3
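Before moving on, you can sanity-check that SPARK_HOME points at a real Spark distribution. The helper below is not part of Spark; it is a minimal sketch under the assumption that a valid installation contains the bin/spark-sql launcher and a jars directory:

```python
import os

def looks_like_spark_home(path: str) -> bool:
    """Heuristic check that `path` is an extracted Spark distribution:
    it should contain the spark-sql launcher and the jars directory."""
    return (os.path.isfile(os.path.join(path, "bin", "spark-sql"))
            and os.path.isdir(os.path.join(path, "jars")))

spark_home = os.environ.get("SPARK_HOME", "")
if spark_home and looks_like_spark_home(spark_home):
    print(f"SPARK_HOME looks valid: {spark_home}")
else:
    print("SPARK_HOME is unset or does not point at a Spark distribution")
```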
Step 3: Configure OceanBase Catalog
Download the OceanBase Spark Connector.

1. Download the spark-connector-oceanbase JAR file.

2. Move the JAR file to the jars directory of Spark Home:

cp spark-connector-oceanbase-3.4_2.12-1.1.jar $SPARK_HOME/jars/

Alternatively, after decompressing spark-3.4.4-bin-hadoop3.tgz, you can download the spark-connector-oceanbase JAR file directly into the jars directory:

1. Navigate to the jars directory of Spark Home. Example:

cd /home/admin/test_spark_catalog/spark-3.4.4-bin-hadoop3/jars

2. Download the spark-connector-oceanbase JAR file.

wget https://repo1.maven.org/maven2/com/oceanbase/spark-connector-oceanbase-3.4_2.12/1.1/spark-connector-oceanbase-3.4_2.12-1.1.jar
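The download URL above follows the standard Maven repository layout (groupId path, then artifactId, version, and artifact file name). If you need the connector for a different Spark or Scala version, the URL can be assembled the same way; which version combinations are actually published is an assumption to verify on Maven Central. A small sketch:

```python
# Sketch: build a Maven Central download URL from artifact coordinates.
# Standard repository layout:
#   <repo>/<groupId with '.' replaced by '/'>/<artifactId>/<version>/<artifactId>-<version>.jar
def maven_jar_url(group_id: str, artifact_id: str, version: str) -> str:
    repo = "https://repo1.maven.org/maven2"
    group_path = group_id.replace(".", "/")
    return f"{repo}/{group_path}/{artifact_id}/{version}/{artifact_id}-{version}.jar"

# Reproduces the URL used above for Spark 3.4 / Scala 2.12 / connector 1.1.
print(maven_jar_url("com.oceanbase", "spark-connector-oceanbase-3.4_2.12", "1.1"))
```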
Download the MySQL driver.

1. Download the MySQL driver JAR file.

2. Move the JAR file to the jars directory of Spark Home:

cp mysql-connector-j-8.2.0.jar $SPARK_HOME/jars/

Alternatively, after decompressing spark-3.4.4-bin-hadoop3.tgz, you can download the MySQL driver JAR file directly into the jars directory:

1. Navigate to the jars directory of Spark Home. Example:

cd /home/admin/test_spark_catalog/spark-3.4.4-bin-hadoop3/jars

2. Download the MySQL driver.

wget https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/8.2.0/mysql-connector-j-8.2.0.jar
Edit the Spark configuration file, which is located at $SPARK_HOME/conf/spark-defaults.conf by default. Example:

Note: The IP address and password in the example are desensitized. Replace them with the actual values in your environment.

1. Navigate to the $SPARK_HOME/conf directory.

cd /home/admin/test_spark_catalog/spark-3.4.4-bin-hadoop3/conf

2. Configure OceanBase Catalog.

vi spark-defaults.conf

Enter the following content:

spark.sql.catalog.ob=com.oceanbase.spark.catalog.OceanBaseCatalog
spark.sql.catalog.ob.url=jdbc:mysql://10.10.10.1:2881
spark.sql.catalog.ob.username=root@mysql001
spark.sql.catalog.ob.password=******
spark.sql.catalog.ob.schema-name=test
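If you prefer not to edit spark-defaults.conf, the same properties can be passed on the command line through repeated --conf options to spark-sql or spark-submit. A minimal sketch that assembles those flags from a dictionary; the URL, credentials, and schema below are the placeholder values from the example configuration:

```python
# Sketch: turn the catalog properties into `--conf key=value` flags for
# spark-sql / spark-submit. Values are the example placeholders; replace
# them with your own before use.
catalog_conf = {
    "spark.sql.catalog.ob": "com.oceanbase.spark.catalog.OceanBaseCatalog",
    "spark.sql.catalog.ob.url": "jdbc:mysql://10.10.10.1:2881",
    "spark.sql.catalog.ob.username": "root@mysql001",
    "spark.sql.catalog.ob.password": "******",
    "spark.sql.catalog.ob.schema-name": "test",
}

flags = [arg for key, value in catalog_conf.items()
         for arg in ("--conf", f"{key}={value}")]
print(" ".join(flags))
```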
Start the Spark SQL CLI.

$SPARK_HOME/bin/spark-sql

Here is an example:

/home/admin/test_spark_catalog/spark-3.4.4-bin-hadoop3/bin/spark-sql

The return result is as follows:

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
25/04/07 15:52:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
25/04/07 15:52:05 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
25/04/07 15:52:05 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
25/04/07 15:52:09 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
25/04/07 15:52:09 WARN ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore admin@10.10.10.1
25/04/07 15:52:09 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
Spark master: local[*], Application Id: local-1744012324296
spark-sql (default)>

Enter use ob; in the Spark SQL CLI to switch to the OceanBase catalog configured in step 3.

spark-sql (default)> use ob;

The return result is as follows:

25/04/07 16:11:06 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Time taken: 2.544 seconds
spark-sql (test)>
Step 4: Use OceanBase Catalog with Spark CLI
Create an OceanBase database table by using Spark SQL
In the Spark SQL CLI, execute the following statement:
spark-sql (test)> CREATE TABLE orders (
order_id INT COMMENT 'order id',
order_date TIMESTAMP,
customer_name STRING,
price DOUBLE,
product_id INT,
order_status BOOLEAN);
The execution result is as follows:
25/04/07 16:14:25 WARN OceanBaseMySQLDialect: Ignored unsupported table property: owner
Time taken: 0.085 seconds
After the execution is successful, connect to OceanBase Database by using OBClient. You can see that the orders table has been created.

$obclient -h10.10.10.1 -P2881 -uroot@mysql001 -p****** -A test
Welcome to the OceanBase.  Commands end with ; or \g.
Your OceanBase connection id is 3221487689
Server version: OceanBase 4.2.1.8 (r108020012024111712-585a11c3514ac7882b041453a529050ac62c6180) (Built Nov 17 2024 12:49:45)

Copyright (c) 2000, 2018, OceanBase and/or its affiliates. All rights reserved.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

obclient [test]> show tables;
+----------------+
| Tables_in_test |
+----------------+
| orders         |
+----------------+
1 row in set

You can also execute show tables; in the Spark SQL CLI to query the orders table:

spark-sql (test)> show tables;

The execution result is as follows:

orders
Time taken: 0.044 seconds, Fetched 1 row(s)
Write data to an OceanBase Database table by using Spark SQL
Run the following write statement in the Spark SQL CLI:
spark-sql (test)> INSERT INTO orders VALUES
(1, now(), 'zs', 12.2, 12, true),
(2, now(), 'ls', 121.2, 12, true),
(3, now(), 'xx', 123.2, 12, true),
(4, now(), 'jac', 124.2, 12, false),
(5, now(), 'dot', 111.25, 12, true);
The return result is as follows:
Time taken: 1.433 seconds
Then, connect to OceanBase Database by using OBClient and query the data in the orders table.
obclient [test]> SELECT * FROM orders;
The return result is as follows:
+----------+---------------------+---------------+--------+------------+--------------+
| order_id | order_date | customer_name | price | product_id | order_status |
+----------+---------------------+---------------+--------+------------+--------------+
| 1 | 2025-04-07 16:22:34 | zs | 12.2 | 12 | 1 |
| 2 | 2025-04-07 16:22:34 | ls | 121.2 | 12 | 1 |
| 4 | 2025-04-07 16:22:34 | jac | 124.2 | 12 | 0 |
| 5 | 2025-04-07 16:22:34 | dot | 111.25 | 12 | 1 |
| 3 | 2025-04-07 16:22:34 | xx | 123.2 | 12 | 1 |
+----------+---------------------+---------------+--------+------------+--------------+
5 rows in set
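Note that the order_status values appear here as 1 and 0 rather than true and false: in MySQL-compatible mode, BOOLEAN is stored as TINYINT(1), so Spark's booleans round-trip as integers through OBClient. If you consume these rows in application code through a MySQL client library, you may want to normalize the column back to booleans. A minimal sketch; the column position is taken from the table definition above:

```python
# Sketch: normalize the TINYINT(1)-encoded order_status column (the last
# column of the orders table) back to a Python bool after fetching a row.
def normalize_order_row(row: tuple) -> tuple:
    *head, status = row
    return (*head, bool(status))

row = (1, "2025-04-07 16:22:34", "zs", 12.2, 12, 1)
print(normalize_order_row(row))  # (1, '2025-04-07 16:22:34', 'zs', 12.2, 12, True)
```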
Query an OceanBase Database table by using Spark SQL
Run the following query statement in the Spark SQL CLI:
spark-sql (test)> SELECT * FROM orders;
The return result is as follows:
1 2025-04-07 16:22:34 zs 12.2 12 true
2 2025-04-07 16:22:34 ls 121.2 12 true
4 2025-04-07 16:22:34 jac 124.2 12 false
5 2025-04-07 16:22:34 dot 111.25 12 true
3 2025-04-07 16:22:34 xx 123.2 12 true
Time taken: 0.462 seconds, Fetched 5 row(s)
Step 5: Synchronize external system data to OceanBase Database
Create a Hive table and insert data
In the Spark SQL CLI, execute the following statements to create a Hive table:
spark-sql (test)> DROP TABLE spark_catalog.default.orders;
Time taken: 0.827 seconds
spark-sql (test)> CREATE TABLE spark_catalog.default.orders (
order_id INT,
order_date TIMESTAMP,
customer_name STRING,
price DOUBLE,
product_id INT,
order_status BOOLEAN
) USING PARQUET;

The output is as follows:

25/04/07 16:37:54 WARN SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
25/04/07 16:37:54 WARN HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
25/04/07 16:37:54 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
25/04/07 16:37:54 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
Time taken: 0.457 seconds

In the Spark SQL CLI, execute the following statement to insert data into the Hive table:

spark-sql (test)> INSERT INTO spark_catalog.default.orders VALUES
(1, now(), 'zs', 12.2, 12, true),
(2, now(), 'ls', 121.2, 12, true),
(3, now(), 'xx', 123.2, 12, true),
(4, now(), 'jac', 124.2, 12, false),
(5, now(), 'dot', 111.25, 12, true);

The output is as follows:
Time taken: 1.378 seconds
Synchronize data from Hive tables to OceanBase Database
Execute the following write statement in the Spark SQL CLI:
spark-sql (test)> INSERT INTO orders
SELECT * FROM spark_catalog.default.orders;
The return result is as follows:
Time taken: 0.523 seconds
Connect to OceanBase Database by using OBClient and query the orders table. The query result shows that the data is synchronized.
obclient [test]> SELECT * FROM orders;
The return result is as follows:
+----------+---------------------+---------------+--------+------------+--------------+
| order_id | order_date | customer_name | price | product_id | order_status |
+----------+---------------------+---------------+--------+------------+--------------+
| 1 | 2025-04-07 16:22:34 | zs | 12.2 | 12 | 1 |
| 2 | 2025-04-07 16:22:34 | ls | 121.2 | 12 | 1 |
| 4 | 2025-04-07 16:22:34 | jac | 124.2 | 12 | 0 |
| 5 | 2025-04-07 16:22:34 | dot | 111.25 | 12 | 1 |
| 3 | 2025-04-07 16:22:34 | xx | 123.2 | 12 | 1 |
| 4 | 2025-04-07 16:42:04 | jac | 124.2 | 12 | 0 |
| 5 | 2025-04-07 16:42:04 | dot | 111.25 | 12 | 1 |
| 1 | 2025-04-07 16:42:04 | zs | 12.2 | 12 | 1 |
| 2 | 2025-04-07 16:42:04 | ls | 121.2 | 12 | 1 |
| 3 | 2025-04-07 16:42:04 | xx | 123.2 | 12 | 1 |
+----------+---------------------+---------------+--------+------------+--------------+
10 rows in set
Clean up the environment
After you complete this tutorial, you can stop the Spark SQL CLI as needed by using the following command.
spark-sql (test)> quit;