Purpose
This statement is used to query data from one or more tables.
This section mainly describes the general SELECT syntax. Other SELECT-related syntax is described in separate topics.
Privilege requirements
To execute the SELECT statement, the current user must have the SELECT privilege. For more information about the privileges of OceanBase Database, see Privilege types in MySQL mode.
Syntax
select_stmt:
SELECT [hint_options]
[{DISTINCT | UNIQUE} | ALL]
[SQL_CALC_FOUND_ROWS]
[STRAIGHT_JOIN]
select_expr_list
[FROM from_list]
[WHERE where_condition]
[GROUP BY group_by_condition_list [WITH ROLLUP]
| GROUP BY [group_by_condition_list] group_by_summary_option (expression_list)]
[HAVING having_condition]
[window_clause]
[ORDER BY order_by_condition_list]
[LIMIT limit_clause]
[lock_option];
select_expr_list:
select_expr [, select_expr ...]
select_expr:
*
| table_name.{* | column_name}
| table_alias_name.{* | column_name}
| expr [[AS] column_alias_name]
from_list:
DUAL
| table_references
| { location_url | table_function }
location_url:
'<string>'
(
{FORMAT = (
TYPE = 'CSV',
LINE_DELIMITER = '<string>' | <expr>,
FIELD_DELIMITER = '<string>' | <expr>,
ESCAPE = '<character>' | <expr>,
FIELD_OPTIONALLY_ENCLOSED_BY = '<character>' | <expr>,
ENCODING = 'charset',
NULL_IF = ('<string>' | <expr>, '<string>' | <expr> ...),
SKIP_HEADER = <int>,
SKIP_BLANK_LINES = { TRUE | FALSE },
PARSE_HEADER = { TRUE | FALSE },
TRIM_SPACE = { TRUE | FALSE },
EMPTY_FIELD_AS_NULL = { TRUE | FALSE }
)
| FORMAT = ( TYPE = 'PARQUET' | 'ORC' )
}
[PATTERN = '<regex_pattern>']
)
table_function:
{
FILES (
LOCATION = '<string>',
{
FORMAT = (
TYPE = 'CSV',
LINE_DELIMITER = '<string>' | <expr>,
FIELD_DELIMITER = '<string>' | <expr>,
ESCAPE = '<character>' | <expr>,
FIELD_OPTIONALLY_ENCLOSED_BY = '<character>' | <expr>,
ENCODING = 'charset',
NULL_IF = ('<string>' | <expr>, '<string>' | <expr> ...),
SKIP_HEADER = <int>,
SKIP_BLANK_LINES = { TRUE | FALSE },
PARSE_HEADER = { TRUE | FALSE },
TRIM_SPACE = { TRUE | FALSE },
EMPTY_FIELD_AS_NULL = { TRUE | FALSE }
)
| FORMAT = ( TYPE = 'PARQUET' | 'ORC' )
},
[PATTERN = '<regex_pattern>']
)
| SOURCE (
TYPE = 'ODPS',
ACCESSID = '<string>',
ACCESSKEY = '<string>',
ENDPOINT = '<string>',
TUNNEL_ENDPOINT = '<string>',
PROJECT_NAME = '<string>',
SCHEMA_NAME = '<string>',
TABLE_NAME = '<string>',
QUOTA_NAME = '<string>',
COMPRESSION_CODE = '<string>'
)
}
where_condition:
expression
group_by_condition_list:
group_by_condition [, group_by_condition ...]
group_by_condition:
expression [ASC | DESC]
group_by_summary_option:
GROUPING SETS
| ROLLUP
| CUBE
expression_list:
expression [, expression ...]
having_condition:
expression
order_by_condition_list:
order_by_condition [, order_by_condition ...]
order_by_condition:
expression [ASC | DESC]
limit_clause:
[offset,] row_count
| row_count OFFSET offset
lock_option:
FOR UPDATE [opt_for_update_wait]
| LOCK IN SHARE MODE
opt_for_update_wait:
WAIT {decimal | intnum}
| NOWAIT
| NO_WAIT
| SKIP LOCKED
Parameters
| Parameter | Description |
|---|---|
| select_stmt | The SQL statement part that specifies which data to retrieve from the database. |
| hint_options | Optional. Specifies the hint option. For more information about hints, see Optimizer Hint. |
| {DISTINCT \| UNIQUE} \| ALL | Optional. Specifies whether to return duplicate rows in the result set. DISTINCT and UNIQUE remove duplicate rows from the result set; ALL (the default) returns all rows, including duplicates. |
| SQL_CACHE \| SQL_NO_CACHE | Optional. Specifies whether to cache the query result. |
| SQL_CALC_FOUND_ROWS | Optional. Specifies that the total number of rows matched by a query with the LIMIT clause is recorded, so that the FOUND_ROWS() function can be used to obtain the total number of rows. For a usage sketch, see the example after this table. |
| STRAIGHT_JOIN | Optional. Forces the optimizer to join tables in the order specified in the FROM clause. This option does not affect the execution result. **Notice**: OceanBase Database is not fully compatible with the STRAIGHT_JOIN behavior of MySQL. |
| select_expr_list | The list of columns or expressions to display in the query result. For more information about columns and expressions, see select_expr. |
| column_name | The column name. |
| FROM from_list | Optional. Specifies the data source for the query, which can be a table, view, or subquery. For more information about data sources, see from_list. |
| { table_function \| location_url } | Optional. URL-based external tables currently support two syntax forms. For more information, see the table_function and location_url sections below. |
| WHERE where_condition | Optional. Specifies the filter condition for the query. For more information, see where_condition. |
| GROUP BY group_by_condition_list | Optional. Specifies the columns to group the result set by. Typically used with aggregate functions. For more information, see group_by_condition. |
| WITH ROLLUP | Optional. Generates super-aggregate summary rows in addition to the grouped result set. |
| GROUP BY [group_by_condition_list] group_by_summary_option (expression_list) | Optional. Specifies advanced grouping and summarization operations (GROUPING SETS, ROLLUP, or CUBE) for the result set. For more information, see group_by_summary_option. **Note**: In OceanBase Database V4.3.5, advanced grouping operations are supported starting from V4.3.5 BP1. |
| HAVING having_condition | Optional. Specifies the filter condition for the grouped result set. The HAVING clause is similar to the WHERE clause, but it can use aggregate functions such as SUM and AVG. For more information, see having_condition. |
| window_clause | Optional. Specifies the window definition for the analytic function (also known as window function in some databases). For more information about the window_clause syntax, see WINDOW clause. |
| ORDER BY order_by_condition_list | Optional. Specifies the sorting order for the result set. You can specify one or more columns for sorting. For more information, see order_by_condition. |
| LIMIT limit_clause | Optional. Specifies the maximum number of rows to return. For more information, see limit_clause. |
| lock_option | Optional. Specifies the lock option for the query result. For more information, see lock_option. |
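
As an example of the `SQL_CALC_FOUND_ROWS` option described above, the following sketch uses the sample table `tbl1` that is created later in this topic. Per the MySQL-compatible behavior described in the table, `FOUND_ROWS()` returns the number of rows the query would have returned without the `LIMIT` clause.

-- Return only 2 rows, but record the total number of matching rows.
SELECT SQL_CALC_FOUND_ROWS * FROM tbl1 LIMIT 2;

-- Retrieve the recorded total (3 for the sample data in tbl1).
SELECT FOUND_ROWS();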
select_expr
select_expr specifies the columns or expressions in the query result. Multiple expressions or column names are separated by commas (,). Valid values:
- `*`: specifies all columns.

  Example:

  1. Create tables `tbl1` and `tbl2`.

     CREATE TABLE tbl1(col1 INT PRIMARY KEY, col2 VARCHAR(20), col3 INT) PARTITION BY HASH(col1) PARTITIONS 5;
     CREATE TABLE tbl2(col1 INT PRIMARY KEY, col2 VARCHAR(20), col3 INT);

  2. Insert test data into tables `tbl1` and `tbl2`.

     INSERT INTO tbl1 VALUES(1, 'A1', 1),(2, 'A2', 2),(3, 'A3', 3);
     INSERT INTO tbl2 VALUES(1, 'A1', 1),(2, 'A2', 22),(3, 'A3', 33);

  3. View the data in table `tbl1`.

     SELECT * FROM tbl1;

     The returned result is as follows:

     +------+------+------+
     | col1 | col2 | col3 |
     +------+------+------+
     |    1 | A1   |    1 |
     |    2 | A2   |    2 |
     |    3 | A3   |    3 |
     +------+------+------+
     3 rows in set
- `table_name.{* | column_name}`: specifies all columns or a specific column in the specified table.
  - `table_name.*`: specifies all columns in the specified table.
  - `table_name.column_name`: specifies a specific column in the specified table.

  Example:

  Read data from column `col1` in table `tbl1`.

  SELECT tbl1.col1 FROM tbl1;

  The returned result is as follows:

  +------+
  | col1 |
  +------+
  |    1 |
  |    2 |
  |    3 |
  +------+
  3 rows in set
- `table_alias_name.{* | column_name}`: similar to the previous format, but uses the table alias when selecting columns.
- `expr [[AS] column_alias_name]`: specifies an expression and allows you to assign an alias to the selected expression as the column name.
  - `expr`: specifies a column name, function, or calculation expression.
  - `expr AS column_alias_name` / `expr column_alias_name`: specifies the expression and assigns the alias `column_alias_name` to it.

  Example:

  Read data from column `col1` in table `tbl1`, calculate a new column `col1_add` with the value `col1+1`, and calculate another new column `col3_mul` with the value `col3*2`.

  SELECT col1, col1+1 AS col1_add, col3*2 col3_mul FROM tbl1;

  The returned result is as follows:

  +------+----------+----------+
  | col1 | col1_add | col3_mul |
  +------+----------+----------+
  |    1 |        2 |        2 |
  |    2 |        3 |        4 |
  |    3 |        4 |        6 |
  +------+----------+----------+
  3 rows in set
from_list
- `DUAL`: specifies a virtual table name. It is typically used to perform calculations or call functions without an actual table.

  Example:

  SELECT 1+1, SYSDATE() FROM DUAL;

  The returned result is as follows:

  +------+---------------------+
  | 1+1  | SYSDATE()           |
  +------+---------------------+
  |    2 | 2024-08-28 15:20:59 |
  +------+---------------------+
  1 row in set

- `table_references`: specifies a list of table references. Multiple table references are separated by commas (,). This parameter specifies the list of data source tables for the query, which can be a combination of one or more tables. For more information about the syntax of `table_references`, see JOIN clause. A simple join example is shown below.
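
For instance, a minimal inner join between the sample tables `tbl1` and `tbl2` created earlier in this topic:

SELECT a.col1, a.col2, b.col3
FROM tbl1 a JOIN tbl2 b ON a.col1 = b.col1;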
where_condition
expression: specifies the condition expression for filtering the rows to be returned by the query.
Example:
Select all rows from tbl1 where col1 > 1 and col2 = 'A3'.
SELECT * FROM tbl1
WHERE col1 > 1
AND col2 = 'A3';
The returned result is as follows:
+------+------+------+
| col1 | col2 | col3 |
+------+------+------+
| 3 | A3 | 3 |
+------+------+------+
1 row in set
table_function
The LOCATION clause specifies the path where the external table files are stored. Usually, the data files of an external table are stored in a separate directory, which can contain subdirectories. When you create an external table, the system automatically collects all files in the directory.
- The format of the local `LOCATION` clause is `LOCATION = '[file://] local_file_path'`, where `local_file_path` can be a relative or absolute path. If you specify a relative path, the current directory must be the installation directory of OceanBase Database. The `secure_file_priv` parameter specifies the file paths that OBServer nodes have access permissions to. `local_file_path` must be a subpath of the `secure_file_priv` path.
- The format of the remote `LOCATION` clause is one of the following:
  - `LOCATION = '{oss|s3}://$ACCESS_ID:$ACCESS_KEY@$HOST:s3_region/remote_file_path'`, where `$ACCESS_ID`, `$ACCESS_KEY`, and `$HOST` are the access information required to access Alibaba Cloud OSS, AWS S3, or object storage services compatible with the S3 protocol, and `s3_region` is the region selected when using S3. This sensitive access information is stored in the system tables of the database in encrypted form.
  - `LOCATION = hdfs://localhost:port/PATH`, where `localhost` is the address of HDFS, `port` is the port number of HDFS, and `PATH` is the directory path in HDFS.
  - For Kerberos authentication: `LOCATION = 'hdfs://localhost:port/user?principal=xxx&keytab=xxx&krb5conf=xxx&configs=xxx'`. The parameters are described as follows:
    - `principal`: the user for login and authentication.
    - `keytab`: the path of the key file for user authentication.
    - `krb5conf`: the path of the description file for the Kerberos environment.
    - `configs`: the additional HDFS configurations. By default, this parameter is empty. However, in a Kerberos environment, this parameter usually has a value and needs to be configured. For example, `dfs.data.transfer.protection=authentication,privacy` sets the data transfer protection level to `authentication` and `privacy`.
The FORMAT clause specifies the properties related to the file reading format. It supports three file formats: CSV, PARQUET, and ORC.
- When `TYPE = 'CSV'`, the following fields are included:
  - `LINE_DELIMITER`: specifies the line delimiter for the CSV file. The default value is `LINE_DELIMITER='\n'`.
  - `FIELD_DELIMITER`: an optional parameter that specifies the column delimiter for the CSV file. The default value is `FIELD_DELIMITER='\t'`.
  - `ESCAPE`: specifies the escape character for the CSV file. It can only be one byte, and the default value is `ESCAPE='\'`.
  - `FIELD_OPTIONALLY_ENCLOSED_BY`: an optional parameter that specifies the symbol used to enclose field values in the CSV file. The default value is empty. Using this option indicates that only certain types of fields (such as CHAR, VARCHAR, TEXT, and JSON) are enclosed.
  - `ENCODING`: specifies the character set encoding format of the file. If not specified, the default value is `UTF8MB4`.
  - `NULL_IF`: specifies the strings that are treated as `NULL`. The default value is empty.
  - `SKIP_HEADER`: skips the file header and specifies the number of rows to skip.
  - `SKIP_BLANK_LINES`: specifies whether to skip blank lines. The default value is `FALSE`, indicating that blank lines are not skipped.
  - `TRIM_SPACE`: specifies whether to remove leading and trailing spaces from fields in the file. The default value is `FALSE`, indicating that leading and trailing spaces are not removed.
  - `EMPTY_FIELD_AS_NULL`: specifies whether to treat empty strings as `NULL`. The default value is `FALSE`, indicating that empty strings are not treated as `NULL`.
  - `PARSE_HEADER`: specifies that the first line of the CSV file is used directly as the column names.

    **Notice**: `PARSE_HEADER` cannot be used with `SKIP_HEADER`, because they have conflicting semantics.
- When `TYPE = 'PARQUET'` or `TYPE = 'ORC'`, there are no additional fields.
The `PATTERN` clause specifies a regular expression pattern used to filter files in the `LOCATION` directory. For each file path in the `LOCATION` directory, the external table accesses the file only if the path matches the pattern; otherwise, the file is skipped. If this parameter is not specified, the external table can access all files in the `LOCATION` directory by default.
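
The following sketch combines the FILES, FORMAT, and PATTERN clauses described above to read CSV files. The directory `/data/csv/` and the option values are placeholders; a local path must be covered by `secure_file_priv`.

SELECT * FROM FILES (
    LOCATION = '/data/csv/',
    FORMAT = (
        TYPE = 'CSV',
        FIELD_DELIMITER = ',',
        SKIP_HEADER = 1,
        TRIM_SPACE = TRUE
    ),
    PATTERN = '.*[.]csv$'
);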
For the ODPS format, data is not retrieved through files, and there is no meaningful URL path. Therefore, only the SOURCE form of table_function is supported.
- When `TYPE = 'ODPS'`, the following fields are included:
  - `ACCESSID`: specifies the ID of the ODPS user.
  - `ACCESSKEY`: specifies the password of the ODPS user.
  - `ENDPOINT`: specifies the connection address of the ODPS service.
  - `TUNNEL_ENDPOINT`: specifies the connection address of the Tunnel data transmission service.
  - `PROJECT_NAME`: specifies the project where the queried table is located.
  - `SCHEMA_NAME`: an optional parameter that specifies the schema of the queried table.
  - `TABLE_NAME`: specifies the name of the queried table.
  - `QUOTA_NAME`: an optional parameter that specifies the quota to be used.
  - `COMPRESSION_CODE`: an optional parameter that specifies the compression format of the data source. It supports four compression formats: ZLIB, ZSTD, LZ4, and ODPS_LZ4. If not specified, compression is not enabled.
Here is an example:
Read data in the ODPS format.
SELECT * FROM
source (
type = 'ODPS',
accessid = '$ODPS_ACCESSID',
accesskey = '$ODPS_ACCESSKEY',
endpoint= '$ODPS_ENDPOINT',
project_name = 'example_project',
schema_name = '',
table_name = 'example_table',
quota_name = '',
compression_code = ''
);
location_url
- The FORMAT clause specifies the properties related to the file reading format. It supports three file formats: CSV, PARQUET, and ORC.
- For the CSV format: configure `PARSE_HEADER = TRUE/FALSE` to specify whether to parse the first row of the file as the column headers (the default is `FALSE`). Additionally, use `TYPE = 'CSV'` to declare the file format (required during export).
- For the PARQUET and ORC formats: the file structure is automatically recognized, and no additional configuration for the header row is needed.
Here is an example:
SELECT * FROM
    FILES (
        location = '/data/',
        format = (TYPE = 'csv', field_delimiter = ',', parse_header = true),
        pattern = 'datafiles$'
    );
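
Alternatively, the location_url form places the path string directly in the FROM clause. The following is a minimal sketch derived from the location_url syntax shown earlier in this topic; the path and option values are placeholders.

SELECT * FROM '/data/'
    (
        FORMAT = (TYPE = 'CSV', FIELD_DELIMITER = ',', PARSE_HEADER = TRUE),
        PATTERN = 'datafiles$'
    );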
group_by_condition
expression [ASC | DESC]: specifies the grouping condition expression. Adding ASC or DESC controls the order of the groups in the result set, either ascending or descending.
Here is an example:
Query col1 and col2 from table tbl1, group the rows by col2 in descending order, and calculate the sum of col3 for each group.
SELECT col1, col2, SUM(col3)
FROM tbl1
GROUP BY col2 DESC;
The result is as follows:
+------+------+-----------+
| col1 | col2 | SUM(col3) |
+------+------+-----------+
| 3 | A3 | 3 |
| 2 | A2 | 2 |
| 1 | A1 | 1 |
+------+------+-----------+
3 rows in set
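
The `WITH ROLLUP` modifier shown in the syntax above appends super-aggregate rows to the grouped result. The following is a minimal sketch that uses the same sample data:

SELECT col2, SUM(col3) AS sum_col3
FROM tbl1
GROUP BY col2 WITH ROLLUP;
-- In addition to one row per col2 value, WITH ROLLUP appends a
-- super-aggregate row with col2 = NULL that sums col3 across all groups.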
group_by_summary_option
**Note**: In OceanBase Database V4.3.5, advanced grouping and summarization operations are supported starting from V4.3.5 BP1.
- `GROUPING SETS`: specifies a custom grouping method that allows you to define multiple grouping combinations. It has the following characteristics:
  - You can flexibly define the required grouping combinations.
  - The result set includes only the specified grouping combinations.

  **Note**: `GROUP BY GROUPING SETS ((col1), (col2), (col3))` is equivalent to `GROUP BY GROUPING SETS (col1, col2, col3)`.

  Here is an example:

  SELECT col1, col2, col3, COUNT(*) FROM tbl1 GROUP BY GROUPING SETS ((col1), (col2), (col3));

  The returned result is as follows:

  +------+------+------+----------+
  | col1 | col2 | col3 | COUNT(*) |
  +------+------+------+----------+
  |    1 | NULL | NULL |        1 |
  |    2 | NULL | NULL |        1 |
  |    3 | NULL | NULL |        1 |
  | NULL | A1   | NULL |        1 |
  | NULL | A2   | NULL |        1 |
  | NULL | A3   | NULL |        1 |
  | NULL | NULL |    1 |        1 |
  | NULL | NULL |    2 |        1 |
  | NULL | NULL |    3 |        1 |
  +------+------+------+----------+
  9 rows in set

- `ROLLUP`: specifies a hierarchical summarization method. It reduces the number of grouping columns layer by layer based on the order of the grouping columns and generates multi-level summary results. It has the following characteristics:
  - The grouping columns are reduced in the order specified in `expression_list` until only the total row remains.
  - The result set includes:
    - Details of each grouping.
    - Sub-summary data of each grouping.
    - The total row.

  `ROLLUP` can be considered a special case of `GROUPING SETS`. For example:

  GROUP BY ROLLUP(col1, col2, col3) is equivalent to GROUP BY GROUPING SETS ((col1, col2, col3), (col1, col2), (col1), ())

  `ROLLUP` can also be used in combination with `GROUPING SETS`. For example:

  GROUP BY GROUPING SETS((col2), (col3), ROLLUP(col1, col2, col3)) is equivalent to GROUP BY GROUPING SETS((col2), (col3), (col1, col2, col3), (col1, col2), (col1), ())

  Here is an example:

  SELECT col1, col2, col3, COUNT(*) FROM tbl1 GROUP BY ROLLUP (col1, col2, col3);

  The returned result is as follows:

  +------+------+------+----------+
  | col1 | col2 | col3 | COUNT(*) |
  +------+------+------+----------+
  |    1 | A1   |    1 |        1 |
  |    1 | A1   | NULL |        1 |
  |    1 | NULL | NULL |        1 |
  |    2 | A2   |    2 |        1 |
  |    2 | A2   | NULL |        1 |
  |    2 | NULL | NULL |        1 |
  |    3 | A3   |    3 |        1 |
  |    3 | A3   | NULL |        1 |
  |    3 | NULL | NULL |        1 |
  | NULL | NULL | NULL |        3 |
  +------+------+------+----------+
  10 rows in set

- `CUBE`: specifies a multidimensional summarization method. It generates all possible grouping combinations. The result set includes:
  - Details of each grouping.
  - Summary data of all possible grouping combinations.
  - The total row.

  `CUBE` can be considered a special case of `GROUPING SETS`. For example:

  GROUP BY CUBE(col1, col2, col3) is equivalent to GROUP BY GROUPING SETS ((col1, col2, col3), (col1, col2), (col1, col3), (col2, col3), (col1), (col2), (col3), ())

  `CUBE` can also be used in combination with `GROUPING SETS`. For example:

  GROUP BY col1, CUBE(col2, col3), GROUPING SETS((col4), (col5)) is equivalent to GROUP BY GROUPING SETS( (col1, col2, col3, col4), (col1, col2, col3, col5), (col1, col2, col4), (col1, col2, col5), (col1, col3, col4), (col1, col3, col5), (col1, col4), (col1, col5))

  Here is an example:

  SELECT col1, col2, col3, COUNT(*) FROM tbl1 GROUP BY CUBE (col1, col2, col3);

  The returned result is as follows:

  +------+------+------+----------+
  | col1 | col2 | col3 | COUNT(*) |
  +------+------+------+----------+
  | NULL | NULL | NULL |        3 |
  | NULL | NULL |    1 |        1 |
  | NULL | NULL |    2 |        1 |
  | NULL | NULL |    3 |        1 |
  | NULL | A1   | NULL |        1 |
  | NULL | A2   | NULL |        1 |
  | NULL | A3   | NULL |        1 |
  | NULL | A1   |    1 |        1 |
  | NULL | A2   |    2 |        1 |
  | NULL | A3   |    3 |        1 |
  |    1 | NULL | NULL |        1 |
  |    2 | NULL | NULL |        1 |
  |    3 | NULL | NULL |        1 |
  |    1 | NULL |    1 |        1 |
  |    2 | NULL |    2 |        1 |
  |    3 | NULL |    3 |        1 |
  |    1 | A1   | NULL |        1 |
  |    2 | A2   | NULL |        1 |
  |    3 | A3   | NULL |        1 |
  |    1 | A1   |    1 |        1 |
  |    2 | A2   |    2 |        1 |
  |    3 | A3   |    3 |        1 |
  +------+------+------+----------+
  22 rows in set
having_condition
expression: specifies the condition expression for filtering the grouped results.
Here is an example:
Query col1 and col2 from table tbl1, group by col2, and calculate the sum of col3. Return rows where the sum of col3 is less than 3.
SELECT col1, col2, SUM(col3)
FROM tbl1
GROUP BY col2
HAVING SUM(col3) < 3;
The returned result is as follows:
+------+------+-----------+
| col1 | col2 | SUM(col3) |
+------+------+-----------+
| 1 | A1 | 1 |
| 2 | A2 | 2 |
+------+------+-----------+
2 rows in set
order_by_condition
expression [ASC | DESC]: specifies the condition expression for sorting the result set.
ASC | DESC: optional. ASC specifies ascending order (default), and DESC specifies descending order.
Here is an example:
Query data from table tbl1 and output the query result in descending order (DESC) based on col3.
SELECT * FROM tbl1
ORDER BY col3 DESC;
The returned result is as follows:
+------+------+------+
| col1 | col2 | col3 |
+------+------+------+
| 3 | A3 | 3 |
| 2 | A2 | 2 |
| 1 | A1 | 1 |
+------+------+------+
3 rows in set
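
You can also specify multiple sort keys; later keys break ties left by earlier ones. For example, to sort by col2 in ascending order and then by col3 in descending order:

SELECT * FROM tbl1
ORDER BY col2 ASC, col3 DESC;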
limit_clause
- `[offset,] row_count`: the values must be integer constants.
  - `offset`: optional. Specifies the number of rows to skip from the start of the result set (the offset). The default offset is 0.
  - `row_count`: specifies the number of rows to return. If only `row_count` is specified, the first `row_count` rows of the result set are returned.
- `row_count OFFSET offset`: skips `offset` rows from the start of the result set and returns the next `row_count` rows.
Here is an example:
Query data from table tbl1, start from the second row, and return two rows.
SELECT * FROM tbl1
LIMIT 1, 2;
The returned result is as follows:
+------+------+------+
| col1 | col2 | col3 |
+------+------+------+
| 2 | A2 | 2 |
| 3 | A3 | 3 |
+------+------+------+
2 rows in set
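
The same rows can also be selected with the `row_count OFFSET offset` form, which is equivalent to `LIMIT 1, 2`:

SELECT * FROM tbl1
LIMIT 2 OFFSET 1;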
lock_option
- `FOR UPDATE [opt_for_update_wait]`: adds exclusive locks to all rows in the query result set to prevent concurrent modifications by other transactions, or concurrent reads under certain transaction isolation levels. For more information about using exclusive locks, see Lock query results with SELECT FOR UPDATE.
  - `opt_for_update_wait`: an optional parameter that specifies the behavior for acquiring locks. The options are as follows:
    - `WAIT {decimal | intnum}`: specifies the amount of time, in seconds, to wait for other transactions to release resources. If the time limit is exceeded, a resource-unavailable error is returned.
    - `NOWAIT` / `NO_WAIT`: specifies not to wait for other transactions to release resources and to return a resource-unavailable error immediately.
    - `SKIP LOCKED`: specifies not to wait for row locks. The query is executed immediately, and locked rows are removed from the result set.
Here are some examples:
- You can use `SELECT ... FOR UPDATE` to add row-level locks to a table. If you use the `LIMIT 1` clause, the optimizer pushes the operator down to the table scan step and adds locks only to the rows returned by `LIMIT`.

  SELECT * FROM tbl1 LIMIT 1 FOR UPDATE;

- If you use the `ORDER BY` clause to sort the query results, the results are sorted first, and then the `LIMIT 1` clause is applied. In this case, locks are added to all selected rows.

  SELECT * FROM tbl1 ORDER BY col1 LIMIT 1 FOR UPDATE;
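
The following sketch illustrates the lock-wait options described above. It assumes that another transaction may currently hold locks on rows of `tbl1`; the WAIT time of 5 seconds is an arbitrary placeholder.

-- Return a resource-unavailable error immediately if the rows are locked.
SELECT * FROM tbl1 WHERE col1 = 1 FOR UPDATE NOWAIT;

-- Wait up to 5 seconds for the locks to be released before reporting an error.
SELECT * FROM tbl1 WHERE col1 = 1 FOR UPDATE WAIT 5;

-- Skip locked rows and return only the rows that can be locked immediately.
SELECT * FROM tbl1 FOR UPDATE SKIP LOCKED;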
- `LOCK IN SHARE MODE`: adds shared locks to data during queries to prevent other transactions from writing to the data, but allows other transactions to read the data. For more information about using shared locks, see Lock query results with LOCK IN SHARE MODE.
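
  A minimal example of a shared-lock read on the sample table `tbl1`:

  SELECT * FROM tbl1 WHERE col1 = 1 LOCK IN SHARE MODE;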
Examples
Simple table query example
1. Create a table named `test_tbl1`.

   CREATE TABLE test_tbl1(col1 INT PRIMARY KEY, col2 VARCHAR(20), col3 INT);

2. Insert test data into the `test_tbl1` table.

   INSERT INTO test_tbl1 VALUES (1, 'A1', 10),(2, 'A2', 15),(3, 'A1', 8);

3. View the data in the `col2` column of the `test_tbl1` table and perform deduplication.

   SELECT DISTINCT col2 FROM test_tbl1;

   The result is as follows:

   +------+
   | col2 |
   +------+
   | A1   |
   | A2   |
   +------+
   2 rows in set
Example of querying data using dblink
- Read data from the `number_t` table in the remote `test` database.

  SELECT * FROM number_t@ob_dblink;

  The result is as follows:

  +--------+-------------+---------+---------------------+--------------------------------+
  | c_int  | c_bigint    | c_float | c_double            | c_decimal                      |
  +--------+-------------+---------+---------------------+--------------------------------+
  | 214748 | 92233720368 | 3.40282 |  1.7976931348623157 |  123456789123456789.1234567890 |
  +--------+-------------+---------+---------------------+--------------------------------+
  1 row in set

- Read data from the `number_t` table in the remote `test` database and join it with the local `number_t` table.

  SELECT a.c_decimal, b.c_double FROM number_t a, number_t@ob_dblink b WHERE a.c_int = b.c_int;

  The result is as follows:

  +--------------------------------+---------------------+
  | c_decimal                      | c_double            |
  +--------------------------------+---------------------+
  |  123456789123456789.1234567890 |  1.7976931348623157 |
  +--------------------------------+---------------------+
  1 row in set

- Read data from the `datetime_t` table in the remote `mysql` database.

  SELECT * FROM mysql.datetime_t@ob_dblink;

  The result is as follows:

  +------------+----------+--------+---------------------+---------------------+
  | c_date     | c_time   | c_year | c_datetime          | c_timestamp         |
  +------------+----------+--------+---------------------+---------------------+
  | 2023-04-13 | 12:12:12 |   2078 | 2100-11-01 12:12:13 | 2100-12-01 21:14:15 |
  +------------+----------+--------+---------------------+---------------------+
  1 row in set