Run the TPC-DS benchmark test with OceanBase Database

Updated: 2026-01-19 13:13:23

This topic introduces the software requirements, tenant specifications, and detailed test methods for running TPC-DS tests on OceanBase Database.

What is TPC-DS

TPC-DS (Transaction Processing Performance Council Decision Support Benchmark) is a data warehouse performance benchmark developed by the Transaction Processing Performance Council (TPC). It uses a hybrid of snowflake and star schemas, comprises 7 fact tables and 17 dimension tables, and covers business scenarios such as retail and finance. The testing process includes generating scalable test data (with scale factors ranging from gigabytes up to 100 TB), executing 99 complex SQL queries (covering scenarios such as statistics, reporting, and data mining), and finally evaluating system performance using metrics such as queries per hour (QphDS).

Compared with its predecessor TPC-H, TPC-DS features a more complex data model (TPC-H has only 8 tables), a wider range of query types, and support for Online Analytical Processing (OLAP) operations. It has become the mainstream benchmark for decision support systems.

Note

To improve user experience and usability, and to ensure that every developer achieves optimal performance when using the database, OceanBase Database has undergone extensive optimizations since version V4.0.0. This performance testing method relies only on basic parameter tuning, enabling developers to achieve good database performance.

Environment preparation

Before you start the test, prepare the test environment as required:

Note

This example is based on a MySQL-compatible tenant.

Software requirements

  • JDK: we recommend that you use JDK 1.8u131 or later.

  • make: run the yum install make command to install it.

  • GCC: run the yum install gcc command to install it.

  • mysql-devel: run the yum install mysql-devel command to install it.

  • Python database driver: run the sudo yum install MySQL-python command to install it.

  • prettytable: run the pip install prettytable command to install it.

  • JDBC: we recommend that you use the mysql-connector-java-5.1.47 version.

  • TPC-DS Tool: download it from Download address. If you use the OBD one-click test, you can skip this tool.

  • OBClient: for more information, see OBClient documentation.

  • OceanBase Database: for more information, see Quickly experience OceanBase Database.

  • IOPS: we recommend that you use a disk with IOPS exceeding 10,000.

Tenant specifications

The tenant specifications in this example are based on the hardware configuration described in the OceanBase Database TPC-DS benchmark report. You need to dynamically adjust the tenant specifications based on the hardware configuration of your own database.

  • Cluster deployment

    1. This test requires four servers. TPC-DS and OBD are deployed on one server, which serves as the client stress server. The other three servers are used to deploy an OceanBase cluster of the 1:1:1 architecture by using OBD.

      Note

      • For the TPC-DS test, a client server with 4 cores and 16 GB of memory is sufficient.
      • When you deploy a cluster, we recommend that you do not use the obd cluster autodeploy command. To ensure cluster stability, this command does not maximize resource utilization (such as memory). Instead, we recommend that you optimize the configuration file to maximize resource utilization.
    2. After the deployment is completed, create a tenant and a user for the TPC-DS test. Do not use the sys tenant for the test. The sys tenant is an internal system tenant for managing the cluster. Set the primary_zone parameter of the tenant to RANDOM. RANDOM indicates that the Leader of a newly created table partition is randomly assigned to one of the three servers.

  • Tenant creation

    You can run the obd cluster tenant create command to create a test tenant. The command is as follows:

    obd cluster tenant create <DEPLOY_NAME> -n <TENANT_NAME> --max-cpu=28 --memory-size=180G --zone-list=zone1,zone2,zone3 --primary-zone=RANDOM --locality=F@zone1,F@zone2,F@zone3 --charset=utf8 -s 'ob_tcp_invited_nodes="%"' --optimize=<optimize>
    

    Parameters:

    • DEPLOY_NAME: the name of the cluster.

    • TENANT_NAME: the name of the tenant.

    • --zone-list: the list of zones for the tenant.

    • --primary-zone: the primary zone for the tenant.

    • --locality: the distribution of replicas among zones.

    • --charset: the character set of the tenant.

    • -s: the system variable value of the tenant.

    • --optimize: the load type of the tenant. Valid values: express_oltp, complex_oltp, olap, htap, and kv. The default value is htap, which is suitable for hybrid OLAP and OLTP workloads. For more information about OBD deployment, see obd cluster tenant create.

      Notice

      For V4.3.x and later, you can set the scenario parameter to specify the cluster load type when you deploy OceanBase Database by using OBD. If you do not set the scenario parameter, the default value is htap. For more information, see Deploy OceanBase Database by using OBD.

    For example, create a tenant named tpcds_tenant for the cluster named obperf. The tenant has 28 CPU cores and 180 GB of memory. Set the load type of the tenant to be consistent with the cluster scenario.

    obd cluster tenant create obperf -n tpcds_tenant --max-cpu=28 --memory-size=180G --zone-list=zone1,zone2,zone3 --primary-zone=RANDOM --locality=F@zone1,F@zone2,F@zone3 --charset=utf8 -s 'ob_tcp_invited_nodes="%"' --optimize=htap
    

    Note

    In the preceding example, --optimize=htap specifies the default load type. In a production environment, select the appropriate load type based on the actual cluster type.

Test methods

After the test environment is ready, you can use the following methods to perform TPC-DS performance testing.

Manually execute TPC-DS tests by using the TPC-DS tool

After you select the cluster load type and tenant tuning scenarios, you can manually execute TPC-DS tests. This helps you better understand OceanBase Database and optimize parameter settings.

Step 1: Create a test tenant

Note

If the test tenant has already been created during the environment preparation, you can skip this step.

Run the following commands in the system tenant (sys tenant) to create a test tenant:

Note

The OceanBase cluster environment for this test is deployed in 1:1:1 mode.

  1. Create a resource unit named mysql_box.

    CREATE RESOURCE UNIT mysql_box
       MAX_CPU 28,
       MEMORY_SIZE '200G',
       MIN_IOPS 200000,
       MAX_IOPS 12800000,
       LOG_DISK_SIZE '300G';
    
  2. Create a resource pool named mysql_pool.

    CREATE RESOURCE POOL mysql_pool
       UNIT = 'mysql_box',
       UNIT_NUM = 1,
       ZONE_LIST = ('z1','z2','z3');
    
  3. Create a MySQL mode tenant named mysql_tenant.

    CREATE TENANT mysql_tenant
       RESOURCE_POOL_LIST = ('mysql_pool'),
       PRIMARY_ZONE = RANDOM,
       LOCALITY = 'F@z1,F@z2,F@z3'
       SET VARIABLES ob_compatibility_mode='mysql', ob_tcp_invited_nodes='%', secure_file_priv = "/";
    

Step 2: Tune the environment

  1. Tune OceanBase Database.

    Run the following commands in the system tenant (sys tenant) to configure the parameters.

     ALTER SYSTEM flush plan cache GLOBAL;
    
     ALTER SYSTEM SET enable_sql_audit=false;
    
     select sleep(5);
    
     ALTER SYSTEM SET enable_perf_event=false;
    
     ALTER SYSTEM SET syslog_level='PERF';
    
     ALTER SYSTEM SET enable_record_trace_log=false;
    
     ALTER SYSTEM SET data_storage_warning_tolerance_time = '300s';
    
     ALTER SYSTEM SET _data_storage_io_timeout = '600s';
    
     ALTER SYSTEM SET trace_log_slow_query_watermark = '7d';
    
     ALTER SYSTEM SET large_query_threshold='0ms';
    
     ALTER SYSTEM SET enable_syslog_recycle= 1;
    
     ALTER SYSTEM SET max_syslog_file_count = 300;
    
    
     set global ob_sql_work_area_percentage=50;
     ALTER SYSTEM SET default_table_store_format = 'column' ;
     ALTER SYSTEM SET ob_enable_batched_multi_statement='true';
     ALTER SYSTEM SET _io_read_batch_size = '128k';
     ALTER SYSTEM SET _io_read_redundant_limit_percentage = 50;
    
     set global parallel_servers_target=10000;
    
    
  2. Tune the tenant.

    Run the following commands in the test tenant (user tenant) to configure the parameters.

     SET global NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
     SET global NLS_TIMESTAMP_FORMAT='YYYY-MM-DD HH24:MI:SS.FF';
     SET global NLS_TIMESTAMP_TZ_FORMAT='YYYY-MM-DD HH24:MI:SS.FF TZR TZD';
    
     set global ob_query_timeout=10800000000;
     set global ob_trx_timeout=10000000000;
    
     set global ob_sql_work_area_percentage=50;
     -- ALTER SYSTEM SET default_table_store_format = 'column' ;
     ALTER SYSTEM SET ob_enable_batched_multi_statement='true';
     ALTER SYSTEM SET _io_read_batch_size = '128k';
     ALTER SYSTEM SET _io_read_redundant_limit_percentage = 50;
    
     set global parallel_servers_target=10000;
    
     set global collation_connection = utf8mb4_bin;
     set global collation_database = utf8mb4_bin;
     set global collation_server = utf8mb4_bin;
    

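The timeout values above look unwieldy because OceanBase session variables such as ob_query_timeout and ob_trx_timeout are expressed in microseconds. A quick arithmetic sketch (unit conversion only) shows what the settings amount to:

```shell
# ob_query_timeout and ob_trx_timeout are specified in microseconds.
q_us=10800000000    # ob_query_timeout, as set above
t_us=10000000000    # ob_trx_timeout, as set above

echo "ob_query_timeout: $(( q_us / 1000000 )) s ($(( q_us / 1000000 / 3600 )) h)"
echo "ob_trx_timeout:   $(( t_us / 1000000 )) s"
# prints: ob_query_timeout: 10800 s (3 h)
# prints: ob_trx_timeout:   10000 s
```

These generous limits prevent the long-running analytical queries in the TPC-DS workload from being killed by the default timeouts.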
Step 3: Install the TPC-DS Tool

  1. Download the TPC-DS Tool. For more information, see TPC-DS Tool download page.

  2. After the download is complete, decompress the file and go to the tools directory under the decompressed TPC-DS directory.

    [wieck@localhost ~] $ unzip 49FA3DBA-FE6C-463C-952B-62B8E9D43372-TPC-DS-Tool.zip
    [wieck@localhost ~] $ cd DSGen-software-code-3.2.0rc1/tools
    
  3. Compile the files.

    [wieck@localhost tools] $ make
    

    After the files are compiled, the binary executable files dsdgen and dsqgen are generated in the tools directory.

Step 4: Generate data

You can generate 10G, 100G, or 1T of data for the TPC-DS benchmark based on your environment. This example generates 100G of data.

  1. Create a directory for storing the data files.

    [wieck@localhost tools] $ mkdir tpcds_100g
    
  2. Generate test data.

    [wieck@localhost tools] $ ./dsdgen -scale 100 -dir tpcds_100g
    

     To generate 1T of data, run dsdgen in parallel as shown in the following script. OceanBase Database supports direct load, which allows you to import data from multiple files into a table at the same time.

    [wieck@localhost tools] $ vim gen-data.sh
    #!/bin/bash

    # Set variables
    SCALE=$1           # Scale factor in GB, for example 1000 for 1T.
    PARALLEL="20"
    DIR="/data/1/tpcds/tpcds_data/"    # You can modify the path as needed.

    for ((x=1; x<=PARALLEL; x++))
    do
        # Generate the dsdgen command
        CMD="./dsdgen -scale ${SCALE} -parallel ${PARALLEL} -child ${x} -dir ${DIR}"

        # Execute the command in the background
        ${CMD} &
    done

    wait

    echo "All data has been generated."

    [wieck@localhost tools] $ sh gen-data.sh 1000
    
  3. Modify the test data.

    • Modify the NULL values.

      When you use the pipe symbol | as a field separator, a,NULL,c,d,NULL is exported to a text file in the a||c|d| format. When you import the data by using the LOAD DATA statement, an error is returned indicating that the import failed. Therefore, you need to process the NULL values.

      [wieck@localhost tools] $ vim fix-null.sh
      #!/bin/bash
      # Replace the NULL value in the first field with 0. Replace ^| with 0|.
      # Replace the NULL value in the middle fields with 0. Replace || with |0|.
      # Replace the NULL value in the last field with 0. Replace |$ with |0.
      for s_f in `ls *dat`
      do
          echo "$s_f"
          i=1
          while [ `egrep '\|\||^\||\|$' $s_f |wc -l` -gt 0 ]
          do 
              echo $i
              sed 's/^|/0|/g;s/||/|0|/g;s/|$/|0/g' -i $s_f
              ((i++))
          done
      done
      
      [wieck@localhost tools] $ sh fix-null.sh
      
    • Modify the date fields.

      [wieck@localhost tools] $ vim fix-date.sh
      for s_f in item.dat store.dat web_page.dat web_site.dat call_center.dat
      do
      # Replace the first two NULL dates with 0000-00-00.
      sed 's/^\([A-Za-z0-9]*|[A-Za-z0-9]*\)|0|0|\(.*\)/\1|0000-00-00|0000-00-00|\2/' -i $s_f
      
      # Replace the second NULL date with 0000-00-00.
      sed 's/^\([0-9A-Za-z]*|[A-Za-z0-9]*|[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}\)|0|\(.*\)/\1|0000-00-00|\2/' -i $s_f
      
      # Replace the first NULL date with 0000-00-00.
      sed 's/^\([0-9A-Za-z]*|[A-Za-z0-9]*\)|0|\([0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}|.*\)/\1|0000-00-00|\2/' -i $s_f
      
      done
      
      [wieck@localhost tools] $ sh fix-date.sh
      
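To see what fix-null.sh does, here is the same sed expression applied to the sample record from the explanation above (a||c|d|, a row whose second and last fields are NULL):

```shell
# Sample record whose second and last fields are NULL (empty):
sample='a||c|d|'

# The same three substitutions used by fix-null.sh:
fixed=$(printf '%s\n' "$sample" | sed 's/^|/0|/g;s/||/|0|/g;s/|$/|0/g')
echo "$fixed"
# prints: a|0|c|d|0
```

Note that fix-null.sh runs the substitutions in a while loop: for consecutive NULL fields such as a|||b, the || matches overlap, so a single sed pass yields a|0||b and a second pass is needed to reach a|0|0|b.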

Step 5: Generate the query SQL

  1. Create a directory to store the test SQL.

    [wieck@localhost tools] $  mkdir sql-ds
    
  2. The query templates are stored in the DSGen-software-code-3.2.0rc1/query_templates directory. You can use dsqgen to generate the test SQL statements.

    Before you use dsqgen to generate the test SQL statements, you need to modify the template files in the DSGen-software-code-3.2.0rc1/query_templates directory and add define _END = "" to the files. Otherwise, an error will occur. The specific operation is as follows:

    1. Create an sh file and write the code.
    [wieck@localhost tools] $ vim query.sh
    for i in `ls query*tpl`
    do 
        echo $i;  
        echo "define _END = \"\";" >> $i
    done
    
    [wieck@localhost tools] $ sh query.sh
    
    2. Use dsqgen to generate the test SQL statements.

      [wieck@localhost tools] $ vim sql-ds.sh
      #!/bin/sh
      for i in `seq 1 99`
      do
      ./dsqgen -directory ../query_templates/ -template "query${i}.tpl" -dialect netezza  -output ./sql-ds/
      cp ./sql-ds/query_0.sql ./sql-ds/sql${i}.sql
      done
      
      [wieck@localhost tools] $ sh sql-ds.sh
      

      The return result is as follows:

      qgen2 Query Generator (Version 3.2.0)
      Copyright Transaction Processing Performance Council (TPC) 2001 - 2021
      Warning: This scale factor is valid for QUALIFICATION ONLY
      ......
      Copyright Transaction Processing Performance Council (TPC) 2001 - 2021
      qgen2 Query Generator (Version 3.2.0)
      Copyright Transaction Processing Performance Council (TPC) 2001 - 2021
      
  3. View the generated test files.

    [wieck@localhost tools] $  ls sql-ds/
    query_0.sql  sql17.sql  sql25.sql  sql33.sql  sql41.sql  sql5.sql   sql58.sql  sql66.sql  sql74.sql  sql82.sql  sql90.sql  sql99.sql
    sql1.sql     sql18.sql  sql26.sql  sql34.sql  sql42.sql  sql50.sql  sql59.sql  sql67.sql  sql75.sql  sql83.sql  sql91.sql
    sql10.sql    sql19.sql  sql27.sql  sql35.sql  sql43.sql  sql51.sql  sql6.sql   sql68.sql  sql76.sql  sql84.sql  sql92.sql
    sql11.sql    sql2.sql   sql28.sql  sql36.sql  sql44.sql  sql52.sql  sql60.sql  sql69.sql  sql77.sql  sql85.sql  sql93.sql
    sql12.sql    sql20.sql  sql29.sql  sql37.sql  sql45.sql  sql53.sql  sql61.sql  sql7.sql   sql78.sql  sql86.sql  sql94.sql
    sql13.sql    sql21.sql  sql3.sql   sql38.sql  sql46.sql  sql54.sql  sql62.sql  sql70.sql  sql79.sql  sql87.sql  sql95.sql
    sql14.sql    sql22.sql  sql30.sql  sql39.sql  sql47.sql  sql55.sql  sql63.sql  sql71.sql  sql8.sql   sql88.sql  sql96.sql
    sql15.sql    sql23.sql  sql31.sql  sql4.sql   sql48.sql  sql56.sql  sql64.sql  sql72.sql  sql80.sql  sql89.sql  sql97.sql
    sql16.sql    sql24.sql  sql32.sql  sql40.sql  sql49.sql  sql57.sql  sql65.sql  sql73.sql  sql81.sql  sql9.sql   sql98.sql
    
  4. Modify the query SQL.

    • Increase the number of parallel threads.

      Run the following command in the sys tenant to query the total number of CPU cores available to the tenant.

      obclient> SELECT sum(cpu_capacity_max) FROM oceanbase.__all_virtual_server;
      

      For example, the modified SQL for sql1 is as follows:

      with customer_total_return as
      (select /*+ parallel(96) */ sr_customer_sk as ctr_customer_sk -- add parallel execution
      ,sr_store_sk as ctr_store_sk
      ,sum(SR_FEE) as ctr_total_return
      from store_returns
      ,date_dim
      where sr_returned_date_sk = d_date_sk
      and d_year =2000
      group by sr_customer_sk
      ,sr_store_sk)
      select  c_customer_id
      from customer_total_return ctr1
      ,store
      ,customer
      where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2
      from customer_total_return ctr2
      where ctr1.ctr_store_sk = ctr2.ctr_store_sk)
      and s_store_sk = ctr1.ctr_store_sk
      and s_state = 'TN'
      and ctr1.ctr_customer_sk = c_customer_sk
      order by c_customer_id
      limit 100;
      
    • Modify the date functions.

      Adjust the date functions in the SQL statements as follows:

      Change (cast('2025-03-04' as date) + 30 days) to date_add(cast('2025-03-04' as date), interval 30 day) or (cast('2025-03-04' as date) + INTERVAL 30 day).

      Change (cast('2025-04-18' as date) - 30 days) to date_sub(cast('2025-04-18' as date), interval 30 day) or (cast('2025-04-18' as date) - INTERVAL 30 day).

    The modified SQL statement is as follows:

      select  i_item_id
          ,i_item_desc
          ,i_current_price
      from item, inventory, date_dim, catalog_sales
      where i_current_price between 22 and 22 + 30
      and inv_item_sk = i_item_sk
      and d_date_sk=inv_date_sk
      and d_date between cast('2025-04-02' as date) and (cast('2025-04-02' as date) + INTERVAL 60 day)  -- modify the date function
      and i_manufact_id in (678,964,918,849)
      and inv_quantity_on_hand between 100 and 500
      and cs_item_sk = i_item_sk
      group by i_item_id,i_item_desc,i_current_price
      order by i_item_id
      limit 100;
    
    • Modify the usage of rollup functions.

      In the SQL statements, change the rollup() function to group by <col_name> with rollup.

      The modified SQL statement with the ROLLUP clause is as follows:

      select  i_item_id,
              ca_country,
              ca_state,
              ca_county,
              avg( cast(cs_quantity as decimal(12,2))) agg1,
              avg( cast(cs_list_price as decimal(12,2))) agg2,
              avg( cast(cs_coupon_amt as decimal(12,2))) agg3,
              avg( cast(cs_sales_price as decimal(12,2))) agg4,
              avg( cast(cs_net_profit as decimal(12,2))) agg5,
              avg( cast(c_birth_year as decimal(12,2))) agg6,
              avg( cast(cd1.cd_dep_count as decimal(12,2))) agg7
      from catalog_sales, customer_demographics cd1,
          customer_demographics cd2, customer, customer_address, date_dim, item
      where cs_sold_date_sk = d_date_sk and
          cs_item_sk = i_item_sk and
          cs_bill_cdemo_sk = cd1.cd_demo_sk and
          cs_bill_customer_sk = c_customer_sk and
          cd1.cd_gender = 'M' and
          cd1.cd_education_status = 'College' and
          c_current_cdemo_sk = cd2.cd_demo_sk and
          c_current_addr_sk = ca_address_sk and
          c_birth_month in (9,5,12,4,1,10) and
          d_year = 2001 and
          ca_state in ('ND','WI','AL'
                      ,'NC','OK','MS','TN')
      group by i_item_id, ca_country, ca_state, ca_county with rollup
      order by ca_country,
              ca_state,
              ca_county,
              i_item_id
      limit 100;
      
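The date-function rewrites described above can also be applied mechanically with sed rather than by hand. The following is a minimal sketch; rewrite_dates is a hypothetical helper (not part of the TPC-DS tool), and it assumes the generated SQL uses exactly the (cast('YYYY-MM-DD' as date) + N days) spelling:

```shell
# Rewrite "(cast('...' as date) + N days)" and the "- N days" form
# into MySQL-compatible INTERVAL arithmetic.
rewrite_dates() {
    sed -E \
        -e "s/\(cast\('([0-9-]+)' as date\) \+ ([0-9]+) days\)/(cast('\1' as date) + INTERVAL \2 day)/g" \
        -e "s/\(cast\('([0-9-]+)' as date\) - ([0-9]+) days\)/(cast('\1' as date) - INTERVAL \2 day)/g"
}

echo "(cast('2025-04-02' as date) + 60 days)" | rewrite_dates
# prints: (cast('2025-04-02' as date) + INTERVAL 60 day)
```

After checking the output on one file, the same expressions can be applied in place to the files under sql-ds/ with sed -i.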

Step 6: Create a table

Create the create_tpcds_mysql_table_part.ddl file.

[wieck@localhost tools] $ mkdir load
[wieck@localhost tools] $ cd load
[wieck@localhost load] $ vim create_tpcds_mysql_table_part.ddl
SET global NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
SET global NLS_TIMESTAMP_FORMAT='YYYY-MM-DD HH24:MI:SS.FF';
SET global NLS_TIMESTAMP_TZ_FORMAT='YYYY-MM-DD HH24:MI:SS.FF TZR TZD';

set global ob_query_timeout=10800000000;
set global ob_trx_timeout=10000000000;

set global ob_sql_work_area_percentage=80;

set global optimizer_use_sql_plan_baselines = true;
set global optimizer_capture_sql_plan_baselines = true;

alter system set ob_enable_batched_multi_statement='true';

set global parallel_servers_target=10000;

create tablegroup if not exists tpcds_tg_catalog_group_range binding true partition by hash partitions 128;
create tablegroup if not exists tpcds_tg_store_group_range_72_hash binding true 
partition by range columns 1 subpartition BY hash subpartitions 64
(partition part_1 values less than (2450846), 
partition part_2 values less than (2450874), 
partition part_3 values less than (2450905), 
partition part_4 values less than (2450935), 
partition part_5 values less than (2450966), 
partition part_6 values less than (2450996), 
partition part_7 values less than (2451027), 
partition part_8 values less than (2451058), 
partition part_9 values less than (2451088), 
partition part_10 values less than (2451119),
partition part_11 values less than (2451149),
partition part_12 values less than (2451180),
partition part_13 values less than (2451211),
partition part_14 values less than (2451239),
partition part_15 values less than (2451270),
partition part_16 values less than (2451300),
partition part_17 values less than (2451331),
partition part_18 values less than (2451361),
partition part_19 values less than (2451392),
partition part_20 values less than (2451423),
partition part_21 values less than (2451453),
partition part_22 values less than (2451484),
partition part_23 values less than (2451514),
partition part_24 values less than (2451545),
partition part_25 values less than (2451576),
partition part_26 values less than (2451605),
partition part_27 values less than (2451636),
partition part_28 values less than (2451666),
partition part_29 values less than (2451697),
partition part_30 values less than (2451727),
partition part_31 values less than (2451758),
partition part_32 values less than (2451789),
partition part_33 values less than (2451819),
partition part_34 values less than (2451850),
partition part_35 values less than (2451880),
partition part_36 values less than (2451911),
partition part_37 values less than (2451942),
partition part_38 values less than (2451970),
partition part_39 values less than (2452001),
partition part_40 values less than (2452031),
partition part_41 values less than (2452062),
partition part_42 values less than (2452092),
partition part_43 values less than (2452123),
partition part_44 values less than (2452154),
partition part_45 values less than (2452184),
partition part_46 values less than (2452215), 
partition part_47 values less than (2452245),
partition part_48 values less than (2452276),
partition part_49 values less than (2452307),
partition part_50 values less than (2452335),
partition part_51 values less than (2452366),
partition part_52 values less than (2452396),
partition part_53 values less than (2452427),
partition part_54 values less than (2452457),
partition part_55 values less than (2452488),
partition part_56 values less than (2452519),
partition part_57 values less than (2452549),
partition part_58 values less than (2452580),
partition part_59 values less than (2452610),
partition part_60 values less than (2452641),
partition part_61 values less than (2452672),
partition part_62 values less than (2452700),
partition part_63 values less than (2452731),
partition part_64 values less than (2452761),
partition part_65 values less than (2452792),
partition part_66 values less than (2452822),
partition part_67 values less than (2452853),
partition part_68 values less than (2452884),
partition part_69 values less than (2452914),
partition part_70 values less than (2452945),
partition part_71 values less than (2452975),
partition part_72 values less than (2453006),
partition part_73 values less than (maxvalue));

create table dbgen_version
(
    dv_version                varchar(16)                   ,
    dv_create_date            date                          ,
    dv_create_time            varchar(20)                   ,
    dv_cmdline_args           varchar(200)
);

CREATE TABLE customer_address
(
 ca_address_sk      bigint NOT NULL,
 ca_address_id      varchar(16) NOT NULL,
 ca_street_number   varchar(10),
 ca_street_name     varchar(60),
 ca_street_type     varchar(15),
 ca_suite_number    varchar(10),
 ca_city        varchar(60),
 ca_county      varchar(30),
 ca_state       varchar(2),
 ca_zip         varchar(10),
 ca_country         varchar(20),
 ca_gmt_offset      decimal(5,2),
 ca_location_type   varchar(20),
 primary key(ca_address_sk)
) partition by hash (ca_address_sk) partitions 128;

create table customer_demographics
(
 cd_demo_sk                bigint not null,
 cd_gender                 char(1),
 cd_marital_status         char(1),
 cd_education_status       char(20),
 cd_purchase_estimate      int,
 cd_credit_rating          char(10),
 cd_dep_count              int,
 cd_dep_employed_count     int,
 cd_dep_college_count      int,
 primary key(cd_demo_sk)
) partition by hash (cd_demo_sk) partitions 128;

create table date_dim
(
 d_date_sk                 bigint not null,
 d_date_id                 char(16) not null,
 d_date                    date,
 d_month_seq               int,
 d_week_seq                int,
 d_quarter_seq             int,
 d_year                    int,
 d_dow                     int,
 d_moy                     int,
 d_dom                     int,
 d_qoy                     int,
 d_fy_year                 int,
 d_fy_quarter_seq          int,
 d_fy_week_seq             int,
 d_day_name                char(9),
 d_quarter_name            char(6),
 d_holiday                 char(1),
 d_weekend                 char(1),
 d_following_holiday       char(1),
 d_first_dom               int,
 d_last_dom                int,
 d_same_day_ly             int,
 d_same_day_lq             int,
 d_current_day             char(1),
 d_current_week            char(1),
 d_current_month           char(1),
 d_current_quarter         char(1),
 d_current_year            char(1),
 primary key(d_date_sk)
);

create table warehouse
(
 w_warehouse_sk            bigint not null,
 w_warehouse_id            char(16) not null,
 w_warehouse_name          varchar(20),
 w_warehouse_sq_ft         int,
 w_street_number           char(10),
 w_street_name             varchar(60),
 w_street_type             char(15),
 w_suite_number            char(10),
 w_city                    varchar(60),
 w_county                  varchar(30),
 w_state                   char(2),
 w_zip                     char(10),
 w_country                 varchar(20),
 w_gmt_offset              decimal(5,2),
 primary key(w_warehouse_sk)
 );

create table ship_mode
(
    sm_ship_mode_sk           bigint,
    sm_ship_mode_id           char(16) not null,
    sm_type                   char(30),
    sm_code                   char(10),
    sm_carrier                char(20),
    sm_contract               char(20),
    primary key(sm_ship_mode_sk)
) ;

create table time_dim
(
 t_time_sk                 bigint not null,
 t_time_id                 char(16) not null,
 t_time                    int,
 t_hour                    int,
 t_minute                  int,
 t_second                  int,
 t_am_pm                   char(2),
 t_shift                   char(20),
 t_sub_shift               char(20),
 t_meal_time               char(20),
 primary key(t_time_sk)
 );

create table reason
(
    r_reason_sk     bigint not null,
    r_reason_id     char(16) not null,
    r_reason_desc   char(100),
    PRIMARY key(r_reason_sk)
);

create table income_band
(
 ib_income_band_sk         bigint not null,
 ib_lower_bound            int,
 ib_upper_bound            int,
 PRIMARY key(ib_income_band_sk)
);

create table item
(
 i_item_sk                 bigint not null,
 i_item_id                 char(16) not null,
 i_rec_start_date          date,
 i_rec_end_date            date,
 i_item_desc               varchar(200),
 i_current_price           decimal(7,2),
 i_wholesale_cost          decimal(7,2),
 i_brand_id                int,
 i_brand                   char(50),
 i_class_id                int,
 i_class                   char(50),
 i_category_id             int,
 i_category                char(50),
 i_manufact_id             int,
 i_manufact                char(50),
 i_size                    char(20),
 i_formulation             char(20),
 i_color                   char(20),
 i_units                   char(10),
 i_container               char(10),
 i_manager_id              int,
 i_product_name            char(50),
 PRIMARY key(i_item_sk)
) ;

create table store
(
 s_store_sk                bigint not null,
 s_store_id                char(16) not null,
 s_rec_start_date          date,
 s_rec_end_date            date,
 s_closed_date_sk          bigint,
 s_store_name              varchar(50),
 s_number_employees        int,
 s_floor_space             int,
 s_hours                   char(20),
 s_manager                 varchar(40),
 s_market_id               int,
 s_geography_class         varchar(100),
 s_market_desc             varchar(100),
 s_market_manager          varchar(40),
 s_division_id             int,
 s_division_name           varchar(50),
 s_company_id              int,
 s_company_name            varchar(50),
 s_street_number           varchar(10),
 s_street_name             varchar(60),
 s_street_type             char(15),
 s_suite_number            char(10),
 s_city                    varchar(60),
 s_county                  varchar(30),
 s_state                   char(2),
 s_zip                     char(10),
 s_country                 varchar(20),
 s_gmt_offset              decimal(5,2),
 s_tax_percentage          decimal(5,2),
 PRIMARY key(s_store_sk)
);

create table call_center
(
 cc_call_center_sk             bigint not null,
 cc_call_center_id             char(16) not null,
 cc_rec_start_date             date,
 cc_rec_end_date               date,
 cc_closed_date_sk             bigint,
 cc_open_date_sk               bigint,
 cc_name                       varchar(50),
 cc_class                      varchar(50),
 cc_employees                  int,
 cc_sq_ft                      int,
 cc_hours                      char(20),
 cc_manager                    varchar(40),
 cc_mkt_id                     int,
 cc_mkt_class                  char(50),
 cc_mkt_desc                   varchar(100),
 cc_market_manager             varchar(40),
 cc_division                   int,
 cc_division_name              varchar(50),
 cc_company                    int,
 cc_company_name               char(50),
 cc_street_number              char(10),
 cc_street_name                varchar(60),
 cc_street_type                char(15),
 cc_suite_number               char(10),
 cc_city                       varchar(60),
 cc_county                     varchar(30),
 cc_state                      char(2),
 cc_zip                        char(10),
 cc_country                    varchar(20),
 cc_gmt_offset                 decimal(5,2),
 cc_tax_percentage             decimal(5,2),
 PRIMARY key(cc_call_center_sk)
);

CREATE TABLE customer
(
  c_customer_sk         bigint NOT NULL,
  c_customer_id         char(16) NOT NULL,
  c_current_cdemo_sk        bigint,
  c_current_hdemo_sk        bigint,
  c_current_addr_sk         bigint,
  c_first_shipto_date_sk    bigint,
  c_first_sales_date_sk     bigint,
  c_salutation          char(10),
  c_first_name          char(20),
  c_last_name           char(30),
  c_preferred_cust_flag     char(1),
  c_birth_day           int,
  c_birth_month         int,
  c_birth_year          int,
  c_birth_country       varchar(20),
  c_login           char(13),
  c_email_address       char(50),
  c_last_review_date_sk     bigint,
  PRIMARY key(c_customer_sk)
)partition by hash (c_customer_sk) partitions 128;

create table web_site
(
 web_site_sk               bigint not null,
 web_site_id               char(16) not null,
 web_rec_start_date        date,
 web_rec_end_date          date,
 web_name                  varchar(50),
 web_open_date_sk          bigint,
 web_close_date_sk         bigint,
 web_class                 varchar(50),
 web_manager               varchar(40),
 web_mkt_id                int,
 web_mkt_class             varchar(50),
 web_mkt_desc              varchar(100),
 web_market_manager        varchar(40),
 web_company_id            int,
 web_company_name          char(50),
 web_street_number         char(10),
 web_street_name           varchar(60),
 web_street_type           char(15),
 web_suite_number          char(10),
 web_city                  varchar(60),
 web_county                varchar(30),
 web_state                 char(2),
 web_zip                   char(10),
 web_country               varchar(20),
 web_gmt_offset            decimal(5,2),
 web_tax_percentage        decimal(5,2),
 PRIMARY key(web_site_sk)
);

create table store_returns
(
 sr_returned_date_sk       bigint,
 sr_return_time_sk         bigint,
 sr_item_sk                bigint not null,
 sr_customer_sk            bigint,
 sr_cdemo_sk               bigint,
 sr_hdemo_sk               bigint,
 sr_addr_sk                bigint,
 sr_store_sk               bigint,
 sr_reason_sk              bigint,
 sr_ticket_number          bigint not null,
 sr_return_quantity        int,
 sr_return_amt             decimal(7,2),
 sr_return_tax             decimal(7,2),
 sr_return_amt_inc_tax     decimal(7,2),
 sr_fee                    decimal(7,2),
 sr_return_ship_cost       decimal(7,2),
 sr_refunded_cash          decimal(7,2),
 sr_reversed_charge        decimal(7,2),
 sr_store_credit           decimal(7,2),
 sr_net_loss               decimal(7,2)
)
partition by range (sr_returned_date_sk)
subpartition BY hash(sr_item_sk) subpartitions 64
(partition part_1 values less than (2450846), 
partition part_2 values less than (2450874), 
partition part_3 values less than (2450905), 
partition part_4 values less than (2450935), 
partition part_5 values less than (2450966), 
partition part_6 values less than (2450996), 
partition part_7 values less than (2451027), 
partition part_8 values less than (2451058), 
partition part_9 values less than (2451088), 
partition part_10 values less than (2451119),
partition part_11 values less than (2451149),
partition part_12 values less than (2451180),
partition part_13 values less than (2451211),
partition part_14 values less than (2451239),
partition part_15 values less than (2451270),
partition part_16 values less than (2451300),
partition part_17 values less than (2451331),
partition part_18 values less than (2451361),
partition part_19 values less than (2451392),
partition part_20 values less than (2451423),
partition part_21 values less than (2451453),
partition part_22 values less than (2451484),
partition part_23 values less than (2451514),
partition part_24 values less than (2451545),
partition part_25 values less than (2451576),
partition part_26 values less than (2451605),
partition part_27 values less than (2451636),
partition part_28 values less than (2451666),
partition part_29 values less than (2451697),
partition part_30 values less than (2451727),
partition part_31 values less than (2451758),
partition part_32 values less than (2451789),
partition part_33 values less than (2451819),
partition part_34 values less than (2451850),
partition part_35 values less than (2451880),
partition part_36 values less than (2451911),
partition part_37 values less than (2451942),
partition part_38 values less than (2451970),
partition part_39 values less than (2452001),
partition part_40 values less than (2452031),
partition part_41 values less than (2452062),
partition part_42 values less than (2452092),
partition part_43 values less than (2452123),
partition part_44 values less than (2452154),
partition part_45 values less than (2452184),
partition part_46 values less than (2452215),   
partition part_47 values less than (2452245),
partition part_48 values less than (2452276),
partition part_49 values less than (2452307),
partition part_50 values less than (2452335),
partition part_51 values less than (2452366),
partition part_52 values less than (2452396),
partition part_53 values less than (2452427),
partition part_54 values less than (2452457),
partition part_55 values less than (2452488),
partition part_56 values less than (2452519),
partition part_57 values less than (2452549),
partition part_58 values less than (2452580),
partition part_59 values less than (2452610),
partition part_60 values less than (2452641),
partition part_61 values less than (2452672),
partition part_62 values less than (2452700),
partition part_63 values less than (2452731),
partition part_64 values less than (2452761),
partition part_65 values less than (2452792),
partition part_66 values less than (2452822),
partition part_67 values less than (2452853),
partition part_68 values less than (2452884),
partition part_69 values less than (2452914),
partition part_70 values less than (2452945),
partition part_71 values less than (2452975),
partition part_72 values less than (2453006),
partition part_73 values less than (maxvalue));

create table household_demographics
(
 hd_demo_sk                bigint not null,
 hd_income_band_sk         bigint,
 hd_buy_potential          char(15),
 hd_dep_count              int,
 hd_vehicle_count          int,
 PRIMARY key(hd_demo_sk)
);

create table web_page
(
 wp_web_page_sk            bigint not null,
 wp_web_page_id            char(16) not null,
 wp_rec_start_date         date,
 wp_rec_end_date           date,
 wp_creation_date_sk       bigint,
 wp_access_date_sk         bigint,
 wp_autogen_flag           char(1),
 wp_customer_sk            bigint,
 wp_url                    varchar(100),
 wp_type                   char(50),
 wp_char_count             int,
 wp_link_count             int,
 wp_image_count            int,
 wp_max_ad_count           int,
 PRIMARY key(wp_web_page_sk)
);

create table promotion
(
 p_promo_sk                bigint not null,
 p_promo_id                char(16) not null,
 p_start_date_sk           bigint,
 p_end_date_sk             bigint,
 p_item_sk                 bigint,
 p_cost                    decimal(15,2),
 p_response_target         int,
 p_promo_name              char(50),
 p_channel_dmail           char(1),
 p_channel_email           char(1),
 p_channel_catalog         char(1),
 p_channel_tv              char(1),
 p_channel_radio           char(1),
 p_channel_press           char(1),
 p_channel_event           char(1),
 p_channel_demo            char(1),
 p_channel_details         varchar(100),
 p_purpose                 char(15),
 p_discount_active         char(1),
 PRIMARY key(p_promo_sk)
);

create table catalog_page
(
 cp_catalog_page_sk        bigint not null,
 cp_catalog_page_id        varchar(16) not null,
 cp_start_date_sk          bigint,
 cp_end_date_sk            bigint,
 cp_department             varchar(50),
 cp_catalog_number         int,
 cp_catalog_page_number    int,
 cp_description            varchar(100),
 cp_type                   varchar(100),
 primary key(cp_catalog_page_sk)
);

create table inventory
(
 inv_date_sk               bigint not null,
 inv_item_sk               bigint not null,
 inv_warehouse_sk          bigint not null,
 inv_quantity_on_hand      int
)
partition by range (inv_date_sk)
subpartition BY hash(inv_item_sk) subpartitions 64
(partition part_1 values less than (2450846), 
partition part_2 values less than (2450874), 
partition part_3 values less than (2450905), 
partition part_4 values less than (2450935), 
partition part_5 values less than (2450966), 
partition part_6 values less than (2450996), 
partition part_7 values less than (2451027), 
partition part_8 values less than (2451058), 
partition part_9 values less than (2451088), 
partition part_10 values less than (2451119),
partition part_11 values less than (2451149),
partition part_12 values less than (2451180),
partition part_13 values less than (2451211),
partition part_14 values less than (2451239),
partition part_15 values less than (2451270),
partition part_16 values less than (2451300),
partition part_17 values less than (2451331),
partition part_18 values less than (2451361),
partition part_19 values less than (2451392),
partition part_20 values less than (2451423),
partition part_21 values less than (2451453),
partition part_22 values less than (2451484),
partition part_23 values less than (2451514),
partition part_24 values less than (2451545),
partition part_25 values less than (2451576),
partition part_26 values less than (2451605),
partition part_27 values less than (2451636),
partition part_28 values less than (2451666),
partition part_29 values less than (2451697),
partition part_30 values less than (2451727),
partition part_31 values less than (2451758),
partition part_32 values less than (2451789),
partition part_33 values less than (2451819),
partition part_34 values less than (2451850),
partition part_35 values less than (2451880),
partition part_36 values less than (2451911),
partition part_37 values less than (2451942),
partition part_38 values less than (2451970),
partition part_39 values less than (2452001),
partition part_40 values less than (2452031),
partition part_41 values less than (2452062),
partition part_42 values less than (2452092),
partition part_43 values less than (2452123),
partition part_44 values less than (2452154),
partition part_45 values less than (2452184),
partition part_46 values less than (2452215),   
partition part_47 values less than (2452245),
partition part_48 values less than (2452276),
partition part_49 values less than (2452307),
partition part_50 values less than (2452335),
partition part_51 values less than (2452366),
partition part_52 values less than (2452396),
partition part_53 values less than (2452427),
partition part_54 values less than (2452457),
partition part_55 values less than (2452488),
partition part_56 values less than (2452519),
partition part_57 values less than (2452549),
partition part_58 values less than (2452580),
partition part_59 values less than (2452610),
partition part_60 values less than (2452641),
partition part_61 values less than (2452672),
partition part_62 values less than (2452700),
partition part_63 values less than (2452731),
partition part_64 values less than (2452761),
partition part_65 values less than (2452792),
partition part_66 values less than (2452822),
partition part_67 values less than (2452853),
partition part_68 values less than (2452884),
partition part_69 values less than (2452914),
partition part_70 values less than (2452945),
partition part_71 values less than (2452975),
partition part_72 values less than (2453006),
partition part_73 values less than (maxvalue));

create table catalog_returns
(
 cr_returned_date_sk       bigint,
 cr_returned_time_sk       bigint,
 cr_item_sk                bigint not null,
 cr_refunded_customer_sk   bigint,
 cr_refunded_cdemo_sk      bigint,
 cr_refunded_hdemo_sk      bigint,
 cr_refunded_addr_sk       bigint,
 cr_returning_customer_sk  bigint,
 cr_returning_cdemo_sk     bigint,
 cr_returning_hdemo_sk     bigint,
 cr_returning_addr_sk      bigint,
 cr_call_center_sk         bigint,
 cr_catalog_page_sk        bigint,
 cr_ship_mode_sk           bigint,
 cr_warehouse_sk           bigint,
 cr_reason_sk              bigint,
 cr_order_number           bigint not null,
 cr_return_quantity        int,
 cr_return_amount          decimal(7,2),
 cr_return_tax             decimal(7,2),
 cr_return_amt_inc_tax     decimal(7,2),
 cr_fee                    decimal(7,2),
 cr_return_ship_cost       decimal(7,2),
 cr_refunded_cash          decimal(7,2),
 cr_reversed_charge        decimal(7,2),
 cr_store_credit           decimal(7,2),
 cr_net_loss               decimal(7,2)
)
tablegroup = tpcds_tg_catalog_group_range
partition by hash(cr_item_sk) partitions 128
;

create table web_returns
(
 wr_returned_date_sk       bigint,
 wr_returned_time_sk       bigint,
 wr_item_sk                bigint not null,
 wr_refunded_customer_sk   bigint,
 wr_refunded_cdemo_sk      bigint,
 wr_refunded_hdemo_sk      bigint,
 wr_refunded_addr_sk       bigint,
 wr_returning_customer_sk  bigint,
 wr_returning_cdemo_sk     bigint,
 wr_returning_hdemo_sk     bigint,
 wr_returning_addr_sk      bigint,
 wr_web_page_sk            bigint,
 wr_reason_sk              bigint,
 wr_order_number           bigint not null,
 wr_return_quantity        int,
 wr_return_amt             decimal(7,2),
 wr_return_tax             decimal(7,2),
 wr_return_amt_inc_tax     decimal(7,2),
 wr_fee                    decimal(7,2),
 wr_return_ship_cost       decimal(7,2),
 wr_refunded_cash          decimal(7,2),
 wr_reversed_charge        decimal(7,2),
 wr_account_credit         decimal(7,2),
 wr_net_loss               decimal(7,2)
)
partition by range(wr_returned_date_sk) 
subpartition BY hash(wr_item_sk) subpartitions 64
(partition part_1 values less than (2450846), 
partition part_2 values less than (2450874), 
partition part_3 values less than (2450905), 
partition part_4 values less than (2450935), 
partition part_5 values less than (2450966), 
partition part_6 values less than (2450996), 
partition part_7 values less than (2451027), 
partition part_8 values less than (2451058), 
partition part_9 values less than (2451088), 
partition part_10 values less than (2451119),
partition part_11 values less than (2451149),
partition part_12 values less than (2451180),
partition part_13 values less than (2451211),
partition part_14 values less than (2451239),
partition part_15 values less than (2451270),
partition part_16 values less than (2451300),
partition part_17 values less than (2451331),
partition part_18 values less than (2451361),
partition part_19 values less than (2451392),
partition part_20 values less than (2451423),
partition part_21 values less than (2451453),
partition part_22 values less than (2451484),
partition part_23 values less than (2451514),
partition part_24 values less than (2451545),
partition part_25 values less than (2451576),
partition part_26 values less than (2451605),
partition part_27 values less than (2451636),
partition part_28 values less than (2451666),
partition part_29 values less than (2451697),
partition part_30 values less than (2451727),
partition part_31 values less than (2451758),
partition part_32 values less than (2451789),
partition part_33 values less than (2451819),
partition part_34 values less than (2451850),
partition part_35 values less than (2451880),
partition part_36 values less than (2451911),
partition part_37 values less than (2451942),
partition part_38 values less than (2451970),
partition part_39 values less than (2452001),
partition part_40 values less than (2452031),
partition part_41 values less than (2452062),
partition part_42 values less than (2452092),
partition part_43 values less than (2452123),
partition part_44 values less than (2452154),
partition part_45 values less than (2452184),
partition part_46 values less than (2452215),   
partition part_47 values less than (2452245),
partition part_48 values less than (2452276),
partition part_49 values less than (2452307),
partition part_50 values less than (2452335),
partition part_51 values less than (2452366),
partition part_52 values less than (2452396),
partition part_53 values less than (2452427),
partition part_54 values less than (2452457),
partition part_55 values less than (2452488),
partition part_56 values less than (2452519),
partition part_57 values less than (2452549),
partition part_58 values less than (2452580),
partition part_59 values less than (2452610),
partition part_60 values less than (2452641),
partition part_61 values less than (2452672),
partition part_62 values less than (2452700),
partition part_63 values less than (2452731),
partition part_64 values less than (2452761),
partition part_65 values less than (2452792),
partition part_66 values less than (2452822),
partition part_67 values less than (2452853),
partition part_68 values less than (2452884),
partition part_69 values less than (2452914),
partition part_70 values less than (2452945),
partition part_71 values less than (2452975),
partition part_72 values less than (2453006),
partition part_73 values less than (maxvalue))
;

create table web_sales
(
 ws_sold_date_sk           bigint,
 ws_sold_time_sk           bigint,
 ws_ship_date_sk           bigint,
 ws_item_sk                bigint not null,
 ws_bill_customer_sk       bigint,
 ws_bill_cdemo_sk          bigint,
 ws_bill_hdemo_sk          bigint,
 ws_bill_addr_sk           bigint,
 ws_ship_customer_sk       bigint,
 ws_ship_cdemo_sk          bigint,
 ws_ship_hdemo_sk          bigint,
 ws_ship_addr_sk           bigint,
 ws_web_page_sk            bigint,
 ws_web_site_sk            bigint,
 ws_ship_mode_sk           bigint,
 ws_warehouse_sk           bigint,
 ws_promo_sk               bigint,
 ws_order_number           bigint not null,
 ws_quantity               int,
 ws_wholesale_cost         decimal(7,2),
 ws_list_price             decimal(7,2),
 ws_sales_price            decimal(7,2),
 ws_ext_discount_amt       decimal(7,2),
 ws_ext_sales_price        decimal(7,2),
 ws_ext_wholesale_cost     decimal(7,2),
 ws_ext_list_price         decimal(7,2),
 ws_ext_tax                decimal(7,2),
 ws_coupon_amt             decimal(7,2),
 ws_ext_ship_cost          decimal(7,2),
 ws_net_paid               decimal(7,2),
 ws_net_paid_inc_tax       decimal(7,2),
 ws_net_paid_inc_ship      decimal(7,2),
 ws_net_paid_inc_ship_tax  decimal(7,2),
 ws_net_profit             decimal(7,2)
) 
partition by range (ws_sold_date_sk) 
subpartition BY hash(ws_item_sk) subpartitions 64
(partition part_1 values less than (2450846), 
partition part_2 values less than (2450874), 
partition part_3 values less than (2450905), 
partition part_4 values less than (2450935), 
partition part_5 values less than (2450966), 
partition part_6 values less than (2450996), 
partition part_7 values less than (2451027), 
partition part_8 values less than (2451058), 
partition part_9 values less than (2451088), 
partition part_10 values less than (2451119),
partition part_11 values less than (2451149),
partition part_12 values less than (2451180),
partition part_13 values less than (2451211),
partition part_14 values less than (2451239),
partition part_15 values less than (2451270),
partition part_16 values less than (2451300),
partition part_17 values less than (2451331),
partition part_18 values less than (2451361),
partition part_19 values less than (2451392),
partition part_20 values less than (2451423),
partition part_21 values less than (2451453),
partition part_22 values less than (2451484),
partition part_23 values less than (2451514),
partition part_24 values less than (2451545),
partition part_25 values less than (2451576),
partition part_26 values less than (2451605),
partition part_27 values less than (2451636),
partition part_28 values less than (2451666),
partition part_29 values less than (2451697),
partition part_30 values less than (2451727),
partition part_31 values less than (2451758),
partition part_32 values less than (2451789),
partition part_33 values less than (2451819),
partition part_34 values less than (2451850),
partition part_35 values less than (2451880),
partition part_36 values less than (2451911),
partition part_37 values less than (2451942),
partition part_38 values less than (2451970),
partition part_39 values less than (2452001),
partition part_40 values less than (2452031),
partition part_41 values less than (2452062),
partition part_42 values less than (2452092),
partition part_43 values less than (2452123),
partition part_44 values less than (2452154),
partition part_45 values less than (2452184),
partition part_46 values less than (2452215),   
partition part_47 values less than (2452245),
partition part_48 values less than (2452276),
partition part_49 values less than (2452307),
partition part_50 values less than (2452335),
partition part_51 values less than (2452366),
partition part_52 values less than (2452396),
partition part_53 values less than (2452427),
partition part_54 values less than (2452457),
partition part_55 values less than (2452488),
partition part_56 values less than (2452519),
partition part_57 values less than (2452549),
partition part_58 values less than (2452580),
partition part_59 values less than (2452610),
partition part_60 values less than (2452641),
partition part_61 values less than (2452672),
partition part_62 values less than (2452700),
partition part_63 values less than (2452731),
partition part_64 values less than (2452761),
partition part_65 values less than (2452792),
partition part_66 values less than (2452822),
partition part_67 values less than (2452853),
partition part_68 values less than (2452884),
partition part_69 values less than (2452914),
partition part_70 values less than (2452945),
partition part_71 values less than (2452975),
partition part_72 values less than (2453006),
partition part_73 values less than (maxvalue))
;

create table catalog_sales
(
 cs_sold_date_sk           bigint,
 cs_sold_time_sk           bigint,
 cs_ship_date_sk           bigint,
 cs_bill_customer_sk       bigint,
 cs_bill_cdemo_sk          bigint,
 cs_bill_hdemo_sk          bigint,
 cs_bill_addr_sk           bigint,
 cs_ship_customer_sk       bigint,
 cs_ship_cdemo_sk          bigint,
 cs_ship_hdemo_sk          bigint,
 cs_ship_addr_sk           bigint,
 cs_call_center_sk         bigint,
 cs_catalog_page_sk        bigint,
 cs_ship_mode_sk           bigint,
 cs_warehouse_sk           bigint,
 cs_item_sk                bigint not null,
 cs_promo_sk               bigint,
 cs_order_number           bigint not null,
 cs_quantity               int,
 cs_wholesale_cost         decimal(7,2),
 cs_list_price             decimal(7,2),
 cs_sales_price            decimal(7,2),
 cs_ext_discount_amt       decimal(7,2),
 cs_ext_sales_price        decimal(7,2),
 cs_ext_wholesale_cost     decimal(7,2),
 cs_ext_list_price         decimal(7,2),
 cs_ext_tax                decimal(7,2),
 cs_coupon_amt             decimal(7,2),
 cs_ext_ship_cost          decimal(7,2),
 cs_net_paid               decimal(7,2),
 cs_net_paid_inc_tax       decimal(7,2),
 cs_net_paid_inc_ship      decimal(7,2),
 cs_net_paid_inc_ship_tax  decimal(7,2),
 cs_net_profit             decimal(7,2)
 )
partition by range (cs_sold_date_sk) 
subpartition BY hash(cs_item_sk) subpartitions 64
(partition part_1 values less than (2450846), 
partition part_2 values less than (2450874), 
partition part_3 values less than (2450905), 
partition part_4 values less than (2450935), 
partition part_5 values less than (2450966), 
partition part_6 values less than (2450996), 
partition part_7 values less than (2451027), 
partition part_8 values less than (2451058), 
partition part_9 values less than (2451088), 
partition part_10 values less than (2451119),
partition part_11 values less than (2451149),
partition part_12 values less than (2451180),
partition part_13 values less than (2451211),
partition part_14 values less than (2451239),
partition part_15 values less than (2451270),
partition part_16 values less than (2451300),
partition part_17 values less than (2451331),
partition part_18 values less than (2451361),
partition part_19 values less than (2451392),
partition part_20 values less than (2451423),
partition part_21 values less than (2451453),
partition part_22 values less than (2451484),
partition part_23 values less than (2451514),
partition part_24 values less than (2451545),
partition part_25 values less than (2451576),
partition part_26 values less than (2451605),
partition part_27 values less than (2451636),
partition part_28 values less than (2451666),
partition part_29 values less than (2451697),
partition part_30 values less than (2451727),
partition part_31 values less than (2451758),
partition part_32 values less than (2451789),
partition part_33 values less than (2451819),
partition part_34 values less than (2451850),
partition part_35 values less than (2451880),
partition part_36 values less than (2451911),
partition part_37 values less than (2451942),
partition part_38 values less than (2451970),
partition part_39 values less than (2452001),
partition part_40 values less than (2452031),
partition part_41 values less than (2452062),
partition part_42 values less than (2452092),
partition part_43 values less than (2452123),
partition part_44 values less than (2452154),
partition part_45 values less than (2452184),
partition part_46 values less than (2452215),   
partition part_47 values less than (2452245),
partition part_48 values less than (2452276),
partition part_49 values less than (2452307),
partition part_50 values less than (2452335),
partition part_51 values less than (2452366),
partition part_52 values less than (2452396),
partition part_53 values less than (2452427),
partition part_54 values less than (2452457),
partition part_55 values less than (2452488),
partition part_56 values less than (2452519),
partition part_57 values less than (2452549),
partition part_58 values less than (2452580),
partition part_59 values less than (2452610),
partition part_60 values less than (2452641),
partition part_61 values less than (2452672),
partition part_62 values less than (2452700),
partition part_63 values less than (2452731),
partition part_64 values less than (2452761),
partition part_65 values less than (2452792),
partition part_66 values less than (2452822),
partition part_67 values less than (2452853),
partition part_68 values less than (2452884),
partition part_69 values less than (2452914),
partition part_70 values less than (2452945),
partition part_71 values less than (2452975),
partition part_72 values less than (2453006),
partition part_73 values less than (maxvalue));

create table store_sales
(
 ss_sold_date_sk           bigint,
 ss_sold_time_sk           bigint,
 ss_item_sk                bigint not null,
 ss_customer_sk            bigint,
 ss_cdemo_sk               bigint,
 ss_hdemo_sk               bigint,
 ss_addr_sk                bigint,
 ss_store_sk               bigint,
 ss_promo_sk               bigint,
 ss_ticket_number          bigint not null,
 ss_quantity               int,
 ss_wholesale_cost         decimal(7,2),
 ss_list_price             decimal(7,2),
 ss_sales_price            decimal(7,2),
 ss_ext_discount_amt       decimal(7,2),
 ss_ext_sales_price        decimal(7,2),
 ss_ext_wholesale_cost     decimal(7,2),
 ss_ext_list_price         decimal(7,2),
 ss_ext_tax                decimal(7,2),
 ss_coupon_amt             decimal(7,2),
 ss_net_paid               decimal(7,2),
 ss_net_paid_inc_tax       decimal(7,2),
 ss_net_profit             decimal(7,2)
)
partition by range(ss_sold_date_sk) 
subpartition BY hash(ss_item_sk) subpartitions 64
(partition part_1 values less than (2450846), 
partition part_2 values less than (2450874), 
partition part_3 values less than (2450905), 
partition part_4 values less than (2450935), 
partition part_5 values less than (2450966), 
partition part_6 values less than (2450996), 
partition part_7 values less than (2451027), 
partition part_8 values less than (2451058), 
partition part_9 values less than (2451088), 
partition part_10 values less than (2451119),
partition part_11 values less than (2451149),
partition part_12 values less than (2451180),
partition part_13 values less than (2451211),
partition part_14 values less than (2451239),
partition part_15 values less than (2451270),
partition part_16 values less than (2451300),
partition part_17 values less than (2451331),
partition part_18 values less than (2451361),
partition part_19 values less than (2451392),
partition part_20 values less than (2451423),
partition part_21 values less than (2451453),
partition part_22 values less than (2451484),
partition part_23 values less than (2451514),
partition part_24 values less than (2451545),
partition part_25 values less than (2451576),
partition part_26 values less than (2451605),
partition part_27 values less than (2451636),
partition part_28 values less than (2451666),
partition part_29 values less than (2451697),
partition part_30 values less than (2451727),
partition part_31 values less than (2451758),
partition part_32 values less than (2451789),
partition part_33 values less than (2451819),
partition part_34 values less than (2451850),
partition part_35 values less than (2451880),
partition part_36 values less than (2451911),
partition part_37 values less than (2451942),
partition part_38 values less than (2451970),
partition part_39 values less than (2452001),
partition part_40 values less than (2452031),
partition part_41 values less than (2452062),
partition part_42 values less than (2452092),
partition part_43 values less than (2452123),
partition part_44 values less than (2452154),
partition part_45 values less than (2452184),
partition part_46 values less than (2452215),   
partition part_47 values less than (2452245),
partition part_48 values less than (2452276),
partition part_49 values less than (2452307),
partition part_50 values less than (2452335),
partition part_51 values less than (2452366),
partition part_52 values less than (2452396),
partition part_53 values less than (2452427),
partition part_54 values less than (2452457),
partition part_55 values less than (2452488),
partition part_56 values less than (2452519),
partition part_57 values less than (2452549),
partition part_58 values less than (2452580),
partition part_59 values less than (2452610),
partition part_60 values less than (2452641),
partition part_61 values less than (2452672),
partition part_62 values less than (2452700),
partition part_63 values less than (2452731),
partition part_64 values less than (2452761),
partition part_65 values less than (2452792),
partition part_66 values less than (2452822),
partition part_67 values less than (2452853),
partition part_68 values less than (2452884),
partition part_69 values less than (2452914),
partition part_70 values less than (2452945),
partition part_71 values less than (2452975),
partition part_72 values less than (2453006),
partition part_73 values less than (maxvalue));
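
The range boundaries such as 2450846 and 2453006 in the partition clauses above are TPC-DS date surrogate keys, which are Julian day numbers. A minimal Python sketch (assuming the standard TPC-DS mapping, in which d_date_sk 2451545 corresponds to 2000-01-01) shows that the boundaries fall on month starts from February 1998 through January 2004, so each range partition holds roughly one month of data:

```python
from datetime import date, timedelta

# In the TPC-DS date_dim table, d_date_sk 2451545 is the Julian day
# number for 2000-01-01; every other surrogate key is a day offset from it.
ANCHOR_SK, ANCHOR_DATE = 2451545, date(2000, 1, 1)

def sk_to_date(date_sk):
    """Convert a d_date_sk Julian day number to a calendar date."""
    return ANCHOR_DATE + timedelta(days=date_sk - ANCHOR_SK)

print(sk_to_date(2450846))  # 1998-02-01, the first range boundary
print(sk_to_date(2453006))  # 2004-01-01, the last explicit boundary
```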

Step 7: Load data

Write a script that loads the data files generated in the preceding steps by using the DDL statements prepared earlier. The following example shows how to load the data:
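
Note that the load.py script below is written for Python 2 (it uses the `commands` module, which matches the `MySQL-python` driver installed earlier). If only Python 3 is available on the test machine, `subprocess.getstatusoutput` is a drop-in replacement for `commands.getstatusoutput`, and each `print result` becomes `print(result)`. A minimal sketch of the replacement call, using a harmless `echo` command in place of an actual obclient invocation:

```python
import subprocess

# subprocess.getstatusoutput is the Python 3 counterpart of the
# commands.getstatusoutput calls used throughout load.py: it runs the
# command through the shell and returns an (exit_status, output) pair.
status, output = subprocess.getstatusoutput("echo hello")
print(status, output)  # 0 hello
```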

  1. Create a load.py script.

    [wieck@localhost load] $ vim load.py
    
    #!/usr/bin/env python
    #-*- encoding:utf-8 -*-
    import os
    import sys
    import time
    import commands
    hostname='$host_ip'       # The IP address of an OBServer node, for example, the server that hosts observer A.
    port='$host_port'         # The SQL port of that OBServer node.
    tenant='$tenant_name'     # The tenant name.
    user='$user'              # The username.
    password='$password'      # The password.
    data_path='$path'         # The directory on that OBServer node that contains the generated .dat files.
    db_name='$db_name'        # The database name.
    # Create tables
    cmd_str='obclient -h%s -P%s -u%s@%s -p%s -D%s < create_tpcds_mysql_table_part.ddl'%(hostname,port,user,tenant,password,db_name)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str='obclient -h%s -P%s -u%s@%s -p%s  -D%s -e "show tables;" '%(hostname,port,user,tenant,password,db_name)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/dbgen_version.dat' into table dbgen_version fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/customer_address.dat' into table customer_address fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c -D%s -e "load data /*+ parallel(80) */ infile '%s/customer_demographics.dat' into table customer_demographics fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/date_dim.dat' into table date_dim fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s   -D%s -e "load data /*+ parallel(80) */ infile '%s/warehouse.dat' into table warehouse fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/ship_mode.dat' into table ship_mode fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/time_dim.dat' into table time_dim fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/reason.dat' into table reason fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/income_band.dat' into table income_band fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/item.dat' into table item fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c -D%s -e "load data /*+ parallel(80) */ infile '%s/store.dat' into table store fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/call_center.dat' into table call_center fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s   -D%s -e "load data /*+ parallel(80) */ infile '%s/customer.dat' into table customer fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/web_site.dat' into table web_site fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/store_returns.dat' into table store_returns fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/household_demographics.dat' into table household_demographics fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/web_page.dat' into table web_page fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/promotion.dat' into table promotion fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c -D%s -e "load data /*+ parallel(80) */ infile '%s/catalog_page.dat' into table catalog_page fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/inventory.dat' into table inventory fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s   -D%s -e "load data /*+ parallel(80) */ infile '%s/catalog_returns.dat' into table catalog_returns fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/web_returns.dat' into table web_returns fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/web_sales.dat' into table web_sales fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/catalog_sales.dat' into table catalog_sales fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    cmd_str=""" obclient -h%s -P%s -u%s@%s -p%s -c  -D%s -e "load data /*+ parallel(80) */ infile '%s/store_sales.dat' into table store_sales fields terminated by '|';" """ %(hostname,port,user,tenant,password,db_name,data_path)
    result = commands.getstatusoutput(cmd_str)
    print result
    
  2. Load data.

    Notice

    You must install the OBClient client before you can load data.

    [wieck@localhost load] $ python load.py
    
  3. Execute a major compaction.

    Log in to the test tenant and execute a major compaction.

    obclient > ALTER SYSTEM SET undo_retention = 100;
    obclient > ALTER SYSTEM MAJOR FREEZE;
    
  4. Check whether the major compaction is completed.

    SELECT * FROM oceanbase.DBA_OB_ZONE_MAJOR_COMPACTION\G
    

    The return result is as follows:

    *************************** 1. row ***************************
                ZONE: zone1
       BROADCAST_SCN: 1716172011046213913
            LAST_SCN: 1716172011046213913
    LAST_FINISH_TIME: 2024-05-20 10:35:07.829496
          START_TIME: 2024-05-20 10:26:51.579881
              STATUS: IDLE
    1 row in set
    

    When the value of the STATUS column is IDLE and the values of the BROADCAST_SCN and LAST_SCN columns are equal, the major compaction is completed.
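    The completion condition can be expressed as a small predicate. The following is a hypothetical helper for illustration only, not part of the benchmark kit; it assumes you have already fetched the rows of DBA_OB_ZONE_MAJOR_COMPACTION into dictionaries keyed by column name:

    ```python
    def compaction_done(rows):
        """Return True when every zone reports STATUS = IDLE and its
        BROADCAST_SCN equals its LAST_SCN, that is, when the major
        compaction has finished in all zones."""
        return all(row['STATUS'] == 'IDLE' and
                   row['BROADCAST_SCN'] == row['LAST_SCN']
                   for row in rows)
    ```

    Poll the view periodically and proceed to the next step once the predicate returns True for all returned rows.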

  5. Manually collect statistics.

    Connect as the test user by running the obclient -h$host_ip -P$host_port -u$user@$tenant -p$password -A -D$database command, and then execute the following statements:

    call dbms_stats.gather_table_stats(NULL, 'date_dim', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'warehouse', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'ship_mode', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'time_dim', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'reason', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'income_band', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'item', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'store', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'call_center', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'web_site', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'household_demographics', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'web_page', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'promotion', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'catalog_page', degree=>128, granularity=>'AUTO', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'inventory', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'web_sales', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'catalog_sales', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'store_sales', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'catalog_returns', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'web_returns', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'store_returns', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'customer_address', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'customer_demographics', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    call dbms_stats.gather_table_stats(NULL, 'customer', degree=>128, granularity=>'GLOBAL', method_opt=>'FOR ALL COLUMNS SIZE 128');
    
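The statements above follow a single pattern: the smaller dimension tables use granularity 'AUTO', while the large fact and customer tables use 'GLOBAL'. As a sketch of that pattern, the statements could be generated by a small script such as the following (a hypothetical helper, not part of the benchmark kit; the table-to-granularity mapping is copied from the list above):

```python
DIM_TABLES = ['date_dim', 'warehouse', 'ship_mode', 'time_dim', 'reason',
              'income_band', 'item', 'store', 'call_center', 'web_site',
              'household_demographics', 'web_page', 'promotion', 'catalog_page']
FACT_TABLES = ['inventory', 'web_sales', 'catalog_sales', 'store_sales',
               'catalog_returns', 'web_returns', 'store_returns',
               'customer_address', 'customer_demographics', 'customer']

def gather_stats_statements():
    """Yield one CALL statement per table, matching the pattern above:
    granularity 'AUTO' for dimension tables, 'GLOBAL' for fact tables."""
    template = ("call dbms_stats.gather_table_stats(NULL, '%s', degree=>128, "
                "granularity=>'%s', method_opt=>'FOR ALL COLUMNS SIZE 128');")
    for table in DIM_TABLES:
        yield template % (table, 'AUTO')
    for table in FACT_TABLES:
        yield template % (table, 'GLOBAL')
```
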

Step 8: Run the test

Write a script based on the data and SQL statements generated in the previous steps. Here is an example of how to run the test:

  1. Write a test script named tpcds.sh in the sql-ds directory.

    [wieck@localhost queries] $ vim tpcds.sh
    
      #!/bin/bash
      TPCDS_TEST="obclient -h $host_ip -P $host_port -utpcds_100g_part@tpcds_mysql  -D tpcds_100g_part  -ptest -c"
      # Warm up: run all 99 queries once before timing
      for i in {1..99}
      do
          sql1="source sql${i}.sql"
          echo $sql1| $TPCDS_TEST >db${i}.log  || ret=1
      done
      # Run the test
      for i in {1..99}
      do
          starttime=`date +%s%N`
          echo `date  '+[%Y-%m-%d %H:%M:%S]'` "BEGIN Q${i}"
          sql1="source sql${i}.sql"
          echo $sql1| $TPCDS_TEST >db${i}.log  || ret=1
          stoptime=`date +%s%N`
          costtime=`echo $stoptime $starttime | awk '{printf "%0.2f\n", ($1 - $2) / 1000000000}'`
          echo `date  '+[%Y-%m-%d %H:%M:%S]'` "END,COST ${costtime}s"
      done
    
  2. Run the test script.

    sh tpcds.sh
    
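The script prints one BEGIN/END pair per query, with the elapsed time on each END line. If you capture the output to a file, the total runtime can be summed from those lines; the following is a hypothetical post-processing helper, not part of the benchmark kit:

```python
import re

def total_cost_seconds(log_text):
    """Sum the per-query elapsed times from lines such as
    '[2024-05-20 10:35:07] END,COST 1.50s' printed by tpcds.sh."""
    return sum(float(m) for m in re.findall(r'END,COST (\d+\.\d+)s', log_text))
```
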

Common error messages

  • Data import failed. The error message is as follows:

    ERROR 1017 (HY000) at line 1: File not exist
    

    The generated .dat files must be stored in a directory on the server that hosts the OBServer node you connect to, because LOAD DATA INFILE reads the files locally on that server.

  • Data query failed. The error message is as follows:

    ERROR 4624 (HY000): No memory or reach tenant memory limit
    

    The tenant has insufficient memory. We recommend that you increase the memory specifications of the tenant.

  • Data import failed. The error message is as follows:

    ERROR 1227 (42501) at line 1: Access denied
    

    The user lacks the required privileges. Log in as the tenant administrator and grant the required privileges to the user by executing a GRANT statement.
