OceanBase Database adopts a shared-nothing architecture in which all nodes are peers. Each node has its own SQL engine, transaction engine, and storage engine, and the database runs on a cluster of commodity servers. This architecture provides high scalability, high availability, high performance, cost efficiency, and strong compatibility with mainstream database systems.

An OceanBase cluster consists of multiple nodes, each of which belongs to a zone. A zone is a logical concept that represents a group of nodes with similar hardware availability in the cluster, and its meaning depends on the deployment mode. For example, if a cluster is deployed in a single IDC, the nodes in a zone may share a rack or a switch; if a cluster is deployed across IDCs, each zone can correspond to one IDC. Each zone has two attributes, IDC and region, which specify the IDC where the zone resides and the region where that IDC is located. Generally, the region refers to the city where the IDC is located. OceanBase Database uses the IDC and region attributes of zones to implement and optimize its automatic failover policies. OceanBase Database provides multiple deployment modes for different high-availability requirements. For more information, see Overview.
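As a hedged illustration, the following sketch shows how zone attributes might be declared from the sys tenant. The zone, region, and IDC names are illustrative, and the exact statement syntax may vary by OceanBase version:

```sql
-- Hedged sketch (run in the sys tenant; syntax may vary by version):
-- declare a zone together with the IDC and region it belongs to, so the
-- cluster can take locality into account during automatic failover.
ALTER SYSTEM ADD ZONE 'zone1' REGION 'hangzhou' IDC 'idc1';

-- Zone attributes can also be adjusted after the zone exists, e.g.:
ALTER SYSTEM ALTER ZONE 'zone1' SET IDC = 'idc2';
```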
In OceanBase Database, the data in a table can be horizontally divided into multiple shards according to a partitioning rule. Each shard is a partition, and a row of data belongs to exactly one partition. You specify the partitioning rules when you create a table. Hash, range, and list partitioning, as well as subpartitioning, are supported. For example, you can divide an order table in a transaction database into several partitions by user ID, and then divide each partition into several subpartitions by month, as shown in the sketch below. In a subpartitioned table, each subpartition is a physical partition, while a partition is only a logical concept. The partitions of a table can be distributed across multiple nodes in a zone. Each physical partition has a storage-layer object, called a tablet, that stores ordered data records.
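The following is a minimal sketch of the order-table example from the text: hash partitioning by user ID combined with range subpartitioning by month. The table and column names are illustrative, and the template-based composite-partitioning syntax shown here is an assumption that may vary by version:

```sql
-- Hedged sketch: hash-partition an order table by user ID, then
-- range-subpartition each partition by month. Each resulting
-- subpartition is a physical partition backed by a tablet.
CREATE TABLE orders (
  user_id    BIGINT NOT NULL,
  order_id   BIGINT NOT NULL,
  order_date DATE   NOT NULL,
  amount     DECIMAL(10, 2)
)
PARTITION BY HASH(user_id)
SUBPARTITION BY RANGE COLUMNS(order_date)
SUBPARTITION TEMPLATE (
  SUBPARTITION sp_2024_01 VALUES LESS THAN ('2024-02-01'),
  SUBPARTITION sp_2024_02 VALUES LESS THAN ('2024-03-01'),
  SUBPARTITION sp_max     VALUES LESS THAN (MAXVALUE)
)
PARTITIONS 8;
```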
When you modify data in a tablet, the system writes redo logs to the corresponding log stream to ensure data persistence. Each log stream serves multiple tablets on its node. To protect data and keep services available when a node fails, each log stream and its tablets have multiple replicas, which are typically distributed across zones. Only one replica, referred to as the leader, accepts modifications; the other replicas are referred to as followers. Data consistency between the leader and the followers is ensured by the Multi-Paxos protocol. If the node hosting the leader goes down, a follower is elected as the new leader and continues to provide services.
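This replica layout can be inspected from SQL. The following is a hedged sketch that assumes the DBA_OB_LS_LOCATIONS dictionary view available in recent OceanBase 4.x releases; the view name and columns may differ in your version:

```sql
-- Hedged sketch: list where each log stream's replicas live and which
-- replica is currently the leader (ROLE is LEADER or FOLLOWER).
SELECT ls_id, zone, svr_ip, role
FROM oceanbase.DBA_OB_LS_LOCATIONS
ORDER BY ls_id, role;
```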
Each node in the cluster runs an observer process, which contains multiple operating system threads. All nodes provide the same features. Each observer process accesses the data of the partitions on its own node, and parses and executes the SQL statements routed to that node. The observer processes communicate with each other over TCP/IP. In addition, each observer process listens for connection requests from external applications, establishes connections and database sessions, and provides database services. For more information about observer processes, see Threads.
OceanBase Database provides a native multi-tenant feature to simplify the large-scale management of multiple business databases and reduce resource costs. Within an OceanBase cluster, you can create multiple isolated database instances, referred to as tenants. From the perspective of an application, each tenant is a separate database. A tenant can be created in MySQL-compatible or Oracle-compatible mode. After your application connects to a MySQL-compatible tenant, you can create users and databases in the tenant, and the experience is much like using a standalone MySQL database. Similarly, after your application connects to an Oracle-compatible tenant, you can create schemas and manage roles in the tenant, much like using a standalone Oracle database. When a new cluster is initialized, a sys tenant is created. The sys tenant is a MySQL-compatible tenant that stores the metadata of the cluster.
Applicability
OceanBase Database Community Edition provides only the MySQL mode.
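For example, after connecting to a MySQL-compatible tenant, you manage databases and users with ordinary MySQL statements. A minimal sketch, with illustrative names and a placeholder password:

```sql
-- Standard MySQL-style DDL/DCL inside a MySQL-compatible tenant.
CREATE DATABASE app_db;
CREATE USER 'app_user'@'%' IDENTIFIED BY '***';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
```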
To isolate resources among tenants, each observer process can host multiple virtual containers, known as resource units (units), that belong to different tenants. The resource units of a tenant across multiple nodes form a resource pool. A resource unit specifies CPU and memory resources.
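The following hedged sketch shows how these concepts might fit together in the sys tenant: a unit specification, a resource pool built from it across zones, and a tenant bound to the pool. All names and sizes are illustrative, and the exact options vary by OceanBase version:

```sql
-- Hedged sketch (run in the sys tenant; options vary by version).
-- 1. Define a unit spec: the CPU and memory a single unit provides.
CREATE RESOURCE UNIT small_unit MAX_CPU 2, MEMORY_SIZE '4G';

-- 2. Build a resource pool that places one such unit in each zone.
CREATE RESOURCE POOL app_pool UNIT = 'small_unit', UNIT_NUM = 1,
  ZONE_LIST = ('zone1', 'zone2', 'zone3');

-- 3. Create a tenant on top of the pool and allow client connections.
CREATE TENANT app_tenant RESOURCE_POOL_LIST = ('app_pool')
  SET ob_tcp_invited_nodes = '%';
```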
To shield applications from internal details such as the distribution of partitions and replicas, and to make accessing a distributed database as simple as accessing a standalone database, OceanBase Database provides the OceanBase Database Proxy (ODP) service. Applications do not connect directly to the OBServer nodes; instead, they connect to ODP, which forwards SQL requests to the appropriate OBServer node. ODP is a stateless service, and multiple ODP nodes can expose a single network address to applications through a load balancer, such as Server Load Balancer (SLB).