
Cross-cloud database deployments — whether primary-standby DR or active-active replication — share one problem that doesn't appear on architecture diagrams: the networking layer between two clouds that don't share a control plane, a private backbone, or even a consistent API for network interconnection. Bridging them requires specialized network solutions — dedicated lines, public networks, or cloud enterprise networks — each introducing its own complexity around bandwidth, latency, and data transmission security.
Earlier posts in this series on active-active replication and primary-standby DR touched briefly on networking, but stopped short of exploring it in detail. This post breaks down the sync methods, network paths, and latency factors that affect cross-cloud replication, then looks at how OceanBase Cloud handles the common wiring that underpins its DR topology. We'll use primary-standby as the running example since it is the most common starting point.
Before choosing a network topology, understand that cross-cloud database replication involves two fundamentally different sync methods — and the network path question only applies to one of them.
The direct network connection establishes a live channel between primary and standby. The Log Transport Service (LTS) continuously reads redo logs from the primary and transmits them in real time to the standby, where the Log Replay Service (LRS) applies them to the standby's MemStore as they arrive. No intermediary storage — sync latency sits at the millisecond level.
This method supports read/write separation (the standby can serve read traffic), and it's the only option suitable for workloads with strict RPO requirements.
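To make the mechanics concrete, here's an illustrative Python sketch of the streaming model, not OceanBase's actual LTS/LRS code: the primary pushes each redo record over a persistent channel the moment it's generated, and the standby applies it on arrival, so lag is bounded by transmission plus apply time.

```python
# Illustrative sketch only: models the shape of direct log streaming.
# Not OceanBase's actual LTS/LRS implementation.
import queue
import threading
import time

channel = queue.Queue()      # stands in for the cross-cloud network channel
standby_memstore = {}        # stands in for the standby's MemStore

def log_transport_service():
    """Primary side: ship each redo record as soon as it is generated."""
    for lsn in range(5):
        record = {"lsn": lsn, "key": f"row{lsn}", "value": lsn * 10,
                  "sent_at": time.monotonic()}
        channel.put(record)  # transmit immediately; no staging hop
        time.sleep(0.01)
    channel.put(None)        # end of stream

def log_replay_service():
    """Standby side: apply records into the MemStore as they arrive."""
    while (record := channel.get()) is not None:
        standby_memstore[record["key"]] = record["value"]
        lag_ms = (time.monotonic() - record["sent_at"]) * 1000
        print(f"applied lsn={record['lsn']} lag={lag_ms:.2f} ms")

t = threading.Thread(target=log_transport_service)
t.start()
log_replay_service()
t.join()
```

The point of the model: there is no batching or staging tier between the two sides, which is why sync latency stays at the millisecond level and why the standby can serve consistent-enough reads.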
Within this method, you choose a network path: VPC peering or a cloud enterprise network, a dedicated line, or the public internet. Each trades off cost, bandwidth, and latency differently, as the comparison table below shows.
The log archiving method takes a completely different approach. The primary writes redo logs to object storage in its own region — OSS on Alibaba Cloud, S3 on AWS — and the standby reads those archived logs and replays them locally. No direct network connection between the two databases.

Sync latency is at the second level because data takes an extra hop through object storage. Read/write separation is not supported — the standby is purely a recovery target. But you need zero cross-cloud network infrastructure, and you can set this up on demand without provisioning anything.
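Here's a hedged sketch of that shape, assuming an S3 bucket on the standby's read path; the bucket name, key prefix, and replay_segment() hook are hypothetical placeholders. The primary uploads sealed log segments, and the standby polls and replays them in key order:

```python
# Sketch of the log-archiving path. Bucket, prefix, and replay_segment()
# are hypothetical placeholders, not OceanBase's archive implementation.
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "dr-archive-bucket"   # hypothetical archive bucket
PREFIX = "redo-archive/"       # hypothetical key prefix for log segments

def archive_segment(segment_id: int, payload: bytes) -> None:
    """Primary side: write a sealed redo segment to object storage."""
    # Zero-padded ids keep S3's lexicographic listing in replay order.
    s3.put_object(Bucket=BUCKET, Key=f"{PREFIX}{segment_id:012d}.log",
                  Body=payload)

def replay_segment(payload: bytes) -> None:
    """Hypothetical hook: hand the segment to the local replay machinery."""
    print(f"replayed {len(payload)} bytes")

def replay_loop(poll_interval: float = 5.0) -> None:
    """Standby side: poll the archive and replay new segments in order."""
    last_applied = PREFIX
    while True:                # runs for the life of the standby
        resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX,
                                  StartAfter=last_applied)
        for obj in resp.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            replay_segment(body)
            last_applied = obj["Key"]
        time.sleep(poll_interval)   # polling is why lag is seconds, not ms
```

The polling interval plus the upload hop is exactly where the second-level lag comes from, and also why no cross-cloud network link is needed: both sides only ever talk to the object storage API.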
| | Direct network connection | Log archiving |
| --- | --- | --- |
| Sync latency | Milliseconds | Seconds |
| Sync path | Primary → VPC/dedicated line/internet → Standby | Primary → Object storage → Standby |
| Read/write split | Supported (standby serves reads) | Not supported |
| Network infrastructure | VPC peering, dedicated line, or internet | None (object storage API only) |
| Cost | Higher (private channel + line fees) | Lower (storage egress only) |
| RPO suitability | Strict RPO workloads | Tolerant RPO workloads |
| Setup complexity | Weeks to months (provisioning required) | On-demand, no network changes |
Most production deployments needing cross-cloud DR for compliance or business continuity use the direct network connection. Log archiving fits cost-sensitive environments where second-level lag is acceptable — development/staging DR, reporting replicas, or regulatory data residency where the standby exists for jurisdictional reasons rather than fast failover.
Once you've chosen the direct network connection method, your sync latency depends on several factors — some you can control, some you can't.
One lever you do control is compression: the log_transport_compress_all configuration parameter applies lz4 or zstd compression to log transport with minimal CPU overhead, and it's recommended for all workload types in bandwidth-constrained scenarios. For dedicated arbitration services, the minimum bandwidth requirement is 20 Mbps, scaling at approximately 20 Mbps per 32 additional single-unit tenants.
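If you want to flip that switch, here's a minimal sketch over OceanBase's MySQL protocol. The host and credentials are placeholders, and you should verify the parameter names and accepted codec values against your OceanBase version before applying them:

```python
# Enabling log transport compression via OceanBase's MySQL protocol.
# Host/credentials are placeholders; verify parameter and codec names
# against your OceanBase version's documentation.
import pymysql

conn = pymysql.connect(host="primary.example.internal", port=2881,
                       user="root@sys", password="...")
with conn.cursor() as cur:
    # Turn on compression for all log transport traffic.
    cur.execute("ALTER SYSTEM SET log_transport_compress_all = True")
    # Choose the codec (lz4 or zstd variants; check your version's options).
    cur.execute("ALTER SYSTEM SET log_transport_compress_func = 'zstd_1.0'")
conn.close()
```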
💡 Measure before you commit
Deploy test instances on both clouds in your target regions. Run sustained write workloads for at least 72 hours and measure p50, p95, and p99 sync latency across your candidate network paths. Weekend and weeknight traffic patterns differ from business hours — capture both.
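A minimal sketch of that measurement loop, assuming a hypothetical dr_heartbeat table (id INT PRIMARY KEY, ts DOUBLE) and placeholder endpoints: write a timestamp on the primary, wait for it to become visible on the standby, and summarize the lag distribution.

```python
# Heartbeat-based replication lag measurement (hedged sketch).
# Endpoints, credentials, and the dr_heartbeat table are placeholders.
import statistics
import time
import pymysql

primary = pymysql.connect(host="primary.example.internal", port=2881,
                          user="app", password="...", autocommit=True)
standby = pymysql.connect(host="standby.example.internal", port=2881,
                          user="app", password="...", autocommit=True)

samples = []
for _ in range(1000):            # extend the run to 72h for a real test
    sent = time.time()
    with primary.cursor() as cur:
        cur.execute("REPLACE INTO dr_heartbeat (id, ts) VALUES (1, %s)",
                    (sent,))
    with standby.cursor() as cur:
        while True:              # wait until the write is visible on standby
            cur.execute("SELECT ts FROM dr_heartbeat WHERE id = 1")
            row = cur.fetchone()
            if row and float(row[0]) >= sent:
                break
            time.sleep(0.001)
    samples.append((time.time() - sent) * 1000)   # lag in milliseconds
    time.sleep(1)

q = statistics.quantiles(samples, n=100)          # 99 percentile cut points
print(f"p50={q[49]:.1f} ms  p95={q[94]:.1f} ms  p99={q[98]:.1f} ms")
```

Note the standby-side read assumes the direct network connection method, where the standby serves read traffic; with log archiving you'd compare archive timestamps instead.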
That's the data plane — what moves between clouds, how, and what affects performance. But there's still a layer that doesn't appear on architecture diagrams: the managed infrastructure that provisions and maintains those cross-cloud pipes. OceanBase Cloud handles this through three infrastructure primitives that underpin every cross-cloud topology, whether primary-standby or active-active.
Note: in both cases this is not DNS-based failover. Routing updates happen at the endpoint layer rather than waiting for DNS TTL propagation, so applications never change their connection strings.
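From the application's point of view, the contract is simple: keep one connection string and reconnect on failure. A hedged sketch of that pattern, with a hypothetical endpoint hostname:

```python
# Application-side implication of endpoint-layer routing (hedged sketch):
# the app keeps a single connection string and simply retries on failure;
# the endpoint is re-pointed underneath it, with no DNS TTL wait.
import time
import pymysql

ENDPOINT = "db.endpoint.example.internal"   # hypothetical stable endpoint

def get_connection():
    while True:
        try:
            return pymysql.connect(host=ENDPOINT, port=2881,
                                   user="app", password="...")
        except pymysql.err.OperationalError:
            time.sleep(1)                    # endpoint re-routing in progress

conn = get_connection()   # same call before and after a failover
```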
Two operational concerns cut across everything above.
Cross-cloud networking is the layer that makes or breaks a multi-cloud database deployment, and it's the layer most teams underestimate until they're debugging replication lag at 2 a.m. The sync method, network path, and infrastructure automation decisions covered here should be made before you provision your first cross-cloud instance — not after.
