* 5.5x Faster at 500 Warehouses: In TPC-C benchmarks with 500 warehouses, OrioleDB outperformed PostgreSQL's default heap tables by 5.5 times. This highlights significant gains in workloads that stress shared memory cache bottlenecks.
* 2.7x Faster at 1000 Warehouses: Even when the data doesn't fit into the OS memory cache (at 1000 warehouses), OrioleDB was 2.7 times faster. Its index-organized tables improve data locality, reducing disk I/O and boosting performance.
Try it yourself:
Clone the OrioleDB repository from GitHub and follow the build instructions (https://github.com/orioledb/orioledb#installation), or use their Docker image. Alternatively, run OrioleDB on Supabase; read the blog post for more details: https://supabase.com/blog/orioledb-launch
Run your own workloads or existing benchmarks like go-tpc or HammerDB to see the performance differences firsthand. We would love to hear about others' experiences with OrioleDB, especially in production-like environments or with different workloads.
Fu--ing finally!
It's motivated by an appreciation for reliable systems that keep working when a node fails.
There are a bunch of non-webscale companies that pick MySQL over PostgreSQL because they can use either Percona XtraDB Cluster or Galera Cluster.
Having an open source multi-master solution would mean that PostgreSQL could finally be used there as well.
The key design issue in building active-active multi-master on top of the Raft protocol is being able to apply changes locally without immediately putting them into a log, yet without sacrificing durability. MySQL implements a binlog, separate from the storage engine's redo log, to ensure durability. OrioleDB implements copy-on-write checkpoints and a row-level WAL. That gives us a chance to implement multi-master and durability using a single log.
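A toy sketch of the single-log idea (all names and structure here are hypothetical, not OrioleDB's actual code): if every committed row change lands in one row-level log, that same log can drive both crash recovery and catch-up of peers, whereas the binlog approach must keep two logs consistent with each other.

```python
# Toy illustration: one row-level log serving both durability and
# replication. Names and structure are hypothetical, not OrioleDB code.

class Node:
    def __init__(self):
        self.table = {}   # in-memory row store (key -> value)
        self.wal = []     # the single row-level log

    def commit(self, key, value):
        # Apply the change locally, then record it in the one log.
        self.table[key] = value
        self.wal.append(("put", key, value))

    def replay(self, log):
        # The same log is used for crash recovery and for catching up peers.
        for op, key, value in log:
            if op == "put":
                self.table[key] = value

primary = Node()
primary.commit("a", 1)
primary.commit("a", 2)
primary.commit("b", 3)

# A replica (or a recovering node) reaches the same state from the log alone.
replica = Node()
replica.replay(primary.wal)
print(replica.table)  # {'a': 2, 'b': 3}
```

With a binlog-style design, the equivalent sketch would need a second log plus a two-phase handshake between the two on every commit; the single-log version avoids that coordination entirely.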
It’s not just about building their extension but actually making Postgres better for everyone. I would have loved to see big corps take this approach, as it opens the door for others to add features for different use cases, making Postgres more of a DBMS framework.
EDB and Cybertech definitely made a great start with Zheap[0], although the initiative stalled for whatever reason.
Hopefully the community can support this effort to improve the Table AM API - it would be beneficial even beyond Oriole. As you point out, a pluggable storage system in Postgres would open up a few new use cases.
This instance is less bad than some in that it's at least comparing the same sort of database and doing it using the same driver -- so it is at least an apples to apples measurement of something.
Still, please: as a community we need to stop removing the think time and then quoting the output as tpmC or as a standard benchmark result.
See https://www.tpc.org/tpc_documents_current_versions/pdf/tpc-c... for the spec.
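To make the point concrete: the spec's keying and think times cap throughput at roughly 12.86 tpmC per warehouse (the commonly cited per-warehouse ceiling from the spec), so any number far above that for a given warehouse count cannot be a compliant tpmC result. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope: TPC-C's keying/think times limit each warehouse
# to roughly 12.86 New-Order transactions per minute (tpmC).
MAX_TPMC_PER_WAREHOUSE = 12.86

def max_compliant_tpmc(warehouses: int) -> float:
    return warehouses * MAX_TPMC_PER_WAREHOUSE

for w in (500, 1000):
    print(f"{w} warehouses -> at most ~{max_compliant_tpmc(w):,.0f} tpmC")
# 500 warehouses -> at most ~6,430 tpmC
# 1000 warehouses -> at most ~12,860 tpmC
```

So a 500-warehouse run reporting six-figure transaction rates is measuring raw throughput without think time, which is a fine thing to measure - it just shouldn't be labeled tpmC.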
For sure, not all of PostgreSQL's memory is O(N^2). AFAIR, just a couple of components, including deadlock detection, require a quadratic amount of memory. Normally they are insignificant, but they grow fast as you raise max_connections.
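The practical consequence: a component whose memory is quadratic in max_connections grows 100x when you raise max_connections 10x, so it can be invisible at the default setting and dominant at a large one. A tiny illustration (the byte constants are made up; only the growth shapes matter):

```python
# Illustration: why an O(N^2) component stays invisible at small
# max_connections but dominates at large ones. Constants are made up.
LINEAR_BYTES_PER_CONN = 100_000  # hypothetical O(N) per-connection cost
QUAD_BYTES = 50                  # hypothetical O(N^2) coefficient

def memory_breakdown(max_connections: int) -> tuple[int, int]:
    linear = LINEAR_BYTES_PER_CONN * max_connections
    quadratic = QUAD_BYTES * max_connections ** 2
    return linear, quadratic

for n in (100, 1_000, 10_000):
    lin, quad = memory_breakdown(n)
    print(f"max_connections={n:>6}: linear={lin/1e6:8.1f} MB, "
          f"quadratic={quad/1e6:8.1f} MB")
```

With these made-up constants the quadratic term is a rounding error at 100 connections but several times the linear term at 10,000 - the same shape as the behavior described above.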
Another suite to look at is sysbench. It’s very flexible, for better or for worse, but it can allow you to create an interesting mix of queries at different scale factors. For something like this where you’re going head to head with Postgres, having more dimensions with more benchmarks isn’t going to hurt. Ideally you’ll see a nice win across the board and get an understanding of the shape of differences.