PostgreSQL "in-memory" with multiple terabytes of working set data?

by Roman Nasuti   Last Updated June 12, 2019 07:06 AM

I've come to understand that PostgreSQL generally scales up well; an 8-socket x86 configuration has been benchmarked before with near-linear scalability. However, those tests were done a while ago and were read-only, so I would like to know if anybody is aware of any upper bounds on PostgreSQL's vertical scalability:

Would PostgreSQL make efficient use of a very high-end vertical configuration -- for example, 100+ cores and 4+ TB of RAM -- running a write-heavy, multi-terabyte OLTP working set that fits entirely in RAM? The workload I have in mind is technically web-facing, but >95% of reads are trivially cacheable, so most operations would be read -> compute -> write -> update cache.
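For context, the request pattern I'm describing looks roughly like this (a minimal sketch; the dict cache, `db_log` list, and `compute` function are hypothetical stand-ins for a real cache layer, the database, and the business logic):

```python
# Sketch of the read -> compute -> write -> update-cache pattern described
# above. The dict stands in for a real cache layer and the list stands in
# for the database; both are illustrative placeholders only.

cache = {}    # cache-layer stand-in
db_log = []   # database write-log stand-in


def compute(raw):
    # Placeholder for the per-request business logic.
    return raw * 2


def handle_write(key, raw):
    result = compute(raw)          # compute
    db_log.append((key, result))   # write to the database (stubbed)
    cache[key] = result            # update the cache
    return result


def handle_read(key):
    if key in cache:               # >95% of reads: served from cache
        return cache[key]
    # Cache miss: fall back to the database stand-in, then repopulate.
    for k, v in reversed(db_log):
        if k == key:
            cache[key] = v
            return v
    return None
```

The point is that the database mostly sees writes plus the small tail of uncacheable reads, which is why write scalability is the part I'm worried about.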

I'm considering running some mock-up tests on AWS High Memory or x1e.32xlarge instances to test this, but I'd like to know whether anybody has information on this before I start spending $15-30+/hr on instances.
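For the mock-up, something like the following pgbench run is what I have in mind (a sketch only; the scale factor assumes roughly 15 MB per scale unit, so 100000 is ~1.5 TB, and "bench" is a placeholder database name to adjust for the actual target):

```shell
# Initialize a pgbench database large enough to approximate the working set.
# Scale factor 100000 is ~1.5 TB at roughly 15 MB per unit; adjust to taste.
createdb bench
pgbench -i -s 100000 bench

# Write-heavy run: the default TPC-B-like script is update-heavy, which is
# closer to the read -> compute -> write pattern than a read-only test.
pgbench -c 64 -j 16 -T 600 bench

# Read-only comparison run (-S), to separate write/WAL overhead from raw
# CPU scalability.
pgbench -c 64 -j 16 -T 600 -S bench
```

Comparing the two runs at increasing client counts should show where write contention, rather than core count, becomes the bottleneck.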
