Not sure about plain PostgreSQL, but I tried TimescaleDB, which is Postgres based. The lesson I learnt is that I should have used InfluxDB2 in the first place.
The TimescaleDB database exploded in size very quickly within 2 weeks (first a sync of the entire archive: 600 channels at 30 s, 1+ year of data; then 600 channels at 5 s live insertion left running for days). I tried all the compression tricks mentioned in the docs (I re-created and re-imported the data archive several times, taking hours and hours, just to make sure I hadn't mis-operated), but nothing helped much. The size dropped by 2/3 when I switched to InfluxDB2, and the growth in size also slowed down. After all the data in the RapidScada archive had been imported via socketapi into InfluxDB2, the size is still bigger than RS's native format, but not by much – and not a problem at all, considering the trade-off for the ability to use advanced time-series analytic queries and customized downsampling.
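For anyone wondering what I mean by "customized downsampling", here is a minimal sketch of the kind of query I run, using the official influxdb-client Python package. The bucket, org, token and measurement names are placeholders for illustration, not my real setup:

# Minimal downsampling sketch with the influxdb-client package.
# URL, token, org, bucket and measurement names are placeholders.
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# Flux query: downsample raw 5 s channel values to 1 h means and
# write the result into a separate long-term bucket.
flux = '''
from(bucket: "scada_raw")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "channels")
  |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
  |> to(bucket: "scada_downsampled")
'''

client.query_api().query(flux)
client.close()

In practice I would register something like this as a periodic task instead of running it ad hoc, so the downsampled bucket stays small and fast to query.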
I use PostgreSQL very often and have nothing against it; this is just a friendly heads-up after the week I wasted dealing with this mess. Maybe a benchmark or some research on the scalability of the setup would turn out to be beneficial.
TimescaleDB's insertion speed was notably faster than InfluxDB2's in my test. Query performance was initially on par once the database had the right partitioning, but it degrades notably for some queries as the size quickly becomes unmanageable…
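For completeness, this is roughly the kind of partitioning and compression setup the TimescaleDB docs describe and that I was attempting. The table and column names are made up for illustration, not my exact schema; a sketch from Python with psycopg2:

# Rough sketch of a TimescaleDB setup (hypothetical table/column names).
import psycopg2

conn = psycopg2.connect("dbname=scada user=postgres")
cur = conn.cursor()

# Hypertable partitioned on time; the chunk interval controls partition size.
cur.execute("""
    SELECT create_hypertable('channel_data', 'ts',
                             chunk_time_interval => INTERVAL '1 day',
                             if_not_exists => TRUE);
""")

# Native compression, segmented by channel id and ordered by time,
# compressing chunks older than 7 days -- the kind of "trick" from the docs.
cur.execute("ALTER TABLE channel_data SET (timescaledb.compress, "
            "timescaledb.compress_segmentby = 'channel_id', "
            "timescaledb.compress_orderby = 'ts DESC');")
cur.execute("SELECT add_compression_policy('channel_data', INTERVAL '7 days');")

conn.commit()
cur.close()
conn.close()

Even with a setup along these lines, the on-disk size in my case never came close to what InfluxDB2 achieved out of the box.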