I have a live data stream that contains a full snapshot every 8 hours and incremental updates every 5 minutes. The stream carries objects and relations, so each message holds more than a single value.
I need multiple working datasets that are either standalone or derived from the live data (as of some point in time). The working datasets are manipulated manually; a manual manipulation can ADD, DELETE, or MODIFY objects or relations.
I might later need to integrate updates from the live stream into a working dataset, or fork a dataset again, but this is not a requirement yet.
This should be realised on PostgreSQL unless there is a much better alternative.
How should I structure the database to avoid both expensive computation of historic views and excessive redundancy?
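To make the intended semantics concrete, here is a minimal Python sketch of what I mean by a working dataset being "a base snapshot plus manual edits" (all names here are illustrative, not an existing API); forks would share the snapshot and only store their own edit log:

```python
# Illustrative sketch only: a working dataset is materialized from a shared
# base snapshot plus an ordered log of manual edits (ADD/DELETE/MODIFY),
# so forked datasets need not copy the snapshot.
from copy import deepcopy


def apply_deltas(snapshot, deltas):
    """Materialize a working dataset from a base snapshot and an edit log.

    snapshot: dict mapping object id -> attribute dict
    deltas:   list of (op, obj_id, payload) tuples
    """
    objects = deepcopy(snapshot)  # never mutate the shared base
    for op, obj_id, payload in deltas:
        if op == "ADD":
            objects[obj_id] = payload
        elif op == "DELETE":
            objects.pop(obj_id, None)
        elif op == "MODIFY":
            objects[obj_id].update(payload)
    return objects


base = {1: {"name": "pump", "state": "on"}}
edits = [
    ("ADD", 2, {"name": "valve", "state": "closed"}),
    ("MODIFY", 1, {"state": "off"}),
    ("DELETE", 2, None),
]
dataset = apply_deltas(base, edits)
# dataset == {1: {"name": "pump", "state": "off"}}; base is unchanged
```

On the PostgreSQL side I imagine the equivalent would be a snapshot table plus a per-dataset delta table, with views doing what `apply_deltas` does here, but I am unsure whether materializing such views stays cheap.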