Thoughts about other update transmission methods
So...I looked more deeply into Lovefield's query observers. When a change occurs, every observed query that depends on the table is re-executed, and the added and removed rows are determined by comparing the old and new result sets. Even if re-executing the query doesn't take that long, the difference calculation for long result sets is very slow (97% of the execution time in a profile on my machine, for a single query with 1,000 rows in the result set).
I had also blindly overlooked the fact that, even if the observers were quick enough, simply syncing changes from a Postgres backend into Lovefield would not necessarily provide all the rows needed to fulfill joined queries. For example, if a column referencing a foreign key changes, the foreign table's row matching the new value may not already be cached, requiring a query against the Postgres backend anyway.
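For instance (a sketch using the hypothetical cows/horses schema from the example further down; the column values are made up), an update that repoints a foreign key can reference a row the client has never seen:

```sql
-- The cow now references horse 7. If the client cache has never synced
-- horses.id = 7, the joined query below cannot be answered locally.
UPDATE cows SET horse_id = 7 WHERE cows.id = 1;

SELECT * FROM cows, horses WHERE cows.horse_id = horses.id;
```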
Client-side latency compensation remains impossible for queries with joins. The client does not know the full contents of the tables and cannot make a guess.
Nonetheless, I have discovered a potential method for obtaining result set changes without manually authored invalidation functions.
Changed rows will be collected by triggers as JSON objects and handed to the application. New row versions can be checked for result set additions, old row versions for result set deletions. If logical decoding were used instead, the old rows would not be available, making it impossible to determine which rows to remove on an update.
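A minimal sketch of such a trigger (the function, trigger, and channel names are hypothetical, and a real implementation might stage the JSON in a table instead, since NOTIFY payloads are limited to just under 8 kB):

```sql
-- Sketch only: emit each changed row as JSON so the application can collect it.
CREATE OR REPLACE FUNCTION collect_changed_rows() RETURNS trigger AS $$
DECLARE
  payload json;
BEGIN
  IF TG_OP = 'INSERT' THEN
    payload := json_build_object('table', TG_TABLE_NAME, 'op', TG_OP,
                                 'new', row_to_json(NEW));
  ELSIF TG_OP = 'UPDATE' THEN
    payload := json_build_object('table', TG_TABLE_NAME, 'op', TG_OP,
                                 'old', row_to_json(OLD),
                                 'new', row_to_json(NEW));
  ELSE  -- DELETE
    payload := json_build_object('table', TG_TABLE_NAME, 'op', TG_OP,
                                 'old', row_to_json(OLD));
  END IF;
  PERFORM pg_notify('row_changes', payload::text);
  RETURN NULL;  -- AFTER trigger, return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER horses_collect_changes
  AFTER INSERT OR UPDATE OR DELETE ON horses
  FOR EACH ROW EXECUTE PROCEDURE collect_changed_rows();
```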
The relation that a changed row belongs to is then replaced in the query with a json_populate_recordset call containing that row.
A query like:
```sql
SELECT * FROM
  cows,
  horses
WHERE cows.horse_id = horses.id
```
would be modified to check whether the changed row (in the `horses` table) matches, by replacing the relation:
```sql
SELECT * FROM
  cows,
  json_populate_recordset(null::horses, '[{"id":1,"color":2}]') AS horses
WHERE cows.horse_id = horses.id
```
This returns any rows that the change contributes to the original query's result set, but there is no way to tell whether those rows should actually be included when the query contains a LIMIT or OFFSET clause.
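Concretely, the same substitution can be run twice, once with the new version of the row and once with the old version, to get candidate additions and deletions respectively (a sketch, assuming the update above set the horse's color to 2 from a hypothetical old value of 7):

```sql
-- Candidate additions: substitute the NEW version of the changed row
SELECT * FROM
  cows,
  json_populate_recordset(null::horses, '[{"id":1,"color":2}]') AS horses
WHERE cows.horse_id = horses.id;

-- Candidate deletions: substitute the OLD version instead
SELECT * FROM
  cows,
  json_populate_recordset(null::horses, '[{"id":1,"color":7}]') AS horses
WHERE cows.horse_id = horses.id;
```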
Also, the query must be parsed to determine how to perform the relation substitution. At least this is a solvable problem.
The performance of this approach is still very questionable, though, since every observed query has to be re-run for each changed row (albeit against a limited set of rows for the substituted relation).
...to be continued...?