Eric Ridge
> so the only way to be sure the index is okay is to vacuum full the table before adding a zombodb index and in the same transaction, right?...
> that’s a bit sad, do you think it’s worth filing an issue on postgres?

Naw, it's by design. If you went and put, for example, a btree index...
The problem is around this function: https://github.com/zombodb/zombodb/blob/c1f9955a4082636f5469ea327635f56e5ff99f6f/src/access_method/build.rs#L201 PG 13 only gives us the item pointer. In that case I’d have to look up the tuple in the heap to get a HeapTuple...
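For context on what "only gives us the item pointer" means: an item pointer is just a (block, offset) address into the heap, and the SQL-level equivalent of resolving one back to a full tuple is a TID scan on the `ctid` system column. A minimal sketch against a throwaway table (the table and data here are made up purely for illustration):

```sql
create table demo (id int, body text);
insert into demo values (1, 'hello'), (2, 'world');

-- each row's item pointer is exposed as the system column "ctid"
select ctid, id from demo;

-- resolving an item pointer back to its heap tuple is a TID scan
select * from demo where ctid = '(0,1)'::tid;
```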
You just need to keep increasing that limit until it indexes. Having such deeply nested json fields isn’t really a good idea in general, however. Perhaps you can pre-analyze your...
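If it helps to see just how deep the nesting actually goes before deciding what to raise or restructure, here is a minimal sketch, assuming the documents live in a `jsonb` column (the `docs` table and `payload` column are made-up names), that walks each document with a recursive CTE and reports its maximum nesting depth:

```sql
create table docs (id int, payload jsonb);
insert into docs values
    (1, '{"a": 1}'),
    (2, '{"a": {"b": {"c": [{"d": 1}]}}}');

-- walk every object value / array element, counting the depth of each descent
with recursive walk (id, node, depth) as (
    select id, payload, 1 from docs
  union all
    select w.id, c.child, w.depth + 1
    from walk w
    cross join lateral (
        select child
        from jsonb_each(case when jsonb_typeof(w.node) = 'object'
                             then w.node else '{}'::jsonb end) as o(key, child)
        union all
        select child
        from jsonb_array_elements(case when jsonb_typeof(w.node) = 'array'
                                        then w.node else '[]'::jsonb end) as a(child)
    ) as c
)
select id, max(depth) as max_depth
from walk
group by id
order by max_depth desc;
```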
It works for me:

```sql
[v13.0][49599] foo=# create schema predictus;
CREATE SCHEMA
Time: 0.478 ms
[v13.0][49599] foo=# create table predictus.lawsuits(
foo(# nu_lawsuit text not null,
foo(# dt_distribution date not null,
...
```
The `ERROR: cannot convert whole-row table reference` comes from Postgres, and it seems to be related to an index that references a whole row (which ZDB indexes require) and table inheritance...
Okay, so I think the problem here is that Postgres just doesn't know how to rewrite the CREATE INDEX statement when created on the top-level partitioned table. It does seem...
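For reference, a stripped-down sketch of the shape of the situation being discussed: a whole-row zombodb index created directly on a partitioned parent. This assumes the zombodb extension is installed and Elasticsearch is reachable at the URL shown; the table and names are placeholders.

```sql
create table events (
    id   bigint not null,
    ts   date   not null,
    body text
) partition by range (ts);

create table events_2021 partition of events
    for values from ('2021-01-01') to ('2022-01-01');

-- the step under discussion: a whole-row zombodb index on the partitioned parent
create index idx_events on events
    using zombodb ((events.*))
    with (url='http://localhost:9200/');
```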
Thinking about this a little bit... Right now, ZDB requires you to `CREATE INDEX ... USING zombodb ((tableref.*));` so that you can ultimately do `SELECT ... WHERE tableref ==> 'query...
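For anyone following along, that usual single-table pattern looks roughly like this (URL, table, and field names are placeholders):

```sql
create table products (
    id    serial primary key,
    name  text,
    descr text
);

-- the whole-row index zombodb requires
create index idx_products on products
    using zombodb ((products.*))
    with (url='http://localhost:9200/');

-- ...which is what lets the query be written against the table reference itself
select * from products where products ==> 'name:widget';
```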
I just hacked something together, but I think it would need to look a little like this:

```sql
# create function sentinel(anyelement) returns boolean immutable parallel safe strict language sql...
```
> Did you apply any patch over zombo or just use that function so that ZDB works with that partitioned table?

Oh, I literally hacked a few parts of ZDB...