postgrest-py
Incorrect behavior of `select().eq("column", None)`
Bug report
Describe the bug
If a table contains a nullable column, filtering on that column with .eq(column, None)
produces rows where the column holds the literal text "None" (instead of matching NULL values), and using .is_(column, None)
produces the error "failed to parse filter (is.None)".
To Reproduce
Create a simple table "tmp" with a nullable column "text" of type text:
create table public.tmp (
id bigint generated by default as identity,
text text null,
constraint tmp_pkey primary key (id)
) tablespace pg_default;
Insert records:
insert into tmp(text) values('None');
insert into tmp(text) values(NULL);
Query the table via the python client:
import supabase
client = supabase.create_client(...)
response = client.table("tmp").select("*").eq("text", None).execute()
print(response.data)
Expected behavior
Row 2 (where the text is NULL) should be returned. Actual behavior: row 1 (where the text is the literal string "None") is returned instead.
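For context, here is a minimal sketch (plain Python, no client involved; the helper name is hypothetical) of why the bug happens: None is naively stringified into the PostgREST query string, so the server compares against the four-character text "None" rather than SQL NULL.

```python
# Hypothetical sketch of how a filter is serialized into a PostgREST
# query-string parameter. This is NOT the library's actual code path,
# only an illustration of the resulting syntax.
def serialize_filter(column, op, value):
    return f"{column}={op}.{value}"

# What the buggy call effectively sends: matches rows whose text is "None"
print(serialize_filter("text", "eq", None))    # text=eq.None

# What a NULL check must look like in PostgREST filter syntax
print(serialize_filter("text", "is", "null"))  # text=is.null
```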
System information
supabase.__version__: 2.4.1
Additional context
Can be fixed by redefining the function eq() in BaseFilterRequestBuilder as follows:
def eq(self, column: str, value: Any) -> Self:
    if value is None:
        value = "NULL"
        op = Filters.IS
    else:
        op = Filters.EQ
    return self.filter(column, op, value)
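The dispatch above can be exercised in isolation. A minimal sketch with a stand-in recorder class (the class and attribute names are hypothetical, not the real builder) shows that None is rerouted to the IS operator while ordinary values still go through EQ:

```python
from typing import Any

class FakeBuilder:
    """Stand-in for BaseFilterRequestBuilder that records the last filter call."""
    def filter(self, column: str, op: str, value: Any) -> "FakeBuilder":
        self.last = (column, op, value)
        return self

    def eq(self, column: str, value: Any) -> "FakeBuilder":
        # Proposed behavior: None is routed to IS with "NULL",
        # everything else keeps the plain EQ operator.
        if value is None:
            return self.filter(column, "is", "NULL")
        return self.filter(column, "eq", value)

b = FakeBuilder()
b.eq("text", None)
print(b.last)    # ('text', 'is', 'NULL')
b.eq("text", "None")
print(b.last)    # ('text', 'eq', 'None')
```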
According to the Python docs, this is how null checking is achieved: https://supabase.com/docs/reference/python/is
data, count = (
    supabase.table('countries')
    .select('*')
    .is_('name', 'null')
    .execute()
)
As @sapphire008 has stated, you should use the is_ method with null. I have opened a PR to add support for Python's None type with the is_ method: https://github.com/supabase-community/postgrest-py/pull/446