Filter not working if accessorFn returns null for any row
Fixed by #4441
Originally posted by @KevinVandy in https://github.com/TanStack/table/issues/4245#issuecomment-1312965743
Hello, even with this fix: if the first value is `null`, `globalFilter` still doesn't work.
I've noticed the same thing.
I encountered the same issue; I've made a repro here: https://codesandbox.io/p/sandbox/wispy-moon-58wym7?embed=1&file=%2Fsrc%2FmakeData.ts%3A37%2C11. If you try to filter on something in the first-name column, it won't filter.
I have also encountered the same thing; it seems to happen for column filters and/or global filters, depending on specifics. I have a hunch the issue has something to do with the `includesString` filter, which seems to be the default when no other filter is specified. My steps to repro:
- Take the standard "React Filters" example.
- Replace the `makeData` import with a simple static definition for `Person` with a nullable `lastName`, and make the first entry in the data be someone who has no last name:
```ts
type Person = {
  firstName: string
  lastName: null | string
  age: number
  visits: number
  status: string
  progress: number
}

const defaultData: Person[] = [
  {
    firstName: 'madonna',
    lastName: null,
    age: 65,
    visits: 1000,
    status: 'In Relationship',
    progress: 60,
  },
  {
    firstName: 'tanner',
    lastName: 'linsley',
    age: 24,
    visits: 100,
    status: 'In Relationship',
    progress: 50,
  },
  {
    firstName: 'tandy',
    lastName: 'miller',
    age: 40,
    visits: 40,
    status: 'Single',
    progress: 80,
  },
  {
    firstName: 'joe',
    lastName: 'dirte',
    age: 45,
    visits: 20,
    status: 'Complicated',
    progress: 10,
  },
]
```
- Remove the computed `fullName` column to avoid false positives.
- Replace the `setData` call with this `defaultData`.
- Remove all references to the `fuzzyFilter` defined in the React Filters example (so all filtering will use the default `includesString` filter).
With no filtering: (screenshot omitted)
Partial match filtering on first name works as expected: (screenshot omitted)
Partial match filtering on last name fails: (screenshot omitted)
Partial match filtering via the global filter also fails for matches that should come from the `lastName` column (it should also return the row with "linsley"): (screenshot omitted)
My workaround is to change my types to disallow nulls in any column and use an empty string in such cases. That works fine for my specific case, but may not be an acceptable solution for all cases.
Well, here's the root cause, in `table-core/src/features/Filters.ts` (lines 406-441):
```ts
column.getAutoFilterFn = () => {
  const firstRow = table.getCoreRowModel().flatRows[0]
  const value = firstRow?.getValue(column.id)
  if (typeof value === 'string') {
    return filterFns.includesString
  }
  if (typeof value === 'number') {
    return filterFns.inNumberRange
  }
  if (typeof value === 'boolean') {
    return filterFns.equals
  }
  if (value !== null && typeof value === 'object') {
    return filterFns.equals
  }
  if (Array.isArray(value)) {
    return filterFns.arrIncludes
  }
  return filterFns.weakEquals
}
```
```ts
column.getFilterFn = () => {
  return isFunction(column.columnDef.filterFn)
    ? column.columnDef.filterFn
    : column.columnDef.filterFn === 'auto'
    ? column.getAutoFilterFn()
    : // @ts-ignore
      table.options.filterFns?.[column.columnDef.filterFn as string] ??
      filterFns[column.columnDef.filterFn as BuiltInFilterFn]
}
```
For any given column:
- If the value in the first row happens to be `null`, the "auto" filter function is set to `weakEquals` for that entire column.
- If any subsequent rows have a non-null value, they will be compared with `weakEquals` (i.e. a cell `==` filter, with double-equals, not triple-equals), which in the examples provided is counter-intuitive, because we are expecting `includesString` or some other type-appropriate function.
- Explicitly specifying a particular filter in your column definition with `filterFn: 'includesString'` (or one of the others) will successfully force that column to use the filter function you want... this is an alternative workaround if you need nullable values and all you want is column filtering.
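To see why `weakEquals` is so surprising here, a minimal sketch of the two comparison semantics (illustrative stand-ins only; the real TanStack Table filter functions take `(row, columnId, filterValue)` and live in `filterFns`):

```typescript
// Illustrative stand-in: case-insensitive substring match, the behaviour
// users expect for string columns.
const includesString = (cellValue: unknown, filterValue: string): boolean =>
  String(cellValue ?? '')
    .toLowerCase()
    .includes(filterValue.toLowerCase())

// Illustrative stand-in: loose equality (==) — no substring matching at all.
const weakEquals = (cellValue: unknown, filterValue: unknown): boolean =>
  cellValue == filterValue

includesString('linsley', 'lin') // true  — the match users expect
weakEquals('linsley', 'lin')     // false — what you silently get when the
                                 //         first row's value was null
```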
However, specifying an equivalent `globalFilterFn: 'includesString'` in your table options STILL does not fix global filtering in the same way, because (a) the global auto filter function already defaults to `includesString` under the hood anyway, and (b) #5138 means the default implementation of `getColumnCanGlobalFilter()` will still completely exclude any column whose value in the first row is not a string or a number. Lines 391-398:
```ts
getColumnCanGlobalFilter: column => {
  const value = table
    .getCoreRowModel()
    .flatRows[0]?._getAllCellsByColumnId()[column.id]?.getValue()
  return typeof value === 'string' || typeof value === 'number'
}
```
The workaround for global filtering, therefore, is to override `getColumnCanGlobalFilter()` with your own implementation when setting the table options... the crudest example being:
```ts
getColumnCanGlobalFilter: () => true
```
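A slightly less blunt override could sample more than just the first row. This is only a sketch: it assumes the `table` instance is reachable here (e.g. closed over after creation), and the 10-row cap is an arbitrary choice for illustration.

```typescript
// Sketch: allow global filtering on a column if ANY of the first few rows
// holds a string or number, rather than inspecting only row 0.
// Assumes `table` is in scope (e.g. via a closure or ref).
getColumnCanGlobalFilter: column =>
  table
    .getCoreRowModel()
    .flatRows.slice(0, 10) // arbitrary sample size for this sketch
    .some(row => {
      const value = row.getValue(column.id)
      return typeof value === 'string' || typeof value === 'number'
    }),
```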
TL;DR: the table tries to do type inference by looking at the first row of data, decides the "type" of each column based only on the values in that first row, and then defaults to filtering algorithms based on the inferred type for that column. If the first value in any given column happens to be null, then column filtering defaults to `weakEquals` and the column is excluded from global filtering entirely.
Not only does this result in counter-intuitive default behaviour, but it could conceivably lead to intermittent bugs... e.g. filtering could work, then break, then work again, then break again, all because the first row in the underlying data is changing.
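The order-dependence can be demonstrated with a toy version of the inference step (a sketch; the real logic is the `getAutoFilterFn` quoted above):

```typescript
// Toy version of first-row type inference: which filter name would be
// chosen, given the value found in the first row of a column.
const inferFilterName = (firstRowValue: unknown): string =>
  typeof firstRowValue === 'string' ? 'includesString' : 'weakEquals'

const data = [{ lastName: null }, { lastName: 'linsley' }]

// Same dataset, different first row, different filtering behaviour:
inferFilterName(data[0].lastName) // 'weakEquals'     — filtering "breaks"
inferFilterName(data[1].lastName) // 'includesString' — filtering "works"
```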
If your dataset may contain null values, your options are:
- Replace the nulls with some appropriate non-null value of `string` type (e.g. empty string) or `number` type (e.g. `NaN`)... either directly in the data source, or by replacing the nulls on the fly in an overridden `accessorFn()`; or
- Be explicit about the filter function you want to use for any possibly-nullable column (e.g. explicitly set `filterFn: 'includesString'`, or whatever fits, on the definition for that column), and if you want global filtering, also override `getColumnCanGlobalFilter` in the table options with an appropriate custom implementation for your case.
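As a concrete example of the first option, a null-coalescing accessor (a sketch, using the `Person` type from the repro above; `lastNameAccessor` is a name invented here) is enough to make the first-row inference see a string:

```typescript
type Person = { firstName: string; lastName: string | null }

// Coalesce null to '' so first-row type inference always sees a string
// for this column, restoring the includesString default.
const lastNameAccessor = (row: Person): string => row.lastName ?? ''

lastNameAccessor({ firstName: 'madonna', lastName: null })     // ''
lastNameAccessor({ firstName: 'tanner', lastName: 'linsley' }) // 'linsley'
```

On the column definition this would be wired up as `{ id: 'lastName', accessorFn: lastNameAccessor }` instead of `accessorKey: 'lastName'`.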
A suggestion for the TanStack Table devs to consider: the problems seem to stem from doing type inference based solely on the values in the first row. Perhaps leaving that behaviour in as a default, but allowing users to optionally specify a type for each column in the column definition (which the table would use in preference to inference), would be a cleaner way of solving this.
E.g., comparing something like:
```ts
{
  accessorKey: 'lastName',
  typeHint: 'string',
},
```
versus
```ts
{
  accessorKey: 'lastName',
  filterFn: 'includesString',
},
```
would allow the naive user to still write readable, robust code that leverages the sensible defaults you already have, and to override additional config as necessary, while making any misconfiguration more visible. If the type is explicitly defined in the columnDef, it would not need to be inferred, so the behaviour would remain consistent regardless of the data, and anything that depends on the type (such as the filter functions) would never need to change.
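To make the suggestion concrete, here is a rough sketch of hint-aware resolution (`typeHint` and `pickFilterFnName` are hypothetical names invented for this illustration; neither is part of the TanStack Table API):

```typescript
type TypeHint = 'string' | 'number' | 'boolean'

// Hypothetical: prefer an explicit columnDef typeHint; fall back to
// first-row inference only when no hint is given, so today's default
// behaviour is preserved for unhinted columns.
const pickFilterFnName = (
  typeHint: TypeHint | undefined,
  firstRowValue: unknown,
): string => {
  const kind = typeHint ?? typeof firstRowValue
  switch (kind) {
    case 'string':
      return 'includesString'
    case 'number':
      return 'inNumberRange'
    case 'boolean':
      return 'equals'
    default:
      return 'weakEquals'
  }
}

pickFilterFnName('string', null)  // 'includesString' — hint wins over the null
pickFilterFnName(undefined, null) // 'weakEquals'     — today's behaviour
```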
Related issues: #5138, #4783, #4711, #4673, #4594.