
Export Data Contamination Due to Filesystem Directory ID Reuse

Open andrewdwallo opened this issue 6 months ago • 11 comments

Package

filament/filament

Package Version

v3.3.26

Laravel Version

v11.45.1

Livewire Version

v3.6.3

PHP Version

PHP 8.2

Problem description

Export data becomes contaminated when multiple exporters write to the same filesystem directory due to database auto-increment ID reuse. This results in exports containing data from different models/exporters.

Filament uses database auto-increment IDs for filesystem directory names in Export::getFileDirectory():

public function getFileDirectory(): string
{
    return 'filament_exports' . DIRECTORY_SEPARATOR . $this->getKey();
}

When database auto-increment IDs are reused (after resets, migrations, or production imports, which is very common during development) but the old filesystem directories persist, new exports collide with leftover export files. The current implementation assumes database IDs and filesystem directories will always remain in sync, which is not guaranteed in real-world usage.
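A minimal standalone sketch of the path logic quoted above (function name and usage are illustrative, not Filament's actual API): because the directory name is derived solely from the database key, a reused auto-increment ID resolves to exactly the same path.

```php
<?php
// Illustration only: the directory path depends on nothing but the key,
// so two unrelated exports that happen to share a reused ID collide.

function exportFileDirectory(int $key): string
{
    return 'filament_exports' . DIRECTORY_SEPARATOR . $key;
}

$beforeReset = exportFileDirectory(5); // export created before migrate:fresh
$afterReset  = exportFileDirectory(5); // a different export, same reused ID 5

var_dump($beforeReset === $afterReset); // bool(true): both map to filament_exports/5
```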

Common scenarios that trigger this:

  • php artisan migrate:fresh without clearing storage
  • Importing production database dumps in development
  • Manual database cleanup without corresponding filesystem cleanup

Impact:

  • Data integrity issues - exports contain wrong data
  • Compliance problems - sensitive data from one export appears in another
  • Development workflow disruption - common dev practices trigger the bug
  • Production risk - any database maintenance could cause contamination

This issue is particularly problematic in development environments where database resets are common, but can also occur in production during maintenance or data migrations.

[Image attachment: export-8-clients.xlsx]

Expected behavior

Each export should only contain data from its intended model/exporter, regardless of database resets or ID reuse. Export directories should be unique and isolated per export operation.

Steps to reproduce

  1. Use public filesystem disk (local)
  2. Create and run any export (ExporterA) - gets assigned an ID (e.g., 5), creates storage/app/filament_exports/5/
  3. Reset the database using php artisan migrate:fresh (or import production data, truncate exports table, etc.)
  4. Run a different export (ExporterB) - gets assigned the same ID (5) again due to auto-increment reset
  5. New export writes to existing storage/app/filament_exports/5/ directory containing old export data
  6. Final CSV contains mixed data from both exporters
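The steps above can be simulated outside Laravel with a temp directory standing in for `storage/app`; the chunk file names are hypothetical, not Filament's actual naming scheme. A database reset touches only the DB, so ExporterA's leftover chunk survives and ends up merged with ExporterB's output when the ID is reused.

```php
<?php
// Hedged simulation of the reproduction steps, using the system temp dir.

$base = sys_get_temp_dir() . '/filament_exports_demo';
$dir  = $base . '/5'; // directory named after the reused auto-increment ID
@mkdir($dir, 0777, true);

// Step 2: ExporterA runs as export ID 5 and writes a chunk.
file_put_contents($dir . '/chunk-1.csv', "client_a_data\n");

// Step 3: migrate:fresh resets the database but leaves the directory intact.

// Steps 4-5: ExporterB is also assigned ID 5 and writes into the same dir.
file_put_contents($dir . '/chunk-2.csv', "vendor_b_data\n");

// Step 6: stitching the chunks together now mixes both exporters' data.
$files = glob($dir . '/*.csv');
sort($files);
$merged = implode('', array_map('file_get_contents', $files));
echo $merged; // rows from ExporterA and ExporterB in one file

// Cleanup for the demo.
array_map('unlink', $files);
@rmdir($dir);
@rmdir($base);
```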

Reproduction repository (issue will be closed if this is not valid)

https://github.com/andrewdwallo/erpsaas

Relevant log output


andrewdwallo avatar Jun 17 '25 07:06 andrewdwallo

I think the solution here is just to clear the directory at the start of the export. Thoughts?

danharrin avatar Jun 17 '25 10:06 danharrin

What about replacing export ids with uuids?

alvleont avatar Jun 17 '25 10:06 alvleont

That's what I do in my own apps so I don't have this issue, but it's too opinionated for everyone, I think

danharrin avatar Jun 17 '25 12:06 danharrin

Maybe the job id?

andrewdwallo avatar Jun 17 '25 14:06 andrewdwallo

Clearing it is fine too. Maybe for v4 we should suggest UUID by default instead?

andrewdwallo avatar Jun 17 '25 14:06 andrewdwallo

Actually, should we not be clearing them? Are these files stored for future use?

andrewdwallo avatar Jun 17 '25 16:06 andrewdwallo

Hey @andrewdwallo the situation is like this:

  1. You created some exports in your instance, which generated files and stored them in a folder named using the export key saved in the database.
  2. After that, it looks like you reset your database (maybe using php artisan migrate:fresh) and then triggered the exports again.
  3. Since the previous export folder was still there with the old files, the system added new ones and merged everything — ending up with an export that had both old and new data.

@danharrin I see two ways we could handle this:

  1. Maybe add something like a timestamp to the folder name, so even if the export key is reused, it won’t mix files. I've seen something similar in Metabase exports.
  2. If for any reason the same export key is used again, we could check if the folder already exists and clear it before writing anything new.
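The first option could be sketched like this (names are illustrative, not Filament's API): suffix the folder name with a per-export unique token so a reused database key can never collide. The token here is a UUIDv4 built with stdlib functions; in a Laravel app you would likely use `Str::uuid()` instead.

```php
<?php
// Sketch: make the export directory unique per run, independent of the key.

function uuidV4(): string
{
    $bytes = random_bytes(16);
    $bytes[6] = chr((ord($bytes[6]) & 0x0f) | 0x40); // set version 4
    $bytes[8] = chr((ord($bytes[8]) & 0x3f) | 0x80); // set RFC 4122 variant
    return vsprintf('%s%s-%s-%s-%s-%s%s%s', str_split(bin2hex($bytes), 4));
}

function exportFileDirectory(int $key, string $token): string
{
    return 'filament_exports' . DIRECTORY_SEPARATOR . $key . '-' . $token;
}

// Two exports that happen to share the reused key 5 still get distinct dirs.
$a = exportFileDirectory(5, uuidV4());
$b = exportFileDirectory(5, uuidV4());
var_dump($a !== $b); // bool(true)
```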

Let me know what you think. I can open a quick PR to help with this bug if you like 🙂

alvleont avatar Jun 17 '25 18:06 alvleont

Please simply clear the directory at the start of the export

danharrin avatar Jun 17 '25 21:06 danharrin

I guess this is the fastest solution. But when you have multi-tenancy, multi-database instances of filamentphp, it is a pain in the ass to discover. Maybe at least leave a note in the documentation?

alvleont avatar Jun 17 '25 22:06 alvleont

You can't clear them, as that would break any existing exports and their notifications, let alone any other user doing the same export, etc. Needs to be a bit more flexible than that. I think UUID is the way to go. Open to options, but clearing the directory before any export isn't a reasonable one, unless we are talking about export ID 8 clearing directory 8 before it writes new files to it, etc. Still prone to issues though. That also seems more complicated than UUID, in my opinion.

MACscr avatar Jun 18 '25 00:06 MACscr

I am not saying clear the entire exports directory; I am saying clear just the directory of the export that is starting (exports/10)

If you have chosen to delete records in your exports DB table, don't expect the files to exist anymore. If you have a multi-db tenancy setup, you need to use UUIDs or multiple disks, but I think that is common practice. Single DB tenancy is not affected
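A minimal plain-PHP sketch of that fix, assuming the behavior described above: before an export starts writing, wipe only that export's own directory, never the parent. In a Laravel app this would likely be a `Storage::deleteDirectory()` call on the framework disk; the helper name here is hypothetical.

```php
<?php
// Sketch: remove stale chunk files from a prior export that used the same ID,
// leaving sibling export directories untouched.

function clearExportDirectory(string $dir): void
{
    if (!is_dir($dir)) {
        return;
    }
    foreach (glob($dir . '/*') ?: [] as $file) {
        if (is_file($file)) {
            unlink($file); // stale chunk from a previous export run
        }
    }
}

$dir = sys_get_temp_dir() . '/filament_exports_demo/10';
@mkdir($dir, 0777, true);
file_put_contents($dir . '/stale-chunk.csv', "old_data\n");

clearExportDirectory($dir);                       // runs as export ID 10 starts
file_put_contents($dir . '/chunk-1.csv', "new_data\n");

print_r(array_map('basename', glob($dir . '/*'))); // only the fresh chunk remains
```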

danharrin avatar Jun 18 '25 07:06 danharrin