Strange behavior with TTL on reopen of (actively written) DB files.
The TTL behavior is pretty unexpected for items in persisted, longer-lived DBs, especially with large TTLs (on the order of, say, weeks or a month). The file format appears to save the TTL at write time, but on load the remaining TTL is recalculated relative to the file's modification time. If the DB sees any meaningful write activity, this effectively pushes out the TTL of items whenever the file is loaded before they expire (for instance, if the server is restarted and the TTL outlives the server's uptime).
The following test reproduces the behavior:
package buntdb_test

import (
	"path/filepath"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/tidwall/buntdb"
)

func TestBuntTTLWithAppendAndReopen(t *testing.T) {
	dbPath := filepath.Join(t.TempDir(), "foo.db")
	db, err := buntdb.Open(dbPath)
	require.NoError(t, err)
	require.NoError(t, db.SetConfig(buntdb.Config{SyncPolicy: buntdb.Always}))

	ttl := 30 * time.Second // Imagine this is, say, 30 days...
	require.NoError(t, db.Update(func(tx *buntdb.Tx) error {
		_, _, err := tx.Set("test", "Should Die", &buntdb.SetOptions{Expires: true, TTL: ttl})
		return err
	}))
	expectedExpireTime := time.Now().Add(ttl)
	t.Logf("Should Expire At : %s", expectedExpireTime.Format(time.RFC3339))

	// Eat some, but not all, of the TTL before performing an update that
	// ultimately causes a sync (and updates the file's modTime).
	time.Sleep(ttl - time.Second)
	require.NoError(t, db.Update(func(tx *buntdb.Tx) error {
		_, _, err := tx.Set("test2", "Just some unrelated update", nil)
		return err
	}))
	require.NoError(t, db.Close())

	// Now, open it again. (TTL of "test" is effectively reset on open.)
	db, err = buntdb.Open(dbPath)
	require.NoError(t, err)

	getTestItem := func() error {
		t.Helper()
		return db.View(func(tx *buntdb.Tx) error {
			itemTTL, err := tx.TTL("test")
			if err == nil {
				t.Logf("Item's TTL = %s", itemTTL)
				exat := time.Now().Add(itemTTL)
				t.Logf("Actual Expire At : %s (+%v)", exat.Format(time.RFC3339),
					exat.Sub(expectedExpireTime))
			}
			return err
		})
	}

	// Still exists (within original TTL; this is expected).
	assert.NoError(t, getTestItem())

	// Wait until the original TTL should have expired the item.
	time.Sleep(4 * time.Second)
	assert.ErrorIs(t, getTestItem(), buntdb.ErrNotFound) // <--- FAILS because TTL was "reset" on open

	require.NoError(t, db.Close())
}
With the following output:
--- FAIL: TestBuntTTLWithAppendAndReopen (33.03s)
bunt_test.go:87: Should Expire At : 2022-03-25T19:03:14-07:00
bunt_test.go:108: Item's TTL = 27.9972228s
bunt_test.go:110: Actual Expire At : 2022-03-25T19:03:41-07:00 (+27.0076257s)
bunt_test.go:108: Item's TTL = 23.985959s
bunt_test.go:110: Actual Expire At : 2022-03-25T19:03:41-07:00 (+27.0077602s)
bunt_test.go:121:
Error Trace: bunt_test.go:121
Error: Target error should be in err chain:
expected: "not found"
in chain:
Test: TestBuntTTLWithAppendAndReopen
FAIL
Given how the current persistence file format works, I'm not sure this can be easily fixed. If nothing else, perhaps a warning/known-issue note in the documentation is warranted.