onlyjob
What should I tweak in `mfschunkserver.cfg` to control when a chunkserver is marked `OVERLOADED`? I recently commissioned a new super-fast NVMe-based chunkserver and, ironically, it is in the `OVERLOADED` state most of the time...
Replication starts immediately as soon as a chunkserver is stopped, without waiting for `CS_TEMP_MAINTENANCE_MODE_TIMEOUT`. Ideally there should be a way to delay replication of the "undergoal" or "endangered" chunks that result...
CryFS has some good ideas and an interesting design, but a terrible implementation... I did some testing on CryFS 0.10.2 and rsync'ed 290329 files from my home folder into CryFS: This is how...
As reported in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=826836, _bmon_ 3.9 FTBFS on _kFreeBSD_:

```
in_sysctl.c: In function ‘sysctl_read’:
in_sysctl.c:236:59: error: ‘struct if_data’ has no member named ‘ifi_recvquota’
   snprintf(info_buf, sizeof(info_buf), "%u", ifm->ifm_data.ifi_recvquota);
                                                           ^
in_sysctl.c:239:59: error:...
```
Since _bmon_ works on kFreeBSD, it would be nice to support GNU Hurd as well. See the following page for details:

- https://www.gnu.org/software/hurd/hurd/porting/guidelines.html

Thanks.
In https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=798995 a user reported a segfault after leaving `bmon` running for a day. Unfortunately, the backtrace is not great...
For some reason Revel uses an outdated fork, "[github.com/robfig/go-cache](https://github.com/robfig/go-cache)", of the original project "[github.com/pmylund/go-cache](https://github.com/pmylund/go-cache)" (see [inmemory.go#L5](https://github.com/revel/revel/blob/master/cache/inmemory.go#L5)). The latter seems to be better maintained. Please consider switching. Thanks.
Synfig 1.5.1 just had a hilarious FTBFS on i686 in Debian:

```
FAIL: bline
============================================
   Synfig Core 1.5.1: test/test-suite.log
============================================
# TOTAL: 2
# PASS:  1
# SKIP:  0
#...
```
SeaweedFS performed badly in the [POSIX file system conformance test](https://github.com/pjd/pjdfstest):

```
Test Summary Report
-------------------
/usr/share/pjdfstest/tests/chmod/00.t (Wstat: 0 Tests: 119 Failed: 61)
  Failed tests:  22-24, 26-28, 31-34, 36-38, 41-44, 46-48, 51-54,...
```
The volume server fails to connect to any master when the first master in the `-mserver` list is down at volume-server startup (`-mserver=server1:9333,server2:9333,server3:9333`).