sheepdog stable release v1.0.2
candidate commits to backport
zk_control
- 9a58c28a2af02d1a1a856bb770aed984bb536b70 zk_control: fix "purge" threshold and default behavior
- 34d3ca833afa63e52eb2106f71f52ee78222d6e9 Log exact number of deleted nodes when purge fails
workqueues
- cbe755af178566fa2aa8b0b18e2b8d50c1e74ec7 Remove WQ_UNLIMITED and change the limit of threads for WQ_DYNAMIC
- dd924bdb67b0f5fa668afb24a1152be0c93b1796 sheep: add a new option for setting threads for dynamic workqueues
- c42b9d5f0cd5998911cc271f31dbb934380c4ddf sheep: fix logic for setting the max number of threads for dynamic WQ
object list cache
- 254730a6992f74ad635db4417e493de4b45afe6b sheep: remove object list cache of ledger when removed from store
logging
- e882d76dd632168f680dc31af81cfbdbc42225d9 Print object ID in '"%016" PRIx64'
Sheepdog v1.0.2_rc0 released. See #355 for backported commits.
1.0.2 (release candidate)
IMPORTANT CHANGES:
- The unlimited workqueue has been removed and replaced with a dynamic one. This prevents sheep from consuming a huge amount of memory by creating new threads without bound under heavy load, and from being killed by the OOM killer.
- zk_control can now purge znodes older than a given threshold (default: 24 hours). It purges znodes created before the threshold. This is useful when tens of thousands of znodes are created in a day.
SHEEP COMMAND INTERFACE:
- New option "-x" to set the maximum number of threads for a dynamic workqueue. (default: determined by the formula max(#nodes, #cores, 16) * 2)
ZK_CONTROL COMMAND INTERFACE:
- The "purge" subcommand can take a non-negative integer threshold in seconds. (default: 86400, i.e. 24 hours)
LOGGING:
- Object IDs are now printed to sheep.log in 16-digit zero-padded hexadecimal.
On Feb 22 at 15:00 (GMT+9), v1.0.2_rc0 will officially become v1.0.2 if there are no complaints and no serious new issues.
Ping? I don't see the release on GitHub.
Sorry for being late. v1.0.2 is not released yet.
I found that v1.0.2_rc0 has a distributed deadlock issue like v0.9.5, as I mentioned in #354.
I can't make up my mind about what to do: document it as a known issue, or fix it by cherry-picking #362.