nav3-recipes
Proposal: Add Common Back Stack Manipulation Recipes
Background
While working with Navigation 3, situations often arise where direct manipulation of the back stack is required. However, there is currently a lack of official helper functions or reference examples for these cases. As a result, developers often end up writing similar code repeatedly, which can be both inconvenient and error-prone.
Proposal
I propose providing commonly used back stack manipulation patterns in the form of Recipes. These Recipes would let developers manage the back stack more intuitively and safely.
Here are some examples:
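/**
 * Pushes [item] onto the back stack only if it is not already the top entry,
 * mirroring single-top launch behavior.
 */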
fun <E> MutableList<E>.addSingleTop(item: E) {
    if (lastOrNull() != item) add(item)
}

/**
 * Removes entries from this back stack **above** the last occurrence of [key].
 *
 * If [inclusive] is true, the matching [key] itself is also removed.
 * The root element at index 0 is always kept.
 */
fun <E> MutableList<E>.popUpTo(
    key: E,
    inclusive: Boolean = false
) {
    if (size <= 1) return // Do nothing when only the root element remains
    val idx = indexOfLast { it == key }
    if (idx < 0) return
    val from = (if (inclusive) idx else idx + 1).coerceAtLeast(1)
    if (from >= size) return
    subList(from, size).clear()
}
backStack.addSingleTop(ConversationList)
backStack.popUpTo(ConversationList, inclusive = true)
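For clarity, here is a minimal, self-contained sketch of how the two extensions behave on a plain MutableList. It assumes the extension functions above are in scope; ConversationList and ConversationDetail are placeholder route keys for this example only, not part of any Navigation 3 API.

// Hypothetical route keys used only for this demonstration.
data object ConversationList
data object ConversationDetail

fun main() {
    // A back stack is just an ordered list of route keys; index 0 is the root.
    val backStack = mutableListOf<Any>(ConversationList, ConversationDetail)

    // The requested key is already on top, so nothing is added.
    backStack.addSingleTop(ConversationDetail)
    println(backStack) // [ConversationList, ConversationDetail]

    // Remove everything above ConversationList, keeping the key itself.
    backStack.popUpTo(ConversationList)
    println(backStack) // [ConversationList]

    // Even with inclusive = true, the root at index 0 is never removed.
    backStack.popUpTo(ConversationList, inclusive = true)
    println(backStack) // [ConversationList]
}

In a real app the back stack would typically be a snapshot-backed list (for example mutableStateListOf), which also implements MutableList, so these extensions apply unchanged.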
Expected Benefits
- Provide reference implementations for common back stack manipulation patterns
- Reduce repetitive boilerplate code and prevent potential mistakes
- Offer intuitive guidance for developers using Navigation 3
Etc
If this proposal seems reasonable, would it be possible for me to participate in contributing these Recipes? 😊
+1. I would like to know at least whether that's even possible at the current stage.
Yes, I would love to see how this architecture solves real-life problems.
I would also like to see a general usage example and explore applying this to other generic tasks. Is anyone trying that? I know someone working on verifying the ARC-AGI claim: https://x.com/GregKamradt/status/1951834503363858640
Before investing my time in generalizing this, I am confused about this paragraph from the paper:
For ARC-AGI challenge, we start with all input-output example pairs in the training and the evaluation sets. The dataset is augmented by applying translations, rotations, flips, and color permutations to the puzzles. Each task example is prepended with a learnable special token that represents the puzzle it belongs to. At test time, we proceed as follows for each test input in the evaluation set: (1) Generate and solve 1000 augmented variants and, for each, apply the inverse-augmentation transform to obtain a prediction. (2) Choose the two most popular predictions as the final outputs. All results are reported on the evaluation set.
Does this mean your dataset was polluted? Why take example pairs from the evaluation set, even if you augment them later? I can see the implementation of this paragraph in the https://github.com/sapientinc/HRM/blob/main/dataset/build_arc_dataset.py file.
Is this a standard practice with this particular dataset?
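For what it's worth, my reading of steps (1) and (2) is roughly the sketch below. It is only illustrative (written in Kotlin rather than the repo's Python) and not the actual implementation; the Grid, Augmentation, and solve names are made up, and only a single rotation is shown instead of the full set of translations, rotations, flips, and color permutations.

// Illustrative sketch only; Grid, Augmentation, and solve are made-up names.
typealias Grid = List<List<Int>>

// A reversible augmentation: a forward transform plus its inverse.
class Augmentation(
    val forward: (Grid) -> Grid,
    val inverse: (Grid) -> Grid
)

// One concrete example: a clockwise 90-degree rotation, undone by rotating three more times.
fun rotate90(g: Grid): Grid =
    List(g[0].size) { r -> List(g.size) { c -> g[g.size - 1 - c][r] } }

fun rotate270(g: Grid): Grid = rotate90(rotate90(rotate90(g)))

val rotationAugmentation = Augmentation(::rotate90, ::rotate270)

// Test-time procedure as described: solve many augmented variants of one test
// input, map each prediction back through the inverse transform, and keep the
// two most frequent results.
fun predictWithVoting(
    testInput: Grid,
    augmentations: List<Augmentation>, // e.g. 1000 sampled variants
    solve: (Grid) -> Grid              // stand-in for a forward pass of the trained model
): List<Grid> {
    val votes = mutableMapOf<Grid, Int>()
    for (aug in augmentations) {
        val prediction = solve(aug.forward(testInput))
        val restored = aug.inverse(prediction)
        votes[restored] = (votes[restored] ?: 0) + 1
    }
    return votes.entries.sortedByDescending { it.value }.take(2).map { it.key }
}

The inverse transform is what puts every prediction back into the original puzzle's frame, so the 1000 outputs can be compared and voted on.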
Same question
Here's my probably incorrect, simple implementation of HRM for Snake; maybe you guys can spot some bugs, or someone can build on top of it.
2 notes:
- there's no tokenization
- x is passed to H_net as well
https://github.com/Eternalyze0/hrm_snake
Has anyone applied this to LLM outputs yet? https://github.com/VatsaDev/NanoPoor