feat: dashboard chat follow ups
A few dashboard chat follow-ups:
- [x] Anomaly explain doesn't show up when selecting a new scrub range. It should appear immediately, without an additional click.
- [x] Scrub range is not properly aligned in the time dimension detail view.
- [x] User messages with inline context in history don't have a hover action.
- [x] Mention dropdown can sometimes get cut off. Use the Floating UI library to calculate its position and avoid this.
More items: https://www.notion.so/rilldata/UXQA-Rill-Cloud-AI-Features-2bdba33c8f578079a3a6fb883d4c73dc
Checklist:
- [x] Covered by tests (partially)
- [x] Ran it and it works as intended
- [x] Reviewed the diff before requesting a review
- [x] Checked for unhandled edge cases
- [ ] Linked the issues it closes
- [ ] Checked if the docs need to be updated. If so, create a separate Linear DOCS issue
- [x] Intend to cherry-pick into the release branch
- [ ] I'm proud of this work!
LGTM! The only thing is I still think the Explain (E) button shouldn't be below the highlighted area.
It should be vertically center-aligned within the highlighted area, because it otherwise overlaps when there are measure annotations.
Review: Architecture Feedback on Rich Text Rendering
Hi! While reviewing the changes, I noticed an architectural issue with how we're rendering user messages that we should discuss.
The Issue
We're using TipTap (ProseMirror) for rich text input but then discarding its document structure for display, falling back to manual string parsing and `<br />` tags. This creates several problems.
Current flow:
1. User types with TipTap's rich editor
2. Convert to plain text with custom XML-like tags
3. Parse strings manually with `indexOf()`
4. Render with manual line breaks
Key problems:
- `convertPromptWithInlineContextToComponents` does brittle string manipulation (and has a bug on line 81)
- We lose document structure (paragraphs become `<br />` tags)
- Double work: TipTap already handles this complexity
- Future features (lists, formatting) would require rewriting this parser
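To make the brittleness concrete, here is a toy sketch of the kind of `indexOf()`-based tag parsing described above. The tag name and return shape are hypothetical, not the real implementation:

```typescript
// Toy illustration of indexOf()-based inline-context parsing.
// Tag name and Segment shape are hypothetical.
type Segment = { kind: "text" | "context"; value: string };

function parseInlineContext(message: string): Segment[] {
  const OPEN = "<context>";
  const CLOSE = "</context>";
  const segments: Segment[] = [];
  let pos = 0;
  while (pos < message.length) {
    const open = message.indexOf(OPEN, pos);
    if (open === -1) {
      segments.push({ kind: "text", value: message.slice(pos) });
      break;
    }
    if (open > pos) {
      segments.push({ kind: "text", value: message.slice(pos, open) });
    }
    const close = message.indexOf(CLOSE, open);
    if (close === -1) {
      // Unbalanced tag: treat the rest as text. Every edge case like this
      // (nesting, escaping, a user literally typing "<context>") has to be
      // handled by hand in this style of parser.
      segments.push({ kind: "text", value: message.slice(open) });
      break;
    }
    segments.push({ kind: "context", value: message.slice(open + OPEN.length, close) });
    pos = close + CLOSE.length;
  }
  return segments;
}
```

A user message containing the literal tag text is misparsed, which is the core fragility: the parser cannot distinguish markup from content the way a real document model can.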
Suggested Solution
Since we're already using TipTap for input, we should use it for display too:
```js
// Store the rich document structure
const messageContent = editor.getJSON(); // instead of getText()

// Display with a read-only TipTap editor
const displayEditor = new Editor({
  element,
  editable: false,
  content: messageContent,
  extensions: [...], // same set as the input editor
});
```
This would:
- Eliminate the complex string parsing
- Maintain consistent rendering between input/output
- Make future rich text features trivial to add
- Improve accessibility with proper document structure
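For intuition on why keeping the document structure pays off: paragraph boundaries are explicit nodes in the stored JSON, so nothing has to be reverse-engineered from strings. Below is a toy walker over ProseMirror-style JSON (the shape `editor.getJSON()` returns), not the actual TipTap API; a real read-only editor does this for us:

```typescript
// Minimal ProseMirror-style JSON node, the shape editor.getJSON() returns.
interface PMNode {
  type: string;
  text?: string;
  content?: PMNode[];
}

// Toy renderer: walks the stored structure instead of re-parsing strings.
// Paragraphs survive as <p> elements because they are explicit nodes.
function renderToHtml(node: PMNode): string {
  const children = (node.content ?? []).map(renderToHtml).join("");
  switch (node.type) {
    case "text":
      return node.text ?? "";
    case "paragraph":
      return `<p>${children}</p>`;
    case "doc":
      return children;
    default:
      // Unknown node types degrade gracefully instead of corrupting output.
      return children;
  }
}

const doc: PMNode = {
  type: "doc",
  content: [
    { type: "paragraph", content: [{ type: "text", text: "first line" }] },
    { type: "paragraph", content: [{ type: "text", text: "second line" }] },
  ],
};
```

Adding list or formatting support would mean adding a `case`, not rewriting a parser.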
What do you think of this approach? I believe addressing this now while we're already working on the chat UI would prevent this technical debt from growing, but I'm interested in your perspective on whether there are constraints or considerations I might be missing.
Developed in collaboration with Claude Code
Observation: MeasureSelection Class Architecture
I noticed the MeasureSelection class maintains both temporal coordinates (start/end as Dates) and screen coordinates (x/y as pixels) as stored state. This caught my attention because it seems to mix data concerns with presentation concerns.
Current pattern:
```ts
// In measure-selection.ts
class MeasureSelection {
  public readonly start = writable<Date | null>(null); // domain data
  public readonly end = writable<Date | null>(null);   // domain data
  public readonly x = writable<number | null>(null);   // screen position
  public readonly y = writable<number | null>(null);   // screen position

  calculatePoint(start, end, scaler, config) { /* updates x/y */ }
}
```
Potential concerns:
- The x/y coordinates are derived from start/end + scale, but stored independently
- Components have to remember to call `calculatePoint()` when the data or scale changes
- Testing the selection logic requires mocking rendering details
A few alternative patterns to consider:
1. Separate Domain and View Models
```ts
class MeasureSelection {
  // Only domain data
  measure: Writable<string | null>;
  start: Writable<Date | null>;
  end: Writable<Date | null>;
}

// Separate utility for rendering
function getSelectionPosition(selection, scale, config) {
  return { x: ..., y: ... };
}
```
2. Computed Properties
```ts
class MeasureSelection {
  // Store only source data
  private _start: Date | null;

  // Calculate positions on demand
  getScreenPosition(scale, config): { x: number; y: number } | null {
    if (!this._start) return null;
    return { x: scale(this._start), y: config.bottom };
  }
}
```
3. Event-Driven Updates
```ts
class MeasureSelection {
  onSelectionChange = new EventEmitter<{ start; end }>();

  setRange(measure, start, end) {
    // Let components calculate their own positions
    this.onSelectionChange.emit({ start, end });
  }
}
```
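To show the testability claim concretely, here is a runnable version of the "Computed Properties" option, with plain fields and an injected scale function standing in for the Svelte stores and chart config (names are hypothetical). The selection logic can be tested with no rendering mocks at all:

```typescript
// Runnable sketch of the "Computed Properties" pattern: only domain data is
// stored; screen position is derived on demand, so it can never go stale.
type Scale = (d: Date) => number;

class ComputedMeasureSelection {
  private start: Date | null = null;
  private end: Date | null = null;

  setRange(start: Date, end: Date): void {
    this.start = start;
    this.end = end;
  }

  clear(): void {
    this.start = null;
    this.end = null;
  }

  // Derived, not stored: callers pass the current scale and layout.
  getScreenPosition(scale: Scale, bottom: number): { x: number; y: number } | null {
    if (this.start === null) return null;
    return { x: scale(this.start), y: bottom };
  }
}
```

A unit test only needs a trivial `scale` lambda, whereas the stored-x/y version would require driving `calculatePoint()` with a mocked chart config.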
I'm curious about your thoughts on whether separating data from presentation might make this more maintainable, or if there are specific reasons for the current approach that I might be missing (like performance considerations or cross-component position sharing).
Developed in collaboration with Claude Code
@ericpgreen2 The explain button is rendered outside the SimpleDataGraphic wrapper that provides the settings needed to calculate x and y. So the calculation needs to happen in MeasureSelection, which lives inside SimpleDataGraphic, and the result is stored for use outside it in ExplainButton. Added a comment clarifying this.