
Fix check_model() plots for categorical predictors

Open ANAMASGARD opened this issue 1 month ago • 6 comments

Fixes #873

  • The linearity and variance plots were hard to read when models had only categorical predictors - the confidence ribbons were huge and covered up the actual data pattern.

  • Now check_model() detects when all predictors are categorical and hides the dots automatically. Makes the variance across groups way easier to see.

  • Works with factor(x), as.factor(x), or variables that are already factors.

  • If your model has both categorical and continuous predictors, it still shows dots like before.

  • You can override this with show_dots = TRUE if you want the old behavior.

  • Added tests and updated docs. Everything passes locally on my system.
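A minimal usage sketch of the behavior described above (the model and data here are illustrative; `show_dots` is the override argument this PR exposes):

```r
library(performance)

# Model with only a categorical predictor: dots are hidden automatically
m_cat <- lm(mpg ~ factor(cyl), data = mtcars)
check_model(m_cat)

# Mixed categorical + continuous predictors: dots are shown as before
m_mix <- lm(mpg ~ factor(cyl) + wt, data = mtcars)
check_model(m_mix)

# Force the old behavior for the categorical-only model
check_model(m_cat, show_dots = TRUE)
```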

ANAMASGARD avatar Nov 14 '25 07:11 ANAMASGARD

Summary of Changes

Hello @ANAMASGARD, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a visualization challenge within the check_model() function, specifically for models that rely solely on categorical predictors. It introduces an intelligent mechanism to automatically suppress the display of individual data points in diagnostic plots under these conditions. This change dramatically enhances the clarity and interpretability of linearity and variance plots by eliminating visual clutter, making model diagnostics more intuitive for users working with categorical variables. The update maintains backward compatibility for mixed-type models and provides an explicit override for user control.

Highlights

  • Improved check_model() plots for categorical predictors: The check_model() function now automatically hides data points (dots) in diagnostic plots when a model contains only categorical predictors. This significantly improves the readability of linearity and variance plots by reducing visual clutter.
  • Enhanced clarity for variance patterns: By hiding dots in categorical-only models, the visualization of variance patterns across groups becomes much clearer and easier to interpret.
  • Flexible categorical predictor detection: The automatic detection works for variables explicitly wrapped in factor() or as.factor(), as well as pre-existing factor variables in the dataset.
  • Preservation of existing behavior for mixed models: If a model includes both categorical and continuous predictors, check_model() will continue to display data points as it did previously.
  • User override option: Users can explicitly override the automatic dot-hiding behavior by setting the show_dots = TRUE argument in check_model().
  • Comprehensive updates: The pull request includes new tests to validate the functionality and updated documentation to reflect these changes.
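The detection step described above could, in principle, look something like this - a sketch using base R's `terms()` metadata, not necessarily the actual implementation in this PR:

```r
# Sketch: decide whether all predictors in a fitted model are categorical.
# "dataClasses" records the class of each variable in the model frame;
# the first entry is the response, so it is dropped.
all_predictors_categorical <- function(model) {
  classes <- attr(terms(model), "dataClasses")[-1]
  length(classes) > 0 && all(classes %in% c("factor", "ordered", "character"))
}

m1 <- lm(mpg ~ factor(cyl), data = mtcars)
all_predictors_categorical(m1) # TRUE

m2 <- lm(mpg ~ factor(cyl) + wt, data = mtcars)
all_predictors_categorical(m2) # FALSE
```

This catches `factor(x)`, `as.factor(x)`, and pre-existing factors alike, because `dataClasses` reflects the class of each term in the model frame after the formula is evaluated.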
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist[bot] avatar Nov 14 '25 07:11 gemini-code-assist[bot]

Can you show example before and after images? I'm not following why hiding the dots is the correct fix here.

bwiernik avatar Nov 14 '25 22:11 bwiernik

Sir @bwiernik, you're right to question this - let me be honest about the approach. Looking at the plots in #873, the main visual problem is the huge confidence bands that dominate the plot when you have categorical predictors. The LOESS smooth creates massive CI ribbons between the discrete category positions. I initially implemented hiding dots because:

  • Points stack at discrete positions anyway (not much info there)
  • It makes the smooth line more visible
  • Base R's diagnostic plots do something similar

But @strengejacke makes a good point - hiding the CI bands might be the better fix. The dots aren't really the problem; the CI is what's making the plots unreadable (as originally reported in #642). I'm happy to switch the implementation to hide the CI instead of the dots if that's preferred - it might be more in line with what users originally requested. What do you think? Please feel free to correct me if I am wrong. Thank you!
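To illustrate why the ribbons dominate (illustrative ggplot2 code, not the internals of check_model() or see): with only a few discrete x positions, the LOESS ribbon balloons between them, while dropping it with `se = FALSE` keeps the pattern readable.

```r
library(ggplot2)

d <- data.frame(x = as.numeric(factor(mtcars$cyl)), y = mtcars$mpg)

# LOESS with a ribbon: the CI balloons between the three discrete x positions
p_ci <- ggplot(d, aes(x, y)) +
  geom_point() +
  geom_smooth(method = "loess", formula = y ~ x)

# Same smooth without the ribbon: the group pattern stays visible
p_no_ci <- ggplot(d, aes(x, y)) +
  geom_point() +
  geom_smooth(method = "loess", formula = y ~ x, se = FALSE)
```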

ANAMASGARD avatar Nov 16 '25 15:11 ANAMASGARD

Could you paste examples of the before and after images here?

bwiernik avatar Nov 16 '25 16:11 bwiernik

⚠️ Important Discovery - PR is Incomplete

Hi @bwiernik, you're absolutely right that the images look identical. I discovered the issue:

The Problem

  1. ✅ Our PR correctly sets `attr(result, "show_dots")` to `FALSE` for categorical models
  2. ❌ But the see package is responsible for actually plotting, and it's not respecting this attribute

Verification

```r
devtools::load_all() # Load our changes
star <- read.csv("https://drmankin.github.io/disc_stats/star.csv")
star$star2 <- as.factor(star$star2)
model <- lm(math2 ~ star2, data = star, na.action = na.exclude)

result <- check_model(model)
attr(result, "show_dots") # Returns FALSE ✅ (our code works)

plot(result) # But still shows dots ❌ (see package doesn't respect it)
```

Complete Solution Requires Two PRs

  1. This PR (performance package): Sets the show_dots attribute ✅
  2. Companion PR (see package): Modifies plotting to respect the attribute ❌ (not done yet)
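A hedged sketch of what the companion change in see might look like - the plot builder adds the points layer only when the stored attribute allows it (function and argument names here are illustrative, not the real see internals):

```r
library(ggplot2)

# Illustrative plot builder: respects the "show_dots" attribute that the
# performance-side change stores on the check_model() result.
build_linearity_plot <- function(plot_data, show_dots = TRUE) {
  p <- ggplot(plot_data, aes(x = fitted, y = residuals))
  if (isTRUE(show_dots)) {
    p <- p + geom_point(alpha = 0.4) # only add dots when requested
  }
  p + geom_smooth(method = "loess", formula = y ~ x) +
    labs(title = "Linearity", x = "Fitted values", y = "Residuals")
}

# In see, show_dots would come from the attribute, e.g.:
# build_linearity_plot(pd, show_dots = !isFALSE(attr(result, "show_dots")))
```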

I also ran the before and after, but the problem is:

BEFORE (with dots - old behavior)

[image: plot_BEFORE_with_dots]

AFTER (without dots - new auto-detected behavior)

[image: plot_AFTER_without_dots]

But the problem is that they are exactly the same - no difference, as you can see:

  • The same plots with data points visible
  • The same confidence ribbons (gray shaded areas)
  • No visible difference in the "Linearity" and "Homogeneity of Variance" plots

Our "AFTER" image should show NO DOTS (just the smooth lines and CI ribbons), but it still has all the dots visible, just like the "BEFORE" image.

What approach would you prefer? @strengejacke @bwiernik

I apologize for not catching this earlier - I should have verified the actual visual output, not just the attribute setting.

ANAMASGARD avatar Nov 17 '25 09:11 ANAMASGARD

I think that we first should try to disable the confidence bands. There are diagnostic plots in other packages (also in core R?) that still show data points for categorical predictors. Let's check for which plots data points are still useful - but maybe disable the CI for categorical predictors in general?

strengejacke avatar Nov 26 '25 19:11 strengejacke