ioHub
Implement eye tracker hardware-independent calibration accuracy feedback graphics
Viewed optionally after a calibration. Save it to an image and store it with the session data files as well?
YES, please.
We are also trying to implement real-time gaze viewing on a 2nd monitor, as a way to monitor calibration drift over time. Would appreciate any suggestions.
- gary
Switching to support for any iohub eyetracker interface.
Perhaps this can be done by supplying a 'validation' procedure that any ET can run: iohub collects samples during validation and then plots the validation points along with N samples taken from a window of time after each target was displayed.
Also display the calculated min / max / average error in pixels and degrees.
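A minimal sketch of how those error statistics could be computed, assuming the validation samples have already been grouped per target; the function name, the `(x, y)` sample format, and the single `px_per_deg` conversion factor are illustrative assumptions, not part of the ioHub API:

```python
import math

def validation_error_stats(targets, samples, px_per_deg):
    """Compute (min, max, mean) gaze error for a validation run.

    targets:    list of (x, y) target positions in pixels.
    samples:    list of lists; samples[i] holds the (gaze_x, gaze_y)
                samples collected in the window after target i appeared.
    px_per_deg: pixels per degree of visual angle for the display setup.

    Returns two tuples: the stats in pixels and in degrees.
    """
    errors_px = []
    for (tx, ty), gaze in zip(targets, samples):
        for gx, gy in gaze:
            # Euclidean distance between gaze sample and target.
            errors_px.append(math.hypot(gx - tx, gy - ty))
    stats_px = (min(errors_px), max(errors_px),
                sum(errors_px) / len(errors_px))
    stats_deg = tuple(e / px_per_deg for e in stats_px)
    return stats_px, stats_deg
```

A real implementation would also have to filter out blinks and missing samples, and a per-target degree conversion would be more accurate for wide displays; this just shows the shape of the calculation.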
Sounds good. Also a way to accept/reject the calibration based on the validation result? I can probably add a function that uses a webcam and face tracking for head-position feedback.
- gary
Yes, good point. A validation stage can (optionally) be run after an eye tracker calibrates. The user could then press one key to exit the setup routine, or a different key to rerun the calibration/validation process.
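The accept-or-rerun flow described above could be sketched as a simple loop; all three callables here are placeholders for whatever the tracker interface and keyboard handling actually provide, and the key names are arbitrary:

```python
def setup_eye_tracker(run_calibration, run_validation, get_keypress,
                      accept_key='return'):
    """Calibrate, validate, then let the user accept or rerun.

    run_calibration: callable that runs the tracker's calibration routine.
    run_validation:  callable that runs validation and shows the
                     accuracy feedback graphics.
    get_keypress:    callable that blocks until a key is pressed and
                     returns its name.
    """
    while True:
        run_calibration()
        run_validation()
        key = get_keypress()
        if key == accept_key:
            return True  # calibration accepted; exit the setup routine
        # Any other key repeats the calibration/validation cycle.
```

The accept/reject decision could instead be made automatically by comparing the validation error stats against a threshold, with the keypress as a manual override.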
-- gary