James

Results 194 comments of James

The API would look something like this (pseudocode below):

```python
@dataclass
class Keypoints:
    keypoints: np.ndarray
    confidence: Optional[np.ndarray] = None

    def __post_init__(self) -> None:
        """
        Validate the keypoints inputs.
        """
        n...
```
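For context, here is a fleshed-out sketch of what that dataclass might look like. The `(n, k, 2)` layout, the shape checks, and the `__len__` helper are my assumptions about the proposal, not confirmed API:

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class Keypoints:
    # (n_detections, n_keypoints, 2) array of (x, y) coordinates
    keypoints: np.ndarray
    # optional (n_detections, n_keypoints) array of per-point scores
    confidence: Optional[np.ndarray] = None

    def __post_init__(self) -> None:
        """Validate that the arrays have consistent shapes."""
        if self.keypoints.ndim != 3 or self.keypoints.shape[-1] != 2:
            raise ValueError("keypoints must have shape (n, k, 2)")
        if self.confidence is not None and (
            self.confidence.shape != self.keypoints.shape[:2]
        ):
            raise ValueError("confidence must have shape (n, k)")

    def __len__(self) -> int:
        # number of detected poses, mirroring len(sv.Detections)
        return len(self.keypoints)
```

The shape contract is the important part: every downstream annotator and loader can rely on it without re-validating.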

For annotating, we would have two annotators:

1. `PoseAnnotator` for pose estimation.
2. `FaceLandmarkAnnotator` for facial landmarks.

Here is an example API structure for the `PoseAnnotator`:

```python
class PoseAnnotator(BaseAnnotator):
    """...
```
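A minimal, dependency-free sketch of that annotator shape, assuming an `annotate(scene, keypoints)` signature that mirrors the existing `BoxAnnotator` pattern (a real implementation would draw circles and skeleton edges with OpenCV; here we just mark pixels):

```python
import numpy as np


class PoseAnnotator:
    """Sketch: mark each keypoint on an image (HxWx3 uint8 array)."""

    def __init__(self, color=(0, 255, 0)):
        self.color = np.array(color, dtype=np.uint8)

    def annotate(self, scene: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
        # keypoints: (n_detections, n_keypoints, 2) array of (x, y) pairs
        for pose in keypoints:
            for x, y in pose.astype(int):
                # skip points that fall outside the image bounds
                if 0 <= y < scene.shape[0] and 0 <= x < scene.shape[1]:
                    scene[y, x] = self.color
        return scene
```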

Notes on data loaders that we should prioritize:

- YOLOv8
- YOLOv7
- MediaPipe
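For the YOLOv8 case, a loader could be little more than an array conversion. The sketch below assumes the `results.keypoints.xy` / `results.keypoints.conf` attributes exposed by ultralytics' `Results` object; the function name is hypothetical:

```python
import numpy as np


def keypoints_from_yolov8(yolov8_results):
    """Sketch: extract (keypoints, confidence) arrays from a YOLOv8 result.

    Returns keypoints with shape (n, k, 2) and confidence with shape (n, k),
    matching the Keypoints layout discussed above.
    """
    kp = yolov8_results.keypoints
    # ultralytics returns torch tensors; fall back to np.asarray otherwise
    xy = kp.xy.cpu().numpy() if hasattr(kp.xy, "cpu") else np.asarray(kp.xy)
    conf = None
    if kp.conf is not None:
        conf = (
            kp.conf.cpu().numpy() if hasattr(kp.conf, "cpu") else np.asarray(kp.conf)
        )
    return xy, conf
```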

Keypoint detection is now supported in Roboflow, so it makes sense to revisit this issue. It might make sense for keypoints to be part of `sv.Detections` rather than...

Thank you for your research on this! We definitely need to think about the API. I can see `supervision` supporting landmarks, in which case `Face` is more appropriate since it...

My go-to is a fine-tuned object detection model for face detection (e.g. https://universe.roboflow.com/mohamed-traore-2ekkp/face-detection-mik1i).

Oh, my bad! We have had good results with `L2CSNet_gaze360_resnet50_90bins` from https://github.com/Ahmednull/L2CS-Net, which you looked at. I have only tested it on images where the primary subject of the image is...

Here is an example of an image where keypoints have been plotted with the proposed `sv.PoseAnnotator`:

Some keypoint and face landmark models also provide bounding boxes for the people or faces detected. Should these be part of `sv.Keypoint()`? If so, how should this information be included...
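One option worth prototyping: keep boxes out of the keypoints container entirely and derive them on demand as the tight bounds of each pose, as a fallback for models that do not emit boxes. A minimal sketch (the function name and xyxy output format are my assumptions):

```python
import numpy as np


def boxes_from_keypoints(keypoints: np.ndarray) -> np.ndarray:
    """Sketch: derive (n, 4) xyxy boxes as tight bounds of each pose.

    keypoints: (n, k, 2) array of (x, y) coordinates.
    """
    x_min = keypoints[..., 0].min(axis=1)
    y_min = keypoints[..., 1].min(axis=1)
    x_max = keypoints[..., 0].max(axis=1)
    y_max = keypoints[..., 1].max(axis=1)
    return np.stack([x_min, y_min, x_max, y_max], axis=1)
```

The trade-off is that a derived box is lossy relative to one the model predicts (e.g. it excludes the top of the head if no keypoint lands there), so model-provided boxes should win when available.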

I have just added a `sv.Keypoint.from_mediapipe()` loader. Test code:

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

base_options = python.BaseOptions(model_asset_path='pose_landmarker_heavy.task')
options = vision.PoseLandmarkerOptions(
    base_options=base_options,
    output_segmentation_masks=True)...
```