Andrew Hundt
I'm setting up to adapt the [tf object detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to detect grasp bounding boxes. Here is what needs to happen:
- [x] install tensorflow/models repo with tf object detection...
https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation https://github.com/ildoonet/tf-pose-estimation https://github.com/michalfaber/keras_Realtime_Multi-Person_Pose_Estimation/
It will let us train with rotations, which we need for the Cornell dataset. #2 is getting the pixel-wise training working with the 2D labels and with the loss function...
https://github.com/aurora95/Keras-FCN/blob/master/models.py#L190
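The rotation idea above can be sketched as follows, assuming a Cornell-style grasp label with a center pixel and a gripper angle (the function and parameter names here are illustrative, not from the codebase):

```python
import numpy as np

def rotate_grasp_label(center, angle, image_shape, theta):
    """Rotate a grasp label (center pixel, gripper angle in radians)
    by theta radians about the image center, matching a rotation
    applied to the input image during augmentation."""
    h, w = image_shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = center
    # rotate the labeled center point about the image center
    c, s = np.cos(theta), np.sin(theta)
    dx, dy = x - cx, y - cy
    new_x = cx + c * dx - s * dy
    new_y = cy + s * dx + c * dy
    # the gripper angle shifts by the same theta; wrap to [-pi/2, pi/2)
    # because a grasp rectangle is symmetric under a 180 degree turn
    new_angle = (angle + theta + np.pi / 2) % np.pi - np.pi / 2
    return (new_y, new_x), new_angle

# a 90 degree image rotation wraps an angle label of 0 to -pi/2
(_, _), a = rotate_grasp_label((10, 20), 0.0, (64, 64), np.pi / 2)
```

The wrap into [-pi/2, pi/2) is the detail that single-axis regression misses, which is one reason rotation support has to be built into the pipeline rather than bolted on.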
Pixel-wise training isn't making any progress; we need to figure out why. Options to try fixing / possible problem sources:
- [x] try pretraining on single prediction `delta_depth` - DONE...
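The single-prediction pretraining step could look roughly like this: evaluate the loss only at the one labeled grasp pixel instead of over the whole dense map. A minimal numpy sketch, with illustrative names (the real code uses Keras losses):

```python
import numpy as np

def single_pixel_loss(pred_map, label_yx, label_value):
    """Binary cross-entropy evaluated at only the labeled pixel,
    ignoring every other entry of the dense prediction map."""
    y, x = label_yx
    p = np.clip(pred_map[y, x], 1e-7, 1 - 1e-7)  # avoid log(0)
    return -(label_value * np.log(p) + (1 - label_value) * np.log(1 - p))

# an uninformative 0.5 prediction at the labeled pixel costs -log(0.5)
pred = np.full((4, 4), 0.5)
loss = single_pixel_loss(pred, (1, 2), 1.0)
```

If this restricted loss trains but the full pixel-wise loss does not, that points at the dense-label side (label maps, masking, or class balance) rather than the network itself.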
TODO: figure out why and either fix the test or the code
We need to be able to generate a 3D visualization of many poses and the predicted grasp success values to determine if the results look reasonable. Here is the model...
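A first pass at this visualization could map each pose to a 3D point colored by predicted grasp success. A sketch of the data preparation, assuming Nx3 gripper positions and success scores in [0, 1] (the resulting points and colors could then be rendered with any 3D plotting tool, e.g. matplotlib's mplot3d):

```python
import numpy as np

def poses_to_colored_points(poses, success,
                            fail_rgb=(1.0, 0.0, 0.0),
                            success_rgb=(0.0, 1.0, 0.0)):
    """Convert Nx3 gripper positions plus predicted grasp success
    values in [0, 1] into points with RGB colors linearly
    interpolated from red (fail) to green (success)."""
    poses = np.asarray(poses, dtype=float)
    s = np.clip(np.asarray(success, dtype=float), 0.0, 1.0)[:, None]
    low, high = np.array(fail_rgb, float), np.array(success_rgb, float)
    colors = (1 - s) * low + s * high
    return poses, colors

pts, cols = poses_to_colored_points(
    [[0.0, 0.0, 0.2], [0.1, 0.0, 0.3]], [0.0, 1.0])
```

Plotting all poses at once this way should make obviously unreasonable predictions (e.g. uniform scores, or success clustered away from the object) visible at a glance.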
https://twitter.com/andy_matuschak/status/955126796743098368
Currently, a typical output is as follows:
```
loss: 0.5930 - segmentation_single_pixel_binary_accuracy: 0.6826 - mean_pred_single_pixel: 0.3928 - mean_pred: 0.4465 - mean_true: 0.3928
```
mean_pred_single_pixel is lower than mean_pred, when it...
There is a crash when running grasp_train.py with the Gaussian loss enabled. Figure out and fix this, which brings us closer to completing #366.
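For reference while debugging, here is a defensive numpy sketch of building a 2D Gaussian weight map around a labeled grasp pixel. This is purely illustrative, not the project's implementation, and not a claim about the cause of the crash; it simply guards the two degenerate inputs (zero sigma, out-of-bounds center) that commonly break this kind of code:

```python
import numpy as np

def gaussian_label_map(shape, center_yx, sigma=4.0):
    """2D Gaussian centered on the labeled grasp pixel, peak 1.0.
    Guards against a degenerate sigma (division by zero / NaNs)
    and an out-of-bounds center (indexing errors downstream)."""
    h, w = shape
    y0, x0 = center_yx
    if not (0 <= y0 < h and 0 <= x0 < w):
        raise ValueError("grasp center %r outside image %r" % (center_yx, shape))
    sigma = max(float(sigma), 1e-6)
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - y0) ** 2 + (xs - x0) ** 2) / (2.0 * sigma ** 2))

g = gaussian_label_map((32, 32), (16, 8))
```

Checking the actual Gaussian loss code against these two failure modes would be a cheap first step before digging into the stack trace.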