
integrate lightning pose with anipose

Open • siddypot opened this issue 1 year ago • 18 comments

I'm looking to convert the checkpoint file and .pkl to .h5 file format, has anyone come up with a solution for this?

siddypot avatar Nov 21 '24 18:11 siddypot

Hi @siddypot, why do you need to do this?

ksikka avatar Nov 21 '24 19:11 ksikka

I would like to use anipose with lightning pose for 3d tracking. Anipose strictly accepts the .h5 file format.

siddypot avatar Nov 21 '24 19:11 siddypot

Ok, I'll look into this.

ksikka avatar Nov 21 '24 19:11 ksikka

Thank you !

siddypot avatar Nov 21 '24 19:11 siddypot

@ksikka it might be worth reaching out to the anipose people instead and seeing if we can make a PR to allow anipose to accept the csv format (it really should anyway) - let's discuss tomorrow

@siddypot you're referring to the format of the pose predictions right? Or are you also referring to the model weights themselves?

themattinthehatt avatar Nov 21 '24 20:11 themattinthehatt

I am referring to the model weights. Lightning Pose generates a checkpoint file (.ckpt), and within the .ckpt there is a .pkl file.

siddypot avatar Nov 21 '24 23:11 siddypot

@siddypot Maybe I am misunderstanding - anipose shouldn't need the LP checkpoint file. There is a larger anipose pipeline that runs inference with the pose estimation network and then runs triangulation on the pose estimation outputs. The first part will require LP integration with anipose, which we are currently thinking about, but it will be a bit more complex than just providing an LP checkpoint. On the other hand, you can run inference yourself with LP and then use the later part of the anipose pipeline to just run triangulation. That part will be easier to integrate.

Can you describe your current workflow a bit more? Are you running inference on new videos with Lightning Pose yourself, and then hoping to use those outputs in Anipose? Or do you want the anipose pipeline to take care of the inference as well?

themattinthehatt avatar Nov 22 '24 00:11 themattinthehatt

After training LP I am left with 2d pose estimations in CSV. Anipose is expecting a DLC model. Based on the DLC model, Anipose generates 2d pose estimation data in .csv, .pickle, and .h5 for all views, and then triangulates based on that data. Using LP I managed to get the .csv and the .pickle file, but anipose will not triangulate without all 3 files. I was hoping it would be a simple translation to turn the LP data into a DLC model, or, as you said, to take the 2d pose estimation data from LP and just use anipose for triangulation. Either way, I cannot get the LP data into Anipose, which is my biggest issue right now.

Sorry if I am not making too much sense, I am inexperienced with this technology.

siddypot avatar Nov 22 '24 01:11 siddypot

No problem at all! We're very happy to make the integration between LP and Anipose much smoother. Can you point us to the place in the anipose code where you are running into issues?

themattinthehatt avatar Nov 22 '24 01:11 themattinthehatt

model_folder in anipose config takes in the DLC project path. It would be great if we could get anipose to directly recognize the model folder of LP, but that may be a more difficult longer term project.

anipose analyze (line 141) invokes pose_videos. In pose_videos, DLC is used, and I can't seem to change that without everything breaking.

siddypot avatar Nov 22 '24 03:11 siddypot

Thanks for the pointers, we'll look into it and get back to you early next week.

themattinthehatt avatar Nov 22 '24 12:11 themattinthehatt

@siddypot I took a look at anipose, and it will take a bit of work to integrate LP. This is on our roadmap, but we won't be able to get to this until after the holidays. In the meantime I would suggest looking at the docs for aniposelib, which is the backend for anipose. This exposes the actual tools much more clearly.

To go this route you'll need to run inference on videos yourself using LP (see for example here: https://lightning-pose.readthedocs.io/en/latest/source/user_guide/inference.html) and then you can follow the example in the aniposelib docs (https://anipose.readthedocs.io/en/latest/aniposelib-tutorial.html). You'll have to modify this line in the tutorial:

d = load_pose2d_fnames(fname_dict, cam_names=cgroup.get_names())

to load csv files from LP in the proper format, but after that the rest of the tutorial should look the same.
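
One option would be to skip load_pose2d_fnames altogether and build the same dictionary straight from the LP csvs. Here's an untested sketch of what that could look like - the load_lp_csvs name is made up, and it assumes the standard three-row header (scorer / bodyparts / coords) with x, y, likelihood columns per keypoint:

import numpy as np
import pandas as pd

def load_lp_csvs(fname_dict, cam_names):
    # build a dict shaped like the tutorial's `d`, directly from LP prediction csvs
    points, scores, bodyparts = [], [], None
    for cam in cam_names:
        df = pd.read_csv(fname_dict[cam], header=[0, 1, 2], index_col=0)
        df.columns = df.columns.droplevel(0)  # drop the scorer level
        bodyparts = list(df.columns.get_level_values(0).unique())
        # (n_frames, n_joints, 2) array of x/y predictions
        xy = np.stack([df[bp][['x', 'y']].to_numpy() for bp in bodyparts], axis=1)
        # (n_frames, n_joints) array of confidences
        lik = np.stack([df[bp]['likelihood'].to_numpy() for bp in bodyparts], axis=1)
        points.append(xy)
        scores.append(lik)
    return {
        'points': np.stack(points),  # (n_cams, n_frames, n_joints, 2)
        'scores': np.stack(scores),  # (n_cams, n_frames, n_joints)
        'bodyparts': bodyparts,
    }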

I'll make sure to keep you up-to-date on the anipose integration from our side.

themattinthehatt avatar Dec 04 '24 17:12 themattinthehatt

I'm using anipose for triangulation and would like to follow this. I wrote a simple function to convert LP csv file to hdf file.

import pandas as pd

def lp2anipose(lp_path, anipose_path):
    # Read the LP prediction csv without parsing the 3-row header
    df = pd.read_csv(lp_path, header=None, index_col=0)
    # Convert object data to float data (skip the 3 header rows)
    arr = df.iloc[3:].to_numpy()
    new_arr = arr.astype('f')
    new_df = pd.DataFrame(data=new_arr)
    # Create multi-level index for columns (scorer / bodyparts / coords)
    column_arr = df.iloc[0:3].to_numpy()
    tuples = list(zip(*column_arr))
    new_df.columns = pd.MultiIndex.from_tuples(tuples, names=df.index[0:3])
    # Save in hdf format
    new_df.to_hdf(anipose_path, key='new_df', mode='w')

YitingChang avatar Dec 04 '24 18:12 YitingChang

thanks @YitingChang! I think you might be able to simplify this by doing

df = pd.read_csv(lp_path, header=[0, 1, 2], index_col=0)
df.to_hdf(anipose_path, key='new_df', mode='w')
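
If you want to double-check the conversion, a quick sanity check could look like this (just a sketch, reusing the lp_path and anipose_path from above):

import pandas as pd

# the .h5 should round-trip back to the csv contents
df_csv = pd.read_csv(lp_path, header=[0, 1, 2], index_col=0)
df_h5 = pd.read_hdf(anipose_path, key='new_df')
assert df_csv.shape == df_h5.shape
assert list(df_csv.columns) == list(df_h5.columns)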

themattinthehatt avatar Dec 04 '24 18:12 themattinthehatt

Great! I will do that.

YitingChang avatar Dec 04 '24 18:12 YitingChang

> I'm using anipose for triangulation and would like to follow this. I wrote a simple function to convert LP csv file to hdf file.

@YitingChang Have you successfully triangulated using this h5 converter? If so, could you provide documentation on how you did it?

After getting the h5 files for my videos, running

fname_dict = {
    'A': 'viewA.h5',
    'B': 'viewB.h5',
    'C': 'viewC.h5',
}

# cgroup is the calibrated CameraGroup from earlier in the aniposelib tutorial
d = load_pose2d_fnames(fname_dict, cam_names=cgroup.get_names())

score_threshold = 0.5

n_cams, n_points, n_joints, _ = d['points'].shape
points = d['points']
scores = d['scores']
bodyparts = d['bodyparts']

# drop low-confidence detections before triangulating
points[scores < score_threshold] = np.nan

points_flat = points.reshape(n_cams, -1, 2)
scores_flat = scores.reshape(n_cams, -1)

p3ds_flat = cgroup.triangulate(points_flat, progress=True)
reprojerr_flat = cgroup.reprojection_error(p3ds_flat, points_flat, mean=True)

p3ds = p3ds_flat.reshape(n_points, n_joints, 3)
reprojerr = reprojerr_flat.reshape(n_points, n_joints)

from the aniposelib tutorial doesn't seem to do anything at all.

siddypot avatar Dec 05 '24 19:12 siddypot

@siddypot Yes, I have successfully triangulated using this converter! I first create a configuration file. Then, I set the paths to data (see below) and use the triangulate function directly.

config_file: path to the configuration file
calib_folder: path to the calibration folder
video_folder: path to the video folder
pose2d_folder: path to the 2d pose folder (h5 files)
output_fname: path to the output file (.csv)
camera_names: a list of camera names

import os
from glob import glob

from anipose.triangulate import triangulate
import toml

# Load config file
config = toml.load(config_file)

# Create file name dictionary
pose_2d_files = glob(os.path.join(pose2d_folder, '*.h5'))
fname_dict = dict(zip(sorted(camera_names), sorted(pose_2d_files)))

# Triangulate
triangulate(config, calib_folder, video_folder, pose2d_folder,
            fname_dict, output_fname)
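
And in case it's useful, here's a rough sketch of the step before that - converting each LP prediction csv into the pose2d_folder using the lp2anipose helper above (lp_prediction_folder is just a placeholder for wherever the LP predictions are saved):

import os
from glob import glob

# convert each LP prediction csv into pose2d_folder before triangulating
for csv_path in sorted(glob(os.path.join(lp_prediction_folder, '*.csv'))):
    h5_name = os.path.basename(csv_path).replace('.csv', '.h5')
    lp2anipose(csv_path, os.path.join(pose2d_folder, h5_name))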

YitingChang avatar Dec 06 '24 22:12 YitingChang

@siddypot just wanted to check in to see if you've tried this out yet. I've talked with the anipose people and will work on integrating LP+Anipose sometime in January.

Btw, would you mind telling me which lab you're from and what kind of data you're working with? The LP team is beginning to work on a lot more functionality for multicamera setups, so I'm curious about the needs different people have.

themattinthehatt avatar Dec 13 '24 18:12 themattinthehatt