AWangji
  As shown in the figure above, what does this result mean?
The websites for all the 3rd-party packages cannot be accessed, e.g. http://posefs1.perception.cs.cmu.edu/OpenPose/3rdparty/windows/caffe_16_2020_11_14.zip. How can I download them?
Hi, on my Windows machine I used your portable Windows demo for the quick start, but after I run the command: there seems to be no content in the JSON...
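A minimal sketch for checking that situation, assuming the demo was run with --write_json pointing at a folder named output_json (the folder name is an assumption): each per-frame JSON has a "people" array, and an empty array means OpenPose simply detected no person in that frame.

    # Minimal sketch: inspect the per-frame keypoint JSON files written by
    # --write_json. The folder name "output_json" is an assumption.
    import json
    from pathlib import Path

    json_dir = Path("output_json")
    for f in sorted(json_dir.glob("*.json")):
        people = json.loads(f.read_text()).get("people", [])
        if not people:
            print(f"{f.name}: no person detected")
        else:
            kps = people[0].get("pose_keypoints_2d", [])
            print(f"{f.name}: {len(people)} person(s), {len(kps) // 3} body keypoints")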
How did you get the API-based version working? I keep running into an error: inference stops partway through when it hits a termination token.
Hi, I want to know where I can download the model files such as taichi or fashion. I have no idea where to find them.
Hi, thanks for your great work. But when I try to import a ".pk" file with your add-on, it fails with "No module named torch": I have no idea...
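A common cause is that the add-on runs inside Blender's bundled Python, which does not ship with PyTorch. A minimal sketch, assuming a recent Blender where sys.executable points at the bundled interpreter (run it from Blender's Python console; whether the plain CPU build of torch is enough for the add-on is an assumption):

    # Minimal sketch: install torch into Blender's bundled Python so the
    # add-on can import it. Run inside Blender's Python console.
    import sys, subprocess

    subprocess.check_call([sys.executable, "-m", "ensurepip", "--upgrade"])
    subprocess.check_call([sys.executable, "-m", "pip", "install", "torch"])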
The BVH produced by the default AlphaPose detection has no hand keypoints. Can the output be configured to include them? The OpenPose model can be set to output face and hand keypoints; can AlphaPose do the same, or does it need to be modified?
I used a text-to-motion model: https://github.com/korrawe/guided-motion-diffusion. It can directly generate an .npy file. Can I read that file and convert it to BVH directly? Or does our model already support exporting BVH from OpenPose (25 keypoints)?
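Before writing any converter it helps to look at what the .npy actually contains. A minimal sketch, assuming a file named results.npy in the current directory (the file name and the dict-style layout are assumptions; print the shapes and match them against the joint order your BVH exporter expects):

    # Minimal sketch: inspect the .npy exported by the motion model before
    # attempting a BVH conversion. "results.npy" is an assumed file name.
    import numpy as np

    data = np.load("results.npy", allow_pickle=True)
    if data.dtype == object:            # some repos save a dict of arrays
        for key, value in data.item().items():
            print(key, getattr(value, "shape", type(value)))
    else:
        print("raw array shape:", data.shape)   # e.g. (frames, joints, 3)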
Could anyone tell me how to obtain a Porcupine access token?
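The AccessKey is created for free in the Picovoice Console (https://console.picovoice.ai) after signing up, and is then passed to the engine at startup. A minimal sketch, assuming the pvporcupine Python package and one of its built-in keywords:

    # Minimal sketch: plug the AccessKey from the Picovoice Console into
    # pvporcupine. The built-in "porcupine" keyword is just an example.
    import pvporcupine

    porcupine = pvporcupine.create(
        access_key="YOUR_ACCESS_KEY",        # copied from console.picovoice.ai
        keywords=["porcupine"],
    )
    print("sample rate:", porcupine.sample_rate, "frame length:", porcupine.frame_length)
    porcupine.delete()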
Does the voice wake-up module of this project support deploying the wake-word training code locally? From the issues it seems everyone has to rely on the wake-word training site https://snowboy.hahack.com/. Has that part also been open-sourced?