
Could you open source the preprocess projection code

Open fangjin-cool opened this issue 5 years ago • 25 comments

I followed https://github.com/Durant35/SqueezeSeg/blob/master/src/nodes/segment_node.py by @Durant35, but something seems wrong with the code, as shown below.

k1

k2

The first image is visualized with OpenCV from your .npy file; the second was created with the code from Durant. They are the same KITTI frame, but they look noticeably different, and with this preprocessing the training result is very bad. Could you open-source the projection code? @BichenWuUCB

fangjin-cool avatar Mar 05 '19 11:03 fangjin-cool

The images are depth images; I converted the values to uint8 and multiplied by 8 for visualization.

fangjin-cool avatar Mar 05 '19 11:03 fangjin-cool
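(For anyone trying to reproduce that visualization: here is a minimal sketch of the scale-and-cast step described above. The synthetic depth values are illustrative stand-ins for the range channel, `tensor[:, :, 4]`, of a (64, 512, 5) .npy sample.)

```python
import numpy as np

# stand-in for the range channel (channel 4) of a (64, 512, 5) .npy sample
depth = np.random.default_rng(0).uniform(0.0, 30.0, size=(64, 512))

# multiply by 8 so typical KITTI ranges (~0-30 m) use more of the 0-255
# display range, then clip and cast to uint8 for display (e.g. cv2.imshow)
vis = np.clip(depth * 8.0, 0.0, 255.0).astype(np.uint8)
print(vis.dtype, vis.shape)
```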

Hey @jingleFun, have you found a solution to your problem? I am trying to achieve the same.

Lapayo avatar May 21 '19 10:05 Lapayo

@Lapayo Sorry, I have no solution.

fangjin-cool avatar May 21 '19 10:05 fangjin-cool

@jingleFun @Lapayo I wrote code for this. I'm fairly sure your posted code has some errors, but I can't be certain mine is right.

lonlonago avatar May 24 '19 07:05 lonlonago

I can send you the code to check.

lonlonago avatar May 24 '19 07:05 lonlonago

@lonlonago Can you share your code with me? I also have a problem with this!

MoonWolf9067 avatar May 26 '19 09:05 MoonWolf9067

my mail is [email protected] @MoonWolf9067

lonlonago avatar May 27 '19 02:05 lonlonago

I have solved the problem.

lonlonago avatar May 29 '19 02:05 lonlonago

So cool!!! @lonlonago How did you solve it?

MoonWolf9067 avatar May 30 '19 08:05 MoonWolf9067

Send me an email.


lonlonago avatar May 30 '19 09:05 lonlonago

@lonlonago, I have used your code, but there are still some black lines in the range image. I don't know why. Am I using it the wrong way?

Peeta586 avatar Jun 18 '19 13:06 Peeta586

I will check it tomorrow.


lonlonago avatar Jun 18 '19 13:06 lonlonago

Thank you.


Peeta586 avatar Jun 18 '19 13:06 Peeta586

```python
import os
import numpy as np

def lidar_to_2d_front_view_3(points, v_res=26.9 / 64, h_res=0.17578125):
    """Project an (N, 4) KITTI point cloud to a (64, 512, 5) front-view tensor.

    v_res: vertical resolution, 26.9/64 = 0.42 deg
    h_res: horizontal resolution, 90/512 = 0.17578125 deg
    """
    x_lidar = points[:, 0]  # roughly -71..73
    y_lidar = points[:, 1]  # roughly -21..53
    z_lidar = points[:, 2]  # roughly -5..2.6
    r_lidar = points[:, 3]  # reflectance, 0..0.99

    # Distance relative to the origin
    d = np.sqrt(x_lidar ** 2 + y_lidar ** 2 + z_lidar ** 2)

    # Convert resolutions to radians
    v_res_rad = np.radians(v_res)
    h_res_rad = np.radians(h_res)

    # PROJECT INTO IMAGE COORDINATES
    # Without the minus sign the image is mirrored left-right.
    # Range: about -pi..pi (columns -1024..1024)
    x_img_2 = np.arctan2(-y_lidar, x_lidar)  # horizontal (azimuth) angle
    # An arcsin-based alternative only covers half the range, because d is
    # always positive, so points behind and in front of the sensor would
    # project on top of each other:
    # x_img_2 = -np.arcsin(y_lidar / d)  # -1.57..1.57

    angle_diff = np.abs(np.diff(x_img_2))
    threshold_angle = np.radians(250)
    angle_diff = np.hstack((angle_diff, 0.001))  # pad: diff returns one fewer element
    angle_diff_mask = angle_diff > threshold_angle

    x_img = np.floor(x_img_2 / h_res_rad).astype(int)  # angle -> pixel column
    x_img -= np.min(x_img)  # shift so columns start at 0
    # To keep only x > 0 (points in front of the lidar) one could do:
    # x_img[x_lidar < 0] = 0

    # Vertical-angle alternative (range about -0.4137..0.078, i.e. rows -52..10;
    # needs the minus sign or the image comes out upside down):
    # y_img_2 = -np.arcsin(z_lidar / d)
    # y_img = np.round(y_img_2 / v_res_rad).astype(int)
    # y_img -= np.min(y_img)

    # Instead, derive the laser ring index from the azimuth wrap-around: each
    # jump larger than threshold_angle marks the start of a new scan line.
    # The jump at index i separates point i from point i+1, so shift the mask
    # by one before accumulating.
    y_img = np.cumsum(np.hstack(([False], angle_diff_mask[:-1]))).astype(int)
    y_img[y_img >= 64] = 63  # may exceed 64 lines; clamp

    x_max = int(360.0 / h_res) + 1  # width of the projected image

    # Fill the 5 channels from the paper: x, y, z, reflectance, range
    depth_map = np.zeros((64, x_max, 5))
    depth_map[y_img, x_img, 0] = x_lidar
    depth_map[y_img, x_img, 1] = y_lidar
    depth_map[y_img, x_img, 2] = z_lidar
    depth_map[y_img, x_img, 3] = r_lidar
    depth_map[y_img, x_img, 4] = d

    # Extract the central 90-degree field of view: 512 px wide, 64 high
    start_index = int(x_max / 2 - 256)
    result = depth_map[:, start_index:(start_index + 512), :]

    np.save(os.path.join('../data/samples/0001-3' + '.npy'), result)
    print('write 0001-3')
    return result
```

lonlonago avatar Jun 19 '19 02:06 lonlonago
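(Editorial note: the approach above can be sanity-checked with a self-contained sketch. The helper name, the synthetic two-ring point cloud, and the use of `np.diff(..., prepend=...)` are illustrative choices, not part of the original snippet; the channel order and resolutions follow it.)

```python
import numpy as np

def project_to_range_image(points, h_res_deg=0.17578125, n_rings=64):
    """Project an (N, 4) x/y/z/reflectance cloud to a (n_rings, 512, 5) tensor."""
    x, y, z, r = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    d = np.sqrt(x**2 + y**2 + z**2)          # range channel

    azimuth = np.arctan2(-y, x)              # horizontal angle, -pi..pi
    col = np.floor(azimuth / np.radians(h_res_deg)).astype(int)
    col -= col.min()                         # shift columns to start at 0

    # Ring index from azimuth wrap-around: a large jump in the time-ordered
    # azimuth marks the start of a new scan line.
    jumps = np.abs(np.diff(azimuth, prepend=azimuth[0]))
    ring = np.cumsum(jumps > np.radians(250)).astype(int)
    ring = np.clip(ring, 0, n_rings - 1)

    width = int(360.0 / h_res_deg) + 1
    img = np.zeros((n_rings, width, 5))
    img[ring, col] = np.stack([x, y, z, r, d], axis=1)

    # crop the central 90 degrees (512 columns)
    start = width // 2 - 256
    return img[:, start:start + 512, :]

# synthetic, time-ordered cloud: 2 rings x 1000 azimuth steps at 10 m range
az = np.tile(np.linspace(-np.pi, np.pi, 1000, endpoint=False), 2)
el = np.repeat([0.0, -0.01], 1000)
pts = np.stack([10 * np.cos(az) * np.cos(el), -10 * np.sin(az) * np.cos(el),
                10 * np.sin(el), np.full(2000, 0.5)], axis=1)
out = project_to_range_image(pts)
print(out.shape)  # (64, 512, 5)
```

The shape check confirms the crop logic; with real KITTI scans the points must be kept in capture order, since the ring detection relies on the azimuth sweep.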

Thanks very much! By the way, what does threshold_angle = np.radians(250) mean? Why 250 degrees? Thanks!


Peeta586 avatar Jun 20 '19 07:06 Peeta586

It marks the turning point of each full rotation. Any threshold above a certain angle works; the exact value is arbitrary.

lonlonago avatar Jun 20 '19 07:06 lonlonago
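(Concretely: within one revolution, consecutive azimuth steps are a fraction of a degree, while the wrap from +180° back to -180° is a jump of nearly 360°, so any threshold comfortably between the two works. A small synthetic check, with an illustrative two-revolution azimuth sweep:)

```python
import numpy as np

# two revolutions of 1000 azimuth samples each
az = np.concatenate([np.linspace(-np.pi, np.pi, 1000, endpoint=False)] * 2)
steps = np.abs(np.diff(az))
# within a revolution: ~0.36 deg per step; at the wrap: ~360 deg
print(np.degrees(steps.max()), np.degrees(np.median(steps)))
```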

The black lines here are most likely caused by the KITTI data itself: its computed vertical angle (y) is not accurate, so I handled it a different way.

lonlonago avatar Jun 20 '19 07:06 lonlonago

Got it 😊, thanks a lot!


Peeta586 avatar Jun 20 '19 08:06 Peeta586

@lonlonago I'm also getting those black lines using your code. Any idea on how this could be fixed?

TheCodez avatar Jul 13 '19 09:07 TheCodez

@lonlonago the projection code has errors. Can you paste a working version again?

kartikmadhira1 avatar Aug 12 '19 21:08 kartikmadhira1

@jingleFun @Lapayo I wrote code for this. I'm fairly sure your posted code has some errors, but I can't be certain mine is right.

I'm glad to see your comment. I'd also like to get a copy of your code for research and study; I've sent you a QQ email. Thank you very much.

zyw11270106 avatar Jan 20 '22 00:01 zyw11270106

Your email has been received...

Durant35 avatar Jan 20 '22 00:01 Durant35

Your email has been received...

@Durant35 Do you have the correct code? Could you send me a copy? Many thanks!

zyw11270106 avatar Jan 20 '22 01:01 zyw11270106


@lonlonago Thanks, I just forgot to mention you.

zyw11270106 avatar Jan 20 '22 01:01 zyw11270106

Do you have the projection code? Could you send me a copy? Thanks.


zyw11270106 avatar Jan 20 '22 02:01 zyw11270106