Train with my own datasets #9
Comments
Hi, our preprocessing of the Waymo Open Dataset also follows the format above, with only a few dataset-specific details. Feel free to ask questions here; I'm also working on a tutorial for training on custom datasets.
Hi, friend. I ran the Waymo data successfully. Afterwards, I converted my own dataset to your format according to the instructions and adjusted my configuration based on the Waymo configuration, but it failed to run, whether I added LiDAR or used depth maps. Here is the problem:

2023-09-01 10:11:51,646-rk0-train.py#959:=> Start loading data, for experiment: logs/streetsurf/owndata_2

It seems to be a config setting error, and I don't know how to solve it. My image size is 1920*1280. Here is my config using LiDAR:
Hi, I have just updated a debug tool for this. To use it, you can pass […]. After hitting "play" and then "pause" again, you will see a pop-up window like the one below, showing the LiDAR points (colored point clouds), the extracted occupancy grids (grey voxels), the street's AABB box (the large bold green box), the street's local coordinate axes (the large RGB arrows attached to the corner of the large green box), and the camera frustums (colored frustums that move if you hit "play" again).

debug_scene_5x.mp4

You can check whether the AABB box is correctly created, whether the ego car and cameras are above the occupancy grid surface, etc. You can also hit "play" again to check whether all the LiDAR frames are loaded correctly and whether the street's AABB box contains all the LiDAR points (within the camera viewports). Apart from the above, you can also zoom in to check whether the camera views are correct, e.g. whether they are incorrectly upside-down.

debug_scene_camera.mp4
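As a quick non-visual complement to the viewer, the AABB containment check can also be done numerically. Below is a minimal sketch, assuming the LiDAR points are an (N, 3) NumPy array already transformed into the street's local coordinate frame; the function name and the example values are hypothetical, not part of the repo's API:

```python
import numpy as np

def fraction_inside_aabb(points, aabb_min, aabb_max):
    """Return the fraction of (N, 3) points lying inside the axis-aligned box."""
    inside = np.all((points >= aabb_min) & (points <= aabb_max), axis=1)
    return inside.mean()

# Hypothetical example: two points inside a +/-10 box, one far outside
points = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0], [100.0, 0.0, 0.0]])
aabb_min = np.array([-10.0, -10.0, -10.0])
aabb_max = np.array([10.0, 10.0, 10.0])
print(fraction_inside_aabb(points, aabb_min, aabb_max))
```

If the fraction is noticeably below 1.0 for points within the camera viewports, the AABB (or the pose that maps points into the street frame) is likely wrong.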
Thanks!
I found that I had input an erroneous pose in scenario.pt.
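For others hitting similar pose problems: a quick way to catch a malformed pose before training is to check that each 4x4 rigid-body transform has an orthonormal rotation block with determinant +1 and the standard homogeneous bottom row. A minimal sketch (the function name is hypothetical; it assumes poses are stored as 4x4 NumPy arrays):

```python
import numpy as np

def check_pose(T, atol=1e-5):
    """Sanity-check a 4x4 rigid-body transform (e.g. camera-to-world)."""
    assert T.shape == (4, 4), "pose must be a 4x4 matrix"
    R = T[:3, :3]
    # Rotation block must be orthonormal: R @ R.T == I
    assert np.allclose(R @ R.T, np.eye(3), atol=atol), "rotation not orthonormal"
    # Proper rotation (no reflection/scale): det(R) == +1
    assert np.isclose(np.linalg.det(R), 1.0, atol=atol), "det(R) != +1"
    # Homogeneous bottom row must be [0, 0, 0, 1]
    assert np.allclose(T[3], [0.0, 0.0, 0.0, 1.0], atol=atol), "bad bottom row"

check_pose(np.eye(4))  # the identity pose passes all checks
```

Running this over every pose in the scenario before training tends to surface unit mistakes (e.g. a scaled rotation) much faster than a failed run does.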
@ventusff Dear author, how can I use the vis tools inside Docker on a remote server?
Thanks for your awesome work.
I want to apply it to generate camera images and LiDAR data from my own street datasets.
My datasets have LiDAR data, images, and poses, and I want to generate LiDAR data and camera images with different parameters.
Which data format should I use, and which modules and scripts do I need?
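When preparing a custom dataset, a common first step is to load a known-good scenario file (e.g. one produced by the Waymo preprocessing) and mirror its structure. Below is a minimal sketch of a generic structure dumper; the stand-in dictionary, its key names, and the commented file path are hypothetical illustrations, not the repo's actual schema:

```python
def summarize(obj, prefix="", depth=2):
    """Recursively print the keys and value types of a nested mapping."""
    if isinstance(obj, dict) and depth > 0:
        for key, value in obj.items():
            print(f"{prefix}{key}: {type(value).__name__}")
            summarize(value, prefix + "  ", depth - 1)

# For a file saved with torch.save, one would load it first, e.g.:
#   import torch
#   scenario = torch.load("path/to/scenario.pt")
# Hypothetical stand-in illustrating the kind of nesting to expect:
scenario = {
    "observers": {"camera_FRONT": {"hw": (1280, 1920)}},
    "objects": {},
}
summarize(scenario)
```

Comparing such a dump of your converted file against one from a working Waymo scenario makes missing or misnamed fields easy to spot.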