https://github.com/CAIC-AD/YOLOPv2
https://arxiv.org/abs/2208.11434
YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception
0. Rosbag to mp4
sip2@sip2-2021:~/catkin_od/src/object_detection/scripts/YOLOPv2$ cd ~
sip2@sip2-2021:~$ cd sample_code/
PythonRobotics/ rosbag2video/
sip2@sip2-2021:~/sample_code/rosbag2video$ ./rosbag2video.py --topic /camera/color/image_raw /media/sip2/SIP2022/2022-09-16-iam-5.bag
############# UNCOMPRESSED IMAGE ######################
/camera/color/image_raw with datatype: sensor_msgs/Image
finished
frame= 8798 fps=294 q=-1.0 Lsize= 28093kB time=00:05:51.80 bitrate= 654.2kbits/s speed=11.7x
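For reference, the same topic-to-mp4 conversion that rosbag2video.py performs can be sketched directly with rosbag, cv_bridge and OpenCV. This is only a minimal sketch, assuming uncompressed bgr8/rgb8 images on /camera/color/image_raw and a ROS Python environment where rosbag and cv_bridge import cleanly; the output filename is illustrative.

# Minimal sketch of the conversion done above by rosbag2video.py:
# read sensor_msgs/Image messages from the bag and write them to an mp4.
import cv2
import rosbag
from cv_bridge import CvBridge

BAG = '/media/sip2/SIP2022/2022-09-16-iam-5.bag'
TOPIC = '/camera/color/image_raw'

bag = rosbag.Bag(BAG)
info = bag.get_type_and_topic_info().topics[TOPIC]
fps = info.frequency or 25.0          # fall back to a guessed frame rate if unknown
bridge = CvBridge()
writer = None

for _, msg, _ in bag.read_messages(topics=[TOPIC]):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter('2022-09-16-iam-5.mp4',
                                 cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
    writer.write(frame)

writer.release()
bag.close()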
1. YOLOPv2
sip2@sip2-2021:~$ source ~/anaconda3/etc/profile.d/conda.sh
sip2@sip2-2021:~$ conda info --e
# conda environments:
#
base * /home/sip2/anaconda3
py38-test /home/sip2/anaconda3/envs/py38-test
py38-torch1-12-1 /home/sip2/anaconda3/envs/py38-torch1-12-1
py38-torch1-12-1-gpu-od /home/sip2/anaconda3/envs/py38-torch1-12-1-gpu-od
sip2@sip2-2021:~$ conda create -n py38-torch1-12-1-gpu-yolopv2 --clone py38-torch1-12-1
Source: /home/sip2/anaconda3/envs/py38-torch1-12-1
Destination: /home/sip2/anaconda3/envs/py38-torch1-12-1-gpu-yolopv2
Packages: 65
Files: 0
Preparing transaction: done
Verifying transaction: |
SafetyError: The package for pytorch located at /home/sip2/anaconda3/pkgs/pytorch-1.12.1-py3.8_cuda11.6_cudnn8.3.2_0
appears to be corrupted. The path 'lib/python3.8/site-packages/torch/nn/modules/upsampling.py'
has an incorrect size.
reported size: 11056 bytes
actual size: 11005 bytes
done
Executing transaction: - By downloading and using the CUDA Toolkit conda packages, you accept the terms and conditions of the CUDA End User License Agreement (EULA): https://docs.nvidia.com/cuda/eula/index.html
done
#
# To activate this environment, use
#
# $ conda activate py38-torch1-12-1-gpu-yolopv2
#
# To deactivate an active environment, use
#
# $ conda deactivate
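The SafetyError above is only a warning about a size mismatch in the cached pytorch package; the clone still completes. Before installing YOLOPv2's requirements it is worth confirming that the cloned environment still sees PyTorch and the GPU. A minimal check (the script name check_torch.py is just an example):

# Run inside the cloned env:
#   conda activate py38-torch1-12-1-gpu-yolopv2 && python check_torch.py
import torch

print('torch    :', torch.__version__)          # expected 1.12.1
print('cuda ok  :', torch.cuda.is_available())  # should be True for device='0' in demo.py
if torch.cuda.is_available():
    print('device 0 :', torch.cuda.get_device_name(0))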
(py38-torch1-12-1-gpu-yolopv2) sip2@sip2-2021:~/catkin_od/src/object_detection/scripts$ git clone https://github.com/CAIC-AD/YOLOPv2.git
Cloning into 'YOLOPv2'...
remote: Enumerating objects: 162, done.
remote: Counting objects: 100% (49/49), done.
remote: Compressing objects: 100% (44/44), done.
remote: Total 162 (delta 34), reused 6 (delta 5), pack-reused 113
Receiving objects: 100% (162/162), 57.29 MiB | 8.22 MiB/s, done.
Resolving deltas: 100% (60/60), done.
(py38-torch1-12-1-gpu-yolopv2) sip2@sip2-2021:~/catkin_od/src/object_detection/scripts$ cd YOLOPv2/
(py38-torch1-12-1-gpu-yolopv2) sip2@sip2-2021:~/catkin_od/src/object_detection/scripts/YOLOPv2$ pip install -r requirements.txt
(py38-torch1-12-1-gpu-yolopv2) sip2@sip2-2021:~/catkin_od/src/object_detection/scripts/YOLOPv2$ python demo.py --source data/example.jpg
Namespace(agnostic_nms=False, classes=None, conf_thres=0.3, device='0', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', nosave=False, project='runs/detect', save_conf=False, save_txt=False, source='data/example.jpg', weights='data/weights/yolopv2.pt')
/home/sip2/anaconda3/envs/py38-torch1-12-1-gpu-yolopv2/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484683044/work/aten/src/ATen/native/TensorShape.cpp:2894.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
384x640 Done. (1.498s)
The image with the result is saved in: runs/detect/exp/example.jpg
inf : (1.4985s/frame) nms : (0.0059s/frame)
Done. (1.566s)
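A stripped-down, single-image version of the inference that demo.py performs might look like the sketch below. It assumes data/weights/yolopv2.pt is the TorchScript model that demo.py loads with torch.jit.load, and that it returns ([detections, anchor_grid], drivable_area_seg, lane_line_seg) as in the repo's demo.py; the plain resize stands in for demo.py's letterbox preprocessing, and no NMS or drawing is done here.

# Minimal single-image inference sketch against the downloaded yolopv2.pt.
import cv2
import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = torch.jit.load('data/weights/yolopv2.pt', map_location=device).to(device).eval()

img0 = cv2.imread('data/example.jpg')                    # BGR, H x W x 3
img = cv2.resize(img0, (640, 384))                       # demo.py reports a 384x640 input
img = img[:, :, ::-1].transpose(2, 0, 1).copy()          # BGR -> RGB, HWC -> CHW
img = torch.from_numpy(img).to(device).float() / 255.0
img = img.unsqueeze(0)                                   # add batch dimension

with torch.no_grad():
    [pred, anchor_grid], seg, ll = model(img)            # assumed output structure (see demo.py)

print('detections   :', pred.shape)                      # raw boxes before NMS
print('drivable area:', seg.shape)                       # segmentation logits
print('lane lines   :', ll.shape)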