Video Streaming via GStreamer, with CUDA-assisted image rectification
This application has been developed in Nsight. It is an extended version of the GStreamerCameraAppSource project.
The goal of this application is to rectify each image after it is grabbed from the camera, but before the image data is pushed into the GStreamer pipeline.
Running as stream source:
- Place the Jetson in max performance mode:
    sudo ~/TK1_perf_set_max.sh
- Check that the CAMERAID.camconf rectification matrix files are in apps/Profiles. If you move the cameras out of their stereo pairs, you'll need to generate new rectification matrices.
- Launch the GStreamer source app:
    cd apps/Nsight/GstRectifiedCameraAppviaNsight
    ./launch.sh -udp PORT TARGET_IP NUM_HOWMANYFRAMES
- After execution (or at the end of the day), set the Jetson back to its default performance mode:
    sudo ~/TK1_perf_set_default.sh
Addendum: generating calibration matrices:
Necessary if a stereo camera pair has been dislodged.
- Grab 40+ images of a checkerboard in various configurations. Use the apps/GrabRawImageForCalibration applet to record raw RGB8-packed images.
- Convert the images to .png or another image format (MATLAB scripts are available in the group's university cloud directory, .../NMT/IST/ist_delat/.../LIFE_Camera_Calibrations/).
- Read the images into MATLAB's stereo calibration tool, generate the camera matrices, and save them out to workspace variables.
- Use the calculateNewTransforms.m script in LIFE_Camera_Calibrations to generate the rectification matrices (you may need to update the variable names that link to the camera calibration data).
- Write the data for each camera into a (Unix) text file named "CAMERAID.camconf". Content: a single row of floating-point numbers. The first 9 numbers are the camera intrinsic matrix (conventional ordering, i.e. the transposed MATLAB intrinsic matrix); the next 9 numbers are inv(rectification matrix); the last 2 numbers are radial distortion coefficients 1 and 2.
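As an illustration of that layout, here is a minimal sketch of how the streaming application might read such a file on the C++ side; the struct and function names are illustrative, not taken from the project sources:

```cpp
#include <array>
#include <fstream>
#include <stdexcept>
#include <string>

// Hypothetical container for one camera's .camconf data.
struct CamConf {
    std::array<float, 9> intrinsics;        // 3x3 intrinsic matrix, conventional ordering
    std::array<float, 9> invRectification;  // inv(rectification matrix), 3x3
    float k1, k2;                           // radial distortion coefficients 1 and 2
};

// Reads a CAMERAID.camconf file: a single row of 20
// whitespace-separated floats, ordered 9 + 9 + 2 as above.
CamConf readCamConf(const std::string& path) {
    std::ifstream in(path);
    if (!in) throw std::runtime_error("cannot open " + path);
    CamConf c;
    for (float& v : c.intrinsics)       in >> v;
    for (float& v : c.invRectification) in >> v;
    in >> c.k1 >> c.k2;
    if (!in) throw std::runtime_error("malformed camconf: " + path);
    return c;
}
```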
Rectification insertion
The lifecycle of the application, sans rectification, is approximately this:
- Parse control information, assemble pipeline syntax
- Initialize Pylon
- Initialize GStreamer, launch GStreamer process
- (Repeating)
  - Grab a raw (YUYV, aka YUY2, format) frame from the camera
  - Push the frame into the GStreamer pipeline via a buffer (see the appsrc sketch after this list)
  - Repeat until enough frames have been sent or GStreamer reports an error
- Cleanly shut down Pylon
- Exit.
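The push step can be sketched roughly as follows, assuming the pipeline is fed through a GStreamer appsrc element; function and variable names are illustrative and error handling is trimmed:

```cpp
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

// Copies one grabbed YUY2 frame into a fresh GstBuffer and hands
// it to the pipeline's appsrc element, which takes ownership of
// the buffer. Returns FALSE if the pipeline refused the data.
static gboolean push_frame(GstElement *appsrc,
                           const guint8 *frameData, gsize frameSize) {
    GstBuffer *buf = gst_buffer_new_allocate(NULL, frameSize, NULL);
    gst_buffer_fill(buf, 0, frameData, frameSize);
    return gst_app_src_push_buffer(GST_APP_SRC(appsrc), buf) == GST_FLOW_OK;
}
```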
Rectification in general splits into two parts: determining the rectification pattern, and applying that pattern.
Determining how to rectify an image is both time-consuming and boring: the mapping never changes at run time, so it can be computed outside of the streaming application, as part of a calibration process. Hence we don't determine the rectification here.
Instead, assuming we already have a rectification matrix, we need to apply it to each grabbed image. CUDA helps achieve this: for each pixel in the rectified-image-to-be, a CUDA thread is launched to fetch the data from the correct location in the grabbed image.
For performance reasons, we work with the raw YUY2 image format. Converting to an RGB8-style image would be meaningless (the camera produces either raw Bayer or YUY2, not RGB8!) and would cost performance, so it is out of the question.
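A minimal sketch of such a gather kernel, with nearest-neighbour sampling, assuming the 3x3 matrix H handed in is the combined pixel-to-pixel mapping; the project's actual kernel also applies the radial distortion coefficients, which this sketch omits:

```cuda
// One thread per output pixel: map the output coordinate back
// through H into the source image and copy the YUY2 data over.
// YUY2 packs two horizontal pixels into 4 bytes: Y0 U Y1 V.
__global__ void rectifyYUY2(const unsigned char* __restrict__ src,
                            unsigned char* __restrict__ dst,
                            const float* __restrict__ H, // 3x3, row-major
                            int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Back-map the output pixel into the source image (homography).
    float w  = H[6] * x + H[7] * y + H[8];
    int   sx = (int)((H[0] * x + H[1] * y + H[2]) / w + 0.5f);
    int   sy = (int)((H[3] * x + H[4] * y + H[5]) / w + 0.5f);

    int dstY    = (y * width + x) * 2;          // this pixel's Y byte
    int dstBase = (y * width + (x & ~1)) * 2;   // its macropixel's base

    if (sx < 0 || sx >= width || sy < 0 || sy >= height) {
        dst[dstY]        = 16;   // out of bounds: black luma,
        dst[dstBase + 1] = 128;  // neutral chroma
        dst[dstBase + 3] = 128;
        return;
    }
    int srcBase = (sy * width + (sx & ~1)) * 2;
    dst[dstY]        = src[(sy * width + sx) * 2]; // luma
    // Both threads of a pixel pair write the shared chroma bytes;
    // last write wins, which is acceptable for nearest-neighbour.
    dst[dstBase + 1] = src[srcBase + 1];           // U
    dst[dstBase + 3] = src[srcBase + 3];           // V
}
```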
The modified lifecycle, with CUDA rectification, is this:
- Parse control information, assemble pipeline syntax
- Parse the camera rectification config
- Initialize CUDA
- Allocate CUDA device memory for the images and matrices; move the matrix to device memory
- Initialize Pylon
- Initialize GStreamer, launch the GStreamer process
- (Repeating)
  - Grab raw (YUY2) image data
  - Copy the image data to device memory
  - Launch the rectification CUDA kernel
  - Write the rectified image data from the device back over the grabbed image frame (the sketch after this list covers these three steps)
  - Push the frame into the GStreamer pipeline via a buffer
  - Repeat until enough frames have been sent or something breaks
- De-allocate device memory
- Cleanly shut down Pylon
- Exit.
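The per-frame device round trip referenced in the loop above could look roughly like this, assuming the device buffers were allocated once at startup and reusing the hypothetical rectifyYUY2 kernel sketched earlier:

```cuda
#include <cuda_runtime.h>

// Copy the grabbed YUY2 frame to the device, rectify it, and copy
// the result back over the original frame so the existing
// push-to-pipeline code stays unchanged. d_src, d_dst and
// d_invRect are device buffers allocated once with cudaMalloc;
// all names here are illustrative.
void rectifyFrame(unsigned char* frame, int width, int height,
                  unsigned char* d_src, unsigned char* d_dst,
                  const float* d_invRect) {
    size_t bytes = (size_t)width * height * 2;  // YUY2: 2 bytes/pixel
    cudaMemcpy(d_src, frame, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    rectifyYUY2<<<grid, block>>>(d_src, d_dst, d_invRect, width, height);

    // Synchronous D2H copy also waits for the kernel to finish.
    cudaMemcpy(frame, d_dst, bytes, cudaMemcpyDeviceToHost);
}
```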
For details, refer to the Nsight project in the repository.
svn-info: $Id: app.rectified-video-streaming.md 829 2018-11-07 13:56:44Z elidim $