Video Streaming via GStreamer, with CUDA-assisted image rectification

This application has been developed in Nsight. It is an extended version of the GStreamerCameraAppSource project.

The goal of this application is to rectify each image after it is grabbed from the camera, but before the image data is inserted into the GStreamer pipeline.

Running as stream source:

Place the Jetson in max performance mode:

    sudo ~/TK1_perf_set_max.sh

Check that the CAMERAID.camconf rectification matrix files are in apps/Profiles. If you move the cameras out of their stereocam pairs, you'll need to generate new rectification matrices.

Launch the GStreamer source app:

    cd apps/Nsight/GstRectifiedCameraAppviaNsight
    ./launch.sh -udp PORT TARGET_IP NUM_HOWMANYFRAMES

After execution (or at the end of the day), set the Jetson back to normal performance:

    sudo ~/TK1_perf_set_default.sh

Addendum: generating calibration matrices:

Necessary if the cameras' stereo pair is dislodged.

Rectification insertion

The lifecycle of the application, sans rectification, is approximately this (a sketch of the grab loop follows the list):

  1. Parse control information, assemble pipeline syntax
  2. Initialize Pylon
  3. Initialize GStreamer, launch GStreamer process
  4. (Repeating)
    • Grab raw (YUYV aka YUY2 format) frame from camera
    • Push frame into GStreamer pipeline via buffer
    • Repeat until enough frames or an error in GStreamer
  5. Cleanly shut down Pylon
  6. Exit.
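As a sketch of step 4 (the grab-and-push loop), here is a minimal, hypothetical pairing of Pylon's grab API with a GStreamer appsrc. The function name, the appsrc handle, and the omitted pipeline wiring are assumptions, not the project's actual code:

    #include <pylon/PylonIncludes.h>
    #include <gst/gst.h>
    #include <gst/app/gstappsrc.h>

    // Grab frames from an already-configured Pylon camera and push the
    // raw bytes into an appsrc element. Error handling, caps negotiation
    // and the pipeline setup itself are omitted.
    void stream_frames(Pylon::CInstantCamera& camera, GstElement* appsrc,
                       size_t num_frames)
    {
        camera.StartGrabbing(num_frames);
        Pylon::CGrabResultPtr result;

        while (camera.IsGrabbing()) {
            // Block until the next frame arrives (5 s timeout).
            camera.RetrieveResult(5000, result,
                                  Pylon::TimeoutHandling_ThrowException);
            if (!result->GrabSucceeded())
                break;

            // YUY2 is 2 bytes per pixel; wrap the frame bytes in a
            // GstBuffer, which appsrc takes ownership of.
            gsize size = result->GetWidth() * result->GetHeight() * 2;
            GstBuffer* buf = gst_buffer_new_allocate(nullptr, size, nullptr);
            gst_buffer_fill(buf, 0, result->GetBuffer(), size);

            if (gst_app_src_push_buffer(GST_APP_SRC(appsrc), buf) != GST_FLOW_OK)
                break;  // downstream error in GStreamer
        }
        gst_app_src_end_of_stream(GST_APP_SRC(appsrc));
    }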

Rectification in general can be split into two main parts: determining the rectification pattern, and applying it.

Determining how to rectify an image is both time-consuming and boring, meaning it can be done outside of the streaming application as part of a calibration process. So we don't determine the rectification here.

Instead, assuming we have a rectification matrix, we need to apply it to each grabbed image. CUDA helps achieve this. For each pixel in the rectified-image-to-be, a CUDA thread is launched to fetch the data from the correct location in the grabbed image.
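A minimal sketch of such a kernel, under the assumption that the rectification matrix has been expanded offline into per-pixel lookup maps (map_x, map_y) giving the source coordinates for every output pixel; single-channel and nearest-neighbor for brevity, and all names are hypothetical:

    // One thread per output pixel: look up where this pixel comes from
    // in the grabbed image and copy it (nearest-neighbor).
    __global__ void rectify_kernel(const unsigned char* src, unsigned char* dst,
                                   const float* map_x, const float* map_y,
                                   int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height)
            return;

        int idx = y * width + x;
        int sx = __float2int_rn(map_x[idx]);  // source column
        int sy = __float2int_rn(map_y[idx]);  // source row

        // Outside the source image: write black rather than reading garbage.
        dst[idx] = (sx >= 0 && sx < width && sy >= 0 && sy < height)
                       ? src[sy * width + sx]
                       : 0;
    }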

For performance reasons, we work with the raw YUY2 image format. Converting to an RGB8-style image would be pointless (the camera produces either raw Bayer or YUY2, not RGB8!) and costly, and is therefore out of the question.
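As background (a general fact about the format, not project code): YUY2 packs two horizontally adjacent pixels into one 4-byte macropixel that shares a chroma pair, and a YUY2-preserving rectification kernel has to respect that packing:

    // YUY2 byte layout: Y0 U Y1 V = two pixels sharing one chroma pair.
    // Pixel 0 = (Y0, U, V), pixel 1 = (Y1, U, V).
    struct Yuy2Macropixel {
        unsigned char y0;  // luma, left pixel
        unsigned char u;   // shared blue-difference chroma
        unsigned char y1;  // luma, right pixel
        unsigned char v;   // shared red-difference chroma
    };

One consequence: the two lumas of an output macropixel may map to different source macropixels, so the shared chroma has to be taken from one of them (for example, the one that supplies Y0), a small approximation inherent to any YUY2-native remap.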

The modified lifecycle, with (CUDA) rectification, is this (a sketch of the plumbing follows the list):

  1. Parse control information, assemble pipeline syntax
  2. Parse camera rectification config
  3. Initialize CUDA
  4. Allocate CUDA device memory for images and matrices, move the matrix to device memory
  5. Initialize Pylon
  6. Initialize GStreamer, launch GStreamer process
  7. (Repeating)
    • Grab raw (YUY2) image data
    • Copy the image data to device memory
    • Launch the rectification CUDA kernel
    • Copy the rectified image data from device memory back over the grabbed frame
    • Push frame into GStreamer pipeline via buffer
    • Repeat until enough frames or something breaks
  8. De-allocate device memory
  9. Cleanly shut down Pylon
  10. Exit.
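Roughly, steps 3-4, the per-frame work in step 7, and the cleanup in step 8 correspond to the host-side plumbing below. This is a sketch under the assumption that the rectification is expressed as the lookup maps used by the kernel sketched earlier; buffer names and launch geometry are hypothetical:

    #include <cuda_runtime.h>

    // Hypothetical host-side plumbing for steps 3-4, 7 and 8.
    // width/height and the lookup maps come from the parsed .camconf.
    void run_rectified_stream(const float* host_map_x, const float* host_map_y,
                              int width, int height)
    {
        size_t img_bytes = (size_t)width * height * 2;  // YUY2: 2 bytes/pixel
        size_t map_bytes = (size_t)width * height * sizeof(float);

        unsigned char *d_src, *d_dst;
        float *d_map_x, *d_map_y;

        // Step 4: device buffers for the images; the maps are uploaded once.
        cudaMalloc(&d_src, img_bytes);
        cudaMalloc(&d_dst, img_bytes);
        cudaMalloc(&d_map_x, map_bytes);
        cudaMalloc(&d_map_y, map_bytes);
        cudaMemcpy(d_map_x, host_map_x, map_bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_map_y, host_map_y, map_bytes, cudaMemcpyHostToDevice);

        // Step 7, per grabbed frame (frame = the Pylon grab buffer):
        //   cudaMemcpy(d_src, frame, img_bytes, cudaMemcpyHostToDevice);
        //   dim3 block(16, 16);
        //   dim3 grid((width + 15) / 16, (height + 15) / 16);
        //   rectify_kernel<<<grid, block>>>(d_src, d_dst, d_map_x, d_map_y,
        //                                   width, height);  // YUY2-adapted
        //   cudaMemcpy(frame, d_dst, img_bytes, cudaMemcpyDeviceToHost);
        //   ...then push the frame into the GStreamer pipeline as before.

        // Step 8: release device memory.
        cudaFree(d_src);
        cudaFree(d_dst);
        cudaFree(d_map_x);
        cudaFree(d_map_y);
    }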

For details, refer to the Nsight project in the repository.

svn-info: $Id: app.rectified-video-streaming.md 829 2018-11-07 13:56:44Z elidim $