Save Framos IMX 253 images as JPEG on NVIDIA Jetson

I was recently working on a demonstration of a Framos IMX 253 monochrome camera with a 12-bit sensor on a Jetson Xavier. For the demo, I needed to save the images in JPEG format. I thought it might be useful for others to see the inner workings of a JPEG compression implementation on a Jetson, using the hardware-assisted JPEG compressor.

The image comes from the camera driver as 12-bit data stored in a 16-bit integer array. The sensor is monochrome, but the Jetson JPEG compressor accepts only a single YUV format as input.

For this demo, I skip the step of re-mapping the luminance values from the 12-bit range to an 8-bit range and simply take the lower 8 bits. A production implementation needs a proper scheme for this re-mapping, and on a Jetson it should take advantage of the hardware assist.

Step 1: get things set up


// Info is a structure provided from the caller that contains info about the frame 

// prepare to time execution of the jpeg compression    
auto start = std::chrono::steady_clock::now();

// prepare the output file (binary mode, since JPEG is binary data)
std::string outFile="/path/to/file.jpg";
std::ofstream* outFileStr = new std::ofstream(outFile, std::ios::binary);
if(!outFileStr->is_open())
        return false;

// create an instance of the NVIDIA Jetson JPEG encoder

NvJPEGEncoder* jpegenc = NvJPEGEncoder::createJPEGEncoder("jpegenc");
if(jpegenc == nullptr)
        return false;

// the jpeg output buffer size is 1.5 times the width*height 
unsigned long out_buf_size = Info.Width * Info.Height * 3 / 2;
unsigned char *out_buf = new unsigned char[out_buf_size];

Step 2: create an NVIDIA native buffer

// V4L2_PIX_FMT_YUV420M appears to be the only format supported by the Jetson JPEG encoder

// allocate the buffer    
NvBuffer buffer(V4L2_PIX_FMT_YUV420M, Info.Width, Info.Height , 0);

buffer.allocateMemory();

NvBuffer::NvBufferPlane* plane = &buffer.planes[0];

// convert the image luminance from uint16 to 8 bits (keeping the lower 8 bits) and copy into the NVIDIA buffer
for(int y=0; y < Info.Height;y++)
{
    for(int x=0; x < Info.Width;x++)
    {
        plane->data[x+(y*plane->fmt.stride)] = (unsigned char) (m_img[x+(y*Info.Width)]);
    }
}

plane->bytesused = 1 * plane->fmt.stride * plane->fmt.height;

Step 3: the Framos camera driver provides a monochrome image, so make the output truly monochrome by setting the chroma (Cb/Cr) planes to a neutral value (127 decimal).

// initialize the Cb plane to 127, which means zero color

plane = &buffer.planes[1];
char* data = (char *) plane->data;
plane->bytesused = 0;
for (int j = 0; j < plane->fmt.height; j++)
{
    memset(data,127,plane->fmt.width);
    data += plane->fmt.stride;
}
plane->bytesused = plane->fmt.stride * plane->fmt.height;

// initialize the Cr plane to 127, which means zero color

plane = &buffer.planes[2];
data = (char *) plane->data;
plane->bytesused = 0;
for (int j = 0; j < plane->fmt.height; j++)
{
    memset(data,127,plane->fmt.width);
    data += plane->fmt.stride;
}
plane->bytesused = plane->fmt.stride * plane->fmt.height;

Step 4: run the actual JPEG encode function, save the file, and measure the results:

// encodeFromBuffer updates out_buf_size to the actual encoded size
if (jpegenc->encodeFromBuffer(buffer, JCS_YCbCr, &out_buf, out_buf_size, 95) < 0)
        return false;

auto end = std::chrono::steady_clock::now();

outFileStr->write((char *) out_buf, out_buf_size);
outFileStr->close();

printf("JPEG encode elapsed time in nanoseconds: %lld\n",
       (long long) std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count());

delete[] out_buf;
delete outFileStr;
delete jpegenc;

Result:

I am seeing roughly 25 milliseconds for the encode and save of a 3840 x 2160 image.
