Reading Image Stream from RCCC Bayer Camera Sensor in Ubuntu
I am working with a LI-AR0820 GMSL2 camera, which uses the On-Semi AR0820 sensor and captures images in a 12-bit RCCC Bayer format. I want to read the real-time image stream from the camera, convert it to a grayscale image (using this demosaicing algorithm), and then feed it into an object detection algorithm. However, since OpenCV does not support the RCCC format, I can't use the VideoCapture class to get image data from the camera. I am looking for something similar that gives me the streamed image data in an array-like format so I can manipulate it further. Any ideas?
I'm running Ubuntu 18.04 with OpenCV 3.2.0 and Python 3.7.1.
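For clarity, the intended pipeline can be sketched as a generator; `rccc_to_gray` and `detect_objects` here are hypothetical placeholders for the demosaicer and the detector, not existing APIs:

```python
def process_stream(frames, rccc_to_gray, detect_objects):
    # frames: iterable of raw RCCC frames; the two callables are
    # placeholders for a demosaicer and a detector, not real APIs
    for raw in frames:
        gray = rccc_to_gray(raw)      # demosaic RCCC -> grayscale
        yield detect_objects(gray)    # run detection on the gray image
```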
EDIT: I am using the following code.
#include <cstdio>
#include <cstring>
#include <vector>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main() {
    // Each pixel is made up of 16 bits, with the high 4 bits always equal to 0
    unsigned char bytes[2];
    // Hold the pixel data in a vector
    std::vector<unsigned short int> data;
    // Read the camera data from the raw file
    FILE *fp = fopen("test.raw", "rb");
    if (!fp) {
        std::cerr << "Could not open test.raw" << std::endl;
        return 1;
    }
    while (fread(bytes, 2, 1, fp) == 1) {
        // The data comes in little-endian, so shift the second byte left
        // by 8 and OR in the first byte
        data.push_back(bytes[0] | (bytes[1] << 8));
    }
    fclose(fp);
    // Make a 1280x720 matrix of 16-bit unsigned integers
    cv::Mat imBayer = cv::Mat(720, 1280, CV_16U);
    // Matrix to hold the RGB data
    cv::Mat imRGB;
    // Copy the data in the vector into the matrix
    memmove(imBayer.data, data.data(), data.size() * 2);
    // Convert the GR Bayer pattern into RGB, putting it into the RGB matrix
    cv::cvtColor(imBayer, imRGB, CV_BayerGR2RGB);
    cv::namedWindow("Display window", cv::WINDOW_AUTOSIZE);
    // *15 because the image is dark
    cv::imshow("Display window", 15 * imRGB);
    cv::waitKey(0);
    return 0;
}
There are two problems with the code. First, I have to get a raw image file using fswebcam and then use the code above to read the raw file and display the image. I want to be able to access the /dev/video1 node and directly read the raw data from there instead of having to first save it and then read it separately. Second, OpenCV does not support the RCCC Bayer format so I have to come up with a demosaicing method.
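As a stopgap for the first problem, if the driver happens to support the plain read() I/O method, the same little-endian unpacking can be done straight from an open stream instead of a saved file. A minimal sketch, assuming the 1280x720, 16-bit-per-pixel layout from the code above (whether /dev/video1 actually supports plain reads depends on the driver):

```python
import numpy as np

def read_frame(stream, width=1280, height=720):
    # Read one frame of 16-bit little-endian pixels from an open binary stream
    buf = stream.read(width * height * 2)
    return np.frombuffer(buf, dtype="<u2").reshape(height, width)

# e.g.:
#   with open("/dev/video1", "rb") as f:
#       frame = read_frame(f)
```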
The camera outputs serialized data over a coax cable, so I use a deserializer (Deser) board with a USB 3.0 connection to connect the camera to my laptop. The setup can be seen here.
python image opencv video-streaming
Please provide some code showing how you're currently reading images from the sensor. If you have a byte stream (encoded or raw), you can convert it to an OpenCV Mat.
– zindarod
Nov 5 at 20:57
If your camera supports Video4Linux, you'll be able to read data.
– dhanushka
Nov 7 at 16:47
Is the camera connected to your machine via USB 3.0?
– Ulrich Stern
Nov 9 at 20:39
According to the camera’s data sheet, it is UVC compliant. So in principle, VideoCapture should work. Can you share more details how VideoCapture failed for you?
– Ulrich Stern
Nov 10 at 9:24
VideoCapture reads a weird green-looking image, like the one found at the bottom left corner of the first page here: leopardimaging.com/uploads/LI-USB30-OV13850_datasheet.pdf
– Goodarz Mehr
Nov 10 at 18:16
edited Nov 9 at 23:25 · asked Nov 3 at 17:52 · Goodarz Mehr
1 Answer (accepted)
If your camera supports the CAP_PROP_CONVERT_RGB property, you might be able to get raw RCCC data from VideoCapture: setting this property to False disables the conversion to RGB. So you can capture raw frames with code like this (error checking omitted for simplicity):
import cv2

cap = cv2.VideoCapture(0)
# disable converting images to RGB
cap.set(cv2.CAP_PROP_CONVERT_RGB, False)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # other processing ...
cap.release()
I don't know if this works for your camera. If you can get the raw images somehow, you can apply the demosaicing method (the version with the optimal filter) described in the Analog Devices app note. I wrote the following Python code, as described in the app note, to test the RCCC -> GRAY conversion.
import cv2
import numpy as np

rgb = cv2.cvtColor(cv2.imread('RGB.png'), cv2.COLOR_BGR2RGB)
c = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
r = rgb[:, :, 0]

# no error checking: c's shape must be a multiple of 2
rmask = np.tile([[1, 0], [0, 0]], [c.shape[0]//2, c.shape[1]//2])
cmask = np.tile([[0, 1], [1, 1]], [c.shape[0]//2, c.shape[1]//2])

# create an RCCC image by replacing 1 pixel of each 2x2 region
# in the monochrome image (c) with a red pixel
rccc = (rmask*r + cmask*c).astype(np.uint8)

# RCCC -> GRAY conversion
def rccc_demosaic(rccc, rmask, cmask, filt):
    # use border type REFLECT_101 to give correct results for border pixels
    filtered = cv2.filter2D(src=rccc, ddepth=-1, kernel=filt,
                            anchor=(-1, -1), borderType=cv2.BORDER_REFLECT_101)
    demos = (rmask*filtered + cmask*rccc).astype(np.uint8)
    return demos

# filter kernels
zeta = 0.5
kernel_4neighbor = np.array([[0, 0, 0, 0, 0],
                             [0, 0, 1, 0, 0],
                             [0, 1, 0, 1, 0],
                             [0, 0, 1, 0, 0],
                             [0, 0, 0, 0, 0]])/4.0
kernel_optimal = np.array([[0, 0, -1, 0, 0],
                           [0, 0, 2, 0, 0],
                           [-1, 2, 4, 2, -1],
                           [0, 0, 2, 0, 0],
                           [0, 0, -1, 0, 0]])/8.0
kernel_param = np.array([[0, 0, -1./4, 0, 0],
                         [0, 0, 0, 0, 0],
                         [-1./4, 0, 1., 0, -1./4],
                         [0, 0, 0, 0, 0],
                         [0, 0, -1./4, 0, 0]])

# apply the optimal filter (Figure 7)
opt1 = rccc_demosaic(rccc, rmask, cmask, kernel_optimal)
# parametric filter with zeta = 0.5 (Figure 5)
opt2 = rccc_demosaic(rccc, rmask, cmask, kernel_4neighbor + zeta*kernel_param)

# PSNR (cast to float first so the difference doesn't wrap around in uint8)
print(10 * np.log10(255**2 / ((c.astype(float) - opt1)**2).mean()))
print(10 * np.log10(255**2 / ((c.astype(float) - opt2)**2).mean()))
Input RGB image:
Simulated RCCC image:
Gray image from de-mosaicing algorithm:
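One practical note (not from the app note itself): the script above works on 8-bit data, while the AR0820 delivers 12-bit samples in 16-bit words, so a frame captured as in the question needs to be scaled down to 8 bits before this demosaicing step. A minimal sketch, assuming 12 significant bits per pixel (the function name is illustrative):

```python
import numpy as np

def to_uint8(frame16, bits=12):
    # frame16: uint16 array holding `bits` significant bits per pixel;
    # scale linearly to the 0..255 range
    return (frame16.astype(np.uint32) * 255 // ((1 << bits) - 1)).astype(np.uint8)
```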
One more thing: if your camera vendor provides an SDK for Linux, it may have an API for the RCCC -> GRAY conversion, or at least for getting the raw image. If the RCCC -> GRAY conversion is not in the SDK, their C# sample code should have it, so I suggest taking a look at their code.
Thanks for the solution! I'll test it as soon as I can. The vendor does provide an SDK, but it is for Windows only, and the output image is mosaiced (like the second image above).
– Goodarz Mehr
Nov 11 at 18:22
edited Nov 12 at 11:04 · answered Nov 11 at 8:41 · dhanushka