Old 21st June 2017, 02:21   #1  |  Link
luquinhas0021
The image enthusyast
 
Join Date: Mar 2015
Location: Brazil
Posts: 270
A modified dual camera sensor system

Huawei uses a 20 MP monochrome + 12 MP RGB sensor system. It is easy to see that the final images will have interpolation artifacts, because the 12 MP image has to be enlarged to 20 MP.
I am thinking of a different approach: two sensors with the same resolution. One captures the luminosity; the other captures the two color components. The result would live in the Lab color space.
Do you think my idea would give better results than Huawei's?
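
To make the idea concrete, here is a rough Python sketch of what each of the two sensors would have to record. This is only an illustration, not anything an existing camera does; it assumes scikit-image is available and uses a synthetic stand-in image:

Code:
# A rough sketch (assumes scikit-image; synthetic test image) of what
# each of the two proposed sensors would have to record.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

rgb = np.random.rand(4, 4, 3)        # stand-in for the scene
lab = rgb2lab(rgb)

L  = lab[..., 0]                     # what the luminosity sensor records
ab = lab[..., 1:]                    # what the chroma sensor would record

# Recombining the two "captures" gives back the original image, which is
# the appeal of the idea -- if such sensors could be built at all.
recombined = lab2rgb(np.dstack([L, ab[..., 0], ab[..., 1]]))
assert np.allclose(recombined, rgb, atol=1e-4)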
__________________
Searching for great solutions
luquinhas0021 is offline   Reply With Quote
Old 22nd June 2017, 03:25   #2  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,406
The physics of capturing the two color components doesn't make sense to me. What wavelengths would the sensors for "A" and "B" respond to?

When capturing color images for humans to view you need to capture the relative intensities of the red, green, and blue wavelengths. You cannot chop up the visible part of the electromagnetic spectrum into two normal sensor elements and get the information needed for a color image.

The sensors for a and b would need to respond to the ratios of red, green, and blue light, not their total intensities. Trying to imagine the quantum mechanics needed for a sensor element to respond to visible light in the same way a or b do scares me. Designing a material to do that makes developing room temperature superconductors sound easy.

Mathematically it is easy to imagine but developing a material with that complex of a response to the wavelengths of light it is exposed to is very hard.
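
To put a number on that point (a quick sketch only, assuming scikit-image is available; the pixel values are arbitrary):

Code:
# Sketch only (assumes scikit-image). A photodetector counts photons in
# its passband, so it can only report a nonnegative total -- but a must
# swing negative for greenish light and positive for reddish light.
import numpy as np
from skimage.color import rgb2lab

def lab_of(rgb):
    return rgb2lab(np.asarray(rgb, float).reshape(1, 1, 3))[0, 0]

# Neutral pixels: a = b = 0 at every brightness, while L varies freely.
for v in (0.1, 0.5, 0.9):
    print(lab_of([v, v, v]))

print(lab_of([0.2, 0.6, 0.2]))   # green-dominant: a comes out negative
print(lab_of([0.6, 0.2, 0.2]))   # red-dominant:   a comes out positive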
__________________
madVR options explained
Asmodian is offline   Reply With Quote
Old 22nd June 2017, 12:01   #3  |  Link
StainlessS
HeartlessS Usurer
 
StainlessS's Avatar
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
I was thinking pretty much exactly the same as Asmodian yesterday, but a couple of daft thoughts ...

1) An additional low-res green sensor, supplying G along with the two low-res R and B sensors. The electronics do the sums: (R,G,B) -> Lab(A,B).

2) G derived from the half-res luminosity (average) and the R,B sensors via electronic calculations (don't know if that is feasible); see the sketch below: (R,G,B) -> Lab(A,B).

Both suggestions are probably just as daft as the OP's question.
[EDIT: and they assume that a Lab luminosity sensor could be provided in the first place. The advantage would be in storage only.]
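
Rough sketch of the calculation in suggestion 2. Big caveat: it only works if the mono sensor's response happened to match a known luma weighting, which real spectral responses won't; the Rec.709 weights here are purely an assumption for illustration:

Code:
# Sketch of the calc, assuming (big if!) the mono sensor's response
# matched a known luma weighting. Rec.709 weights assumed for illustration.
KR, KG, KB = 0.2126, 0.7152, 0.0722

def recover_green(Y, R, B):
    return (Y - KR * R - KB * B) / KG

R, G, B = 0.6, 0.3, 0.1
Y = KR * R + KG * G + KB * B     # what the mono sensor would read
print(recover_green(Y, R, B))    # -> 0.3, the original G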
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???

Last edited by StainlessS; 22nd June 2017 at 12:57.
StainlessS is offline   Reply With Quote
Old 22nd June 2017, 13:44   #4  |  Link
dipje
Registered User
 
Join Date: Oct 2014
Posts: 268
1) Is the dual-camera system actually scaling the color image up to the monochrome resolution? Or is it downsampling the monochrome version to mix with the color information? I don't actually know.

2) Realize that a normal digital 'color camera' is already upsampled. It captures R, G, and B at different resolutions and must 'fill in the blanks' to get a full-resolution picture (demosaicing). For a 12 megapixel sensor, you could say it captures 3 megapixels worth of red, 3 megapixels worth of blue, and 6 megapixels worth of green (see the sketch after this list). To get to a final 12 megapixel output, it is already upscaling / interpolating a lot. Since the pixels are interleaved (you only know the red value for the top-left pixel, then only the green value for the pixel next to it, etc.), this process is not directly comparable to 'upscaling' or 'interpolation', but it's close enough. Upscaling the final output from 12 MP to 20 MP doesn't seem like such a big deal when some channels started at 3 MP anyway.

3) The full-res 20 MP monochrome sensor (so an actual 20 MP of luminosity information) can help a lot with the interpolation / demosaicing, since you no longer have to guess the luminosity at every pixel: you know what it is. This helps prevent artifacts.
But I doubt how much benefit it really brings, since the problem with Bayer arrays and sensors is the low-res color information it all starts with. Adding extra high-res luminosity data doesn't solve the problem of low-res color channels, the way I see it.

4) Don't forget that two sensors means two lenses, spaced a bit apart, so there is a slight area where the two pictures DO NOT overlap. In those non-overlapping areas you get no gain from the two sensors, or can't use them at all. This non-overlapping area can easily be worth 1 or 2 MP of detail. So once again, not that much difference between 12 MP and 20 MP compared to where you started.
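
Quick sanity check of the numbers in point 2 (an RGGB layout and a 3000x4000 sensor are assumed):

Code:
# Sanity check of point 2, assuming an RGGB layout on a 3000x4000 sensor.
import numpy as np

h, w = 3000, 4000                                   # ~12 megapixels
r = np.zeros((h, w), bool); r[0::2, 0::2] = True
g = np.zeros((h, w), bool); g[0::2, 1::2] = True; g[1::2, 0::2] = True
b = np.zeros((h, w), bool); b[1::2, 1::2] = True

for name, mask in (("R", r), ("G", g), ("B", b)):
    print(name, mask.sum() / 1e6, "MP")             # R 3.0, G 6.0, B 3.0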


The Sigma Foveon sensors try to battle this by using a single pixel / photocell that captures all three colors, capturing more color detail and therefore more apparent sharpness. But that has drawbacks as well (noise and speed, for example).
Their newer designs sacrifice color resolution in some channels, but at least the first channel is full-res, which means you start with one full-res color channel plus full-res luma.
This is a bit like having a full-res monochrome sensor with a green filter on it (so it captures only green, but captures it at full res) and then using two other lower-res sensors to capture red and blue. But why not then just go with full-res red and blue as well? Three sensors, three lenses. You align the shots, crop away the parts not covered by all three sensors, and you're left with full-res luma + full-res RGB.

Anything else seems like a waste, because the moment you combine two (or more) channels on one sensor (with a Bayer array), you're effectively chopping up the resolution of each channel again.
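
And a toy version of the "full-res green + lower-res red/blue" combination described above; the 2x factor and the nearest-neighbour upsampling are assumptions, just to show the shapes involved:

Code:
# Toy version of "full-res green + low-res red/blue"; the 2x factor and
# nearest-neighbour upsampling are assumptions, just to show the shapes.
import numpy as np

full_g = np.random.rand(8, 8)                 # full-res green-filtered sensor
half_r = np.random.rand(4, 4)                 # lower-res red sensor
half_b = np.random.rand(4, 4)                 # lower-res blue sensor

up = lambda c: np.kron(c, np.ones((2, 2)))    # crude 2x upsample

rgb = np.dstack([up(half_r), full_g, up(half_b)])
print(rgb.shape)                              # (8, 8, 3): full-res G, upscaled R/B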
dipje is offline   Reply With Quote
Old 22nd June 2017, 14:12   #5  |  Link
raffriff42
Retried Guesser
 
raffriff42's Avatar
 
Join Date: Jun 2012
Posts: 1,373
Here's how the Huawei P9's dual-camera system works (theverge.com)

The Huawei P9's dual camera can re-focus (businessinsider.com)

It seems the second, monochrome sensor is not only for resolution enhancement but also for low-light sensitivity and depth sensing, which is used to speed up auto-focus and (more interestingly) to create shallow depth-of-field effects, aka bokeh.
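
For the curious, a sketch of the depth trick. This is not Huawei's actual pipeline (which is unpublished), just the textbook version: it assumes OpenCV, already-rectified frames, hypothetical file names, and illustrative parameter values throughout:

Code:
# Not Huawei's actual pipeline (unpublished) -- just the textbook version
# of the trick, assuming OpenCV, rectified frames, and hypothetical file
# names. Parameter values are illustrative only.
import cv2
import numpy as np

left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

color   = cv2.imread("left.png")
blurred = cv2.GaussianBlur(color, (21, 21), 0)
far     = (disparity < 8)[..., None]          # small disparity = far away
bokeh   = np.where(far, blurred, color)       # blur background, keep subject
cv2.imwrite("bokeh.png", bokeh)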
raffriff42 is offline   Reply With Quote
Old 23rd June 2017, 05:23   #6  |  Link
luquinhas0021
The image enthusyast
 
Join Date: Mar 2015
Location: Brazil
Posts: 270
New idea

I have thought of a different two-sensor system: instead of one monochrome sensor and one sensor with red and blue Bayer filters, I'm thinking of one sensor with only a red filter and one sensor with only a blue filter.
Is a Bayer-style sensor with a single color component capable of resolving both luminosity and chroma information? I.e., would a sensor with only a red filter, for example, give me the red channel values and the luminosity ones?
__________________
Searching for great solutions
luquinhas0021 is offline   Reply With Quote
Old 4th July 2017, 22:33   #7  |  Link
DulcieEntwistle
Registered User
 
Join Date: Jun 2017
Posts: 1
I do believe that the monochrome sensor can be used to capture sharp shapes with a minimum amount of noise, and the RGB one is for "painting" those shapes.
But I might be deluding myself, because I'm such a newbie at these things.
DulcieEntwistle is offline   Reply With Quote
Old 5th July 2017, 03:39   #8  |  Link
StainlessS
HeartlessS Usurer
 
StainlessS's Avatar
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Quote:
Originally Posted by DulcieEntwistle View Post
I'm such a newbie
Yes indeed, Welcome to the forum DulcieEntwistle
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???
StainlessS is offline   Reply With Quote
Old 5th July 2017, 05:35   #9  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,406
Quote:
Originally Posted by DulcieEntwistle View Post
I do believe that the monochrome sensor can be used to capture sharp shapes with a minimum amount of noise, and the RGB one is for "painting" those shapes.
But I might be deluding myself, because I'm such a newbie at these things.
I do think this is correct in essence; it is much easier to make a low-light, low-noise, high-resolution monochrome sensor. Capturing a second, standard Bayer color image and interpolating the final image from both does sound ideal for getting low noise in low light.

However, I don't think they are using any true edge-finding or similar algorithms. Though edge finding has gotten great for some purposes, I don't think it belongs in image sensors (yet). I believe this is still the realm of the more standard types of image interpolation: standard demosaicing, but with a monochrome version to help.
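
Something like this, perhaps. Again, not Huawei's pipeline, just one standard way a full-res mono frame can guide the interpolation; it assumes SciPy and already-aligned frames:

Code:
# One standard way a full-res mono frame can guide the interpolation
# (a sketch, not Huawei's pipeline; assumes SciPy and aligned frames):
# upsample the color image plainly, then add back the high-frequency
# detail that only the mono sensor resolved.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

mono     = np.random.rand(8, 8)               # full-res mono stand-in
color_lo = np.random.rand(4, 4, 3)            # lower-res color stand-in

color_up  = zoom(color_lo, (2, 2, 1), order=1)        # plain interpolation
detail    = mono - gaussian_filter(mono, sigma=1.0)   # mono-only detail
sharpened = np.clip(color_up + detail[..., None], 0.0, 1.0)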

and Welcome.
__________________
madVR options explained

Last edited by Asmodian; 5th July 2017 at 05:37.
Asmodian is offline   Reply With Quote
Old 5th July 2017, 09:51   #10  |  Link
raffriff42
Retried Guesser
 
raffriff42's Avatar
 
Join Date: Jun 2012
Posts: 1,373
You guys need to read the links I posted above, instead of guessing.

Quote:
Here's how the Huawei P9's dual-camera system works (theverge.com)

RGB sensors have to filter out light in order to determine which colors go where. In doing so, they lose detail that could otherwise be used to make a sharper image. Huawei claims that using two sensors lets the P9 capture 270 percent more light than an iPhone 6S, and 70 percent more light than a Galaxy S7.
Quote:
The Huawei P9's dual camera can re-focus (businessinsider.com)

it's the first quality smartphone to include a dual camera — a feature that seems likely to show up on the iPhone 7. And that's allowed the Chinese manufacturer to build powerful depth-sensing software into the device, taking advantage of its binocular vision.

That's allowed Huawei to pack the P9 with a number of effects straight out of the futuristic world of computational photography, a technology that uses multiple small cameras to generate higher-quality images in small devices.

One of computational photography's best tricks is allowing users to adjust focus after the fact and mimic the look of wide-aperture DSLR lenses with nicely blurred out backgrounds.
Granted, these are pop-science sources, so the information may be unreliable. The fact that they partly contradict each other is a little worrisome.
raffriff42 is offline   Reply With Quote
Old 5th July 2017, 18:27   #11  |  Link
luquinhas0021
The image enthusyast
 
Join Date: Mar 2015
Location: Brazil
Posts: 270
Another option is to use a monochrome + 3-CMOS (RGB) sensor system. Then you ask me: "Why not use only a 3-CMOS?" Because the monochrome sensor is there to better capture low-light shots.
All sensors would have the same resolution. The image from the monochrome sensor and the one from the 3-CMOS would then be merged.
Another alternative is a sensor which captures monochrome and chromatic (one-channel) information at the same time.
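
A sketch of the merge step, assuming the two frames are the same resolution and perfectly aligned; the BT.601 full-range coefficients are an assumption, since the mono sensor's real spectral response would differ:

Code:
# Sketch of the merge, assuming equal resolution and perfect alignment;
# BT.601 full-range coefficients are an assumption.
import numpy as np

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, (b - y) * 0.564, (r - y) * 0.713

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.403 * cr
    b = y + 1.773 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.dstack([r, g, b]), 0.0, 1.0)

rgb_frame  = np.random.rand(4, 4, 3)       # noisy color capture (stand-in)
mono_frame = np.random.rand(4, 4)          # cleaner mono capture (stand-in)

_, cb, cr = rgb_to_ycbcr(rgb_frame)        # keep only the chroma
merged = ycbcr_to_rgb(mono_frame, cb, cr)  # mono supplies the luma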
__________________
Searching for great solutions

Last edited by luquinhas0021; 5th July 2017 at 19:02.
luquinhas0021 is offline   Reply With Quote
Old 6th July 2017, 21:07   #12  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,406
Quote:
Originally Posted by raffriff42 View Post
You guys need to read the links I posted above, instead of guessing.
I did read those links; they seem to say what I did, don't they?

That extra stuff is a side benefit; from my understanding it was mostly about less noise in low light. The added post-process focus and depth of field are fun, and once you have multiple sensors you can do stuff like that, but with only two lenses it isn't really where those computational-photography effects take off.
__________________
madVR options explained
Asmodian is offline   Reply With Quote
Old 6th July 2017, 22:22   #13  |  Link
raffriff42
Retried Guesser
 
raffriff42's Avatar
 
Join Date: Jun 2012
Posts: 1,373
Sorry Asmodian, you're right!
raffriff42 is offline   Reply With Quote
Old 23rd August 2017, 01:09   #14  |  Link
luquinhas0021
The image enthusyast
 
Join Date: Mar 2015
Location: Brazil
Posts: 270
A simple but great idea is, instead of using a monochrome sensor + RGB sensor, to use a monochrome sensor + RB sensor. This replacement gives:
a) no artifacts when the images of the two sensors are combined: a monochrome image plus an RB image is... a YCbCr image.
b) better spatial resolution: a 2x2 RGB Bayer array subsamples the R and B channels to 1/4 of the full sensor resolution. An RB Bayer array subsamples them to only 1/2 of the full sensor resolution. 2x more spatial resolution!
c) easy adaptation: a typical RB Bayer array is

R B
B R
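
A quick check of claim b, with the RGGB and RB-checkerboard layouts written out explicitly on a small grid:

Code:
# Quick check of claim b: sample density per channel in each layout.
import numpy as np

h, w = 4, 4
bayer_r = np.zeros((h, w), bool); bayer_r[0::2, 0::2] = True   # RGGB: R on 1/4
rb_r    = np.zeros((h, w), bool)
rb_r[0::2, 0::2] = True; rb_r[1::2, 1::2] = True               # R B / B R: R on 1/2

print(bayer_r.mean(), rb_r.mean())       # 0.25 vs 0.5 -> twice the R samples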
__________________
Searching for great solutions
luquinhas0021 is offline   Reply With Quote