Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
21st June 2017, 02:21 | #1 | Link |
The image enthusyast
Join Date: Mar 2015
Location: Brazil
Posts: 270
|
A modified dual camera sensor system
Huawei uses a 20 MP monochrome + 12 MP RGB sensor system. It's easy to see that the final images will have interpolation artifacts, because the 12 MP image has to be enlarged to 20 MP.
I am thinking of it this way: two sensors with the same resolution. One captures the luminosity; the other captures the two color components. The result would live in a Lab color space. Do you think my idea would generate better results than Huawei's?
__________________
Searching for great solutions |
22nd June 2017, 03:25 | #2 | Link |
Registered User
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,406
|
The physics of capturing the two color components doesn't make sense to me. What wavelengths would the sensors for "a" and "b" respond to?
When capturing color images for humans to view, you need to capture the relative intensities of the red, green, and blue wavelengths. You cannot chop the visible part of the electromagnetic spectrum into two bands with normal sensor elements and get the information needed for a color image.

The sensors for a and b would need to respond to the ratios of red, green, and blue light, not their total intensities. Trying to imagine the quantum mechanics needed for a sensor element to respond to visible light the same way a or b do scares me; designing a material to do that makes developing room-temperature superconductors sound easy. Mathematically it is easy to imagine, but developing a material with that complex a response to the wavelengths of light it is exposed to is very hard.
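To make the "ratios, not intensities" point concrete, here is a minimal sketch of the standard sRGB-to-Lab math (D65 white point; the matrix and Lab coefficients are the published sRGB/CIE values, the code itself is only illustrative). Two patches with the same total light end up with opposite signs of a, so no element that simply integrates photons over one band of wavelengths could output a directly:

```python
# Sketch: compute CIE Lab from linear sRGB (D65), showing that a and b are
# signed differences of nonlinear channel mixes, not band intensities.

def f(t):
    # CIE Lab nonlinearity
    return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

def rgb_to_lab(r, g, b):
    # linear sRGB -> XYZ (sRGB primaries, D65 white point)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # normalize by the D65 white point
    fx, fy, fz = f(x / 0.9505), f(y / 1.0), f(z / 1.089)
    L = 116 * fy - 16
    a = 500 * (fx - fy)   # a is a *difference* of two nonlinear channel mixes
    b_ = 200 * (fy - fz)  # likewise for b
    return L, a, b_

# Two patches with the same total light (r + g + b = 1.0) but opposite hue:
L1, a1, b1 = rgb_to_lab(0.8, 0.1, 0.1)  # reddish  -> a > 0
L2, a2, b2 = rgb_to_lab(0.1, 0.8, 0.1)  # greenish -> a < 0
```

Both a and b depend on the full RGB mix, which is exactly why a physical "a sensor" would need an impossible spectral response.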
__________________
madVR options explained |
22nd June 2017, 12:01 | #3 | Link |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
|
I was thinking pretty much exactly the same as Asmodian yesterday, but a couple of daft thoughts ...
1) An additional low-res green sensor, supplying G along with the two low-res R and B sensors; the electronics do the sums (R,G,B) -> Lab(a,b).
2) G derived from the half-res luminosity (average) and the R,B sensors via electronic calculation (don't know if that is feasible); again (R,G,B) -> Lab(a,b) in the electronics.
Both suggestions are probably just as daft as the OP's question. [EDIT: and both assume that a Lab luminosity sensor could be provided in the first place. The advantage would be in storage only.]
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ??? Last edited by StainlessS; 22nd June 2017 at 12:57. |
22nd June 2017, 13:44 | #4 | Link |
Registered User
Join Date: Oct 2014
Posts: 268
|
1) Is the dual-camera system actually scaling the color up to the monochrome resolution? Or is it downsampling the monochrome version to mix with the color information? I don't actually know.

2) Realize that a normal digital 'color camera' is already upsampled. It captures R, G, B at different resolutions and must 'fill in the blanks' to get a full-resolution picture (demosaicing). For a 12 megapixel sensor, you could say it captures 3 megapixels worth of red, 3 megapixels worth of blue, and 6 megapixels worth of green. To get a final output picture of 12 megapixels, it is already upscaling / interpolating a lot. Since the pixels are interleaved (you only know the red value for the top-left pixel, then only the green value for the pixel next to it, etc.) this process is not directly comparable to 'upscaling' or 'interpolation', but it's close enough. Upscaling the final output from 12 MP to 20 MP doesn't seem like such a big deal when certain channels started at 3 MP anyway.

3) The full-res 20 MP monochrome sensor (an actual 20 MP of luminosity information) can help a lot in the interpolation / demosaicing, since you no longer have to guess the luminosity at every pixel; you know what it is. This helps prevent artifacts. But I doubt how much benefit it really brings, since the real problem with Bayer arrays and sensors is the low-res color information to start with. Adding extra high-res luminosity data doesn't solve the problem of low-res color channels, the way I see it.

4) Don't forget that two sensors means two lenses, spaced a bit apart, so there is a region where the two pictures do NOT overlap; in those non-overlapping areas you get no gain from two sensors, or can't even use the second one. This non-overlapping area can easily be 1 or 2 MP worth of detail. So once again, not that much difference between 12 MP and 20 MP when you compare from where you started.

The Sigma Foveon sensors try to battle this by using a single pixel / photosite that captures all 3 colors, capturing more color detail and giving more apparent sharpness. But that has drawbacks as well (noise and speed, for example). Their newer designs sacrifice color resolution in some channels, but at least the first channel is full-res, which means you start with one full-res color channel plus full-res luma. This is a bit like having a full-res monochrome sensor with a green filter on it (capturing only green, but at full resolution) and then using two other lower-res sensors to capture red and blue. But why not then just go with full-res red and blue as well? Three sensors, three lenses: you align the shots, crop away the parts not covered by all three sensors, and you're left with full-res luma + full-res RGB. Anything else seems like a waste, because the moment you combine two (or more) channels on one sensor (with a Bayer array), you're effectively chopping up the resolution of each channel again. |
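The per-channel arithmetic in point 2 can be written down directly (plain counting on an RGGB tile, nothing vendor-specific):

```python
# Back-of-the-envelope: effective per-channel samples on a Bayer sensor.
# An RGGB 2x2 tile has 1 red, 2 green, and 1 blue photosite.

def bayer_channel_megapixels(total_mp):
    """Return (red, green, blue) megapixels actually sampled."""
    return total_mp / 4, total_mp / 2, total_mp / 4

r, g, b = bayer_channel_megapixels(12)
print(r, g, b)  # 3.0 6.0 3.0 -> every channel gets interpolated up to 12 MP
```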
22nd June 2017, 14:12 | #5 | Link |
Retried Guesser
Join Date: Jun 2012
Posts: 1,373
|
Here's how the Huawei P9's dual-camera system works (theverge.com)
The Huawei P9's dual camera can re-focus (businessinsider.com) It seems the second monochrome sensor is not only for resolution enhancement, but also for low-light sensitivity and depth sensing, which is used to speed up autofocus and (more interestingly) to create shallow depth-of-field effects, aka bokeh. |
23rd June 2017, 05:23 | #6 | Link |
The image enthusyast
Join Date: Mar 2015
Location: Brazil
Posts: 270
|
New idea
I have thought of a different two-sensor system: instead of one monochrome sensor and one sensor with red and blue Bayer filters, I'm thinking of one sensor with only a red filter and one sensor with only a blue filter.
Is a Bayer sensor with one color component capable of resolving luminosity and chroma information? I.e., would a Bayer sensor with only a red filter, for example, give me both the red channel values and the luminosity values?
__________________
Searching for great solutions |
5th July 2017, 03:39 | #8 | Link |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
|
Yes indeed, Welcome to the forum DulcieEntwistle
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ??? |
5th July 2017, 05:35 | #9 | Link | |
Registered User
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,406
|
However, I don't think they are using any true edge-finding or similar algorithms. Though edge finding has gotten great for some purposes, I don't think it belongs in image sensors (yet). I believe this is still the realm of the more standard types of image interpolation, like standard demosaicing but with a monochrome version to help. And welcome.
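As an illustrative sketch of how a full-res mono image could guide that interpolation (a generic luma-detail-transfer idea, not Huawei's actual pipeline; the `fuse_mono_color` helper and its 2x upsample factor are assumptions for the example):

```python
import numpy as np

def fuse_mono_color(mono, color_lowres):
    """Toy fusion: keep the low-res chroma, but take the sharp luminosity
    from the full-res monochrome sensor instead of guessing it.
    mono: (H, W) array in [0, 1]; color_lowres: (H//2, W//2, 3) array."""
    # Nearest-neighbour upsample of the color image to full resolution
    color = color_lowres.repeat(2, axis=0).repeat(2, axis=1)
    # Rec.709-style luma of the upsampled color
    luma = 0.2126 * color[..., 0] + 0.7152 * color[..., 1] + 0.0722 * color[..., 2]
    # Replace the blurry upsampled luma with the sharp mono measurement
    detail = mono - luma
    return np.clip(color + detail[..., None], 0.0, 1.0)

# Toy usage: flat 4x4 mono frame + flat 2x2 color frame
mono = np.full((4, 4), 0.5)
color = np.full((2, 2, 3), 0.25)
fused = fuse_mono_color(mono, color)
```

On these flat inputs the fused luma simply follows the mono frame; on real images the same arithmetic injects the mono sensor's high-frequency detail while leaving the chroma at its captured resolution.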
__________________
madVR options explained Last edited by Asmodian; 5th July 2017 at 05:37. |
5th July 2017, 09:51 | #10 | Link | ||
Retried Guesser
Join Date: Jun 2012
Posts: 1,373
|
You guys need to read the links I posted above, instead of guessing.
|
5th July 2017, 18:27 | #11 | Link |
The image enthusyast
Join Date: Mar 2015
Location: Brazil
Posts: 270
|
Another option is to use a monochrome + 3-CMOS (RGB) sensor system. Then you ask me: "Why not use only the 3-CMOS?" Because the monochrome sensor captures low-light shots better.
The resolution of all the sensors would be the same. The images from the monochrome sensor and from the 3-CMOS would then be merged. Another alternative is a sensor which captures monochromatic and chromatic (one-channel) information at the same time.
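The low-light benefit of merging the monochrome capture with the color capture can be illustrated with a toy noise model (equal, independent Gaussian noise on both sensors is an assumption here, not real sensor behavior): averaging the mono sample with the color-derived luma roughly halves the noise variance.

```python
# Toy model: averaging two independent noisy measurements of the same
# luminance halves the noise variance (so noise std drops by sqrt(2)).
import random

random.seed(0)
true_luma = 0.5
sigma = 0.1        # per-sensor noise standard deviation (toy value)
n = 100_000

def noisy():
    return true_luma + random.gauss(0.0, sigma)

single = [noisy() for _ in range(n)]                 # color-derived luma alone
merged = [(noisy() + noisy()) / 2 for _ in range(n)]  # averaged with mono sample

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# variance(merged) comes out at roughly half of variance(single)
```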
__________________
Searching for great solutions Last edited by luquinhas0021; 5th July 2017 at 19:02. |
6th July 2017, 21:07 | #12 | Link | |
Registered User
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,406
|
That extra stuff is a side benefit; from my understanding it was mostly about less noise in low light. The post-process focus and depth-of-field effects are fun, and once you have multiple sensors you can do things like that, but with only two lenses this isn't really where those computational-photography effects take off.
__________________
madVR options explained |
23rd August 2017, 01:09 | #14 | Link |
The image enthusyast
Join Date: Mar 2015
Location: Brazil
Posts: 270
|
A simple but great idea is, instead of using monochrome sensor + RGB sensor, to use monochrome sensor + RB sensor. This replacement gives:
a) No artifacts when the images of the two sensors are combined: a monochrome image plus an RB image is... a YCbCr-style image.
b) Better spatial resolution: a 2 x 2 RGB Bayer array subsamples the R and B channels to 1/4 of the full sensor resolution. In an RB Bayer array, each channel is only subsampled to 1/2 of the full sensor resolution. 2x more spatial resolution!
c) Easy adaptation: a typical RB Bayer array is just a 2 x 2 checkerboard tile:
R B
B R
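Point a) can be checked with the BT.601 luma relation: given full-res luma from the mono sensor plus R and B from the RB sensor, G follows algebraically. (This assumes the mono sensor's spectral response matches the luma weighting, which a real sensor would only approximate.)

```python
# Sketch: recovering full RGB from a monochrome (luma) sample plus R and B,
# using the BT.601 luma weights. Assumes the mono sensor measures luma
# exactly, which a real spectral response would not.

KR, KG, KB = 0.299, 0.587, 0.114  # BT.601 luma coefficients

def rgb_from_luma_rb(y, r, b):
    # Y = KR*R + KG*G + KB*B  ->  solve for G
    g = (y - KR * r - KB * b) / KG
    return r, g, b

# Round trip: start from a known RGB, form the luma, recover G exactly
r0, g0, b0 = 0.6, 0.4, 0.2
y = KR * r0 + KG * g0 + KB * b0
recovered = rgb_from_luma_rb(y, r0, b0)
```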
__________________
Searching for great solutions |