
Help 3D camera speculations: what new possibilities?

As the others are saying, it's more about quality of the sensor. The number of MP is a side issue to overall quality.

If I have two super-great sensors made by the same maker, same series, and one is x (some number) MP and the other is 2x MP, then the 2x MP one is probably better.

If I have a dodgy sensor at 8 MP and a great sensor at 5 MP, the great sensor will win every time, it'll just have fewer dots.

Where does the greatness come from if not MP? How big is it (how much light can it gather), how sensitive is it (can it really go to town in low light), how noisy is it? (Hey, it's a digital doodad converting something kinda analog (light) to digital - it WILL have noise.)

People like to call this the megapixel myth in digital photography.

Early results indicate the 5 MP in the 3vo may outclass the 8 MP in the Evo, for example.

Looks like HTC put a lot of love and research into the 3VO. I must give HTC the "slow clap" on a job well done.
 
Initially I felt the same way too, but I think it will actually be a better camera package overall. As many have said there is more to it than just the MegaPixels. The test images I've seen appear on par with the Evo 4G, and the video is very good. There is less stuttering and jitter during video recording and playback, which always plague camera phone modules.

I plan to go test them side by side once there is a live demo unit somewhere to try. I encourage you to do the same.

Earl, do you happen to know the Evo 4G's camera module model number (OmniVision BSI...)? I tried ifixit, which we have both found to be a great resource, but it does not list it specifically. I think we should compare the specs of the two different sensors and do some mythbusting!

I'm guessing they are both OmniVision sensors:
OmniVision

---edit---
nice, looks like you edited it into your previous post when you merged the thread over.
Now the tough part, finding out which sensor is in the unreleased Evo 3D...
Maybe it is buried in the FCC schematic - which got taken down and I don't have a copy saved
 
Edit: Didn't see jerofld's post before I made mine. I disagree that anything over 2MP is overkill for the newest cell phones.

Since he was referring to something I once said, allow me to qualify that:

there's a fundamental limitation to resolving power when you have a camera of such short focal length (single-digit mm) and such tiny aperture. And there's a standardized way to measure the resolving power of a given optical system, with something like this:

[Image: resolving-power test chart]


Using a cell phone camera to take a picture of a chart like this will allow you to identify the resolving power of the camera by analyzing the resulting photo and seeing where individual lines start to blur. I can assure you that no cell phone camera requires more than a 2MP sensor to capture the optics' ability to resolve light.

As you said, optics comes first. And there is a physical limitation to what cell phone camera optics can do. It falls somewhere between 1MP and 2MP.

Pushing the sensor to higher MP counts decreases the final image quality, since each pixel receives less light, which magnifies the sensor's noise. But you gain the ability to downsize the image in post-processing to reduce other errors like hand shake and/or motion blur.

Most people end up resizing for web viewing. I think the 2-5MP range is good for that.
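
For the curious, here's a tiny back-of-the-envelope sketch (Python/NumPy, nothing phone-specific, numbers made up) of why downsizing helps: averaging each 2x2 block of a noisy ~8 MP frame into one pixel of a ~2 MP frame cuts the random noise roughly in half.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ~8 MP frame: a flat gray scene plus random sensor noise.
h, w = 2448, 3264
scene = np.full((h, w), 128.0)
noisy = scene + rng.normal(0.0, 10.0, size=(h, w))  # noise std = 10

# Downsize by binning each 2x2 block into a single pixel (simple average),
# a crude stand-in for resizing the image down to ~2 MP in post.
binned = noisy.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print("noise std at full size:", (noisy - 128.0).std())   # ~10
print("noise std downsized:  ", (binned - 128.0).std())   # ~5: averaging 4 pixels halves it
```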
 
Guys - no disrespect - but we really need to split these threads into camera discussions for newbies and camera discussions for experts.

I think the two groups really confuse each other.

In fact - I'm kinda sure of it.

novox77 or marc - any comments on my idea?

PS - I am an old film dinosaur. I get light and taking pictures. I can spell digital camera and have used some. I know what I need to know to get the picture-taking job done. I would be your kinda digital camera newbie; I'll volunteer to guide too-expert posts to your side.
 
Guys - no disrespect - but we really need to split these threads into camera discussions for newbies and camera discussions for experts.

I think the two groups really confuse each other.

In fact - I'm kinda sure of it.

novox77 or marc - any comments on my idea?

Agreed, the beginning of this thread makes me feel like my cranium is filled with homogeneous body tissue simulating liquid. :D (see the FCC documents if you don't get that inside joke)

I think there should be a general thread with side-by-side comparison pics of what we have from the Evo 3D so far and old pics from the Evo 4G, and let people decide which are 'better'. There can also be a closer inspection of the different camera modules. Camera Mythbusting?
 
---edit---
nice, looks like you edited it into your previous post when you merged the thread over.
Now the tough part, finding out which sensor is in the unreleased Evo 3D...
Maybe it is buried in the FCC schematic - which got taken down and I don't have a copy saved

Have no fear, I've not closed the schematic since I got it. :)

It does list the FFC camera on its own page - but the part there is for the multiplexor to connect to the sensor. For the 5 MP sensors, it lists only the connector part number. Sorry, I meant to download the full suite of docs at the FCC, but only have the schematic. They must somehow qualify without listing the camera sensor because that's a non-RF component. :(

PS - We're all thread-merging and -splitting bunnies over here! :)
 
@Early: I have no problem with reorganizing this stuff. I'll have to recuse myself though, since I may not be able to sort objectively here.


And loved your remark about being able to "spell 'digital camera'" :) Reminds me of Space Quest V when Roger Wilco says, "I know kung fu, karate, tai-kwon-do, and several other [asian] words!"
 
Since he was referring to something I once said, allow me to qualify that:

there's a fundamental limitation to resolving power when you have a camera of such short focal length (single-digit mm) and such tiny aperture. And there's a standardized way to measure the resolving power of a given optical system, with something like this:

[Image: resolving-power test chart]


Using a cell phone camera to take a picture of a chart like this will allow you to identify the resolving power of the camera by analyzing the resulting photo and seeing where individual lines start to blur. I can assure you that no cell phone camera requires more than a 2MP sensor to capture the optics' ability to resolve light.

As you said, optics comes first. And there is a physical limitation to what cell phone camera optics can do. It falls somewhere between 1MP and 2MP.

Pushing the sensor to higher MP counts decreases the final image quality, since each pixel receives less light, which magnifies the sensor's noise. But you gain the ability to downsize the image in post-processing to reduce other errors like hand shake and/or motion blur.

Most people end up resizing for web viewing. I think the 2-5MP range is good for that.

I agree with what you are saying. The 2MP limit seems plausible with such small lenses, so I will stipulate to it being correct. I was only pointing out that, like you mentioned a bit towards the end, while your optical resolving ability may be fixed, oversampling with a higher sensor resolution lends itself to other benefits. Anyway, doesn't the Nyquist limit hold true here? If my optical resolving power is 2 MP, wouldn't I need a sensor with at least 4 MP to properly sample the optical signal at its max resolution?

All said though I think 5 MP is perfect.
 
I only understand Nyquist as it pertains to audio, and in particular one-dimensional wave signals. But you may be right.
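
For anyone who hasn't met Nyquist before, here's the audio-flavored version of the idea in a few lines of Python (purely an illustration, nothing to do with the phones): sample a tone at more than twice its frequency and you capture it; sample below that and it masquerades as a lower tone (aliasing).

```python
import numpy as np

def sampled_tone(freq_hz, sample_rate_hz, duration_s=1.0):
    """Return a freq_hz cosine sampled at sample_rate_hz."""
    t = np.arange(0.0, duration_s, 1.0 / sample_rate_hz)
    return np.cos(2 * np.pi * freq_hz * t)

# A 3 kHz tone sampled at 8 kHz (8 kHz > 2 * 3 kHz) is captured faithfully.
ok = sampled_tone(3000, 8000)

# The same 3 kHz tone sampled at only 4 kHz aliases: its samples are
# identical to those of a 1 kHz tone, so the two can't be told apart.
aliased = sampled_tone(3000, 4000)
looks_like = sampled_tone(1000, 4000)
print(np.allclose(aliased, looks_like))  # True
```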
 
@Early: I have no problem with reorganizing this stuff. I'll have to recuse myself though, since I may not be able to sort objectively here.


And loved your remark about being able to "spell 'digital camera'" :) Reminds me of Space Quest V when Roger Wilco says, "I know kung fu, karate, tai-kwon-do, and several other [asian] words!"

lmao!

OK, gang, I'll give it a shot in a bit (by tomorrow I think), and then users and mods will correct my moves while we sort it.

I think these things about picture quality really interest everyone, but need to be taken in bites. We've got so many pro photographers and semi-pros in here - and then there's the rest of us.

I used to hobby heavily in film, developed my own stuff and I care about the essentials, personally.

Rule One - always have a camera. The camera you're carrying in the moment far outclasses the best camera that's at home or on a store shelf because life's pictures will not wait.

And Rule One is what interests us digital camera noobs because we want the 3vo camera to be good enough.

And that leads to dinosaur Rule Two in photography.

Rule Two - if you're unhappy, either carry a better camera or exploit the camera you have. You can turn those picture frowns upside down.

Many have seen me post this before, so apologies, but to you other newbies, let's answer the question - how far can you go with even a cheap digital camera?

From - http://androidforums.com/htc-evo-4g...fix-our-video-camera-first-3.html#post1251381 -

(lots snipped away here and then) -

But note - creativity in photography is all about exploiting your camera, not giving in to it.


Made on a Fisher-Price toy camcorder - the PXL 2000 - from the '80s.

120 x 90 black and white pixels - 15 fps. Recorded video to an audiocassette.


They call it - Pixelvision -

YouTube - Whitney (shot in Pixelvision) PXL 2000

I've also said there are no rules when it comes to photography. That's not being contradictory, that's celebrating the duality of things. ;) :)

~~~~~

The camera on a phone can be a make or break decision for many people.

If you're one of those people, consider the 3vo very carefully, maybe it's not the right one for you, maybe it is. Sprint has other great options already and more coming soon.

Choose what's right for you. (The DX and SGS take great pictures, no doubt about it, so the newer Motos and Sammys are worth your inspection if the camera's the thing. I have zero clue which are better, honestly - remember, I can spell digital camera. ;))

This post is meant for people who are likely to be OK with the 3vo camera, regardless.
 
Anyway, doesn't the Nyquist limit hold true here? If my optical resolving power is 2 MP, wouldn't I need a sensor with at least 4 MP to properly sample the optical signal at its max resolution?

Nyquist spoken fluently here. :)

I'll take some shortcuts to save time. When Nyquist showed that, it was really all about certainty of wavelengths - but the complement of certainty is noise. And in a solution chain (lens -> sensor), noise matters.

If this were built like an insect's eyes, one lens per pick-up, then you'd be ok with a 1-1 mapping between lens and sensor MP.

What happens with a single lens is that the light arriving beyond the lens' line resolution - when magnified - shows up as a diffraction pattern of noise. So you do want more sensor elements to gather the diffracted as well as non-diffracted light, so you can average out the noise and get a better picture.

The same effect happens in film. You want a grade of film and paper with finer grain structure than the lens resolution in lines. You can use a loupe and see the line structure of better or worse lenses in a photo print. But the viewer's eye integrates the lower-quality picture into something more than sufficiently pleasant.

We have no such luxury without film, so the integration has to happen with software and hardware.

I'll freely stand corrected by any digital camera pro, but that's how I understand the physics of it.
 
Ah, I was going to make a sample file. Sorry I didn't get to it earlier (actually I did but never uploaded it). It's nothing super interesting, really, but if you're curious as to what a simple implementation can do, take a look at some of these files.

Pack contains a heavy and light denoising/enhancement example (multiple pics used to process one of them), "perspective fusion" sample with same lens moved by hand a few inches, and simple framerate interpolation example of a video (one lens). All use a similar principle to that which I described, but without additional masking or vanishing point compensation, etc (again, I don't have a 3D camera).

A system in which the lenses are fixed would increase the calculation speed while also greatly increasing the accuracy of both the analysis and the processing steps. This is because both the time offset and the lens displacement would be fixed; the near-random motion of the camera between shots in this demonstration makes a lot of the calculation inherently inaccurate, compared to a system that fixes both of those things.

Does anyone actually have any 3D pictures or videos I can process? I would love a 'real world' test rather than a poor-quality (but slower-to-make!) approximation like this one. At least it'll be fun to take a whack at, no? :)
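
To give a feel for the kind of thing being described (this is just a stripped-down toy in Python/NumPy, not the actual pipeline, and the filenames are placeholders): take two handheld shots of the same scene, brute-force the small global shift that lines them up, and average.

```python
import numpy as np
from PIL import Image

def load_gray(path, scale=4):
    """Load an image as float grayscale, downscaled to keep the shift search cheap."""
    img = Image.open(path).convert("L")
    img = img.resize((img.width // scale, img.height // scale))
    return np.asarray(img, dtype=np.float32)

def best_shift(ref, aux, max_shift=8):
    """Brute-force the integer (dy, dx) that best lines aux up with ref (scored on a crop)."""
    crop = (slice(max_shift, -max_shift), slice(max_shift, -max_shift))
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((ref[crop] - np.roll(aux, (dy, dx), axis=(0, 1))[crop]) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

ref = load_gray("shot_a.jpg")   # placeholder filenames
aux = load_gray("shot_b.jpg")

dy, dx = best_shift(ref, aux)
aligned = np.roll(aux, (dy, dx), axis=(0, 1))

# Averaging the two lined-up frames keeps the scene but knocks the random noise down.
denoised = (ref + aligned) / 2.0
Image.fromarray(np.clip(denoised, 0, 255).astype(np.uint8)).save("denoised.png")
```

With fixed lenses the shift search above goes away, since the displacement between the two views is a known, constant property of the hardware - which is exactly why the fixed-geometry case is both faster and more accurate.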
 
Hey, many thanks, will download asap and comment later! :)

And yes, yes there are some side-by-side still images pasted up around here, but I think if we find you a few of those raw, it'll be better.

Also - novox77 posted some from youtube before - can you do the youtube capture thing?
 
Interesting. I was able to make sense of your noise removal shots, and zoomed in there is a definite difference in noise, although I've not yet decided whether there is also a loss of detail (it seems the photo gets slightly softer).

One thing that has occurred to me (and forgive me if it's already been addressed, I've only skimmed the thread) is that in my limited experience in photography and noise reduction, a lot of the better noise reduction systems operate on the principle that there is a consistent noise "signature" in each CCD/CMOS sensor, and so if you establish a profile with the noise signature, you can more effectively identify and eliminate noise instead of photo detail.

In our case though, wouldn't we be using two different sensors with slightly different noise signatures, thereby rendering this approach ineffective?
 
Interesting. I was able to make sense of your noise removal shots, and zoomed in there is a definite difference in noise, although I've not yet decided whether there is also a loss of detail (it seems the photo gets slightly softer).

One thing that has occurred to me (and forgive me if it's already been addressed, I've only skimmed the thread) is that in my limited experience in photography and noise reduction, a lot of the better noise reduction systems operate on the principle that there is a consistent noise "signature" in each CCD/CMOS sensor, and so if you establish a profile with the noise signature, you can more effectively identify and eliminate noise instead of photo detail.

In our case though, wouldn't we be using two different sensors with slightly different noise signatures, thereby rendering this approach ineffective?
This algorithm implicitly takes into consideration the particular point that the noise signatures are different. This is what allows it to determine what is noise and what isn't between shots -- the noise will appear more "random" from one camera to the other. If it were two identical sensors with identical noise signatures, a cloud of patterned noise would simply appear to have 'moved' between one camera's picture and the other's, confounding the motion-compensation (MC) step.

If you look closely, the lines and objects are actually more defined in the noise-reduced picture, even though I used a higher threshold (I'm speaking of the 'heavy' sample, not the 'light' one). An optimal approach would use an ISO- and luma-dependent threshold. Overall, though, this is yet another (hidden) reason why it would be more accurate to use the dual-camera approach for this same method, since we aren't relying on a pattern approximation (or limited by it, in the case of a single lens), but instead on an analysis of the differences between the two shots over what can be accounted for by vector mapping. (In a nutshell.) :)

This also frees the algorithm from a reliance on a heavy (and somewhat processor-intensive) cluster analysis and noise pattern recognition procedure, and the approximations inherent to such a method. It focuses instead on a reliable displacement-compensation algorithm, itself greatly facilitated (in speed and accuracy) by a small fixed-lens, fixed-time system.
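
To make the "what is noise vs. what is real" decision concrete, here's a toy sketch (Python/NumPy) of the core agree-then-average, disagree-then-keep idea; it is not the actual algorithm, and the threshold value is made up.

```python
import numpy as np

def merge_denoise(ref, aligned, threshold=12.0):
    """Blend two already-aligned frames: average where they agree, keep ref where they don't.

    Small per-pixel differences are treated as independent sensor noise (safe to average);
    large differences are treated as real disparity/occlusion and left untouched.
    """
    diff = np.abs(ref - aligned)
    agree = diff <= threshold               # True where the two sensors roughly agree
    out = ref.copy()
    out[agree] = (ref[agree] + aligned[agree]) / 2.0
    return out

# Toy demo: same scene in both frames, only the (independent) noise differs.
rng = np.random.default_rng(1)
scene = np.tile(np.linspace(0, 255, 256, dtype=np.float32), (256, 1))
frame_a = scene + rng.normal(0, 8, scene.shape)
frame_b = scene + rng.normal(0, 8, scene.shape)

merged = merge_denoise(frame_a, frame_b)
print("noise std before:", (frame_a - scene).std())   # ~8
print("noise std after: ", (merged - scene).std())    # noticeably lower
```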
 
Just on the idea of HDR, this has to be possible (even if it isn't an available feature), simply because my digital camera creates HDR images by taking two shots and merging them and it does an amazing job, but the important point is that the two shots are taken within a split second of each other. On this handset the images could be taken at exactly the same time, so how can there be a resulting problem with the image merge?

Out of the box it might not do it, but I have Vignette on my Desire and that does a few things that the default camera doesn't, including better digital zooming than the stock app. Not a feature I often use, but clearly the Vignette software is doing something a little better than the stock software. So if app writers are allowed to use both lenses, then all sorts of possibilities open up! It could be that two lenses on a handset actually lead to better 2D images.
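
As a rough illustration of how simple the merge step itself could be (assuming you could somehow get one frame from each lens at different exposures, which the stock software may not allow, and using placeholder filenames): OpenCV's Mertens exposure fusion blends the well-exposed parts of each frame and doesn't even need the exposure times.

```python
import cv2
import numpy as np

# Placeholder filenames: the same scene captured underexposed and overexposed,
# ideally one frame from each lens, already aligned.
dark = cv2.imread("underexposed.jpg")
bright = cv2.imread("overexposed.jpg")

# Mertens exposure fusion: no exposure times and no separate tone-mapping step needed.
merge = cv2.createMergeMertens()
fused = merge.process([dark, bright])          # float result, roughly in [0, 1]

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```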
 
I've definitely changed my stance from what I said earlier in this thread, after seeing fattank post the "middle" composite picture of my stereo pair in another thread:

http://androidforums.com/htc-evo-3d/352803-stereo-photos-not-evo-3d.html#post2810725

Traditional HDR requires a stationary subject, different exposure times, and a stable camera. If you can do this with the E3D and then apply the algorithm fattank used to merge, HDR should work. I'm still unclear if the resulting image could be a 3D HDR shot, though I'd be happy with the 2D HDR.

A better 2D image seems doable now, and we really wouldn't need to modify the phone. Just take a shot in 3D, extract the stereo pair, and apply the merge. It's conceivable that one could write an app to automate this process though.
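
The "extract the stereo pair" step is the easy part, assuming the capture can be had as a side-by-side image (a .jps is literally a JPEG with the two views packed left/right; an MPO would need its embedded JPEGs pulled out first). A minimal sketch with a placeholder filename:

```python
from PIL import Image

# Placeholder filename: a side-by-side stereo JPEG (e.g. a .jps file).
sbs = Image.open("stereo_side_by_side.jps")

w, h = sbs.size
view_a = sbs.crop((0, 0, w // 2, h))      # one eye's view
view_b = sbs.crop((w // 2, 0, w, h))      # the other eye's view
# (Which half is which eye depends on the file's convention; swap if depth looks inverted.)

view_a.save("left.png")
view_b.save("right.png")
# From here the pair can go into a merge/denoise step, or a lenticular tool.
```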
 
I was just wondering about the EVO 3D's cameras. Both rear cameras are 5 MP because of the 3D feature. Why couldn't they make one of the cameras 8 MP that would automatically downgrade to 5 MP when switched to 3D and upgrade back to 8 MP when switched to 2D? Is this even possible? I know that my EVO 4G gives me the option to switch the resolution of the camera all the way down to 1 MP. Maybe I should patent my idea? :)
 
I was just wondering about the EVO 3D's cameras. Both rear cameras are 5 MP because of the 3D feature. Why couldn't they make one of the cameras 8 MP that would automatically downgrade to 5 MP when switched to 3D and upgrade back to 8 MP when switched to 2D? Is this even possible? I know that my EVO 4G gives me the option to switch the resolution of the camera all the way down to 1 MP. Maybe I should patent my idea? :)

Too late, Jobs already patented it!:p

Seriously though, I think 8 MP is getting to, or possibly passing, the upper limit of usability for a camera phone sensor. This may sound counter-intuitive, but you would actually get more info per pixel with the 5 MP than the 8. The reason for this is the micro-lens over each pixel: the more megapixels, the smaller each lens has to be, which equates to less light gathered at each pixel.
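
To put rough numbers on that (back-of-the-envelope only; the sensor size here is an assumption, since we don't know the exact parts):

```python
# Assume a 1/3.2"-class sensor, roughly 4.54 x 3.42 mm, for both pixel counts.
sensor_w_mm, sensor_h_mm = 4.54, 3.42
area_um2 = sensor_w_mm * sensor_h_mm * 1e6      # total sensor area in square microns

for label, pixels in [("5 MP", 5.0e6), ("8 MP", 8.0e6)]:
    pixel_area_um2 = area_um2 / pixels          # area available to each pixel
    pitch_um = pixel_area_um2 ** 0.5            # side length of a (square) pixel
    print(f"{label}: ~{pitch_um:.2f} um pitch, ~{pixel_area_um2:.2f} um^2 per pixel")

# Roughly 1.76 um pixels at 5 MP vs 1.39 um at 8 MP on the same die:
# each 8 MP pixel gets only about 5/8 of the light an equivalent 5 MP pixel would.
```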
 
I was just wondering about the EVO 3D's cameras. Both rear cameras are 5 MP because of the 3D feature. Why couldn't they make one of the cameras 8 MP that would automatically downgrade to 5 MP when switched to 3D and upgrade back to 8 MP when switched to 2D? Is this even possible? I know that my EVO 4G gives me the option to switch the resolution of the camera all the way down to 1 MP. Maybe I should patent my idea? :)
The reason for 5MP cameras instead of 8MP was, likely more than anything, because of cost and not technical limitations.
 
You can expect added features to be available other than standard 2D and 3D capture on the EVO 3D. Burst/Sport mode will utilize alternating camera lenses. The other mode will be a panoramic effect, once again using both lenses. You can expect further details on the dual-lens options coming with the EVO 3D on June 3rd.
BSOD


This did not happen
 
That's true, I guess you could pull the stereoscopic image apart. Is there any PC software that does this now? If so, is this software able to do anything interesting with the images when they are separated?

Actually, yes. Perhaps. Or not.

I am working on a project that takes a stereo pair and allows you to create a lenticular print, at home, and not much in the way of specialized equipment is required. That said, the lenticular materials are not readily available except over the net, so it will take a little work.

One thing: it is not perfect, and some people start thinking that it is some wonder application for stunning 3D hard-copy image production. What we are doing is for creating one specific type of 3D print for a very, very narrow part of the market. My only part is writing some initial technical crap that the vendor will use for a very specific use.

You might want to have a look here:

Lenticular Image Creator by Jameson Bennett

This will help explain what can sometimes be done with some stereo images. I make no guarantees that it will work with images produced with the Evo 3d.

Can someone tell me how the images are created? Are they interlaced by the camera automatically or is a pair created that the phone interlaces? I see pairs taken with the Evo and it is almost always a stereo pair, side by side.
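
For what a basic two-view interlace looks like (I can't speak for how the camera or the Lenticular Image Creator does it; this is just the crudest possible version, with placeholder filenames): alternate pixel columns from the left and right images, which a real workflow would first resample to match the lens pitch.

```python
import numpy as np
from PIL import Image

# Placeholder filenames: a stereo pair, same size and roughly aligned.
left = np.asarray(Image.open("left.png").convert("RGB"))
right = np.asarray(Image.open("right.png").convert("RGB"))

# Crude two-view lenticular interlace: even pixel columns from one eye, odd from the other.
interlaced = left.copy()
interlaced[:, 1::2, :] = right[:, 1::2, :]

Image.fromarray(interlaced).save("interlaced.png")
```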
 