Thursday, August 22, 2013

Pentax HippoCam/FotoDiox RhinoCam™ Advantages over "Normal" Stitched Panoramas

I was asked this question today on the Pentax Discuss Mailing List: "I'm curious what qualities you expect from this that you wouldn't get from a careful normal multi-exposure, perhaps with a panoramic head … perspective correction?" 

I thought it was a good question, so here is my (fairly long) response. Feel free to correct any errors in my thinking.


In a nutshell, it comes down to the technical differences between starting with a spherical projection vs. a flat one, when the end result must be flat.
There are two sides to producing a panorama image: Taking & Making, so let's consider the differences on both sides.

On the Taking side, what you call a "normal" panorama means rotating the camera around the center point of the tripod stalk (a rotational panorama). To do it properly, you need to position the nodal point of the lens over that rotational point. If you simply rotate around the tripod hole on your camera body, you are going to have problems with objects that are closer to your camera. As you mentioned, it also helps to have a pano head. The HippoCam does not rotate, so there is no concern over finding the nodal point of a lens and no equipment needed to position that nodal point over the rotation point of your tripod. Instead, the camera travels in a plane across the 6x7 image circle, with some overlap to allow stitching.
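To get a feel for why the nodal point matters in the rotational method, here is a rough back-of-the-envelope sketch (all the numbers are hypothetical, just to illustrate the scale of the problem):

```python
import math

def parallax_shift_mm(pivot_offset_mm, rotation_deg, focal_mm, near_m, far_m):
    """Approximate image-plane parallax (in mm) between a near and a far
    subject when the camera rotates about a point offset from the lens's
    entrance pupil (the "nodal point" in pano parlance).
    Small-angle approximation; good enough for a feel of the scale."""
    # Lateral translation of the entrance pupil caused by the rotation.
    t = pivot_offset_mm * math.sin(math.radians(rotation_deg))  # mm
    near_mm, far_mm = near_m * 1000, far_m * 1000
    # Image-plane shift difference between the two subjects.
    return focal_mm * t * (1 / near_mm - 1 / far_mm)

# Hypothetical numbers: pivot 100 mm behind the pupil, a 15-degree swing,
# a 50 mm lens, subjects at 1 m and 100 m.
shift = parallax_shift_mm(100, 15, 50, 1.0, 100.0)   # roughly 1.3 mm
# 1.3 mm of misregistration on a ~24 mm wide sensor is hundreds of pixels,
# which is exactly why near objects "ghost" without a nodal rail.
```

With the pivot exactly at the nodal point (offset 0), the shift is zero, which is the whole point of a pano head. The HippoCam sidesteps the problem entirely because it never rotates.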

Now think for a moment about the image quality the sensor receives from a lens designed for the size of your sensor, vs. a lens designed to cover a 6x7 negative. In your "normal" method you are working with a lens that normally sacrifices some quality at the corners. So a "normal" panorama image has overlapping weak corners at each "seam" of the process. With a larger-format lens, the weak corners aren't even being sampled: the APS-C sensor is sliding right across the middle of the 6x7 image circle. (One could also effectively minimize this when doing "normal" rotational panoramas on an APS-C sensor by using 35mm "full frame" lenses, since the smaller sensor likewise crops away the corners.)
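The arithmetic here is simple enough to sketch. (The 90 mm image-circle figure is an assumed, typical value for a 6x7 lens, not a measured one, and this only considers the sensor centered in the circle; the sweep does move it off-center, but a 6x7 circle still leaves far more margin than a native APS-C one.)

```python
import math

def corner_reach(sensor_w_mm, sensor_h_mm, circle_diam_mm):
    """Fraction of the lens's image-circle radius that the sensor's
    corners reach when the sensor sits centered in the circle.
    1.0 means the corners land at the very edge of the circle,
    where lenses are optically weakest."""
    half_diag = math.hypot(sensor_w_mm, sensor_h_mm) / 2
    return half_diag / (circle_diam_mm / 2)

# APS-C sensor (23.7 x 15.7 mm) behind a lens built for 6x7
# (image circle assumed ~90 mm):
centered = corner_reach(23.7, 15.7, 90)                    # ~0.32
# The same sensor behind a lens designed exactly for APS-C:
native = corner_reach(23.7, 15.7, math.hypot(23.7, 15.7))  # 1.0
```

Roughly a third of the radius versus all of it: the 6x7 lens is only ever asked to perform in its sweet spot.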

That brings us to the Making part. In the "normal" panorama process you have to do two things: stitch, then correct distortion. First, let's talk about the stitch. Here's an example of a simple stitched image: http://www.altostorm.com/images/corrector/sample_1_original.jpg

Two things:

First, if you have ever produced a normal pano like this, you know the raw stitch wasn't rectangular. The original was a bowtie shape. You had to crop off pixels to get down to the USABLE rectilinear area. In short, there is pixel "waste", a cost: you have in effect used a much smaller part of your sensor (especially vertically) than you started with.


Secondly, depending upon the focal length of the lens you used for the individual pano frames, problems often creep in on parts of the image at the blends. These are called "stitching artifacts", and they can give away the fact that an impressive-looking image was made up of segments. This is generally not critical for web-resolution images, but if you want to make big wall-sized prints, those things have to be dealt with in some way.

The HippoCam/RhinoCam™ stitching process is much easier technically because we are not stitching spherically, but only "flat stitching". It is a completely different process in Photoshop. In theory the pixels should PERFECTLY OVERLAP from one frame to the next (as opposed to an algorithm that must BLEND spherically distorted pixels in a pleasing way). No stitching artifacts are introduced into the process, and you throw away no pixels. Assuming the HippoCam is level, you should lose very few pixels vertically and get to use almost the full 23.7mm of sensor width in the vertical dimension.
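A toy sketch makes the "flat stitch" idea concrete. Because the camera only translates, each frame is the same scene shifted by a known offset, so assembling them is just pasting at offsets; the overlapping pixels are identical, so later tiles can simply overwrite earlier ones with no blending at all. (This is an idealized illustration, not the actual Photoshop workflow.)

```python
import numpy as np

def flat_stitch(tiles, offsets, canvas_shape):
    """Paste each tile into the canvas at its known (row, col) offset.
    With a purely translating camera, overlap regions are identical,
    so overwriting is safe -- no warp, no blend, no lost pixels."""
    canvas = np.zeros(canvas_shape, dtype=tiles[0].dtype)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] = tile
    return canvas

# Toy demonstration: one "scene" and two overlapping crops of it.
scene = np.arange(100, dtype=np.uint8).reshape(10, 10)
left  = scene[:, 0:6]            # frame 1
right = scene[:, 4:10]           # frame 2, 2-column overlap with frame 1
merged = flat_stitch([left, right], [(0, 0), (0, 4)], (10, 10))
# merged reproduces the scene exactly; every pixel survives.
```

Contrast that with a spherical stitch, where the overlap regions are differently distorted and an algorithm has to invent a pleasing compromise between them.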

Now, let's talk about the distortion-correction phase. How do we magically go from this:
http://www.altostorm.com/images/corrector/sample_1_original.jpg
to this:
http://www.altostorm.com/images/corrector/sample_1_corrected.jpg
?


Think about it. Either the pixels on the extreme right and left had to be stretched apart (did the software interpolate pixels to fill that space in a way that made sense?), OR ELSE the center had to shrink to match the outside edges, which again means "throwing away" pixel information (a loss of resolution).

Anybody who has ever tried to up-size a JPEG knows there is a cost in sharpness and resolution. No algorithm can reproduce information it doesn't have. The best it can do is guess, and the end result is something that is very clearly inferior to our eyes.
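Here is a deliberately tiny illustration of that information loss (a toy 1-D "row of pixels", not a real resampling algorithm): shrink it, then stretch it back, and the detail is simply gone.

```python
import numpy as np

# A row of pixels with fine alternating detail.
row = np.array([10, 200, 10, 200, 10, 200, 10, 200], dtype=float)

# "Shrink" to half size by averaging neighboring pairs.
shrunk = row.reshape(-1, 2).mean(axis=1)      # every pair averages to 105

# "Up-size" back to the original length by duplicating samples.
stretched = np.repeat(shrunk, 2)

# stretched is now a flat row of 105s: the alternating detail
# cannot be recovered, because the information no longer exists.
```

Real resamplers interpolate more cleverly than this, but the principle stands: once the shrink has discarded the detail, no amount of stretching brings it back.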

If you eliminate the need for distortion correction in the first place (as the HippoCam/RhinoCam method does) you eliminate the corresponding loss of resolution.

We haven't even talked about depth-of-field (DOF) issues when comparing a spherically swept panorama to one made from images taken across a single flat image plane.

All of this is just theory talking, however. We'll hopefully see whether it holds up in practice once I can do some comparison shots both ways.
