To become an expert in using something, it is neither necessary nor sufficient to understand how it works. But lately, I've been feeling like Otto von Chriek ... thinking about light a lot.
The things that define a photograph fall into three categories:
- Composition, or framing
- Illumination
- Capture
For the two-dimensional world, three things define composition. They are shown in red, blue and green in this picture.
- The red circle shows the camera position.
- The blue wedge shows a combination of two things:
- The camera direction, which is the bearing of an imaginary line down the centre of the wedge.
- The angle of view, which is the angular size of the wedge.
- The green doughnut shows the depth of field, which is the range of distances that our photographer wants to capture.
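The three elements together decide what ends up in the frame, and the test is simple enough to sketch in code. Here is a minimal Python illustration (the function and parameter names are my own, not any camera API): given the camera position, a direction θ, an angle of view α and the near/far distances, decide whether a point lies inside the two-dimensional composition.

```python
import math

def in_composition(px, py, cam_x, cam_y, theta, alpha, u_near, u_far):
    """True if point (px, py) is inside the 2-D composition: within the
    wedge of full angle alpha (degrees) about direction theta, and
    between the near and far distances of the 'doughnut'."""
    dx, dy = px - cam_x, py - cam_y
    dist = math.hypot(dx, dy)
    if not (u_near <= dist <= u_far):
        return False  # outside the depth-of-field doughnut
    bearing = math.degrees(math.atan2(dy, dx))
    # smallest signed difference between the bearing and the camera direction
    off = (bearing - theta + 180.0) % 360.0 - 180.0
    return abs(off) <= alpha / 2.0
```

For a camera at the origin pointing along θ = 0° with a 90° angle of view and a doughnut from 1 to 10 units, a point straight ahead at distance 5 is in the composition, while one off to the side at bearing 90°, or one closer than the near circle, is not.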
In our own three-dimensional space, the same idea applies: only, what was a circle becomes a sphere, and the photograph is now a two-dimensional projection of a three-dimensional volume. The definitions are upgraded somewhat:
- The two-dimensional camera position can be represented as (x, y) in some Cartesian coordinate frame. In three dimensions, this becomes (x, y, z).
- In two dimensions, the camera direction could be represented as an angle θ (-180° ≤ θ ≤ +180°) from some reference direction. To represent it in three dimensions, we need to introduce a new angle φ (-90° ≤ φ ≤ +90°). The direction would be expressed as (θ, φ). It may be easier to think of φ as the 'latitude' and θ as the 'longitude'.
- In two dimensions, the angle of view could be represented as a single angle α (0° ≤ α ≤ 360°). Now, the angle becomes a solid angle. If we fix the shape of the 'window' to be a rectangle of a given aspect ratio, this solid angle can be expressed as (α, β), where α is the angle subtended by the diagonal of that rectangle at the centre of the sphere, while β (-90° ≤ β ≤ +90°) is the orientation of the diagonal of the rectangle with respect to the 'line of longitude' passing through its midpoint. In general, if the shape 'S' of the window has a rotational symmetry of order 'n', we have -180°/n ≤ β ≤ +180°/n. If the window happens to be circular, β is irrelevant.
- The depth-of-field doughnut becomes a hollow ball in three dimensions. In both cases, however, it is defined by the radii of the near and far bounding spheres (or circles): un and uf.
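The upgraded parameter set can be gathered into a single structure. A minimal Python sketch (the names are illustrative, not from any real camera library), including a helper that turns (θ, φ) into a unit vector using the latitude/longitude reading:

```python
from dataclasses import dataclass
import math

@dataclass
class Composition:
    """The parameters described above, bundled together."""
    x: float                 # camera position
    y: float
    z: float
    theta: float             # 'longitude', -180 to +180 degrees
    phi: float               # 'latitude', -90 to +90 degrees
    alpha: float             # angle subtended by the window's diagonal
    beta: float              # orientation of that diagonal
    u_near: float            # radius of the near bounding sphere
    u_far: float = math.inf  # radius of the far bounding sphere

    def direction(self):
        """Unit vector for (theta, phi), with phi as latitude
        and theta as longitude."""
        t, p = math.radians(self.theta), math.radians(self.phi)
        return (math.cos(p) * math.cos(t),
                math.cos(p) * math.sin(t),
                math.sin(p))
```

With θ = 90° and φ = 0°, for instance, the direction vector points along the y-axis, as the longitude analogy suggests.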
- The position co-ordinates x, y & z are decided by where the photographer has chosen to be.
- The angles θ, φ & β are actually how the photographer chooses to hold and point his camera.
- The shape S is actually decided by convention—it is usually a rectangle of aspect ratio 16:9, 4:3, 3:2 or 1:1.
- The parameter α may be available for control as 'zoom'.
- The parameters un and uf are usually not adjustable individually, but may be available for control jointly.
- When focusing the camera manually or automatically, what is being adjusted is a quantity u0 that is:
- For uf < ∞, the harmonic mean of un and uf: u0 = 2/(1/un + 1/uf).
- For uf = ∞, simply 2un, the limiting value of that harmonic mean.
- And the actual depth of field uf - un can be controlled indirectly through a number of other parameters, each of which has some other side effect.
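The focusing rule is simple enough to sketch. Assuming only the harmonic-mean definition above, a hypothetical helper (the name is mine) might look like:

```python
import math

def focus_distance(u_near, u_far=math.inf):
    """The focused distance u0: the harmonic mean of u_near and u_far,
    which tends to 2 * u_near as u_far goes to infinity."""
    if math.isinf(u_far):
        return 2.0 * u_near
    return 2.0 / (1.0 / u_near + 1.0 / u_far)
```

For example, a doughnut from 2 to 6 units gives a focus distance of 3 units, while one stretching from 3 units to infinity gives 6 units.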
The interesting thing about the angle of view is that from any picture, we can extract other pictures as long as the 'wedge' of the extracted picture is entirely contained in the 'wedge' of the original picture. So if we manage to capture an 'all around' view—often called a 'spherical panorama', technically a 360°×180° or 4π steradian view—then the camera direction becomes irrelevant, and we can subsequently extract pictures with whatever direction and angle of view we want. We could forget about θ, φ, α, β and S when we are taking that shot.
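In two dimensions, this containment test is just interval arithmetic on angles. A small hypothetical Python check (names are mine), taking care of the wrap-around at ±180°:

```python
def wedge_contains(theta_orig, alpha_orig, theta_sub, alpha_sub):
    """True if a sub-picture's wedge (direction theta_sub, angle of view
    alpha_sub) lies entirely inside the original wedge, in 2-D.
    Angles are in degrees; a 360-degree original contains everything."""
    if alpha_orig >= 360.0:
        return True
    # signed offset of the sub-wedge's centre from the original's centre
    off = (theta_sub - theta_orig + 180.0) % 360.0 - 180.0
    return abs(off) + alpha_sub / 2.0 <= alpha_orig / 2.0
```

A 40° wedge pointed 10° off-centre fits inside a 90° wedge; pointed 40° off-centre it pokes out; and an all-around 360° capture contains any wedge at all, which is exactly why direction stops mattering.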
If the picture captured is a flat image (a two-dimensional projection of the three-dimensional view selected), we cannot later extract pictures with a different depth of field, even one entirely contained in the doughnut of the original image. And conventional capture techniques, such as film and common digital image sensors, produce exactly such projections. If, however, we somehow capture a light field, we can extract those pictures at our leisure. We need no longer worry about un and uf at the golden moment t!
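The 'extract later' idea can be sketched with a toy one-dimensional model: store a small array of views L[u][x] (one per angular sample u), then synthesise a photograph focused at a chosen depth by shifting each view in proportion to u and adding them up. This is the classic shift-and-add approach; the code below is an illustrative sketch, not any real light field camera's pipeline.

```python
def refocus(light_field, shear):
    """Toy 1-D shift-and-add refocus. light_field[u][x] holds the ray
    through angular sample u and spatial sample x. Summing each view
    shifted by shear * (u - centre) synthesises an image focused at the
    depth that corresponds to that shear."""
    n_u = len(light_field)
    n_x = len(light_field[0])
    image = [0.0] * n_x
    for u, view in enumerate(light_field):
        # centre the angular index so that shear = 0 means no shift
        shift = round(shear * (u - (n_u - 1) / 2))
        for x in range(n_x):
            xs = x + shift
            if 0 <= xs < n_x:
                image[x] += view[xs]
    return [v / n_u for v in image]
```

A point at the depth matching the chosen shear lines up in every shifted view and comes out sharp; at the wrong shear the same energy smears across neighbouring pixels, which is precisely the defocus blur we are free to choose after the fact.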
Recent buzz seems to suggest that both these ideas may come to a neighbourhood store soon. Today most panoramas are created from multiple images taken with a sweep of the camera, but there is this ball, with 36 small cameras mounted on it, that takes spherical panoramas instantly. And there is a light field camera from Lytro that proudly invites us to 'focus later'.
Maybe in the not too distant future, a casual photographer can just get to (x, y, z) at time t ... and take a picture without bothering to compose!
