Like many photographers, I keep a collection of antique cameras around, partly as a decorative motif, partly to remind myself of our shared heritage. There’s nothing particularly notable about my own collection – none of my cameras is rare or valuable – but what I find intriguing is how different they all look and function. It’s interesting to look at the technological advances that these cameras exhibit, how each successive generation built on what was there before, took some parts, left others, modified and refined certain concepts, and generally moved the technology forward. If I could take a Nikon D3 back in time and show it to George Eastman, would he be amazed at the miniaturization? Appalled by all the plastic? Disturbed by the lack of film? Most of all, given his own experience of photographic technology, would he even recognize it as a camera?
Any attempt to answer that question would be pure speculation, but I like to think he would. Fundamentally, a camera needs just one thing to be recognizable as such: a lens. Add or take away as many other things as you like, re-shape it in any form and it’s still ultimately a triumph of function over form: it is a means of capturing and recording an image, be it moving or still, and that requires a way of bending light to focus it onto a recording medium. The lens is the visible manifestation of that system; George would know a camera for what it is. But given the pace of technological change today, I wonder if the same could be said for us.
This, then, is an attempt to bring something back from the future, to examine the shape of cameras to come and see just how different things might start to look. I’ll start with where things are now – the cutting edge, if you will – and attempt to extrapolate from there. There are a lot of things I could look at, but for the sake of brevity I’ll limit myself to just five. If you feel strongly that I’ve left something glaringly obvious on the cutting room floor, feel free to chime in in the comments below.
1. Light field optics
By now, we’ve all heard about the Lytro camera (http://www.lytro.com/) and what it can do. We’ve played with the photos online, re-focusing each shot after the fact and generally consigning it to the status of an expensive toy. After all, given the lack of controls (it has a zoom feature and a shutter button, and that’s it), the inability to connect a flash, the paltry 6-megapixel images, the noisy low-light performance, even the lack of a standard tripod mount, it’s hard to see who they’re aiming this camera at. Commercial photographers will likely scoff at its anemic performance, while casual snappers, accustomed to the pocket-sized offerings of Canon, Sony and the like, might well be put off by the odd, elongated form factor.
It’s worth noting that even the company itself admits that this is more of a proof of concept than a serious contender. Nobody’s going to shoot a magazine cover or a double-truck ad with it.
And therein lies the rub, as the bard would say. Consider that the very first commercial digital camera – the Kodak DCS – mounted a digital sensor to a Nikon F3 body to capture a whopping 1.3 megapixels of image data, and required you to walk around with a hard drive the size of a coffee maker to store the images. That was 21 years ago.
What’s important to consider is not so much the specifications of the Lytro as it exists now, but the potential of its underlying technology. Back in January I shot a session with an 80-megapixel Phase One camera. Twenty-one years ago, the idea that a digital camera would produce sharper, better images than a similar-sized film camera would have been heretical. Today, it’s commonplace.
So what about the Lytro? Today it can capture a 6-MP image (or series of images, if you will – altogether it’s capturing the equivalent of 11 megapixels of data, or 11 million light rays, as they’re fond of saying). Given Moore’s Law, it’s not inconceivable that the Lytro Mk II will be at least an 8-MP camera. In five years, that could double to 16-MP, approaching the bottom end of what is commonly considered “commercial resolution”. Along with improvements in sensor element density, they’ll no doubt get to work on that niggling low-light performance, perhaps add in a few extra doodads to make it more studio-friendly (at least a wi-fi card if nothing else, to allow for something approaching tethered capture), and perhaps offer it in a more-respectable black housing.
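If you want to sanity-check that back-of-envelope math, here’s a quick sketch. The five-and-a-half-year doubling period is purely my own assumption, picked because it happens to connect today’s 6 megapixels to an 80-megapixel figure a couple of decades out – far more conservative than the classic 18–24 month reading of Moore’s Law:

```python
# Rough extrapolation of output resolution, assuming one doubling roughly
# every 5.5 years (an illustrative assumption, not anything Lytro has announced).
def projected_mp(start_mp, years, doubling_period=5.5):
    """Megapixels after `years`, given one doubling per `doubling_period` years."""
    return start_mp * 2 ** (years / doubling_period)

for y in (0, 5, 10, 21):
    print(f"{y:>2} years out: ~{projected_mp(6, y):.0f} MP")
# -> roughly 6, 11, 21 and 85 MP respectively
```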
In 21 years, we might even see an 80-megapixel version. Imagine, for a moment, an 80-megapixel image that you can refocus at will, or even focus stack seamlessly in-camera. Imagine what that would mean to macro photographers, landscape photographers, even forensic police photographers. Now imagine how a fine art photographer might use that. Will future versions of Photoshop, Lightroom and Capture One have a “focus” tool that you can use with a Wacom tablet? Will we see that iconic scene from the movie Blade Runner (http://www.youtube.com/watch?v=QkcU0gwZUdg) finally come to pass?
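For the curious, the refocusing trick itself isn’t black magic: a plenoptic camera records many slightly offset views of the same scene, and “refocusing” afterwards amounts to shifting those views by different amounts and averaging them, so that only objects at your chosen depth line up sharply. A toy sketch of that idea (emphatically not Lytro’s actual pipeline, and skipping the sub-pixel interpolation a real implementation needs):

```python
import numpy as np

def refocus(sub_apertures, positions, alpha):
    """Naive shift-and-add refocus over a light field.

    sub_apertures: list of HxW arrays, one per viewpoint on the aperture plane.
    positions:     list of (u, v) offsets for each viewpoint.
    alpha:         chosen focal depth; each view is shifted in proportion to
                   its offset, then everything is averaged, so the chosen
                   depth stays aligned (sharp) while other depths smear.
    """
    out = np.zeros_like(sub_apertures[0], dtype=float)
    for img, (u, v) in zip(sub_apertures, positions):
        dy, dx = int(round(alpha * v)), int(round(alpha * u))
        out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / len(sub_apertures)
```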
2. The iPhone
Huh? you might be asking. Okay, yeah, we’ve all seen the video comparisons between the iPhone 4S and a Canon 5D Mk II, and marveled at the strides in image quality that Apple – not a traditional camera manufacturer – have made in just a few short years with what is essentially a glorified Facebook-checking device. But what blew me away when I saw the first iPhone, and what still amazes me today, is the intuitive way you use it as a camera. We’ve had autofocus cameras and lenses for a couple of decades now, with ever more complicated systems inside our cameras for determining exactly where the focus point should be, and ever more complicated rituals required to use them (half-press the shutter button, recompose the shot, full-press, swear at the camera for choosing the wrong focus point, rinse, repeat) that make us look like curious birds in the midst of a ritual mating dance. The iPhone, by comparison, does away with all of that by making focus selection child-like in its simplicity. How does it work? You just tap the touch-screen where you want it to focus. Unless you’ve got fat fingers or shaky hands, it’s likely you’re going to get the shot you wanted.
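Under the hood there isn’t much to it, which is rather the point: the touch coordinate simply gets mapped to a region of interest that the autofocus (and, on the iPhone, the exposure metering) then locks onto. A hypothetical sketch of that mapping, with all names and numbers invented for illustration:

```python
def touch_to_focus_roi(touch_x, touch_y, screen_w, screen_h, roi_frac=0.15):
    """Map a screen touch to a normalized focus region of interest.

    Returns (x, y, w, h) in 0..1 frame coordinates, clamped so the box stays
    on-frame. roi_frac is the size of the focus box as a fraction of the
    frame; the AF system would then drive the lens to maximize sharpness
    inside that box. (Illustrative only -- not Apple's actual API.)
    """
    cx, cy = touch_x / screen_w, touch_y / screen_h
    half = roi_frac / 2
    x = min(max(cx - half, 0.0), 1.0 - roi_frac)
    y = min(max(cy - half, 0.0), 1.0 - roi_frac)
    return (x, y, roi_frac, roi_frac)
```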
Don’t think this is such a ground-breaking feature? The folks at Phase apparently did, because all three cameras in their new flagship series will let you do the same thing. They even have a touch-screen that’s almost as large as an iPhone display. Not only that, but they’ve cribbed a few other features from the iPhone, like a built-in GPS, gyroscopic level finders, and a real-time focus check that lets you see a live 1:1 preview of your image while you’re setting up (although not while you’re shooting, alas).
So what’s next? How about mashing this technology together with the light field camera? Imagine having a camera that lets you define a range of focus by touching a near point and a far point. Imagine a camera that lets you set a “follow focus” point on a particular person in a scene, so that no matter how crowded the piazza, your shots of Aunt Meghan are always tack sharp. Or on your rambunctious toddler’s left eye while he crawls around the kitchen. Or on the twitchy blue jay 30 yards away, half-concealed by brambles.
3. In-camera HDR
I tend to plug Phase One a lot, singing the praises of the new IQ backs as the pinnacle of current photographic technology. But even those cameras, with their vaunted 12.5 f-stop dynamic range, can’t take a life-like picture of a sunset, not without some prep work or extensive post-processing. Naturally, we’ve all come to accept that this is just normal – even in the film days, it was still de rigueur to break out the graduated neutral-density filters as the sun got low on the horizon. And if you were out shooting someplace where the horizon wasn’t a straight line? Well, there’s always bracketing and compositing.
Ah, but Photomatix! you might say, triumphantly raising a finger to make your exclamation point manifest. But like a lot of filters, plug-ins and third-party imaging programs, Photomatix applies itself uniformly across your entire image, sans finesse. Sure, you can tweak the way the effect looks, but doing so affects the whole image simultaneously. And the program still works best when you give it bracketed exposures, rather than processed exposures made from a single capture.
Yet there are already cameras that can capture, in one shot, an HDR image.
Sony, Pentax, Canon, Nikon and even Casio have cameras that can kludge together an HDR image in-camera. Processes and results, of course, vary, but by and large it’s a matter of automation: bracketing followed by in-camera processing to produce a JPEG. But as anyone who’s ever tried bracketing a scene with moving objects knows, even the slightest bit of wind can wreck your shot, which is why the shots you see of busy night-time streetscapes are inevitably single-shot HDR. As old-school landscape photographers will tell you, the absolute best way to capture that sunset is to get it all at once.
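In broad strokes, the in-camera versions are doing what Photomatix does: take the bracketed frames and blend them, weighting each pixel by how well exposed it is in each frame. Here’s a stripped-down, exposure-fusion-style sketch of that blend (single-scale well-exposedness weighting only; real firmware adds alignment, de-ghosting and tone mapping on top):

```python
import numpy as np

def fuse_brackets(frames, sigma=0.2):
    """Blend bracketed exposures (each an HxWx3 float array in 0..1).

    Each pixel is weighted by its 'well-exposedness' -- how close it sits to
    mid-grey -- so highlight detail comes from the dark frame and shadow
    detail from the bright one. Returns a single blended image.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)).mean(axis=2, keepdims=True)
               for f in frames]
    total = np.sum(weights, axis=0) + 1e-12
    return sum(w * f for w, f in zip(weights, frames)) / total
```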
So wouldn’t it be interesting if there were such a thing as a variable neutral-density filter? Something you could adjust almost infinitely to limit the amount of light coming into your camera at just about any given point? Turns out, that might be coming sooner than you think.
Canon, for instance, has a patent on pixel-level exposure adjustment that would make graduated ND filters a thing of the past. Once again, combine that with the touch-screen interface of an iPhone and you could, by simply pointing to one or two areas, define a range of luminosity values that you want to capture.
Admittedly, Canon’s system doesn’t technically increase dynamic range across the entire image, but who knows what strides they’ll make in sensors in the years to come? Instead of maxing out at 12.5 f-stops, imagine a camera with a dynamic range similar to that of our own eyes: 24 stops. Now imagine a camera that exceeds it.
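It’s worth pausing on what those numbers actually mean: every stop doubles the contrast between the darkest and brightest tones a sensor can record, so the gap between 12.5 stops and 24 stops is far bigger than it sounds:

```python
# Each f-stop of dynamic range doubles the contrast between the darkest and
# brightest recordable tones, so the ratio grows exponentially with stop count.
for stops in (12.5, 14, 24):
    print(f"{stops:>5} stops -> contrast ratio of about {2 ** stops:,.0f}:1")
# 12.5 stops -> ~5,793:1, 14 stops -> 16,384:1, 24 stops -> 16,777,216:1
```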
Even with that range, you might still want to creatively adjust the exposure in certain areas of your image in post, exactly as you do now in Lightroom, Capture One or your favourite RAW processing software. But imagine the Exposure slider going from -12 to +12, with nary a noisy pixel in sight. Imagine painting light into or out of your photos using your Wacom, and having it look perfectly natural. Imagine taking a photograph good enough for National Geographic of the ceiling of the Sistine Chapel at night. By candlelight. Without a tripod.
4. Stereo photography
You’re likely thinking, this author has gone off the deep end for sure. Why would a short-lived Victorian-era experiment that produced nothing more lasting than the Viewmaster be one of the next big things in camera tech?
Notice something about the movies lately? There’ve been an awful lot of them screening in 3D. Unlike the crazes of the ’50s, ’70s and ’80s, many of the 3D movies we’re seeing now have some pretty big names attached to them: Martin Scorsese, Peter Jackson and James Cameron are all betting big on 3D. TV is following suit, predictably with sports networks the first out of the gate. Sales of 3D televisions have been slow, but that can be explained by a combination of consumer fatigue (“Seriously? I just bought a 50-inch LED and now you’re telling me it’s already useless?”), the uncertainty of a wobbly economy mixed with maxed-out credit, and aversion to wearing dorky-looking glasses.
But it’s another child’s toy, the Nintendo 3DS, that might point the way to where we’re going. Not unlike the Lytro, the Nintendo 3DS is a bit of a wolf in sheep’s clothing. It’s a proof-of-concept for stereoscopic 3D that doesn’t require the use of dorky glasses and even lets you adjust the apparent depth of the 3D effect on the fly. Imagine this technology built into a digital picture frame.
Admittedly, all of these things are playback devices and not capture devices. Peter Jackson, for instance, is using a custom-made rig with two digital cinema cameras to film the forthcoming two-film adaptation of The Hobbit. In one of his behind-the-scenes set videos, he explains that the cobbled-together hack of a 3D rig is necessary because it’s physically impossible to get two lenses close enough together to record an image that approximates the kind of parallax that our eyes see.
But given that the Victorians had such cameras 100 years ago, surely it’s just a matter of time before we see twin-lens, twin-sensor systems making it big.
But that’s the near future. There are already stereoscopic cameras out there, for stills as well as moving images. What excites me more is considering where this technology could ultimately go.
For instance, why stop at two lenses?
One of my personal complaints against 3D movies, and why I was not as taken by James Cameron’s Avatar as many were, is that the 3D still felt flat. It felt like I was watching a movie through a Viewmaster, with different objects on different planes. Nifty, but it’s not the way we actually see the world. Try this little experiment: put your hands on your keyboard (or on the surface of whatever tablet-like device you’re using to read this), close one eye and shift your head a little to the left, then a little to the right. Notice how at any given point your hands are obscuring different portions of whatever is behind them? That’s called parallax, the apparent spatial shift you perceive in objects relative to one another as you move your point of view. There was no parallax in Avatar. If I moved my head a little to the left or a little to the right, the image remained the same, breaking the illusion of three-dimensionality.
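Parallax is easy to put a number on, too: for two viewpoints separated by a baseline B, with focal length f, a point at depth Z shifts between the views by roughly f·B/Z, which is why the effect is dramatic for your hands and vanishingly small for the wall behind them. A quick illustration, with roughly eye-like numbers pulled out of thin air:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Horizontal shift (in pixels) of a point between two viewpoints."""
    return focal_px * baseline_m / depth_m

# Assumed values: ~6.5 cm baseline (human eye spacing), ~1500 px focal length.
for depth in (0.5, 3.0, 50.0):
    print(f"object at {depth:>5} m -> ~{disparity_px(0.065, 1500, depth):.1f} px of shift")
# 0.5 m -> ~195 px, 3 m -> ~32.5 px, 50 m -> ~2.0 px
```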
But that’s already changing. Hitachi is working on bringing to market a 3D TV capable of producing honest-to-god parallax (http://3dguy.tv/hitachi-full-parallax-3d-tv/). And yes, I know at this point we don’t have cameras capable of recording an image like that. For now, the only content that can be displayed on Hitachi’s prototype 3D parallax television is computer-rendered, using up to 16 virtual cameras. But if the past 21 years of advances in camera technology have taught us anything, it’s to expect the miraculous. In due time.
Which brings me to what is surely my most way-out-there prognostication:
5. Quantum optics
Stay with me for a moment, this is going to get trippy.
Picture this: the camera is no longer a single, physical thing, but an agglomeration of hundreds of nano-lenses held together in a sort of tractor-beam-like magnetic field. Focus, exposure range and zoom are set by your own eyes (the act of squinting, for instance, might activate a super-telephoto mode that would let you see small craters on the moon in real time). The image is relayed straight to the optic centre of your brain via a wearable neural stimulus device that resembles a Bluetooth earpiece.
Images are recorded in full parallax, with the dispersion of the nano-lenses being the only limiting factor. Focus is infinitely adjustable, or can be arbitrarily locked to produce an “old timey” look. And best of all, the camera records not just the instant you blink the shutter, but every moment before and after, sending quantum copies of the nano-lenses both forwards and backwards in time on a stream of tachyons to let you create time-lapse movies in a single exposure. The cerebralnet quickly fills up with time-lapse 3D HDR images of cats.
Would George still recognize it as a camera? Would any of us?