Our cameras have reached 30+ Mpixel resolutions, and although pixels are not the silver bullet for better photos, they easily capture the imagination of every one of us: it is so easy to grasp the concept (an incorrect one, though) that more pixels mean better resolution!
Actually, resolution depends on two main factors: the optical resolution of the lens and the way the software processes the data harvested by the individual pixels. Our intuitive idea that a photo is like a mosaic whose individual tiles are the pixels is simply wrong! Nevertheless, it is so convincing that we keep believing it: more pixels must be better!
For a given surface (the sensor surface), once you exceed a certain number of pixels you no longer increase the potential resolution, because the ever smaller pixels start picking up more noise. On the other hand, increasing the sensor size requires a larger lens, which makes the camera not just bulkier and more expensive but also more prone to optical defects that decrease resolution.
Insects have found a way to increase resolution by using many more eyes (the compound eye). This is the approach being taken today to push resolution beyond present limits, and it proceeds in both directions: increasing the number of cameras and increasing the capabilities of the software.
The increase in pixel count by increasing the number of cameras is presented in an article in Nature, published on June 20th, 2012.
It was written by a team of researchers from Duke University, the University of Arizona in Tucson and Distant Focus Corporation.
They propose using the sensors of 98 commercial 10 Mpixel cameras and combining the generated signals through software that can create an image in excess of 1 Gpixel. One should note that multiplying 98 × 10 Mpixel does not have to equal 980 Mpixels, because the resolution pixels are generated by the software and may be fewer or more than the number resulting from a direct multiplication. Indeed, the researchers claim in their article that this system can be pushed to generate a 50 Gpixel image.
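The basic idea of composing one large image out of many sensor tiles can be sketched in a few lines. This is a toy illustration, not the researchers' software: it simply places same-size tiles side by side on a grid, ignoring the overlap, lens correction and blending a real multi-camera system needs. The function name and the small grid are my own, for illustration only.

```python
import numpy as np

def stitch_tiles(tiles, grid_rows, grid_cols):
    """Place equally sized 2-D sensor tiles (row-major order) on a grid.

    A real gigapixel pipeline would also register overlapping tiles and
    blend their seams; here the tiles are assumed perfectly aligned.
    """
    th, tw = tiles[0].shape
    mosaic = np.zeros((grid_rows * th, grid_cols * tw), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, grid_cols)  # grid position of this tile
        mosaic[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return mosaic

# 98 ten-megapixel tiles would be unwieldy here, so use a 2x3 grid of
# tiny 4x4 "sensors" just to show the composition step.
tiles = [np.full((4, 4), i) for i in range(6)]
big = stitch_tiles(tiles, 2, 3)
print(big.shape)  # (8, 12)
```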
This is because sensor pixels are not translated one to one into image pixels. Each micro area of the sensor contains a pixel covered by a filter that lets only the light wavelengths corresponding to one colour, say green, reach the sensor. In the usual Bayer arrangement, that green pixel has red pixels to its left and right, blue pixels above and below, and four more green pixels at its corners. The software analysing the data can therefore create "resolution" pixels by combining these nine physical pixels, and this set of sets results in 31 different possible patterns for every single sensor pixel. Hence the resolution can be increased 31 times with respect to the number of physical pixels in the sensors, and with some more tweaking it can be increased further (up to the claimed 50 Gpixel). Since all of this happens in software, there is no risk of running into the resolution limitations of the lens.
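The nine-pixel combination described above is, in essence, what the simplest demosaicing pass does. Here is a naive sketch, assuming an RGGB Bayer layout: each output colour pixel is built by averaging the same-colour samples inside its 3×3 neighbourhood. It does not implement the article's 31-pattern scheme, and real pipelines use far smarter interpolation, but it shows how one physical pixel contributes to several software "resolution" pixels.

```python
import numpy as np

def bayer_color(r, c):
    """Colour channel of position (r, c) in an assumed RGGB Bayer layout."""
    if r % 2 == 0 and c % 2 == 0:
        return 0  # red
    if r % 2 == 1 and c % 2 == 1:
        return 2  # blue
    return 1      # green

def demosaic(mosaic):
    """Naive demosaic: average same-colour samples in each 3x3 window."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    for r in range(h):
        for c in range(w):
            sums, counts = [0.0] * 3, [0] * 3
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        ch = bayer_color(rr, cc)
                        sums[ch] += mosaic[rr, cc]
                        counts[ch] += 1
            for ch in range(3):
                if counts[ch]:
                    rgb[r, c, ch] = sums[ch] / counts[ch]
    return rgb

# A tiny 4x4 raw mosaic is enough to exercise the neighbourhood logic.
raw = np.arange(16, dtype=float).reshape(4, 4)
print(demosaic(raw).shape)  # (4, 4, 3)
```

Note that every raw sample is reused by up to nine neighbouring output pixels, which is why the output pixel count need not match the physical pixel count.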
This is the kind of image resolution resulting from such an approach: