
The Raw Story:

A Raw file is not an image file, at least not in the same sense that a GIF or a TIFF is. A Raw file is really a DATA file that contains an image only in a latent state and, just like a film negative, requires processing to become visible. Before that, it is only a combination of data and instructions on what must be done to convert that data into a meaningful collection of colored dots of varying intensities that we recognize as an image.

In the case of a Raw file, this process really is a transformation from a format meaningful only to the camera sensor into another one meaningful to a computer imaging application. Although physically different, the processing of a Raw file and that of film involve similar factors and decisions, applied during the processing, that determine contrast, sharpness and color, affecting image quality in general.

Just like in chemical processing, the end result will depend on the quality of the processing system and on the skill of the practitioner. Cheap chemicals or low skill = Low image quality.

In digital processing there exists a variety of programs to do the transformation, known as Raw converters, ranging from the free ones supplied by the camera manufacturer to the very exotic, highly specialized and expensive ones like Phase One's Capture One Pro, the leader in professional digital imaging.

Recently this has been recognized as a new market by major software companies, and a new breed of highly specialized image browsers/converters, such as Adobe Lightroom and Apple Aperture, has become available and found its place in the post-processing workflow.

But why do we need a Raw file? Well, because of the profound differences between the analog nature of camera sensors and the digital nature of computers, displays and printers, an intermediate, platform-independent format is required. And although most cameras have a built-in converter that works quite well, optimum results can only be achieved with the power of a full computer and the supervision of a trained practitioner.

The Raw file is not exactly what comes out of the sensor, as many people believe; it does have some minimal transformations already applied to it.

To begin with, Raw files are grayscale: they only record luminance values, measured at each photosite, together with that photosite's position coordinates on the sensor. Color has to be calculated by referring to the photosite's position in the Bayer mosaic overlay, and will only appear after demosaicing. That is, color is assigned rather than measured. This color, however, is linear and has to be converted to the nonlinear response that human vision possesses. What this means, in practice, is that the real values captured by the sensor have to be distorted to correspond to human perceptual color and its corresponding nonlinear gamma.
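That final "gamma" step can be sketched in a few lines. The snippet below applies the standard sRGB transfer curve to one linear-light value; the function name is mine, and a real converter applies this per channel to whole arrays, usually combined with color-space conversion.

```python
def linear_to_srgb(x):
    """Encode one linear-light value (0.0-1.0) with the sRGB
    transfer curve: a short linear toe near black, then a power
    segment approximating an overall gamma of about 2.2."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055
```

A linear mid-gray of 0.18 encodes to roughly 0.46, near the middle of the output range, which is why linear Raw data looks far too dark before this step is applied.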

The lesser-known part is that Raw file formats are non-standard; they are proprietary and secret. They contain additional instructions to compensate for physical limitations of the sensor or of the lenses, such as light fall-off towards the edges of the field (vignetting), chromatic aberration (color fringing), and residual geometrical aberrations that are hard to eliminate optically, such as pincushion distortion. The Japanese industry jumped into the electronic revolution and did not hesitate to include electronics in almost everything they manufacture, since there are always ways in which a good firmware program can compensate for poor hardware design and, in this case, fool the eye (and the brain behind the eyes) into accepting a less elaborate optical design, based on the fact that software is much cheaper to manufacture than hardware.
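As an illustration of the kind of correction those embedded instructions drive, here is a minimal radial vignetting compensation. The analytic gain curve and the strength parameter `k` are my simplifications for the sketch; actual Raw formats embed proprietary per-lens correction tables rather than one formula.

```python
import numpy as np

def correct_vignetting(img, k):
    """Undo radial light fall-off by brightening pixels in
    proportion to their normalized squared distance from the
    image center (gain 1.0 at center, 1.0 + k at the corners)."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # Squared radius, scaled so the extreme corners sit at 1.0.
    r2 = ((yy - cy) ** 2 + (xx - cx) ** 2) / (cy ** 2 + cx ** 2)
    return img * (1.0 + k * r2)
```

Applied to a flat field shot, a correction like this leaves the center untouched and lifts the darkened corners back toward uniform brightness.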

The German optical industry, however, prefers to use cybernetic power where it belongs: behind their research and development, and in better control of their exacting manufacturing specifications. They stick to true excellence.

Well, to summarize: an image file format is just a matrix of pixels where brightness and color are specified in the RGB color space. A Raw file, on the other hand, contains data and instructions to remove the overlaid color mosaic and fabricate the illusion of a tricolor pixel by interpolating color from the neighboring photosites. Since the eye is most sensitive to green light, the mosaic contains twice as many green photosites as red or blue: each 2x2 block holds two green, one red and one blue. Each photosite therefore measures only one color, and the other two must be borrowed from its neighbors, so it really takes about FIVE PHOTOSITES (the site itself plus its four nearest neighbors) to create ONE EQUIVALENT TRICOLOR PIXEL.
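A naive version of that interpolation can be sketched as follows. This is the simplest "bilinear" approach, assuming an RGGB pattern with a red photosite at the top-left and wrap-around edges; real converters use far more sophisticated, edge-aware algorithms.

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Turn a single-channel RGGB Bayer mosaic into an RGB image by
    averaging, for each missing color at each position, the nearest
    photosites that actually sampled that color (3x3 neighborhood,
    edges handled by wrap-around for brevity)."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    # Which photosite carries which color filter (RGGB layout).
    r_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, mosaic, 0.0)
        weights = mask.astype(float)
        total = np.zeros((h, w))
        count = np.zeros((h, w))
        # Gather every real sample of this color into its 3x3
        # neighborhood, then divide by how many were gathered.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(samples, (dy, dx), axis=(0, 1))
                count += np.roll(weights, (dy, dx), axis=(0, 1))
        rgb[..., ch] = total / np.maximum(count, 1)
    return rgb
```

Even this crude averaging shows the article's point: two of the three color values at every output pixel are fabricated from neighbors, not measured.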



This will clearly fail at the edges of the rectangular sensor, so some additional educated guesses have to be applied there, or most of the peripheral photosites will not participate in forming the image, but only in the interpolation of color information for the adjacent inner photosites. Some top-of-the-line camera manufacturers account for this in a veiled, obscure way by stating the Effective Megapixels, implying that their cameras contain some pixels that are not image forming but are there to perform autofocusing functions, white balance or exposure determination, and some other excuses. Pleease..!


Having said that, let me add that there is light at the end of the tunnel. American ingenuity has created a true tricolor photoreceptor sensor as a natural follow-up of CMOS technology: the Foveon, created by Carver Mead. CMOS can retrieve the electrical charge of every photoreceptor independently, as opposed to CCD technology, in which the charges have to be retrieved in linear sequence.

This independence of the photoreceptors allowed Mead to make a TRI-LAYERED sensor, analogous to the structure of color film, that reads both luminance and color at each photosite, without need for interpolation or for a mosaic overlay, sparing the falsification of color that it introduces and the inevitable interference moire that results when one pattern (mosaic) overlays another (photosites).

What all this means for you and me in real life is that a Bayer sensor can only deliver about ONE FIFTH of its photosite count in terms of true resolving power. The Foveon architecture of vertically stacked photosites, on the other hand, requires only ONE PHOTOSITE TO CREATE ONE PIXEL.

In other words, where a Bayer-based camera can resolve ONE PIXEL, a Foveon-based camera with the same photosite count will be able to distinguish FIVE.

If we look at this from the point of view of the silicon AREA required to build ONE PIXEL, it becomes very clear that, by this count, Foveon pixels are FIVE TIMES SMALLER. This explains in part the uncanny sharpness of Foveon images; the other reason is that they do not require PHANTOM interpolated pixels to fill the spaces around each photosite where there are no other photosites from which to derive information.
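The counting argument above reduces to simple arithmetic. The function below merely restates the article's one-fifth rule of thumb; it is the author's figure for illustration, not an industry-standard measure of resolving power.

```python
def equivalent_tricolor_pixels(photosites, architecture):
    """Apply the article's rule of thumb: a Bayer sensor resolves
    roughly one fifth of its photosite count as true tricolor
    pixels, while a Foveon stack yields one pixel per three-layer
    photosite. Illustrative only, not a standard benchmark."""
    if architecture == "bayer":
        return photosites // 5
    if architecture == "foveon":
        return photosites
    raise ValueError("unknown architecture: " + architecture)
```

By this count, a 10-megapixel Bayer sensor and a 2-megapixel Foveon sensor would deliver comparable true resolving power.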


The resulting camera image quality is so superior to conventional Bayer technology that it has been described as resembling a 4x5 large-format negative, both in clarity and in color fidelity.

This technology is maturing and is currently in its 3rd generation with the Sigma SD14.

I have to add that Leica really lost a unique opportunity when it was in the decision stages of its transition to digital and went the Bayer way; that is, they followed the classic path and, in the process, forgot their historic innovative spirit. From leaders they became followers. However, not all is lost. Leica has been known to reverse course on bad decisions, as when they pulled out of their ill-fated relationship with Minolta, and they might very well see the light. That is, after all, what this is all about.