So an article discussing 2K vs. 4K images popped up on my radar today. It’s titled ‘The Truth About 2K and 4K’ and is an interview with John Galt of Panavision. It’s partially a marketing piece for Panavision, so take some of the ‘truth’ with a grain of salt. On one hand he disparages the RED camera (a Panavision competitor) for not having a true 4K sensor (this is apparently true), and then later in the article he disparages IMAX (also a Panavision competitor) for being 4K, but says that doesn’t really matter because our eyes can normally only resolve about 2K worth of detail. Uh… so that means RED actually got it right?
The gist of it is that RED, like Canon/Nikon DSLRs, uses a sensor with a Bayer mosaic pattern. Each spot (photosite) on the sensor only receives one color (R, G, or B). Four of those (green gets counted twice) are combined to produce one pixel in your camera. Because of this, technically the image RED produces (and Canon, and Nikon, and…) is interpolated. The alternative is to have each spot on the sensor record all three colors at once. There is a GREAT comparison of the Canon 5D with the Sigma SD14 (which does use a sensor that captures all three colors at the same spot) that explains the difference between the sensors very well:
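To make the “four photosites become one pixel” idea concrete, here’s a toy sketch in Python. This is not a real demosaicing algorithm (actual cameras use much fancier interpolation); it just shows the naive combination of a 2×2 RGGB Bayer tile into a single RGB value, with the two green samples averaged:

```python
def bayer_block_to_pixel(block):
    """Combine a 2x2 RGGB Bayer tile [[R, G], [G, B]] of raw
    sensor values into one (R, G, B) pixel. Toy example only --
    real demosaicing interpolates across neighboring tiles."""
    r = block[0][0]
    g = (block[0][1] + block[1][0]) / 2.0  # green is sampled twice per tile
    b = block[1][1]
    return (r, g, b)

# One hypothetical 2x2 tile of raw sensor values:
tile = [[200, 120],
        [130,  90]]
print(bayer_block_to_pixel(tile))  # (200, 125.0, 90)
```

The point being: each output pixel is reconstructed from single-color samples, which is exactly why the Bayer image counts as interpolated, whereas a sensor like the SD14’s Foveon records all three colors at every spot.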
Between them, the two articles raise some interesting questions for RED users and for digital photographers.
Do we want our sensors to capture bigger images, or smaller images that have more detail? I don’t know the answer… but the two articles have definitely made me rethink some of the things I took for granted about my cameras. Would we have fewer noise problems with digital cameras if they captured full pixels? As sensors become capable of handling more and more pixels (the 60MP PhaseOne camera, for example), perhaps we should start asking for more detail instead of just ever-increasing image sizes. Maybe all those megapixels could be put to better use.
Chief Executive Anarchist