© Elia Locardi
The new Enhance Details feature available in Camera Raw, Lightroom Classic, and Lightroom CC approaches demosaicing in a new way to better resolve fine details and fix issues like false colors and zippering. Enhance Details uses machine learning—an extensively trained convolutional neural network (CNN)—to provide state-of-the-art quality for the images that really matter. Enhance Details works well on both Bayer (Canon, Nikon, Sony, etc.) and X-Trans (Fujifilm) raw mosaic files.
The animation above shows the difference in a detail area of a Fujifilm X-Trans raw file, zoomed in to 200%. Notice the increased definition of fine details in the window panes, as well as in the street lights in the background.
How cameras see the world
To understand how Enhance Details works, it’s helpful to first understand how a typical digital camera sensor actually sees the world.
The human eye can distinguish millions of colors. Most of us are trichromats, with different types of cones—color-sensitive photoreceptor cells—present on our retinas that can perceive red, green, and blue colors. Each type of cone allows the eye to distinguish approximately 100 shades of color and our visual system then mixes the signals together to see those millions of colors.
But that’s not how cameras see the world.
All digital photographs start out in monochrome. They are then turned into color through an algorithmic process called demosaicing.
Digital camera sensors consist of two parts. First, there’s the main photosensor array. Microscopic, light-sensitive cavities measure the intensity of light for a given pixel. Intensity of light, but no color. Say you’re at the beach, and you’re gazing out over the Pacific Ocean at a spectacular sunset.
While you see something like this:
What you remember seeing.
The photosensor array on your camera would see a monochrome image (well, it’d be a lot darker since photo sensors perceive light differently than the human eye and raw processing takes care of this, but a super dark image wouldn’t have been as interesting so…):
What your camera would see without a color filter array.
And taking the color filter array that sits on top of the photosensor array into consideration would result in something different. The color filters allow the sensor to record the color of any given pixel, such as:
The color filter array records a single red, green, or blue value for each pixel. The image appears green since the Bayer array has twice as many green pixels as red and blue pixels in order to mimic the way that the human eye perceives colors.
Your digital camera only records one of the three color values for any given pixel. For example, for a red pixel, the color filters will remove all the blue and green information, resulting in only red being recorded by the pixel. Every single pixel in a raw image is therefore missing all the information about the other two colors.
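To make that information loss concrete, here is a minimal sketch of what an RGGB color filter array records for a pure-red scene. The `bayer_mosaic` helper and the 4×4 scene are illustrative inventions, not Lightroom's internals:

```python
def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer color filter array: keep exactly one of the
    three channel values at each pixel and discard the other two."""
    h, w = len(rgb), len(rgb[0])
    raw = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if y % 2 == 0 and x % 2 == 0:
                raw[y][x] = rgb[y][x][0]  # red filter site
            elif y % 2 == 1 and x % 2 == 1:
                raw[y][x] = rgb[y][x][2]  # blue filter site
            else:
                raw[y][x] = rgb[y][x][1]  # green filter site
    return raw

# A scene of pure red light: every pixel is (R, G, B) = (1.0, 0.0, 0.0).
scene = [[(1.0, 0.0, 0.0)] * 4 for _ in range(4)]
raw = bayer_mosaic(scene)
print(raw[0])  # [1.0, 0.0, 1.0, 0.0]
```

Only the red-filtered sites record any light; the green and blue sites see nothing at all, even though the scene is uniformly bright in red. Every pixel holds a single number, and the other two color values must be reconstructed later.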
How software re-creates the image
The composite red, green, and blue value of every pixel in a digital photo is created through a process called demosaicing.
There’s an art and science to creating a demosaic method. There are many ways to demosaic a photo. Demosaic design choices can impact everything from the overall resolution of your photos, to the fidelity of small color areas, to the accurate rendition of fine details.
In its most basic form, demosaicing averages the color values of the neighboring pixels. For example, a pixel with a red filter over it records only the intensity of red light at that location. The demosaicing algorithm averages the values of all four neighboring blue pixels to estimate the most likely amount of blue, and then does the same for the surrounding green pixels to arrive at the green value. This process of estimating the most likely value is called interpolation, and it is an important part of the demosaicing process.
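A minimal sketch of that averaging, assuming the standard RGGB layout (the mosaic values below are invented for illustration):

```python
# A tiny RGGB mosaic; each pixel holds only the one value its filter passed.
#   R  G  R  G
#   G  B  G  B
#   R  G  R  G
#   G  B  G  B
raw = [
    [0.9, 0.5, 0.9, 0.5],
    [0.5, 0.1, 0.5, 0.1],
    [0.9, 0.5, 0.9, 0.5],
    [0.5, 0.1, 0.5, 0.1],
]

# Reconstruct a full color value at the red pixel (row 2, col 2):
r = raw[2][2]                                            # measured directly
g = (raw[1][2] + raw[3][2] + raw[2][1] + raw[2][3]) / 4  # 4 adjacent greens
b = (raw[1][1] + raw[1][3] + raw[3][1] + raw[3][3]) / 4  # 4 diagonal blues
print((r, g, b))  # measured red, interpolated green and blue
```

In an RGGB pattern the four nearest green samples sit directly above, below, left, and right of a red pixel, while the four nearest blue samples sit on the diagonals, which is why the two averages read from different neighbors.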
Demosaicing can be relatively straightforward in the areas of an image that have smooth gradients or constant color. Think blue skies and puffy white clouds. However, the process gets a lot trickier in the more complicated regions of an image. In areas with texture, fine details, repeating patterns, and sharp edges, standard demosaicing methods can struggle, producing lower resolution and problematic artifacts.
Advanced demosaicing methods can handle these complicated areas, but they can be very expensive computationally. Myriad mathematical calculations are required to perform the interpolation necessary to build an image. This takes time, even on the most powerful computer hardware.
As a result, software like Lightroom is constantly balancing the tradeoff between image fidelity and speed.
There are really just a handful of major demosaicing issues that need to be solved. But they keep popping up again and again, in image after image, in new and convoluted ways.
- Small-scale details: Images with small details close to the resolution limit of the camera sensor are a big problem. If you're lucky, you simply lose the details in a hopeless jumble of color. If you're not, you can run into moiré patterns, where color artifacts arrange themselves into striking maze-like patterns.
- False colors: When a demosaicing algorithm mis-interpolates across, rather than along, a sharp edge, you can see abrupt or unnatural shifts in color.
- Zippering: At the edges of an image, where half the pixels you would normally use to interpolate your color data are missing, you can see edge blurring.
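Classic demosaicers fight the false-color problem with edge-directed interpolation: compare how fast the image changes horizontally versus vertically, and average along the direction of least change so you never blend across a sharp edge. The sketch below is a simplified illustration of that general idea, not Adobe's algorithm; the `edge_directed_green` helper is invented for this example:

```python
def edge_directed_green(raw, y, x):
    """Estimate the missing green value at a red or blue site by
    interpolating along the direction of least change. A simplified
    sketch of the classic edge-directed idea; real demosaicers add
    color-difference and correction terms on top of this."""
    dh = abs(raw[y][x - 1] - raw[y][x + 1])  # horizontal variation
    dv = abs(raw[y - 1][x] - raw[y + 1][x])  # vertical variation
    if dh < dv:   # smoother horizontally: average left and right
        return (raw[y][x - 1] + raw[y][x + 1]) / 2
    if dv < dh:   # smoother vertically: average up and down
        return (raw[y - 1][x] + raw[y + 1][x]) / 2
    return (raw[y][x - 1] + raw[y][x + 1] + raw[y - 1][x] + raw[y + 1][x]) / 4

# A sharp vertical edge: dark on the left, bright on the right.
raw = [
    [0.1, 0.1, 0.8, 0.8],
    [0.1, 0.1, 0.8, 0.8],
    [0.1, 0.1, 0.8, 0.8],
]
# Plain four-neighbor averaging at (1, 2) would blend both sides of the
# edge; the edge-aware estimate stays on the bright side.
print(edge_directed_green(raw, 1, 2))  # 0.8
```

Heuristics like this work well on clean, isolated edges, but they are exactly what breaks down on the fine repeating patterns and textures described above, which is the gap the machine-learning approach targets.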
At Adobe, we are constantly striving to improve our demosaicing algorithms. Over the years, we’ve been able to refine our algorithms to the point that we’re doing very well for the vast majority of images. But these special, difficult cases require us to think about the problem in a different way.
Enter Adobe Sensei. Sensei integrates all branches of artificial intelligence, including the raw power of machine learning and, in particular, what we refer to as "deep learning."
Enhance Details uses an extensively trained convolutional neural network (CNN) to optimize for maximum image quality. We trained a neural network to demosaic raw images using problematic examples, then leveraged the new machine learning frameworks built into the latest macOS and Windows 10 operating systems to run this network. The deep neural network for Enhance Details was trained with over a billion examples.
Each of these billion examples contained one or more of the major issues listed above that give standard demosaicing methods serious trouble. We trained two models: one for the Bayer sensors, and another for the Fujifilm X-Trans sensors.
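Adobe hasn't published the network's architecture, but the building block every CNN stacks is the 2-D convolution: a small kernel slides over the mosaic, and each output pixel is a weighted sum of its neighborhood. As a rough, hedged illustration in pure Python, a single convolution with a hand-set kernel can already express the neighbor averaging described earlier; in the real network, many such kernels are learned from the training examples rather than written by hand:

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution: each output pixel is a weighted sum of the
    kernel-sized neighborhood around it. A CNN chains many such layers,
    with learned kernels and nonlinearities between them."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(len(img[0]) - kw + 1)]
            for y in range(len(img) - kh + 1)]

# A hand-set 3x3 kernel that averages the four edge-adjacent neighbors --
# simple green interpolation written as a single "filter". In Enhance
# Details, the kernel weights are learned, not fixed like this.
kernel = [[0.0, 0.25, 0.0],
          [0.25, 0.0, 0.25],
          [0.0, 0.25, 0.0]]
mosaic = [
    [0.9, 0.5, 0.9, 0.5],
    [0.5, 0.1, 0.5, 0.1],
    [0.9, 0.5, 0.9, 0.5],
    [0.5, 0.1, 0.5, 0.1],
]
green = conv2d(mosaic, kernel)
print(green[0][0])  # 0.5 -- interpolated green at the interior site (1, 1)
```

The difference between this toy and the real thing is scale and training: a deep network stacks many layers of learned filters, so it can recognize textures, repeating patterns, and edges and interpolate each one appropriately, instead of applying one fixed averaging rule everywhere.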
As a result, Enhance Details will deliver stunning results including higher resolution and more accurate rendering of edges and details, with fewer artifacts like false colors and moiré patterns.
Using Siemens Star resolution charts, we calculate that Enhance Details can give you up to 30% higher resolution on both Bayer and X-Trans raw files. If you'd like to try it out on your own, feel free to download this Fujifilm raw file.
To get the most out of Enhance Details, here are some tips and takeaways…
- Enhance Details requires Apple's Core ML and Microsoft's Windows ML, so it won't work on older operating systems. Please be sure to update to macOS 10.13 or later, or Windows 10 (1809) or later.
- Enhance Details requires intensive processing, and it can take some time to do its work.
- A faster GPU means faster results. An eGPU (external GPU) can make a big difference.
- You’ll find Enhance Details most effective on the following image types:
  - Images that you want to print or display at a large size.
  - Images that display artifacts.
  - Images with lots of fine detail.