Sony develops the first stacked CMOS image sensor technology with a 2-layer transistor pixel

Sony has announced that it has “succeeded in developing the world’s first stacked CMOS image sensor technology with a 2-layer transistor pixel”.

“Uh, what?” I hear you cry. Well, according to Sony, while the photodiodes and pixel transistors of conventional CMOS image sensors occupy the same substrate, this new technology separates the photodiodes and pixel transistors onto different substrate layers. Sony says this has the potential to roughly double the saturation signal level compared to conventional CMOS sensors, resulting in improved dynamic range and reduced noise.

If this sounds like déjà vu, you’re not alone. Vaguely similar claims were touted when the back-illuminated sensor was first released, and again when Sony showed off the first iteration of its stacked CMOS sensor with the launch of the RX100 IV and RX10 II cameras in 2015. Since then, we’ve seen a number of other “world’s first” implementations of stacked sensors in full-frame cameras and camera phones, each bringing incremental upgrades.

(Photo credit: Sony)

A “conventional” stacked CMOS sensor consists of back-illuminated pixels stacked on a logic chip where the signal processing circuits are formed. In the pixel chip, photodiodes for converting light into electrical signals and pixel transistors for controlling the signals are located next to each other on the same layer. Increasing the saturation signal level within the form factor constraints plays an important role in achieving high image quality with wide dynamic range.

Sony has now modified this original stacked structure by placing the photodiodes and pixel transistors on separate substrates stacked on top of each other, rather than side by side on a single substrate as in conventional stacked CMOS image sensors.

This new stacking technology allows the photodiode layer and the pixel transistor layer to be optimized independently, which Sony says can approximately double the saturation signal level compared to conventional image sensors and, in turn, widen the dynamic range.
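To put that claim in perspective, here is a rough back-of-the-envelope sketch in Python. The full-well and read-noise figures below are hypothetical placeholders, not numbers Sony has published; the point is simply that doubling the saturation signal level buys roughly one extra stop (about 6 dB) of dynamic range if the noise floor stays put.

```python
# Illustrative calculation only (not Sony's published figures):
# doubling a pixel's saturation signal level (full-well capacity)
# adds ~6 dB, i.e. about one stop, of dynamic range, assuming the
# read-noise floor is unchanged.
import math

def dynamic_range_db(full_well_e: float, read_noise_e: float) -> float:
    """Engineering dynamic range in dB: 20 * log10(FWC / read noise)."""
    return 20 * math.log10(full_well_e / read_noise_e)

read_noise = 2.0                       # electrons RMS, assumed noise floor
conventional_fwc = 10_000              # electrons, hypothetical conventional pixel
two_layer_fwc = 2 * conventional_fwc   # Sony claims roughly 2x saturation level

for label, fwc in [("conventional", conventional_fwc),
                   ("2-layer", two_layer_fwc)]:
    db = dynamic_range_db(fwc, read_noise)
    print(f"{label}: {db:.1f} dB ({db / 6.02:.1f} stops)")
# Doubling FWC adds 20 * log10(2) ≈ 6.0 dB, about one extra stop.
```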

(Photo credit: Sony)

In addition, since the pixel transistors other than the transfer gates (TRG), such as the reset (RST), select (SEL) and amp (AMP) transistors, now occupy a layer without photodiodes, their size can be increased. This in turn could reduce image noise in low-light and nighttime shots.
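Again, a simplified sketch rather than anything from Sony’s spec sheet: using a basic shot-noise-plus-read-noise model with made-up electron counts, you can see how lower read noise (the intended benefit of those larger amp transistors) lifts the signal-to-noise ratio exactly where it matters most, in dim scenes.

```python
# A minimal sketch under assumed numbers: in very low light the read
# noise of the pixel's amplifier dominates, so reducing it improves
# SNR. Photon shot noise is modeled as sqrt(signal).
import math

def snr_db(signal_e: float, read_noise_e: float) -> float:
    """SNR in dB with photon shot noise plus Gaussian read noise."""
    noise = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise)

signal = 20.0  # electrons, a dim nighttime exposure (hypothetical)
for label, rn in [("smaller transistor, higher read noise", 3.0),
                  ("larger transistor, lower read noise", 1.5)]:
    print(f"{label}: {snr_db(signal, rn):.1f} dB")
# At a 20 e- signal: 3.0 e- read noise -> SNR ≈ 11.4 dB,
#                    1.5 e- read noise -> SNR ≈ 12.5 dB.
```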

(Photo credit: Sony)

It should be noted that Sony has so far only announced that it has developed this sensor “technology”, which suggests that a working production sensor built on it may still be a long way off. We’ll have to sit tight and see whether the 5th Gen A7/A7R is the first camera to put 2-Layer Transistor Pixel technology into practice.

Read more:

• These are the best Sony cameras to buy right now
• Best Sony lenses for full-frame and APS-C Sony cameras
• Check out the best lenses for Sony A6000 cameras
• Sony A7R IV vs A7R III vs A7R II: what are the differences?

Michael C. Garrison