Ahead of the launch of the Google Pixel 4A, we set out to discover what makes its cameras so special. Let's find out below.
Google Pixel phones have become popular thanks to the high quality of their cameras. / Photo: Pexels
LatinAmerican Post | Ariel Cipolla
Read in Spanish: Google Pixel está revolucionando el campo de la fotografía móvil
Google's phones are causing a sensation in the mobile world. The company behind the planet's largest search engine recently launched the Google Pixel 4A, which, according to CNET, is a "bargain cell phone with an impressive camera."
In other words, the camera seems to be one of the keys to this entire range of phones, although at first glance it may seem otherwise. The website La Vanguardia highlights that "it only has one camera on the back", a 12-megapixel sensor, a very low count compared to other models.
However, the explanation for why it takes such good photos lies not in the hardware but in the software, an aspect Google actively promotes. We therefore decided to find out what details explain why its image quality is making it so popular with users.
The particularities of the Google Pixel cameras
The impact of the Google Pixel cameras was such that, according to El Español, Adobe hired Marc Levoy, a pioneer of mobile computational photography, to create a "universal application" for photography. However, we should ask ourselves: what is behind these revolutionary cameras?
The key term to understand is computational photography. Many call it the "future of photography" because much of the work happens in internal processing. That is, your smartphone's ISP (image signal processor) governs the entire capture process, adjusting the operation of the sensor and the lenses and applying post-production adjustments automatically, according to the specialized outlet ComputerHoy.
This allows, as the outlet Xatakafoto notes, the physical limits of the cameras to "be complemented by the image processing technologies of the devices." This implies that, whatever megapixel count or aperture the physical camera reaches, the result can still be improved.
The end result of this type of processing is images built from ones and zeros: not purely optical images, but representations. Computing mixes, internally and in real time, different filters and settings so that the physical hardware can better interpret the capture you take.
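One concrete illustration of this idea, and a minimal sketch rather than Google's actual pipeline, is burst averaging: merging several noisy captures of the same scene in software to produce a cleaner result than any single exposure. The simulated scene, noise levels, and the `merge_frames` helper below are illustrative assumptions, not part of any real camera stack.

```python
import numpy as np

def merge_frames(frames):
    """Average a burst of noisy captures of the same scene.

    Averaging N frames reduces random sensor noise by roughly sqrt(N),
    one reason software can outperform a single exposure from a small sensor.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: one "true" scene plus independent per-frame sensor noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(4, 4))
burst = [scene + rng.normal(0, 20, size=scene.shape) for _ in range(16)]

merged = merge_frames(burst)
single_error = np.abs(burst[0] - scene).mean()
merged_error = np.abs(merged - scene).mean()
# The merged frame should sit noticeably closer to the true scene
# than any individual noisy capture.
```

Real pipelines add alignment and motion rejection before merging, but the principle is the same: the "representation" the phone saves is computed from many raw measurements, not read directly off the sensor.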
In this area, Google seems to be the undisputed leader. According to the specialized website Andro4All, Google is doing something remarkable with computational photography: while most competitors use multiple sensors in their rear cameras (four being the most common number today), Google uses one, or at most two.
The company's belief is that digital processing can matter much more than the physical sensor: without good computational analysis, the photo will not turn out better no matter how many megapixels you have. HDR+, a technique that merges multiple captures into a photograph with better contrast, was a revolution in 2013 with the Google Nexus 5, and the Pixel range later improved on it.
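HDR+ itself merges a burst of identical short exposures; the classical bracketed-exposure merge sketched below is a simpler illustration of the same principle, combining frames taken at different exposure times into one high-dynamic-range estimate and then tone-mapping it for display. The scene values, exposure times, and helper names are assumptions for the example, not Google's implementation.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Recover a linear HDR estimate from bracketed exposures.

    Each frame is divided by its exposure time to get back to scene
    radiance; saturated (clipped) pixels are ignored in the average.
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for f, t in zip(frames, exposure_times):
        w = (f < 0.99).astype(np.float64)  # down-weight saturated pixels
        num += w * f / t
        den += w
    return num / np.maximum(den, 1e-8)

def tone_map(hdr, key=0.5):
    """Reinhard-style global tone mapping: compress HDR values into [0, 1)."""
    scaled = key * hdr / (hdr.mean() + 1e-8)
    return scaled / (1.0 + scaled)

# Simulated scene radiance spanning three orders of magnitude.
radiance = np.array([[0.01, 0.1], [1.0, 10.0]])
times = [0.01, 0.1, 1.0]
# Each capture clips at 1.0, like a real sensor saturating.
frames = [np.clip(radiance * t, 0.0, 1.0) for t in times]

hdr = merge_exposures(frames, times)   # recovers the full dynamic range
ldr = tone_map(hdr)                    # display-ready values in [0, 1)
```

No single simulated frame holds both the darkest and brightest detail, yet the merged estimate recovers the whole range. This is the sense in which processing can beat raw sensor specs: the final contrast comes from combining captures, not from any one exposure.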
In this way we can see that, for many, it delivers some of the best photographic results available today. The specialized outlet El Androide Libre highlights that the Pixel 4A has "the best Android camera in a small mobile phone", showing that the interpretation of an image through processing can matter much more than the lens and the focal aperture of the camera.