Let’s say you are a mobile photographer. Or, to make it simpler: you like taking photos on your phone, and when a new phone comes to market, a good camera is one of your top priorities. Nowadays there are phones with 108-megapixel cameras (the Samsung Galaxy S20 Ultra), and when you see a phone like that, you are blown away on the spot. “A hundred and eight megapixels,” you utter in amazement, and then you see the price: a hefty £1,048 ($1,294). Then you ask yourself: is it worth paying such a huge sum, nearly a month’s income for a young student, for a phone? Maybe. Maybe an £1,100 phone like the Galaxy S20 Ultra is worth buying; that’s always a personal preference, and I’m not going to weigh in on it. I’m not going to talk about whether a thousand-pound phone is worth the money. What I’m going to talk about is how machine learning shapes your photography experience on a smartphone, and how a far older phone of the same calibre fares against it.
In recent years, Google took the smartphone world by storm with its Pixel phones. The name makes it obvious that the camera would be the focus, and the Pixel has beaten every competitor on camera quality. What’s more fascinating is that while Samsung and others were shipping two or three cameras, Google did it with a single one: that one camera was enough to beat the combined power of dual or triple camera setups (Google’s most recent phone does have a dual camera, but still not four or five like the others). How is Google doing it? Enter “computational photography”, a term that has been getting a lot of press lately as Apple and Google roll out their latest devices.
Computational photography is nothing new. All digital cameras come with some form of ‘computational’ image processing, in that they have to take the data coming from the digital image sensor and render it into a usable format (such as JPEG, or one of the RAW formats). Some cameras do very little, while others, like Fujifilm and Sony mirrorless cameras, perform impressive image enhancements that mimic old film stocks and other effects, all directly in-camera. So why all the hype about a pre-existing technology? Because in the latest generation of smartphones, machine learning is combined directly with traditional image processing, and that has everyone excited, for good reason. The great images from Pixel phones are a clear beneficiary of Google’s AI prowess—specifically the Pixel Visual Core, a co-processor that Google developed with Intel. Machine learning algorithms can ‘learn’ to classify photos into various scenes, such as landscape, portrait, night, or daytime. This classification is very accurate, and it gives the camera exactly what it needs to correctly set the colour information, white balance, exposure, and sharpening, all automatically when you take the photo.
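To make the idea concrete, here is a minimal sketch (in Python) of how a scene classifier’s output might drive camera settings. This is not Google’s actual pipeline; the scene labels, confidence threshold, and settings table are all hypothetical, standing in for whatever the on-device model and tuning tables really contain.

```python
# Hypothetical mapping from a detected scene to camera settings.
# In a real phone these would come from carefully tuned profiles.
SCENE_SETTINGS = {
    "landscape": {"white_balance": "daylight", "exposure_ev": 0.0, "sharpen": 0.6},
    "portrait":  {"white_balance": "auto",     "exposure_ev": 0.3, "sharpen": 0.3},
    "night":     {"white_balance": "tungsten", "exposure_ev": 1.0, "sharpen": 0.2},
}

def pick_settings(scene_scores, default="landscape", threshold=0.5):
    """Pick settings for the highest-scoring scene, falling back to a
    default when the classifier is not confident enough."""
    scene, score = max(scene_scores.items(), key=lambda kv: kv[1])
    if score < threshold:
        scene = default
    return scene, SCENE_SETTINGS[scene]

# The scores below stand in for the output of an image classifier.
scene, settings = pick_settings({"landscape": 0.10, "portrait": 0.05, "night": 0.85})
print(scene, settings)  # → night {'white_balance': 'tungsten', 'exposure_ev': 1.0, 'sharpen': 0.2}
```

The point is the division of labour: the learned model only answers “what kind of scene is this?”, and conventional image-processing code then applies the matching white balance, exposure, and sharpening, exactly as described above.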
Credit: Google News