Google Clips AI-Based Camera Was Trained With the Help of Pro Photographers


Last October, Google unveiled an automatic camera called Google Clips. The camera is designed to hold off on shooting until it recognises faces or frames that make for a good image, using on-device machine intelligence to capture candid moments of familiar people and pets. Over the weekend, Google began selling the camera, priced at $249 (roughly Rs. 16,200), and it is already listed as out of stock on Google’s product store.
How does the Google Clips camera understand what makes a photograph beautiful and memorable? In a blog post, Josh Lovejoy, a UX designer at Google, explained the process his team used to combine a “human-centred approach” with an “AI-powered product”. Rather than firing off dozens of near-identical shots of the same subject, the camera is meant to find one or two good ones. With human-centred machine learning, it learns to select photos that are meaningful to its users.
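As an illustration only, that selection policy might look something like the Python sketch below, which keeps at most a couple of high-scoring, non-duplicate frames. The `score_frame` and `embed` callables stand in for learned on-device models and are assumptions made for this example; none of this reflects the actual Clips internals.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_clips(frames, score_frame, embed, min_score=0.8, max_sim=0.9, k=2):
    """Return up to k high-scoring frames that are not near-duplicates.

    score_frame(frame) -> float in [0, 1], a learned "goodness" score.
    embed(frame)       -> np.ndarray, a visual embedding of the frame.
    Both are hypothetical stand-ins for on-device models.
    """
    kept, kept_vecs = [], []
    # Walk candidates from best score to worst.
    for frame in sorted(frames, key=score_frame, reverse=True):
        if len(kept) == k or score_frame(frame) < min_score:
            break  # enough shots kept, or the rest score too low
        vec = embed(frame)
        # Keep a frame only if it is not too similar to one already kept.
        if all(cosine(vec, v) < max_sim for v in kept_vecs):
            kept.append(frame)
            kept_vecs.append(vec)
    return kept
```

The thresholds here (`min_score`, `max_sim`) are arbitrary placeholders; the point is only that "find one or two good ones" decomposes into scoring frames and filtering near-duplicates.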
To feed the camera’s algorithms examples of what the best images look like, Google called in professional photographers: it hired a documentary filmmaker, a photojournalist, and a fine arts photographer to gather visual data for training the neural network that powers Clips. Lovejoy wrote, “Together, we began gathering footage from people on the team and trying to answer the question, ‘What makes a memorable moment?’”
Notably, Google admits that training a camera like Clips can never be bug-free, regardless of how much data the device is given: it may recognise a well-framed, well-focussed shot yet still miss an important event. In the blog, Lovejoy argues this is a feature: “But it’s precisely this fuzziness that makes ML so useful! It’s what helps us craft dramatically more robust and dynamic ‘if’ statements, where we can design something to the effect of ‘when something looks sort of like x, do y’.”
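Lovejoy’s “fuzzy if” can be read as a confidence threshold wrapped around a learned classifier. A minimal sketch, assuming a hypothetical `classifier` callable that returns a probability; neither the function nor the 0.7 threshold is how Clips actually works:

```python
def maybe_capture(frame, classifier, threshold=0.7):
    """'When something looks sort of like x, do y', expressed as code.

    classifier(frame) is assumed to return a probability in [0, 1]
    that the frame resembles a memorable moment; both the model and
    the threshold are placeholders, not Clips internals.
    """
    confidence = classifier(frame)
    if confidence >= threshold:  # the "looks sort of like x" test
        return frame             # "do y": keep the shot
    return None                  # otherwise let the moment pass
```

The condition is learned rather than hand-written; only the action and the tolerance are fixed by the designer, which is what makes the resulting “if” statement more robust than an exact rule.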
The blog essentially describes how the company’s UX engineers have applied a new tool to embed human-centred design into projects like the Clips camera. In an earlier post on Medium, Lovejoy explained the seven core principles behind human-centred machine learning.
It is also interesting to note that Elon Musk, chief executive of Tesla, SpaceX, and SolarCity, had taken a jibe at the Google Clips camera back in October, saying, “This doesn’t even seem innocent.”