Google's newly released Pixel 2 phone has excellent video capture capabilities. Its biggest highlight is the combination of optical image stabilization (OIS) and electronic image stabilization (EIS): whether you are shooting while walking or while riding, the Pixel 2 can still deliver remarkably steady footage. So how did Google actually merge these two technologies? The company has now published an official explanation.
Unsurprisingly, Google, which has long embraced AI, has turned to machine learning once again. In a blog post, Google explained that during video capture the system collects motion data from the handset's gyroscope and from the OIS module, keeping it synchronized with the image stream. Machine learning is then used to predict how the phone will move, allowing the system to counter common video artifacts such as hand shake and rolling-shutter distortion. When the user moves the phone quickly, the algorithm can even introduce virtual motion to mask changes in sharpness.
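The core idea behind electronic stabilization of this kind can be illustrated with a toy sketch: integrate gyroscope readings into the camera's real motion path, low-pass filter that path to get a smooth "virtual camera" path, and warp each frame by the difference between the two. This is a minimal illustration, not Google's actual pipeline; the function names and the simple moving-average filter are assumptions made for clarity.

```python
def integrate_gyro(angular_rates, dt):
    """Integrate per-frame angular rates (rad/s) into a camera angle path."""
    path, angle = [], 0.0
    for rate in angular_rates:
        angle += rate * dt
        path.append(angle)
    return path

def smooth_path(path, window=5):
    """Moving-average low-pass filter: the smooth 'virtual camera' path."""
    half = window // 2
    smoothed = []
    for i in range(len(path)):
        lo, hi = max(0, i - half), min(len(path), i + half + 1)
        smoothed.append(sum(path[lo:hi]) / (hi - lo))
    return smoothed

def corrections(path, smoothed):
    """Per-frame rotation to apply so output frames follow the smooth path."""
    return [s - p for p, s in zip(path, smoothed)]

# A shaky pan: steady rotation plus alternating hand jitter, at 30 fps.
rates = [0.5 + (0.3 if i % 2 else -0.3) for i in range(30)]
real = integrate_gyro(rates, dt=1 / 30)
virtual = smooth_path(real)
warp = corrections(real, virtual)
```

In a real system the per-frame `warp` values would drive an image-warping step, and (as the post describes) the measured motion also has to be time-synchronized with each frame's exposure, which this sketch glosses over.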
Of course, this does not mean the Pixel 2's video capture is flawless. For example, some users report that the stabilization occasionally crops part of the frame and blurs low-light areas. In any case, it reflects real progress in Google's exploration of AI technology, progress that users can feel at their fingertips.