Google Just Open Sourced JPEG Encoder That Reduces File Sizes By 35%

Google, one of the largest online services companies, has announced and open-sourced a new algorithm that reduces the size of JPEG images by about 35% while preserving their visual quality.


Google runs some of the most widely used services on the web, and for a variety of reasons it tries to optimize every part of its stack so that those services perform well and offer the best possible user experience.

Today, Google announced a new algorithm that reduces the size of JPEG images by about 35% without a perceptible loss in quality.

In 2010 Google presented the WebP format, which can reduce image sizes by roughly 10%. For the past few years, the company has been developing a new algorithm, called Guetzli, that can reduce the size of JPEG images by 35%.

According to Google, this new method is similar in approach to its Zopfli algorithm, which is already used to reduce the size of PNG files.

With this new method, web developers can build pages that load faster, since the client needs to transfer less data (much of which is typically JPEG images).

According to Google, the visual quality of a JPEG image is directly tied to its multi-step compression process. It is in the "quantization" phase, where most of the compression happens, that the image also loses quality.
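The quantization step can be sketched as follows. This is an illustrative example of generic JPEG quantization, not Guetzli's actual code: each 8×8 block of DCT coefficients is divided element-wise by a quantization table and rounded, and that rounding is where detail is discarded. The table shown is the standard luminance quantization table from Annex K of the JPEG specification.

```python
# Illustrative sketch of the JPEG "quantization" step -- not Guetzli's
# actual code. An 8x8 block of DCT coefficients is divided element-wise
# by a quantization table and rounded to the nearest integer; the
# rounding is where information (and thus quality) is lost.

# Standard luminance quantization table from Annex K of the JPEG spec.
QUANT = [
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
]

def quantize(dct_block):
    """Divide each DCT coefficient by its table entry and round."""
    return [[round(dct_block[i][j] / QUANT[i][j]) for j in range(8)]
            for i in range(8)]

def dequantize(q_block):
    """Multiply back; the rounding error from quantize() is permanent."""
    return [[q_block[i][j] * QUANT[i][j] for j in range(8)]
            for i in range(8)]

if __name__ == "__main__":
    # A made-up block of DCT coefficients, just for demonstration.
    block = [[100 - 10 * (i + j) for j in range(8)] for i in range(8)]
    restored = dequantize(quantize(block))
    # The restored block differs from the original: the compression is lossy.
    print(block[0][0], restored[0][0])  # prints: 100 96
```

Larger table entries discard more precision, which is why high-frequency coefficients (bottom-right of the table) are quantized most aggressively: the eye is least sensitive to detail there.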

Google says the new algorithm strikes a balance between two models of visual perception, which allows it to compress images further while maintaining their perceived quality.


The following three images (the left one uncompressed, the middle one using traditional JPEG compression, and the right one compressed with the Guetzli algorithm) show that the visual losses with Guetzli are insignificant.

However, not everything is perfect: because the algorithm must evaluate and compare two visual models, its compression times are longer than those of other encoders. Anyone who wants to can already try the new algorithm, since it is available on GitHub.
