Apple will teach the iPhone to take super-sharp photos with Deep Fusion

The neural image-processing technology for the iPhone 11 and 11 Pro will be included in the next developer beta, The Verge reports.

What is it?

Deep Fusion should be one of the main software innovations of the iPhone 11 lineup. Apple’s senior vice president Phil Schiller called the technology “computational photography mad science.”

With Deep Fusion, your photos should come out sharper and more detailed. The capture process works like this (a rough code sketch follows the list):

  1. By the time you press the shutter button, your iPhone has already captured four short-exposure frames in a background buffer;
  2. It also manages to capture four standard-exposure frames;
  3. When you press the shutter button, one long-exposure frame is taken;
  4. The standard frames from step 2 are merged with the long-exposure frame into a single “synthetic long” image;
  5. In parallel, the iPhone picks the short-exposure frame with the most detail and combines it with the “synthetic long” image;
  6. The selected images are processed and their noise is suppressed, which brings out more detail. The Neural Engine coprocessor takes detail from one image and light and color information from the other;
  7. Finally, the selected images are combined into one photo, with the system deciding in what order to blend them.
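To make the sequence above more concrete, here is a purely illustrative Swift sketch of the pipeline. Apple’s actual implementation is private and runs on the Neural Engine; the `Frame` type and the `merge` and `denoise` helpers below are hypothetical stand-ins, not real iOS APIs.

```swift
// Purely illustrative: a toy version of the seven steps above.
// `Frame`, `merge`, and `denoise` are hypothetical, not iOS APIs.

struct Frame {
    var pixels: [Float]      // simplified single-channel image data
    var exposure: Double     // exposure time in seconds
    var detailScore: Double  // hypothetical sharpness metric
}

func deepFusion(shortFrames: [Frame],        // step 1: four short exposures
                normalFrames: [Frame],       // step 2: four standard frames
                longFrame: Frame) -> Frame { // step 3: one long exposure
    // Step 4: merge the standard frames with the long exposure
    // into a single "synthetic long" image.
    let syntheticLong = merge(normalFrames + [longFrame])

    // Step 5: pick the short-exposure frame with the most detail.
    // (Assumes shortFrames is non-empty.)
    let sharpest = shortFrames.max { $0.detailScore < $1.detailScore }!

    // Step 6: denoise both candidates; detail comes from the sharp
    // frame, light and color information from the synthetic long one.
    let detailLayer = denoise(sharpest)
    let toneLayer = denoise(syntheticLong)

    // Step 7: blend the two into the final photo.
    return merge([detailLayer, toneLayer])
}

// Naive per-pixel average, standing in for Apple's weighted blending.
func merge(_ frames: [Frame]) -> Frame {
    var result = frames[0]
    for i in result.pixels.indices {
        result.pixels[i] = frames.reduce(Float(0)) { $0 + $1.pixels[i] } / Float(frames.count)
    }
    return result
}

// Placeholder: the real pipeline runs ML denoising on the Neural Engine.
func denoise(_ frame: Frame) -> Frame { frame }
```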

How is Deep Fusion different from Smart HDR?

You might indeed think that Deep Fusion works by an algorithm similar to Smart HDR’s, since in both cases several images are combined into one. There are differences, however.

With Smart HDR, the camera takes several shots at once and merges them into one. These shots are deliberately over- and underexposed, which is how Apple achieves detail in backlit mobile photography.

Deep Fusion is used in a different case. The point here is not detail in the shadows but detail overall. And the photograph is built not from nine shots, as with Smart HDR, but from fewer.

Deep Fusion will be applied in low and medium light; Smart HDR takes over when the light is very bright.
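For a feel of what that routing could look like, here is a hypothetical Swift sketch that picks a processing pipeline from a measured scene brightness. Apple has not published its thresholds; the lux values below are invented for illustration.

```swift
// Hypothetical: route a scene to a pipeline by measured brightness.
// The thresholds are invented; Apple's real logic is undocumented.

enum CapturePipeline {
    case nightMode   // very dark scenes
    case deepFusion  // low to medium light
    case smartHDR    // bright light
}

func pipeline(forSceneLux lux: Double) -> CapturePipeline {
    switch lux {
    case ..<10:    return .nightMode
    case ..<1000:  return .deepFusion
    default:       return .smartHDR
    }
}
```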

At the same time, Deep Fusion will work only with the wide-angle and telephoto lenses. The ultra-wide lens, as with Night mode, is left out. So if you, like us, were hoping for better ultra-wide photos, that will have to wait until 2020, when Apple launches new iPhones.

How to activate Deep Fusion?

You can’t. The iPhone decides for you: there is no button, and you won’t even find any mention of Deep Fusion in a photo’s EXIF data. Conspiracy theorists might suspect the technology doesn’t exist at all.
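If you want to check for yourself, here is a minimal Swift sketch that dumps a photo’s EXIF fields using Apple’s ImageIO framework; the file path is a placeholder. You will see exposure time, ISO, and so on, but no Deep Fusion flag.

```swift
import Foundation
import ImageIO

// Minimal sketch: read a photo's EXIF metadata with ImageIO.
// The path below is a placeholder, not a real file.
let url = URL(fileURLWithPath: "/path/to/photo.jpg")

if let source = CGImageSourceCreateWithURL(url as CFURL, nil),
   let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
   let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any] {
    // Prints exposure time, ISO, and other fields, but nothing
    // indicating whether Deep Fusion was applied.
    for (key, value) in exif {
        print("\(key): \(value)")
    }
}
```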

When will the technology be available?

According to The Verge, in the next developer beta. The exact build isn’t named, but some sources claim it is iOS 13.2 Developer Beta 1. If so, the official release will come when Apple ships iOS 13.2.

The developer beta was due out on October 1, but it was postponed indefinitely without explanation.

Which devices will have this?

As with Night mode, Deep Fusion will only be available on the iPhone 11, 11 Pro, and 11 Pro Max. The reason is the A13 Bionic processor and its Neural Engine coprocessor: apparently, only it can perform all the necessary calculations within a second.
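For illustration, here is a hedged Swift sketch that gates a feature to the iPhone 11 family by its hardware model identifier. The identifiers are the known values for these three models; Apple’s own gating presumably keys off the A13’s capabilities rather than model strings.

```swift
import Foundation

// Returns the hardware model identifier, e.g. "iPhone12,1".
func deviceModelIdentifier() -> String {
    var systemInfo = utsname()
    uname(&systemInfo)
    return withUnsafeBytes(of: systemInfo.machine) { buffer in
        String(cString: buffer.baseAddress!.assumingMemoryBound(to: CChar.self))
    }
}

// Known identifiers:
// iPhone12,1 = iPhone 11, iPhone12,3 = 11 Pro, iPhone12,5 = 11 Pro Max.
let deepFusionDevices = ["iPhone12,1", "iPhone12,3", "iPhone12,5"]
let supportsDeepFusion = deepFusionDevices.contains(deviceModelIdentifier())
print(supportsDeepFusion ? "Deep Fusion capable" : "Not supported")
```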
