Deep Fusion, the neural image-processing technology for the iPhone 11 and 11 Pro, will be included in the next developer beta. This was reported by The Verge.
What is it?
Deep Fusion is meant to be one of the main software innovations of the iPhone 11 line. Apple’s senior vice president Phil Schiller called the technology “computational photography mad science.”
With Deep Fusion, your photos should come out sharper and more detailed. The capture process works like this (a code sketch follows the list):
- Before you even press the shutter, the iPhone is already buffering four very short-exposure frames in a background burst;
- It also manages to capture four frames at standard exposure;
- At the moment you press the shutter button, one long-exposure frame is taken;
- The standard frames from step 2 are merged with the long-exposure frame into a “synthetic long”;
- In parallel, the iPhone picks the most detailed of the short-exposure frames and combines it with the “synthetic long”;
- The selected images are processed and their noise is suppressed, which helps preserve fine detail. The Neural Engine coprocessor takes detail from one image and light and color information from the other;
- Finally, the selected images are combined into one, and the system itself decides in which order to do it.
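To make the pipeline concrete, here is a minimal sketch of the steps above in Swift. Everything in it (the Frame type, the averaging merge, the detail metric, the 50/50 blend) is a hypothetical stand-in for work Apple’s actual implementation performs on the Neural Engine; it only mirrors the order of operations described in the list.

```swift
struct Frame {
    var pixels: [Float]   // simplified: one luminance value per pixel
    let exposure: Double  // exposure time, in seconds
}

// Average several frames pixel by pixel (a stand-in for the real merge).
func merge(_ frames: [Frame]) -> Frame {
    var out = frames[0]
    for i in out.pixels.indices {
        out.pixels[i] = frames.map { $0.pixels[i] }.reduce(0, +) / Float(frames.count)
    }
    return out
}

// Crude detail metric: the sum of differences between neighboring pixels.
func detailScore(_ frame: Frame) -> Float {
    zip(frame.pixels, frame.pixels.dropFirst()).map { abs($0.0 - $0.1) }.reduce(0, +)
}

func deepFusion(short: [Frame], standard: [Frame], long: Frame) -> Frame {
    // Step 4: merge the standard frames with the long exposure
    // into a "synthetic long".
    let syntheticLong = merge(standard + [long])

    // Step 5: pick the short-exposure frame with the most detail
    // (the array is assumed non-empty).
    let sharpest = short.max { detailScore($0) < detailScore($1) }!

    // Steps 6 and 7: take detail from the sharp frame and light/color
    // from the synthetic long, hugely simplified here as a 50/50 blend.
    var result = sharpest
    for i in result.pixels.indices {
        result.pixels[i] = 0.5 * sharpest.pixels[i] + 0.5 * syntheticLong.pixels[i]
    }
    return result
}
```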
How is Deep Fusion different from Smart HDR?
You might indeed think that Deep Fusion works on an algorithm similar to Smart HDR’s, since in both cases several images are combined into one. But there are differences.
Smart HDR also captures several images at once and merges them into one, but those shots are deliberately over- and underexposed. That is how Apple gets detail out of backlit scenes in mobile photography.
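Smart HDR itself is a system feature with no public switch, but the underlying idea of bracketing over- and underexposed frames for a later merge can be illustrated with AVFoundation’s real bracketed-capture API. A sketch only; the bias values are our own choice:

```swift
import AVFoundation

// Capture settings for a three-frame exposure bracket: one underexposed,
// one normal, and one overexposed frame. The merge itself (which Smart HDR
// performs internally) is left out; this only shows the bracketed capture.
func makeBracketSettings() -> AVCapturePhotoBracketSettings {
    let biases: [Float] = [-2.0, 0.0, 2.0]  // arbitrary EV biases
    let bracketed = biases.map {
        AVCaptureAutoExposureBracketedStillImageSettings
            .autoExposureSettings(exposureTargetBias: $0)
    }
    return AVCapturePhotoBracketSettings(
        rawPixelFormatType: 0,  // 0 = no RAW, processed frames only
        processedFormat: [AVVideoCodecKey: AVVideoCodecType.hevc],
        bracketedSettings: bracketed
    )
}
```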
Deep Fusion serves a different purpose. The point is not detail in the shadows specifically, but detail overall. And the final photograph is assembled not from nine shots, as with Smart HDR, but from fewer.
Deep Fusion will kick in under low and medium light. Smart HDR takes over when the light is very bright.
At the same time, Deep Fusion will work only with the wide-angle and telephoto lenses. The ultra-wide camera, as with night mode, misses out. So if you, like us, were hoping for better photos from it, that will happen only in 2020, when Apple launches new iPhones.
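There is no public API for choosing between these modes; the decision logic below is purely our reconstruction of the rules described above, with an invented brightness threshold:

```swift
enum Lens { case ultraWide, wide, telephoto }
enum CaptureMode { case deepFusion, smartHDR, standard }

// Hypothetical selection rule matching the article: Deep Fusion in low
// and medium light on the wide and telephoto lenses, Smart HDR in bright
// light, nothing special on the ultra-wide. The 1,000-lux threshold is
// invented; Apple does not publish one.
func captureMode(lens: Lens, ambientLux: Double) -> CaptureMode {
    guard lens != .ultraWide else { return .standard }  // ultra-wide misses out
    return ambientLux > 1_000 ? .smartHDR : .deepFusion
}
```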
How to activate Deep Fusion?
You can’t. The iPhone decides for you; there is no button. You won’t even find any mention of Deep Fusion in a photo’s EXIF data. Conspiracy theorists might suspect the technology doesn’t exist at all.
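You can check this yourself with ImageIO: dump a photo’s EXIF dictionary and look for a Deep Fusion marker; there is none. The file path in the usage note is, of course, a placeholder.

```swift
import Foundation
import ImageIO

// Print a photo's EXIF dictionary. Deep Fusion leaves no flag here;
// only the usual fields such as exposure time and ISO appear.
func printEXIF(of url: URL) {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
              as? [CFString: Any],
          let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any]
    else {
        print("No EXIF data found")
        return
    }
    for (key, value) in exif {
        print("\(key): \(value)")
    }
}

// Hypothetical usage:
// printEXIF(of: URL(fileURLWithPath: "/path/to/photo.heic"))
```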
When will the technology be available?
According to The Verge, in the next developer beta. Its exact version number has not been named, but some sources claim it is iOS 13.2 Developer Beta 1. That means the official release will come when Apple ships iOS 13.2.
The developer beta was due out on October 1, but it was postponed indefinitely without explanation.
Which devices will have this?
As with night mode, Deep Fusion will be available only on the iPhone 11, 11 Pro, and 11 Pro Max. The reason is the A13 Bionic processor and its Neural Engine coprocessor: apparently, only it can carry out all the necessary calculations within a second.