Digital photography is a mature field, and the core technologies now advance in small steps rather than revolutions. Even the developments that are revolutionary often bring little practical benefit to most photographers. The Sony A9 III caused quite a stir with its global shutter, but the excitement quickly died down because only a few photographers actually benefit from it.
So how can camera manufacturers make their new cameras attractive enough to win new buyers? One possibility is computational functions that help photographers achieve their results faster. OM Digital Solutions started down this path some time ago with digital ND filters that eliminate the need for physical ones. The latest highlight in the OM-1 Mark II is a GND filter, a graduated filter that allows, for example, a bright sky to be darkened relative to the foreground. So what else could there be to encourage people to buy new cameras?
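To make the idea concrete, here is a minimal sketch of how a graduated ND filter can be expressed in software: a smooth gain mask that darkens the upper part of the frame by a chosen number of stops. This is only an illustration of the principle, not OM System's actual implementation; the function name and parameters are my own.

```python
import numpy as np

def apply_gnd(image: np.ndarray, stops: float = 2.0,
              transition: tuple = (0.25, 0.55)) -> np.ndarray:
    """Darken the top of the frame like a soft-edge graduated ND filter.

    image      -- float array in [0, 1], shape (height, width, channels)
    stops      -- maximum darkening at the top edge, in exposure stops
    transition -- start/end of the blend zone as fractions of image height
    """
    height = image.shape[0]
    rows = np.linspace(0.0, 1.0, height)

    # 0 at the top (full filter strength), 1 below the transition zone (no effect)
    blend = np.clip((rows - transition[0]) / (transition[1] - transition[0]), 0.0, 1.0)

    # Convert stops to a linear gain: full effect = 2 ** -stops, no effect = 1.0
    gain = 2.0 ** (-stops * (1.0 - blend))

    return np.clip(image * gain[:, None, None], 0.0, 1.0)
```

In a camera, something like this would presumably run on the linear sensor data before the tone curve is applied, but the underlying math is the same kind of per-row gain.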
What computational functions can we expect in the future?
A digital polarizing filter
OM Digital Solutions shows the direction in which this topic can develop: filters can be simulated digitally. Even if the first variants cannot replace physical filters 100 percent, this will undoubtedly improve. Alongside ND and graduated filters, the polarizing filter is one of the most essential filters in photography. So why not a digital polarizing filter that can reduce reflections in window panes or on water surfaces? Some will say that is not possible. However, with the advent of artificial intelligence in digital image processing, many things we still consider impossible today will undoubtedly become feasible.
Addendum: Adobe has introduced a technology that makes it possible to remove window reflections from images. Although this only works in certain situations, it is a first step towards a digital polarizing filter.
Combination of different computational functions
Currently, most of the computational functions in cameras can only be used independently. For example, it is not possible to combine the digital ND filter with Live Composite in the OM-1 or OM-1 Mark II, even though this would certainly be helpful in many situations. You are also not able to combine the digital ND filter with the digital GND filter, even though it is quite common to combine a physical ND filter with a physical GND filter. Allowing these combinations would therefore be a logical next step.
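Conceptually, combining the two is just a matter of chaining the operations. The rough sketch below treats the digital ND filter as an average over a burst of short exposures, which is how such features are commonly described, and then applies the same kind of graduated gain as in the earlier sketch. Again, the names and numbers are my own illustration, not the camera's firmware.

```python
import numpy as np

def simulate_nd_long_exposure(frames: list[np.ndarray]) -> np.ndarray:
    """Approximate a long exposure with an ND filter by averaging many short frames."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def gnd_gain(height: int, stops: float = 2.0,
             transition: tuple = (0.25, 0.55)) -> np.ndarray:
    """Per-row gain of a soft-edge graduated ND filter (full strength at the top)."""
    rows = np.linspace(0.0, 1.0, height)
    blend = np.clip((rows - transition[0]) / (transition[1] - transition[0]), 0.0, 1.0)
    return 2.0 ** (-stops * (1.0 - blend))

def nd_plus_gnd(frames: list[np.ndarray], stops: float = 2.0) -> np.ndarray:
    """Chain both effects: average the burst first, then darken the sky."""
    long_exposure = simulate_nd_long_exposure(frames)
    gain = gnd_gain(long_exposure.shape[0], stops=stops)
    return np.clip(long_exposure * gain[:, None, None], 0.0, 1.0)
```

Since both steps are plain image operations, there is no fundamental reason they could not run back to back in the camera; it is mainly a question of firmware design and processing budget.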
Computational functions for video
So far, computational functions are available for still images only. This is understandable: in video, you must process not just one image but at least 24 frames every second, which increases the demand on resources dramatically. However, the needs in video are often quite similar to those in photography. A digital ND filter, for example, is even more important for video than for stills.
Simulation of bigger sensors
Sensor size is a widely discussed topic in photography. Even though I think sensor size is not that important, I can imagine that manufacturers of cameras with smaller sensors are working on this kind of functionality. In software, you can already simulate background blur based on lens parameters. Why should this not also work in cameras? In addition, artificial intelligence makes upscaling so easy that any photographer can increase resolution with little effort. Combining the two could lead to a function that simulates cameras with bigger sensors.
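To show why this is plausible, here is a small back-of-the-envelope sketch of the arithmetic involved (my own illustration, not any manufacturer's method). The thin-lens blur-disc formula tells you how much more background blur a full-frame "look" has compared with a Micro Four Thirds capture at the same framing; a camera function would have to synthesize roughly that difference, for example from a depth map.

```python
def background_blur_fraction(focal_mm: float, f_number: float,
                             subject_m: float, sensor_width_mm: float) -> float:
    """Blur-disc diameter of a background at infinity, as a fraction of frame width.

    Uses the thin-lens approximation b = f^2 / (N * (s - f)) for the blur disc
    on the sensor, then normalizes by sensor width so different formats compare.
    """
    subject_mm = subject_m * 1000.0
    blur_disc_mm = focal_mm ** 2 / (f_number * (subject_mm - focal_mm))
    return blur_disc_mm / sensor_width_mm

# Example: 25 mm f/1.8 on Micro Four Thirds vs. the same framing on full frame
# (50 mm, same f-number, same subject distance of 2 m).
crop_factor = 2.0
mft = background_blur_fraction(25.0, 1.8, subject_m=2.0, sensor_width_mm=17.3)
ff = background_blur_fraction(25.0 * crop_factor, 1.8, subject_m=2.0, sensor_width_mm=36.0)

# The full-frame image shows roughly crop_factor times more background blur;
# that difference is what a "bigger sensor simulation" would have to add synthetically.
print(f"MFT blur: {mft:.4f} of frame width, FF blur: {ff:.4f}, ratio: {ff / mft:.2f}")
```

The ratio comes out close to the crop factor, which is the familiar equivalence rule: to mimic a larger sensor, the camera would need to add the missing blur and, if desired, upscale the result.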
Conclusion
Some ideas I share here may seem futuristic, but I don’t think we are too far away from this type of function. On the one hand, camera manufacturers must find ways to get customers to buy new cameras. On the other hand, technology is making huge progress in artificial intelligence, so it’s only a matter of time before this can also be realized in a camera. What functions can you imagine? Leave me a comment. I look forward to discussing this with you.