Improved the motion blur module by implementing a modified version of Zheng et al.’s method.

I found this method through Navarro et al.’s survey of motion blur techniques. Navarro et al. divide motion blurring methods into several categories. The ones of interest to us are those that can be applied as a post-processing step, as opposed to, for example, those that act during the rendering process. Candidates include Potmesil and Chakravarty, who apply a series of image-degrading transforms represented by point-spread functions; Brostow and Essa, who developed a method for stop-motion films that segments foreground from background using hierarchical optical flow and uses no scene information at all; and Zheng et al., who combine motion information from the ray tracer with optical flow. What makes Zheng et al. appealing is that their method degrades gracefully when no ray tracer information is available, falling back to a simple model.

The modified Zheng model can be described as follows.

  1. Let \(C\) be the set of all image coordinates \((x,y)\) of the source image \(I\).
  2. Initialize the blurred image \(I_B\) as a copy of \(I\).
  3. Compute the motion vectors \(F(x,y) = (u_{xy}, v_{xy})\) for each pixel \(I(x,y)\) in the source image. For this, I have used Farnebäck’s method as implemented in OpenCV 3.3.1.
  4. Let \(C'\) be the set of image coordinates \((x,y)\) where the amount of motion \(\|F(x,y)\|\) is less than some threshold \(\theta\): \(C' = \{(x,y) \in C : \|F(x,y)\| < \theta\}\). In my current setup, \(\theta = 3\). This reduces computation time (fewer pixels have to be blurred) at the cost of some faithfulness of the motion blur simulation. The loss is acceptable because it only affects pixels with very slight motion, which most probably would not pose a challenge to the feature detector we use anyway.
  5. Find the maximum blur distance \(D = \max_{(x,y)} \|F(x,y)\|\), rounded up to an integer. This is the number of samples we take along each motion vector.
  6. Fill in the blurred image \(I_B\): for each coordinate \((x,y) \in C\):
    1. If \((x,y) \in C'\): \(I_B(x,y) = I(x,y)\)
    2. Else: \(I_B(x,y) = \mbox{avg}_D(I(x,y), I(x+u_{xy}, y+v_{xy}))\): the new blurred pixel \(I_B(x,y)\) is the average of \(D\) samples of \(I\) taken along the line segment from \((x,y)\) to \((x+u_{xy}, y+v_{xy})\).
  7. Return \(I_B\). (A code sketch of the whole procedure is given after this list.)
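
To make the steps above concrete, here is a minimal Python/OpenCV sketch of the whole procedure, assuming 8-bit BGR frames. The function name `motion_blur`, the frame variables `prev_frame` and `frame`, and the Farnebäck parameter values are placeholders of my own choosing rather than part of Zheng et al.’s method; the OpenCV call itself (`cv2.calcOpticalFlowFarneback`) is the one referred to in step 3, and `theta=3.0` matches the current setup from step 4.

    import cv2
    import numpy as np

    def motion_blur(prev_frame, frame, theta=3.0):
        """Sketch of the modified Zheng model: blur `frame` along the optical
        flow computed from `prev_frame` to `frame`. Parameter values are
        illustrative, not tuned."""
        # Step 3: dense optical flow F(x, y) = (u_xy, v_xy) via Farnebäck's method.
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

        magnitude = np.linalg.norm(flow, axis=2)   # ||F(x, y)|| per pixel

        # Step 5: D, the number of samples taken along each motion vector.
        D = max(int(np.ceil(magnitude.max())), 1)

        # Step 2: the blurred image I_B starts out as a copy of I.
        blurred = frame.astype(np.float64)
        h, w = magnitude.shape

        # Steps 4 and 6: pixels in C' (motion below theta) are skipped, i.e. kept
        # as they are; every other pixel becomes the average of D samples of I
        # taken along the segment from (x, y) to (x + u_xy, y + v_xy).
        ys, xs = np.nonzero(magnitude >= theta)
        for y, x in zip(ys, xs):
            u, v = flow[y, x]
            acc = np.zeros(3)
            for i in range(D):
                t = i / max(D - 1, 1)
                sx = int(round(min(max(x + t * u, 0), w - 1)))   # clamp to image
                sy = int(round(min(max(y + t * v, 0), h - 1)))   # borders
                acc += frame[sy, sx]
            blurred[y, x] = acc / D

        # Step 7: return I_B.
        return blurred.astype(np.uint8)

The per-pixel loop is written for clarity so that it mirrors the numbered steps; in practice the sampling could be vectorised.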

References

  • Michael Potmesil and Indranil Chakravarty. Modeling motion blur in computer-generated images. In Proceedings of the 10th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '83, pages 389–399, New York, NY, USA, 1983. ACM. [ bib | DOI ]
    
    @inproceedings{potmesil1983modeling,
      author = {Potmesil, Michael and Chakravarty, Indranil},
      title = {Modeling Motion Blur in Computer-generated Images},
      booktitle = {Proceedings of the 10th Annual Conference on Computer Graphics and Interactive Techniques},
      series = {SIGGRAPH '83},
      year = {1983},
      isbn = {0-89791-109-1},
      location = {Detroit, Michigan, USA},
      pages = {389--399},
      numpages = {11},
      doi = {10.1145/800059.801169},
      acmid = {801169},
      publisher = {ACM},
      address = {New York, NY, USA},
      keywords = {Camera model, Digital optics, Image restoration, Motion blur, Point-spread function}
    }
    
    
  • Gabriel J. Brostow and Irfan Essa. Image-based motion blur for stop motion animation. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '01, pages 561–566, New York, NY, USA, 2001. ACM. [ bib | DOI ]
    
    @inproceedings{brostow2001image,
      author = {Brostow, Gabriel J. and Essa, Irfan},
      title = {Image-based Motion Blur for Stop Motion Animation},
      booktitle = {Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques},
      series = {SIGGRAPH '01},
      year = {2001},
      isbn = {1-58113-374-X},
      pages = {561--566},
      numpages = {6},
      doi = {10.1145/383259.383325},
      acmid = {383325},
      publisher = {ACM},
      address = {New York, NY, USA},
      keywords = {animation, computer vision, image-based rendering, motion blur, stop motion animation, temporal antialiasing, video post-processing}
    }
    
    
  • Gunnar Farnebäck. Two-frame motion estimation based on polynomial expansion. In Scandinavian Conference on Image Analysis, pages 363–370. Springer, 2003. [ bib | DOI ]
    
    @inproceedings{farneback2003twoframe,
      title = {Two-frame motion estimation based on polynomial expansion},
      author = {Farneb{\"a}ck, Gunnar},
      booktitle = {Scandinavian Conference on Image Analysis},
      pages = {363--370},
      year = {2003},
      organization = {Springer},
      doi = {10.1007/3-540-45103-X_50}
    }
    
    
  • Yuan Zheng, Harald Köstler, Nils Thürey, and Ulrich Rüde. Enhanced motion blur calculation with optical flow. In Proceedings of Vision, Modeling and Visualization, pages 253–260, November 2006. [ bib ]
    
    @inproceedings{zheng2006enhanced,
      author = {Zheng, Yuan and Köstler, Harald and Thürey, Nils and Rüde, Ulrich},
      booktitle = {Proceedings of Vision, Modeling and Visualization},
      title = {Enhanced motion blur calculation with optical flow},
      location = {RWTH Aachen, Germany},
      year = {2006},
      month = nov,
      day = {22--24},
      pages = {253--260}
    }