Project from 15-869: Video Texture Synthesis


Goal:


Given a sample video texture sequence, synthesize a new video sequence that is similar in texture to the input but can last arbitrarily longer.


Previous work:


So far, synthesis of dynamic natural phenomena has mostly been possible only through physically-based simulation. While these techniques can generate impressive results of compelling realism, statistical learning of time-varying textures (TVTs), given an example of the desired result, is a good alternative. I have found two methods in this field, described in the following two papers:

1. "Texture Mixing and Texture Movie Synthesis using Statistical Learning", Ziv Bar-Joseph, Ran El-Yaniv, Dani Lischinski and Michael Werman.

2. "Pyramid-based texture analysis/synthesis", David J. Heeger and James R. Bergen.

The first method can be viewed as an extension of De Bonet's approach to multiple input samples and to time-varying textures. In the second paper, Heeger and Bergen applied their texture synthesis technique to the generation of 3D solid textures.

It seems that a good 2D texture synthesis method can be extended to 3D, and video texture synthesis is one such extension. So, in this project, I will try to extend Efros and Leung's non-parametric sampling method to video texture synthesis.


Overview of approach:


  • STEP 1: All the frames of the sample video sequence are stacked, one by one, into an image block I' with three dimensions: x, y and t. We can then consider texture in 3D (x, y and t), and video synthesis becomes a kind of texture synthesis for a 3D image block. Let I be the image block being synthesized.

  • STEP 2: Let p be a pixel in I and let w(p) be a cuboid image block centered at p. Let d(w1, w2) denote some perceptual distance between two such blocks, e.g., the sum of squared differences of pixel values over the region synthesized so far. We can then find the blocks in I' that most closely match w(p) under d.

  • STEP 3: The center pixel values of these closest matching blocks give us a histogram for p, from which a value can be sampled, either uniformly or weighted by d.

By this algorithm, taking an arbitrary cuboid block from the sample image block as a seed, we can synthesize new image blocks similar to the sample block, i.e., new video sequences similar to the input sample video sequence. A minimal sketch of these steps is given below.
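The following Python sketch puts the three steps together, assuming grayscale frames scaled to [0, 1]. The window size WIN, the tolerance EPS, the raster scan order (rather than growing outward from the seed, as the 2D method does) and all names are my own illustrative assumptions, not part of the project description; the dense, unoptimized candidate search makes it a starting point for small samples, not a definitive implementation.

import numpy as np

WIN = 5    # cuboid neighbourhood size along x, y and t (assumed; must be odd)
EPS = 0.1  # accept candidates within (1 + EPS) of the best distance (assumed)
H2 = WIN // 2

def synthesize_volume(sample, out_shape, seed=0):
    """Grow a new (T, H, W) image block I from the 3D sample block I'."""
    rng = np.random.default_rng(seed)
    T, H, W = out_shape

    # Pad so every synthesized pixel has a full WIN^3 neighbourhood w(p).
    out = np.zeros((T + 2 * H2, H + 2 * H2, W + 2 * H2))
    known = np.zeros(out.shape, dtype=bool)

    # STEP 1 is assumed done: `sample` is the frames stacked along t.
    # Enumerate every candidate cuboid in I' (the search space of STEP 2).
    # The dense copy is memory-hungry; keep the sample small.
    wins = np.lib.stride_tricks.sliding_window_view(sample, (WIN,) * 3)
    wins = wins.reshape(-1, WIN, WIN, WIN)
    centers = wins[:, H2, H2, H2]

    # Seed: copy one cuboid taken at random from the sample block.
    t0, y0, x0 = (int(rng.integers(0, n - WIN + 1)) for n in sample.shape)
    out[H2:H2 + WIN, H2:H2 + WIN, H2:H2 + WIN] = \
        sample[t0:t0 + WIN, y0:y0 + WIN, x0:x0 + WIN]
    known[H2:H2 + WIN, H2:H2 + WIN, H2:H2 + WIN] = True

    for t in range(H2, H2 + T):
        for y in range(H2, H2 + H):
            for x in range(H2, H2 + W):
                if known[t, y, x]:
                    continue
                nb = out[t - H2:t + H2 + 1, y - H2:y + H2 + 1, x - H2:x + H2 + 1]
                mk = known[t - H2:t + H2 + 1, y - H2:y + H2 + 1, x - H2:x + H2 + 1]
                # STEP 2: d(w1, w2) as the SSD over already-known pixels.
                d = (((wins - nb) ** 2) * mk).sum(axis=(1, 2, 3))
                # STEP 3: sample uniformly among the near-best matches.
                best = np.flatnonzero(d <= (1.0 + EPS) * d.min())
                out[t, y, x] = centers[rng.choice(best)]
                known[t, y, x] = True

    return out[H2:H2 + T, H2:H2 + H, H2:H2 + W]

Scanning every candidate cuboid for every output pixel is very slow; a practical version would prune the search, for example with an approximate nearest-neighbour structure over the flattened neighbourhood vectors.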

How to test it:


To test this method, I have found some appropriate input data, such as the following video sequence:

sea.avi

The expected output should be new video sequences similar to the input.
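A hypothetical test driver for the sketch above, assuming the imageio package with its ffmpeg plugin is available; the file names, the crop and the output size are illustrative only.

import imageio.v2 as imageio
import numpy as np

# Read the sample sequence and stack its frames into a 3D block (STEP 1).
frames = imageio.mimread("sea.avi")               # list of (H, W, 3) frames
sample = np.stack(frames).mean(axis=-1) / 255.0   # grayscale (t, y, x) block
sample = sample[:20, :40, :40]                    # small crop keeps the dense
                                                  # candidate search tractable

# Synthesize a sequence twice as long as the (cropped) input.
result = synthesize_volume(sample, out_shape=(2 * sample.shape[0], 64, 64))

# Write the synthesized frames back out for visual comparison.
gray = [np.uint8(255 * np.clip(f, 0, 1)) for f in result]
rgb = [np.stack([f] * 3, axis=-1) for f in gray]
imageio.mimsave("sea_synth.avi", rgb, fps=25)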


Jing XIAO

Nov. 15, 1999