Given a video texture sequence, a new video sequence can be synthesized that resembles the input in texture but lasts longer.
So far, synthesis of dynamic natural phenomena has mostly relied on physically-based simulation. Such techniques can generate impressive results of compelling realism, but, given an example of the desired result, statistical learning of time-varying textures (TVTs) is a good alternative. I have found two methods in this field, described in the following two papers:
1. "Texture Mixing and Texture Movie Synthesis using Statistical Learning", Ziv Bar-Joseph, Ran El-Yaniv, Dani Lischinski and Michael Werman.
2. "Pyramid-based texture analysis/synthesis", David J. Heeger and James R. Bergen.
The first method can be viewed as an extension of De Bonet's approach to multiple input samples and to time-varying textures. In the second paper, Heeger and Bergen applied their pyramid-based texture synthesis technique to the generation of 3D solid textures.
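For reference, the core operation in Heeger and Bergen's approach is histogram matching, applied alternately to the image and to each subband of a steerable pyramid. The sketch below shows just that matching step (the pyramid decomposition is omitted); the function name and the array handling are my own assumptions, not code from the paper.

```python
import numpy as np

def match_histogram(values, reference):
    """Remap `values` so their empirical distribution matches `reference`.

    Heeger and Bergen alternate this kind of matching between the image and
    the subbands of a steerable pyramid; only the matching step is shown here.
    """
    order = np.argsort(values, axis=None)        # rank of every pixel
    ref_sorted = np.sort(reference, axis=None)   # target distribution
    # Resample the sorted reference to the number of pixels in `values`.
    idx = np.round(np.linspace(0, ref_sorted.size - 1, values.size)).astype(int)
    matched = np.empty(values.size, dtype=ref_sorted.dtype)
    matched[order] = ref_sorted[idx]             # i-th ranked pixel gets i-th ranked reference value
    return matched.reshape(values.shape)
```

In the paper this step is iterated: a noise image is repeatedly pyramid-decomposed, each subband is matched to the corresponding subband of the example texture, and the pyramid is collapsed again.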
This suggests that a good 2D texture synthesis method can perhaps be extended to 3D, for example to video texture synthesis. So, in this project, I will try to extend Efros's method to video texture synthesis.
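As a rough illustration of what such an extension might look like, the sketch below grows a grayscale output clip voxel by voxel in the spirit of Efros and Leung's non-parametric neighborhood sampling, with the 2D neighborhood replaced by a causal spatio-temporal one. The window radius `half`, tolerance `eps`, and the choice to seed the output with the sample's first frame are my own assumptions for this sketch, not a committed design.

```python
import numpy as np

def extend_video(sample, t_out, half=2, eps=0.1, seed=0):
    """Grow a longer grayscale clip from `sample` (shape (T, H, W), values in [0, 1]),
    one voxel at a time in (t, y, x) raster order, by matching each voxel's
    already-synthesized causal spatio-temporal neighborhood against every window
    in the sample. Brute force and very slow; meant only to illustrate the idea."""
    T, H, W = sample.shape
    assert T > half and H > 2 * half and W > 2 * half
    rng = np.random.default_rng(seed)
    out = np.zeros((t_out, H, W), dtype=sample.dtype)
    out[0] = sample[0]  # seed frame: gives the first synthesized voxels some context

    # Causal neighborhood offsets: every voxel that precedes (t, y, x) in raster
    # order within a window of radius `half` in space and depth `half` in time.
    offsets = np.array([(dt, dy, dx)
                        for dt in range(-half, 1)
                        for dy in range(-half, half + 1)
                        for dx in range(-half, half + 1)
                        if (dt, dy, dx) < (0, 0, 0)])

    for t in range(1, t_out):
        for y in range(H):
            for x in range(W):
                nt, ny, nx = t + offsets[:, 0], y + offsets[:, 1], x + offsets[:, 2]
                ok = (nt >= 0) & (ny >= 0) & (ny < H) & (nx >= 0) & (nx < W)
                target = out[nt[ok], ny[ok], nx[ok]]

                # Compare against every sample window whose neighborhood stays in bounds.
                cands, dists = [], []
                for st in range(half, T):
                    for sy in range(half, H - half):
                        for sx in range(half, W - half):
                            cand = sample[st + offsets[ok, 0],
                                          sy + offsets[ok, 1],
                                          sx + offsets[ok, 2]]
                            cands.append((st, sy, sx))
                            dists.append(np.sum((cand - target) ** 2))
                dists = np.asarray(dists)

                # As in Efros and Leung: choose randomly among the near-best matches.
                near_best = np.flatnonzero(dists <= (1.0 + eps) * dists.min())
                st, sy, sx = cands[rng.choice(near_best)]
                out[t, y, x] = sample[st, sy, sx]
    return out
```

A practical implementation would need color support and a much faster neighborhood search (coherence or approximate nearest-neighbor methods) in place of the exhaustive scan above.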
To test this method, I have collected some appropriate input data, such as the following video sequence: