In this part, I implement the image quilting algorithm for texture synthesis and transfer from the paper by Efros and Freeman. This involves placing different "patches" of texture and using various methods to smooth the seams between them. Given a sample image of a texture, the algorithm can generate new images with the same texture. By additionally considering correspondence maps, we can "transfer" textures onto other images.
The simplest way to implement this algorithm is to randomly sample patches from the texture image and tile them directly into the result. However, this leaves clear seams, since no attempt is made to make neighboring patches consistent.
| Original Texture | Generated Image |
|---|---|
| (image) | (image) |
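The random-tiling baseline can be sketched as follows. `quilt_random` and its arguments are hypothetical names, and the sketch assumes the texture is a numpy array (grayscale or color):

```python
import numpy as np

def quilt_random(sample, out_size, patch_size, rng=None):
    """Tile randomly sampled square patches from `sample` into an output.

    Naive baseline: no overlap and no seam handling, so patch
    boundaries remain clearly visible.
    """
    rng = np.random.default_rng(rng)
    H, W = sample.shape[:2]
    n = out_size // patch_size  # patches per side of the output
    out = np.zeros((n * patch_size, n * patch_size) + sample.shape[2:],
                   dtype=sample.dtype)
    for i in range(n):
        for j in range(n):
            # pick a random top-left corner inside the sample texture
            y = rng.integers(0, H - patch_size + 1)
            x = rng.integers(0, W - patch_size + 1)
            out[i*patch_size:(i+1)*patch_size,
                j*patch_size:(j+1)*patch_size] = \
                sample[y:y+patch_size, x:x+patch_size]
    return out
```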
To reduce the visible seams between patches, we allow neighboring patches to overlap. Instead of sampling patches at random, we choose patches that minimize the SSD (sum of squared differences) over the overlapping regions, selecting randomly among candidates within a given tolerance of the best match. The SSD can be computed efficiently using filtering operations.
| Original Texture | Generated Image |
|---|---|
| (image) | (image) |
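The masked SSD expands as `sum(M·T²) − 2·(I ⋆ M·T) + (I² ⋆ M)`, so each term is a correlation that a filtering routine can evaluate. A sketch under assumed names (`ssd_map`, `pick_patch`); sliding windows are used here for clarity, where an FFT-based filter would be the efficient version:

```python
import numpy as np

def ssd_map(sample, template, mask):
    """Masked SSD of `template` against every window of `sample`."""
    ph, pw = template.shape
    # all (ph, pw) windows of sample: shape (H-ph+1, W-pw+1, ph, pw)
    windows = np.lib.stride_tricks.sliding_window_view(sample, (ph, pw))
    t2 = np.sum(mask * template ** 2)                   # sum(M*T^2)
    cross = np.einsum('ijkl,kl->ij', windows, mask * template)  # I corr M*T
    i2 = np.einsum('ijkl,kl->ij', windows ** 2, mask)   # I^2 corr M
    return t2 - 2 * cross + i2

def pick_patch(ssd, tol=0.1, rng=None):
    """Sample uniformly among candidates within (1+tol) of the best SSD."""
    rng = np.random.default_rng(rng)
    best = ssd.min()
    ys, xs = np.nonzero(ssd <= best * (1 + tol) + 1e-8)
    k = rng.integers(len(ys))
    return ys[k], xs[k]
```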
Adding overlap already greatly improves visual flow between patches, but there are still some visible seams due to patches being rectangular. To remove these edge artifacts, we find the min-cut in the overlap between patches instead of directly overlaying patches. This can be efficiently computed using dynamic programming.
| Original Texture | Generated Image |
|---|---|
| (image) | (image) |
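The dynamic program for a vertical cut through the overlap's per-pixel error surface can be sketched like this (`min_cut_seam` is a hypothetical name; pixels left of the seam keep the old patch, pixels right of it take the new one):

```python
import numpy as np

def min_cut_seam(err):
    """Minimum-cost vertical seam through an overlap-error image `err`.

    Standard DP: each row's cost adds the cheapest of the three adjacent
    entries in the row above; the seam is recovered by backtracking.
    """
    h, w = err.shape
    cost = err.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]   # up-left neighbor
        right = np.r_[cost[i - 1, 1:], np.inf]   # up-right neighbor
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam
```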
Now that we can generate texture images, we can modify this algorithm for texture transfer. The texture synthesis algorithm uses the SSD between overlapping portions of patches as a loss function for patch selection. By incorporating an extra term in this loss that measures similarity to a guidance image, we can influence the resulting image to take on the shape of the guidance image. Specifically, I use the SSD of each candidate patch with the current window of the guidance image. The two loss terms are combined as a weighted sum controlled by a parameter alpha.
| Original Texture | Guidance Image | Generated Image |
|---|---|---|
| (image) | (image) | (image) |
| (image) | (image) | (image) |
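The combined cost for a single candidate patch might look like the sketch below; `transfer_cost` and the exact weighting are illustrative assumptions, with alpha weighting the overlap (coherence) term against the guidance term:

```python
import numpy as np

def transfer_cost(overlap_ssd, patch_lum, guide_window, alpha):
    """Combined patch-selection cost for texture transfer.

    `overlap_ssd` is the seam-overlap SSD term from synthesis; the
    second term compares the candidate patch's luminance to the current
    window of the guidance image. `alpha` trades off texture coherence
    against fidelity to the guidance image.
    """
    guide_ssd = np.sum((patch_lum - guide_window) ** 2)
    return alpha * overlap_ssd + (1 - alpha) * guide_ssd
```

In practice both terms are computed for all candidate positions at once (with the filtering trick above) before picking a patch within tolerance.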
The texture transfer algorithm can be further improved by transferring texture iteratively, reducing the block size with each iteration. Each patch is then matched not only against its overlapping neighbors but also against the previous iteration's result. Iterations with larger block sizes prioritize matching the guidance image, while later iterations with smaller blocks prioritize smoothness between patches. This yields more detailed and smoother results.
| Single-pass Texture Transfer | Iterative Texture Transfer |
|---|---|
| (image) | (image) |
| (image) | (image) |
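A per-iteration schedule in the spirit of the image-quilting paper can be sketched as below: alpha ramps from 0.1 to 0.9 so early (large-block) passes weight the guidance image and later passes weight inter-patch coherence, while the block size shrinks by roughly a third each pass. The function name and the base patch size are illustrative assumptions:

```python
def transfer_schedule(n_iters, base_patch=36):
    """Return (patch_size, alpha) pairs for iterative texture transfer.

    alpha_i = 0.8 * i / (N - 1) + 0.1 ramps from 0.1 up to 0.9;
    the patch size is reduced by a third after each iteration.
    """
    out = []
    size = base_patch
    for i in range(n_iters):
        alpha = 0.8 * i / max(n_iters - 1, 1) + 0.1
        out.append((size, alpha))
        size = max(size * 2 // 3, 3)  # shrink block size for next pass
    return out
```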
In this part, I used sample datasets from the Stanford Light Field Archive to vary the perceived depth and aperture of images.
To change the focus, we simply shift each image in the grid in proportion to its offset from the "center" image and then average all of the shifted images. Varying the scale factor c applied to these displacements moves the focal plane from far to near.
| c = -0.1 | c = 0.1 | c = 0.3 |
|---|---|---|
| (image) | (image) | (image) |
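The shift-and-average step can be sketched as follows; `refocus` is a hypothetical name, `grid[u, v]` is assumed to hold the image captured at grid position (u, v), and integer shifts via `np.roll` stand in for the sub-pixel interpolation a real implementation would use:

```python
import numpy as np

def refocus(grid, c):
    """Shift-and-average refocusing of a light-field image grid.

    Each image is shifted by c times its (u, v) offset from the center
    image, then all images are averaged; varying c moves the focal plane.
    """
    U, V = grid.shape[:2]
    cu, cv = (U - 1) / 2, (V - 1) / 2  # center camera coordinates
    out = np.zeros(grid.shape[2:], dtype=float)
    for u in range(U):
        for v in range(V):
            dy = int(round(c * (u - cu)))
            dx = int(round(c * (v - cv)))
            out += np.roll(grid[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```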
As you can see, the focus shifts from the back of the chessboard toward the front.
To adjust the aperture, we keep the focal plane fixed while changing the number of images used in the average. Only images whose camera coordinates in the grid lie within a given radius of the center image are included; this radius mimics the aperture size.
| r = 0 (original center image) | r = 4 | r = 8 |
|---|---|---|
| (image) | (image) | (image) |
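The aperture variation reuses the same shift-and-average idea, with a radius test on grid coordinates deciding which cameras contribute (names and the `np.roll` integer shift are assumptions, as before):

```python
import numpy as np

def vary_aperture(grid, c, r):
    """Average only images within grid radius `r` of the center camera.

    r = 0 reproduces the center image alone; larger r includes more
    cameras, mimicking a wider aperture and shallower depth of field.
    """
    U, V = grid.shape[:2]
    cu, cv = (U - 1) / 2, (V - 1) / 2  # center camera coordinates
    out = np.zeros(grid.shape[2:], dtype=float)
    count = 0
    for u in range(U):
        for v in range(V):
            if (u - cu) ** 2 + (v - cv) ** 2 > r ** 2:
                continue  # camera outside the synthetic aperture
            dy = int(round(c * (u - cu)))
            dx = int(round(c * (v - cv)))
            out += np.roll(grid[u, v], shift=(dy, dx), axis=(0, 1))
            count += 1
    return out / count
```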
These projects taught me some surprisingly simple techniques that produce striking results. Thanks to the course staff for a great semester!