
Style Transfer by Relaxed Optimal Transport and Self-Similarity



Abstract:

The goal of style transfer algorithms is to render the content of one image using the style of another. We propose Style Transfer by Relaxed Optimal Transport and Self-Similarity (STROTSS), a new optimization-based style transfer algorithm. We extend our method to allow user-specified point-to-point or region-to-region control over visual similarity between the style image and the output. Such guidance can be used either to achieve a particular visual effect or to correct errors made by unconstrained style transfer. In order to quantitatively compare our method to prior work, we conduct a large-scale user study designed to assess the style-content tradeoff across settings in style transfer algorithms. Our results indicate that for any desired level of content preservation, our method provides higher quality stylization than prior work.
Date of Conference: 15-20 June 2019
Date Added to IEEE Xplore: 09 January 2020

Conference Location: Long Beach, CA, USA

1 Introduction

One of the main challenges of style transfer is formalizing 'content' and 'style', terms which evoke strong intuitions but are hard to even define semantically. We propose formulations of each term which are novel in the domain of style transfer, but have a long history of successful application in computer vision more broadly. We hope that related efforts to refine definitions of both style and content will eventually lead to more robust recognition systems, but in this work we solely focus on their utility for style transfer.
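The two formalizations named in the paper's title can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: it assumes features arrive as rows of a matrix (in practice these would be deep features extracted from a pretrained network), and both function names are hypothetical.

```python
import numpy as np

def self_similarity(feats):
    """Pairwise cosine-distance matrix over feature vectors (one per row).

    A self-similarity descriptor represents 'content' as the pattern of
    internal relationships between image locations, which is largely
    insensitive to the absolute appearance (style) at each location.
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return 1.0 - f @ f.T

def relaxed_emd(X, Y):
    """Relaxed Earth Mover's Distance between two feature sets.

    The full optimal-transport problem is relaxed by letting every
    source vector match its single cheapest target (and vice versa),
    then taking the worse of the two one-sided averages.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    cost = 1.0 - Xn @ Yn.T               # cosine-distance cost matrix
    return max(cost.min(axis=1).mean(),  # each row of X to nearest row of Y
               cost.min(axis=0).mean())  # each row of Y to nearest row of X
```

Under this sketch, matching self-similarity matrices between the output and the content image preserves content, while minimizing the relaxed transport cost between output and style features encourages the output's feature distribution to match the style image's.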
