We present SelectionConv, a method for transferring the weights of 2D CNNs to graph networks, enabling a wide variety of applications on irregular image domains, such as spherical images, superpixel images, and even 3D meshes.
Many irregular images can be represented well in a graph-like structure, but these structures cannot be used as inputs to traditional 2D CNNs.
In this work, we start with the original graph convolution operation of Kipf et al., which simply multiplies each node's features by a learned weight matrix and then aggregates over its neighboring nodes (i.e., \( \mathbf{X}^{(k+1)} = \tilde{\mathbf{A}} \mathbf{X}^{(k)} \mathbf{W} \)).
We modify this convolution by preprocessing the adjacency matrix into multiple selection matrices, where each matrix represents a different spatial direction in the graph (i.e., \( \mathbf{X}^{(k+1)} = \sum_m{\tilde{\mathbf{S}}_m \mathbf{X}^{(k)} \mathbf{W}_m} \)).
By doing so, we make the graph convolution commensurate with 2D convolution and can thus transfer pretrained 2D weights directly to our irregular graphs.
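For intuition, a minimal sketch of this operation in PyTorch might look like the following. The helper names selection_conv and kernel_to_selection_weights are purely illustrative (they are not our released code), and the sketch assumes a 3x3 kernel, so each of the nine kernel taps becomes its own direction m.

import torch

def selection_conv(x, selections, weights):
    # x: [N, C_in] node features
    # selections: list of sparse [N, N] selection matrices S_m, where
    #   S_m[u, v] = 1 if node v is node u's neighbor in direction m
    # weights: list of [C_in, C_out] weight matrices W_m
    # Computes X' = sum_m S_m X W_m
    out = None
    for S_m, W_m in zip(selections, weights):
        term = torch.sparse.mm(S_m, x @ W_m)
        out = term if out is None else out + term
    return out

def kernel_to_selection_weights(conv2d):
    # Split a pretrained 3x3 Conv2d kernel [C_out, C_in, 3, 3] into nine
    # per-direction weight matrices of shape [C_in, C_out], one per tap (i, j).
    w = conv2d.weight.detach()
    return [w[:, :, i, j].t().contiguous() for i in range(3) for j in range(3)]

On a regular pixel grid, where the selection matrices simply encode each pixel's nine 3x3 neighbors (with the center tap as a self-loop), this sum reduces to an ordinary 2D convolution up to boundary handling, which is why pretrained weights can be copied over unchanged.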
Results
Since our representation works independently of the network or graph design, we can demonstrate the effectiveness of our approach on a variety of networks and image types.
We can perform tasks such as style transfer, segmentation, and depth prediction on spherical images, masked images, superpixel images, etc.
For the results shown here, we focus on style transfer since it is an effective visualization of the benefits of our representation, but more tasks and domains are presented in our paper.
Spherical Style Transfer
Our method can effectively stylize spherical images without causing seams or distortion, especially near the poles of the sphere.
[Figure: Original | Naive | SelectionConv]
In the next example, note the failure case near the ceiling, where the graph transition occurs across straight lines.
[Figure: Original | SelectionConv]
Masked Style Transfer
SelectionConv allows for native stylization of masked regions, without the need to pre-process the input or post-process the network's output. As a result, the stylized regions more closely match the statistics of the target style.
[Figure: Original | Mask | SelectionConv Result]
We can also seamlessly combine multiple styles in the same image, as shown in the example below.
Mesh Style Transfer
Our method can improve the stylization consistency of 3D meshes across texture map seams.
[Figure: Original | Naive | SelectionConv]
Check out more results and examples in our paper and supplemental material.
@InProceedings{SelectionConv,
author = {Hart, David and Whitney, Michael and Morse, Bryan},
title = {{SelectionConv:} Convolutional Neural Networks for Non-rectilinear Image Data},
booktitle = {European Conference on Computer Vision},
month = {October},
year = {2022}
}