ASSET: Autoregressive Semantic Scene Editing with Transformers at High Resolutions

Kavli Affiliate: Matthew Fisher

| First 5 Authors: Difan Liu, Sandesh Shetty, Tobias Hinz, Matthew Fisher, Richard Zhang

| Summary:

We present ASSET, a neural architecture for automatically modifying an input
high-resolution image according to a user’s edits on its semantic segmentation
map. Our architecture is based on a transformer with a novel attention
mechanism. Our key idea is to sparsify the transformer’s attention matrix at
high resolutions, guided by dense attention extracted at lower image
resolutions. Previous attention mechanisms are either computationally too
expensive to handle high-resolution images or overly constrained to specific
image regions, hampering long-range interactions; our attention mechanism is
both computationally efficient and effective. The sparsified attention captures
long-range interactions and context, enabling the synthesis of interesting
phenomena in scenes, such as reflections of landscapes onto water or flora
consistent with the rest of the landscape, which previous convnet- and
transformer-based approaches could not generate reliably. We present
qualitative and quantitative results, along with user studies, demonstrating
the effectiveness of our method.
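To make the key idea more concrete, the sketch below illustrates one way such guided sparsification can work: dense attention computed on a downsampled token grid selects, for each high-resolution query, a small set of key blocks to attend to, so the high-resolution attention cost scales with the number of selected keys rather than with all tokens. The function `guided_sparse_attention`, the block-to-token layout, and the top-k selection heuristic are illustrative assumptions, not the paper's released implementation.

```python
# A minimal sketch of the guided sparse-attention idea described above, not the
# authors' code. It assumes each low-resolution token maps to a contiguous block
# of high-resolution tokens; names, shapes, and the top-k rule are assumptions.
import torch


def guided_sparse_attention(q_hi, k_hi, v_hi, q_lo, k_lo, top_k=8):
    """q_hi/k_hi/v_hi: (B, N_hi, D) high-res tokens; q_lo/k_lo: (B, N_lo, D)."""
    B, N_hi, D = q_hi.shape
    N_lo = q_lo.shape[1]
    block = N_hi // N_lo  # high-res tokens per low-res token (assumed layout)

    # 1) Dense attention on the cheap low-resolution grid (N_lo x N_lo).
    attn_lo = torch.softmax(q_lo @ k_lo.transpose(-1, -2) / D**0.5, dim=-1)

    # 2) For each low-res query, keep only its top-k most-attended key blocks.
    top_blocks = attn_lo.topk(min(top_k, N_lo), dim=-1).indices  # (B, N_lo, k)

    # 3) Expand block indices to high-res key indices and share the pattern
    #    with every high-res query inside the corresponding block.
    offsets = torch.arange(block, device=q_hi.device)
    key_idx = (top_blocks.unsqueeze(-1) * block + offsets).flatten(2)  # (B, N_lo, k*block)
    key_idx = key_idx.repeat_interleave(block, dim=1)                  # (B, N_hi, k*block)

    # 4) Sparse high-resolution attention: each query sees only selected keys.
    batch = torch.arange(B, device=q_hi.device).view(B, 1, 1)
    k_sel, v_sel = k_hi[batch, key_idx], v_hi[batch, key_idx]  # (B, N_hi, k*block, D)
    scores = torch.einsum("bnd,bnkd->bnk", q_hi, k_sel) / D**0.5
    return torch.einsum("bnk,bnkd->bnd", scores.softmax(dim=-1), v_sel)


# Toy usage: a 64x64 token grid guided by dense attention on a 16x16 grid.
B, D = 1, 32
q_lo, k_lo = torch.randn(B, 16 * 16, D), torch.randn(B, 16 * 16, D)
q_hi, k_hi, v_hi = (torch.randn(B, 64 * 64, D) for _ in range(3))
out = guided_sparse_attention(q_hi, k_hi, v_hi, q_lo, k_lo)
print(out.shape)  # torch.Size([1, 4096, 32])
```

In this toy configuration each of the 4,096 high-resolution queries attends to only 128 selected keys instead of all 4,096, which is the kind of saving the sparsified attention described above is meant to provide.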

| Search Query: ArXiv Query: search_query=au:"Matthew Fisher"&id_list=&start=0&max_results=10
