
Robust Spatial-Temporal Motion Coherent Priors for Multi-View Video Coding Artifact Reduction

Title
Robust Spatial-Temporal Motion Coherent Priors for Multi-View Video Coding Artifact Reduction
Authors
Jeon, Gyulee; Lee, Yeonjin; Lee, Jung-Kyung; Kim, Yong-Hwan; Kang, Je-Won
Ewha Authors
Kang, Je-Won (강제원)
SCOPUS Author ID
Kang, Je-Won
Issue Date
2023
Journal Title
IEEE ACCESS
ISSN
2169-3536
Citation
IEEE ACCESS vol. 11, pp. 123104 - 123116
Keywords
Video compression; Transform coding; Three-dimensional displays; Spatiotemporal phenomena; Rendering (computer graphics); Quantization (signal); High efficiency video coding; Multi-view video compression; video enhancement; motion vector; VVC; MPEG-immersive video; TMIV
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Indexed
SCIE; SCOPUS; WOS
Document Type
Article
Abstract
Multi-view video (MVV) data processed by three-dimensional (3D) video systems often suffer from compression artifacts, which can degrade the rendering quality of 3D spaces. In this paper, we focus on the task of artifact reduction in multi-view video compression using spatial and temporal motion priors. Previous MVV quality enhancement networks based on a warping-and-fusion approach employed reference-to-target motion priors to exploit inter-view and temporal correlation among MVV frames. However, these motion priors were sensitive to quantization noise, and the warping accuracy degraded when the target frame used low-quality features in the correspondence search. To overcome these limitations, we propose a novel approach that utilizes bilateral spatial and temporal motion priors, leveraging the geometric relations of a structured MVV camera system, to exploit motion coherency. Our method involves a multi-view prior generation module that produces both unidirectional and bilateral warping vectors to exploit rich features in adjacent reference MVV frames and generate robust warping features. These features are further refined to account for unreliable alignments across MVV frames caused by occlusions. The performance of the proposed method is evaluated in comparison with state-of-the-art MVV quality enhancement networks. A synthetic MVV dataset is used to train our network to produce various motion priors. Experimental results demonstrate that the proposed method significantly improves the quality of reconstructed MVV frames in recent video coding standards such as the multi-view extension of High Efficiency Video Coding and the MPEG immersive video standard.
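The bilateral warping idea from the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual network: it assumes near-linear motion between two reference frames, places the target halfway, and fuses two backward-warped references with nearest-neighbor sampling. The function names (`warp`, `bilateral_fuse`) and the averaging fusion are illustrative assumptions.

```python
import numpy as np

def warp(frame, flow):
    # Backward-warp a (H, W) frame with a per-pixel motion field (H, W, 2):
    # output[y, x] = frame[y + flow_y, x + flow_x], nearest-neighbor sampled.
    # A stand-in for the learned warping layers used in enhancement networks.
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    return frame[src_y, src_x]

def bilateral_fuse(ref_prev, ref_next, flow_fwd):
    # Bilateral prior (simplified): flow_fwd is the motion from ref_prev to
    # ref_next; assuming linear motion, the target frame sits halfway, so we
    # warp each reference by half the motion (in opposite directions) and
    # average the two aligned candidates.
    half = 0.5 * flow_fwd
    w_prev = warp(ref_prev, -half)  # sample where the content was
    w_next = warp(ref_next, half)   # sample where the content will be
    return 0.5 * (w_prev + w_next)
```

With a feature moving two pixels to the right between the references, both half-motion warps align it at the midpoint, so the fusion keeps it sharp instead of producing a double image as naive averaging would.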
DOI
10.1109/ACCESS.2023.3329949
Appears in Collections:
College of Engineering > Department of Electronic and Electrical Engineering > Journal papers
Files in This Item:
There are no files associated with this item.