
Full metadata record

DC Field | Value | Language
dc.contributor.author | 강제원 | *
dc.date.accessioned | 2024-02-15T05:11:30Z | -
dc.date.available | 2024-02-15T05:11:30Z | -
dc.date.issued | 2023 | *
dc.identifier.issn | 2169-3536 | *
dc.identifier.other | OAK-34391 | *
dc.identifier.uri | https://dspace.ewha.ac.kr/handle/2015.oak/267694 | -
dc.description.abstract | Multi-view video (MVV) data processed by three-dimensional (3D) video systems often suffer from compression artifacts, which can degrade the rendering quality of 3D spaces. In this paper, we focus on the task of artifact reduction in multi-view video compression using spatial and temporal motion priors. Previous MVV quality enhancement networks using a warping-and-fusion approach employed reference-to-target motion priors to exploit inter-view and temporal correlation among MVV frames. However, these motion priors were sensitive to quantization noise, and the warping accuracy degraded when the target frame used low-quality features in the correspondence search. To overcome these limitations, we propose a novel approach that utilizes bilateral spatial and temporal motion priors, leveraging the geometric relations of a structured MVV camera system, to exploit motion coherency. Our method involves a multi-view prior generation module that produces both unidirectional and bilateral warping vectors to exploit rich features in adjacent reference MVV frames and generate robust warping features. These features are further refined to account for unreliable alignments across MVV frames caused by occlusions. The performance of the proposed method is evaluated in comparison with state-of-the-art MVV quality enhancement networks. A synthetic MVV dataset facilitates training our network, which produces various motion priors. Experimental results demonstrate that the proposed method significantly improves the quality of the reconstructed MVV frames in recent video coding standards such as the multi-view extension of High Efficiency Video Coding and the MPEG immersive video standard. | *
dc.language | English | *
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | *
dc.subject | Video compression | *
dc.subject | Transform coding | *
dc.subject | Three-dimensional displays | *
dc.subject | Spatiotemporal phenomena | *
dc.subject | Rendering (computer graphics) | *
dc.subject | Quantization (signal) | *
dc.subject | High efficiency video coding | *
dc.subject | Multi-view video compression | *
dc.subject | video enhancement | *
dc.subject | motion vector | *
dc.subject | VVC | *
dc.subject | MPEG-immersive video | *
dc.subject | TMIV | *
dc.title | Robust Spatial-Temporal Motion Coherent Priors for Multi-View Video Coding Artifact Reduction | *
dc.type | Article | *
dc.relation.volume | 11 | *
dc.relation.index | SCIE | *
dc.relation.index | SCOPUS | *
dc.relation.startpage | 123104 | *
dc.relation.lastpage | 123116 | *
dc.relation.journaltitle | IEEE ACCESS | *
dc.identifier.doi | 10.1109/ACCESS.2023.3329949 | *
dc.identifier.wosid | WOS:001103083300001 | *
dc.author.google | Jeon, Gyulee | *
dc.author.google | Lee, Yeonjin | *
dc.author.google | Lee, Jung-Kyung | *
dc.author.google | Kim, Yong-Hwan | *
dc.author.google | Kang, Je-Won | *
dc.contributor.scopusid | 강제원(56367466400) | *
dc.date.modifydate | 20240322125621 | *
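
The abstract above describes a warping-and-fusion architecture at a high level: features from an adjacent-view reference frame and a temporal reference frame are warped toward the decoded target frame using predicted warping vectors, then fused and refined to suppress compression artifacts. The following minimal Python sketch (assuming PyTorch; the module names, channel widths, and the simplistic flow head are illustrative assumptions, not the authors' implementation) shows only this basic warp-and-fuse idea.

```python
# Minimal, illustrative sketch of a warping-and-fusion enhancement step.
# All names and sizes here are hypothetical stand-ins for the paper's
# multi-view prior generation and refinement modules.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a feature map with a dense 2-D warping-vector field."""
    n, _, h, w = feat.shape
    # Base sampling grid in pixel coordinates
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                             # shift by flow
    # Normalize coordinates to [-1, 1] as required by grid_sample
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)         # (N, H, W, 2)
    return F.grid_sample(feat, norm_grid, align_corners=True)


class WarpFusionSketch(nn.Module):
    """Toy warp-and-fuse network for decoded-frame enhancement."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.feat = nn.Conv2d(3, ch, 3, padding=1)        # shared feature extractor
        self.flow = nn.Conv2d(2 * ch, 2, 3, padding=1)    # crude warping-vector head
        self.fuse = nn.Conv2d(3 * ch, ch, 3, padding=1)   # fuse target + warped references
        self.out = nn.Conv2d(ch, 3, 3, padding=1)         # residual reconstruction

    def forward(self, target, ref_view, ref_time):
        ft, fv, fti = self.feat(target), self.feat(ref_view), self.feat(ref_time)
        # Predict reference-to-target warping vectors from concatenated features
        flow_v = self.flow(torch.cat((ft, fv), dim=1))
        flow_t = self.flow(torch.cat((ft, fti), dim=1))
        warped = torch.cat((ft, warp(fv, flow_v), warp(fti, flow_t)), dim=1)
        return target + self.out(F.relu(self.fuse(warped)))  # enhanced frame


if __name__ == "__main__":
    frames = [torch.rand(1, 3, 64, 64) for _ in range(3)]  # target, view ref, time ref
    enhanced = WarpFusionSketch()(*frames)
    print(enhanced.shape)  # torch.Size([1, 3, 64, 64])
```

In this sketch the warping vectors are predicted from target and reference features and applied by backward warping; the paper additionally uses bilateral priors derived from the structured camera geometry and a refinement stage for occluded regions, which are not modeled here.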
Appears in Collections:
College of Engineering > Electronic and Electrical Engineering > Journal papers
Files in This Item:
There are no files associated with this item.