Full metadata record

dc.contributor.author: 강제원
dc.date.accessioned: 2021-12-01T16:30:54Z
dc.date.available: 2021-12-01T16:30:54Z
dc.date.issued: 2021
dc.identifier.issn: 1520-9210
dc.identifier.issn: 1941-0077
dc.identifier.other: OAK-30570
dc.identifier.uri: https://dspace.ewha.ac.kr/handle/2015.oak/259594
dc.description.abstract: Future video prediction provides valuable information that helps a machine understand its surrounding environment and make critical decisions in real time. However, long-term video prediction remains a challenging problem due to the complicated spatiotemporal dynamics in a video. In this paper, we propose a dynamic motion estimation and evolution (DMEE) network model to generate unseen future videos from observed past videos. Our primary contribution is to use trained kernels in convolutional neural network (CNN) and long short-term memory (LSTM) architectures, adapted to each time step and sample position, to efficiently manage spatiotemporal dynamics. DMEE uses motion estimation (ME) and motion update (MU) kernels to predict the future video through an end-to-end prediction-update process. In the prediction step, the ME kernel estimates temporal changes. In the update step, the MU kernel combines the estimates with previously generated frames, used as reference frames, via a weighted average. The kernels are not only used for the current frame but are also evolved to generate successive frames, enabling temporally specific filtering. We perform qualitative and quantitative performance analyses based on the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and a video classification score developed for examining the visual quality of the generated video. Experiments demonstrate that our algorithm provides qualitative and quantitative performance superior to current state-of-the-art algorithms. Our source code is available at https://github.com/Nayoung-Kim-ICP/Video-Generation. (An illustrative sketch of this prediction-update loop appears below, after the record.)
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.subject: Kernel
dc.subject: Dynamics
dc.subject: Convolution
dc.subject: Streaming media
dc.subject: Motion estimation
dc.subject: Adaptation models
dc.subject: Spatiotemporal phenomena
dc.subject: Long-term video generation and prediction
dc.subject: video understanding and analysis
dc.subject: deep learning
dc.subject: Convolutional Neural Network
dc.subject: Long Short-term Memory
dc.title: Dynamic Motion Estimation and Evolution Video Prediction Network
dc.type: Article
dc.relation.volume: 23
dc.relation.index: SCIE
dc.relation.index: SCOPUS
dc.relation.startpage: 3986
dc.relation.lastpage: 3998
dc.relation.journaltitle: IEEE TRANSACTIONS ON MULTIMEDIA
dc.identifier.doi: 10.1109/TMM.2020.3035281
dc.identifier.wosid: WOS:000720519900006
dc.author.google: Kim, Nayoung
dc.author.google: Kang, Je-Won
dc.contributor.scopusid: 강제원 (56367466400)
dc.date.modifydate: 20240322125621
Appears in Collections:
College of Engineering > Electronic and Electrical Engineering > Journal papers
Files in This Item:
There are no files associated with this item.
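
The abstract describes an end-to-end prediction-update loop: a motion estimation (ME) kernel predicts temporal changes, and a motion update (MU) kernel blends the estimate with previously generated reference frames via a weighted average, with the kernels evolved across time steps. Below is a minimal sketch of that loop in PyTorch, assuming a hypothetical class DMEESketch with single stand-in convolutions for the trained ME/MU kernels; it is an illustration under those assumptions, not the authors' implementation (their code is at the GitHub link in the abstract).

import torch
import torch.nn as nn

class DMEESketch(nn.Module):
    """Illustrative DMEE-style prediction-update loop (not the authors' code)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        # Stand-in for the trained motion estimation (ME) kernel:
        # predicts the temporal change of the current reference frame.
        self.me = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Stand-in for the motion update (MU) kernel: produces per-pixel
        # weights for blending the ME estimate with the reference frame.
        self.mu = nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1)

    def forward(self, frames: torch.Tensor, horizon: int) -> torch.Tensor:
        # frames: observed video of shape (batch, time, channels, H, W).
        ref = frames[:, -1]  # last observed frame becomes the first reference
        outputs = []
        for _ in range(horizon):
            # Prediction step: the ME kernel estimates the next frame.
            estimate = ref + self.me(ref)
            # Update step: the MU kernel yields blending weights, and the new
            # frame is a weighted average of the estimate and the reference.
            w = torch.sigmoid(self.mu(torch.cat([estimate, ref], dim=1)))
            ref = w * estimate + (1.0 - w) * ref
            # Note: the paper also evolves the kernels per time step and
            # sample position; this sketch reuses fixed kernels for brevity.
            outputs.append(ref)
        return torch.stack(outputs, dim=1)  # (batch, horizon, channels, H, W)

# Usage: predict 5 future frames from 4 observed 64x64 RGB frames.
model = DMEESketch()
observed = torch.randn(1, 4, 3, 64, 64)
future = model(observed, horizon=5)
print(future.shape)  # torch.Size([1, 5, 3, 64, 64])

The sigmoid-weighted blend keeps each predicted frame anchored to previously generated content, which is one simple way to realize the weighted-average update the abstract describes.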