
Dynamic Motion Estimation and Evolution Video Prediction Network

Title
Dynamic Motion Estimation and Evolution Video Prediction Network
Authors
Kim, Nayoung; Kang, Je-Won
Ewha Authors
강제원
SCOPUS Author ID
강제원
Issue Date
2021
Journal Title
IEEE TRANSACTIONS ON MULTIMEDIA
ISSN
1520-9210; 1941-0077
Citation
IEEE TRANSACTIONS ON MULTIMEDIA vol. 23, pp. 3986 - 3998
Keywords
Kernel; Dynamics; Convolution; Streaming media; Motion estimation; Adaptation models; Spatiotemporal phenomena; Long-term video generation and prediction; video understanding and analysis; deep learning; Convolutional Neural Network; Long Short-term Memory
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Indexed
SCIE; SCOPUS; WOS
Document Type
Article
Abstract
Future video prediction provides valuable information that helps a machine understand its surrounding environment and make critical decisions in real time. However, long-term video prediction remains a challenging problem due to the complicated spatiotemporal dynamics in a video. In this paper, we propose a dynamic motion estimation and evolution (DMEE) network model to generate unseen future videos from videos observed in the past. Our primary contribution is to use trained kernels in convolutional neural network (CNN) and long short-term memory (LSTM) architectures, adapted to each time step and sample position, to efficiently manage spatiotemporal dynamics. DMEE uses motion estimation (ME) and motion update (MU) kernels to predict the future video through an end-to-end prediction-update process. In the prediction step, the ME kernel estimates the temporal changes. In the update step, the MU kernel combines these estimates with previously generated frames, used as reference frames, through a weighted average. The kernels are not only applied to the current frame but are also evolved to generate successive frames, enabling temporally specific filtering. We perform qualitative and quantitative performance analyses based on the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and a video classification score developed for examining the visual quality of the generated video. Experiments demonstrate that our algorithm provides qualitative and quantitative performance superior to current state-of-the-art algorithms. Our source code is available at https://github.com/Nayoung-Kim-ICP/Video-Generation.
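To make the prediction-update idea in the abstract concrete, below is a minimal, self-contained PyTorch sketch of one such step: a spatially varying ME kernel filters the last generated frame (prediction), a spatially varying MU kernel filters a reference frame, and the two are blended by a learned per-pixel weighted average. This is not the authors' implementation (see the linked GitHub repository for that); all module, head, and variable names here are hypothetical, and the kernel generator is reduced to plain convolutions for brevity.

```python
# Toy prediction-update step in the spirit of DMEE (hypothetical names,
# not the authors' code). Dynamic kernels are generated per position and
# applied with unfold; a sigmoid weight blends prediction and reference.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyPredictUpdate(nn.Module):
    def __init__(self, channels=3, hidden=32, k=3):
        super().__init__()
        self.k = k
        # Hypothetical kernel generator: maps the two input frames to
        # per-position ME and MU kernels plus a blending weight map.
        self.state = nn.Conv2d(2 * channels, hidden, 3, padding=1)
        self.me_head = nn.Conv2d(hidden, channels * k * k, 3, padding=1)
        self.mu_head = nn.Conv2d(hidden, channels * k * k, 3, padding=1)
        self.w_head = nn.Conv2d(hidden, 1, 3, padding=1)

    def _apply_dynamic_kernel(self, x, kernels):
        # Spatially varying filtering: gather k x k patches around each
        # position and take a softmax-weighted sum per position.
        b, c, h, w = x.shape
        patches = F.unfold(x, self.k, padding=self.k // 2)        # (b, c*k*k, h*w)
        patches = patches.view(b, c, self.k * self.k, h * w)
        kernels = kernels.view(b, c, self.k * self.k, h * w).softmax(dim=2)
        return (patches * kernels).sum(dim=2).view(b, c, h, w)

    def forward(self, prev_frame, ref_frame):
        s = torch.relu(self.state(torch.cat([prev_frame, ref_frame], dim=1)))
        # Prediction: ME kernel estimates the temporal change of the last frame.
        pred = self._apply_dynamic_kernel(prev_frame, self.me_head(s))
        # Update: MU kernel filters the reference frame, then a learned
        # per-pixel weight averages it with the prediction.
        refined_ref = self._apply_dynamic_kernel(ref_frame, self.mu_head(s))
        w = torch.sigmoid(self.w_head(s))
        return w * pred + (1.0 - w) * refined_ref


if __name__ == "__main__":
    model = ToyPredictUpdate()
    prev, ref = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    print(model(prev, ref).shape)  # torch.Size([1, 3, 64, 64])
```

In the full DMEE model this step would be repeated recurrently, with the kernels evolved across time steps rather than regenerated independently, which is what enables the temporally specific filtering described above.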
DOI
10.1109/TMM.2020.3035281
Appears in Collections:
College of Engineering > Electronic and Electrical Engineering > Journal papers
Files in This Item:
There are no files associated with this item.