
Full metadata record

DC Field / Value
dc.contributor.author: 용환승
dc.date.accessioned: 2021-12-28T16:30:03Z
dc.date.available: 2021-12-28T16:30:03Z
dc.date.issued: 2021
dc.identifier.issn: 1424-8220
dc.identifier.other: OAK-30628
dc.identifier.uri: https://dspace.ewha.ac.kr/handle/2015.oak/259674
dc.description.abstract: Human action recognition (HAR) has gained significant attention recently, as it can be adopted for smart surveillance systems in multimedia. However, HAR is a challenging task because of the variety of human actions in daily life. Various solutions based on computer vision (CV) have been proposed in the literature, but they have not proved successful because of the large video sequences that surveillance systems must process. The problem is exacerbated in the presence of multi-view cameras. Recently, deep learning (DL)-based systems have shown significant success at HAR, even for multi-view camera systems. In this research work, a DL-based design is proposed for HAR. The proposed design consists of multiple steps, including feature mapping, feature fusion, and feature selection. For the initial feature mapping step, two pre-trained models, DenseNet201 and InceptionV3, are considered. The extracted deep features are then fused using the Serial-based Extended (SbE) approach, and the best features are subsequently selected using Kurtosis-controlled Weighted KNN. The selected features are classified using several supervised learning algorithms. To show the efficacy of the proposed design, we used several datasets: KTH, IXMAS, WVU, and Hollywood. Experimental results showed that the proposed design achieved accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively, on these datasets. Furthermore, the feature selection step performed better in terms of computational time compared with state-of-the-art methods.
dc.language: English
dc.publisher: MDPI
dc.subject: human action recognition
dc.subject: deep learning
dc.subject: features fusion
dc.subject: features selection
dc.subject: recognition
dc.title: Human Action Recognition: A Paradigm of Best Deep Learning Features Selection and Serial Based Extended Fusion
dc.type: Article
dc.relation.issue: 23
dc.relation.volume: 21
dc.relation.index: SCIE
dc.relation.index: SCOPUS
dc.relation.journaltitle: SENSORS
dc.identifier.doi: 10.3390/s21237941
dc.identifier.wosid: WOS:000734632000001
dc.identifier.scopusid: 2-s2.0-85120087604
dc.author.google: Khan, Seemab
dc.author.google: Khan, Muhammad Attique
dc.author.google: Alhaisoni, Majed
dc.author.google: Tariq, Usman
dc.author.google: Yong, Hwan-Seung
dc.author.google: Armghan, Ammar
dc.author.google: Alenezi, Fayadh
dc.contributor.scopusid: 용환승 (7101899751)
dc.date.modifydate: 20240322133226
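The pipeline described in the abstract (deep features from two pre-trained backbones, serial fusion, kurtosis-controlled selection, KNN classification) can be sketched as follows. This is an illustrative approximation only: the random stand-in features, the feature dimensions, and the "keep the k features with the highest absolute kurtosis" criterion are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the abstract's HAR pipeline:
# serial feature fusion + kurtosis-based selection + weighted KNN.
import numpy as np
from scipy.stats import kurtosis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-ins for deep features from two pre-trained backbones.
# (On real images, DenseNet201 and InceptionV3 pooled features
# would be 1920-D and 2048-D, respectively.)
feats_a = rng.normal(size=(100, 64))
feats_b = rng.normal(size=(100, 48))
labels = rng.integers(0, 4, size=100)

# Serial fusion: concatenate the two feature vectors per sample.
fused = np.concatenate([feats_a, feats_b], axis=1)  # shape (100, 112)

# Kurtosis-controlled selection (assumed criterion): rank features by
# absolute kurtosis over the training set and keep the top k.
k = 50
scores = np.abs(kurtosis(fused, axis=0))
selected_idx = np.argsort(scores)[::-1][:k]
selected = fused[:, selected_idx]  # shape (100, 50)

# Classify the selected features with a distance-weighted KNN.
clf = KNeighborsClassifier(n_neighbors=5, weights="distance")
clf.fit(selected, labels)
```

With real data, the classifier at the end would be one of several supervised learners compared on KTH, IXMAS, WVU, and Hollywood, as the abstract describes.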
Appears in Collections:
College of Artificial Intelligence > Department of Computer Science and Engineering > Journal papers
Files in This Item:
There are no files associated with this item.