
Full metadata record

dc.contributor.author: 이형준
dc.date.accessioned: 2022-08-02T16:30:40Z
dc.date.available: 2022-08-02T16:30:40Z
dc.date.issued: 2021
dc.identifier.issn: 2327-4662
dc.identifier.other: OAK-31828
dc.identifier.uri: https://dspace.ewha.ac.kr/handle/2015.oak/261644
dc.description.abstract: As intelligence moves to the edge to tackle the privacy, scalability, and network-bandwidth problems of centralized intelligence, it becomes necessary to construct an efficient yet robust deep learning model that is viable on edge devices, whose wireless links and device functionality are typically volatile. The intensive computational burden of deep learning at the edge necessitates some level of parallel processing for acceleration. We propose EdgePipe, a deep learning framework for deep neural networks (DNNs) that combines model parallelism with pipeline training to achieve high resource utilization over volatile wireless edge devices. To tackle the volatility of wireless links and device functionality, we define a super neuron as a group of neurons spanning adjacent layers, which serves as the basis for model partitioning across edge devices. This relatively loss-resilient neuron structure prevents the entire forward or backward training path from breaking down completely due to intermittent link or device failures caused by one or a few devices. Furthermore, we design a pipeline training mechanism on top of the super-neuron-based model partitioning to achieve fast convergence with more training data within a fixed timeline. Experimental results demonstrate that EdgePipe outperforms several counterpart algorithms, including PipeDream, under volatile wireless lossy or device-malfunctioning environments, while preserving low interlayer communication overhead.
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.subject: Parallel processing
dc.subject: Pipelines
dc.subject: Deep learning
dc.subject: Wireless communication
dc.subject: Computational modeling
dc.subject: Training
dc.subject: Neurons
dc.subject: Distributed deep learning
dc.subject: edge device
dc.subject: model parallelism
dc.subject: pipeline parallelism
dc.subject: volatile wireless links
dc.title: EdgePipe: Tailoring Pipeline Parallelism With Deep Neural Networks for Volatile Wireless Edge Devices
dc.type: Article
dc.relation.issue: 14
dc.relation.volume: 9
dc.relation.index: SCIE
dc.relation.index: SCOPUS
dc.relation.startpage: 11633
dc.relation.lastpage: 11647
dc.relation.journaltitle: IEEE INTERNET OF THINGS JOURNAL
dc.identifier.doi: 10.1109/JIOT.2021.3131407
dc.identifier.wosid: WOS:000821526800001
dc.identifier.scopusid: 2-s2.0-85120546485
dc.author.google: Yoon, JinYi
dc.author.google: Byeon, Yeongsin
dc.author.google: Kim, Jeewoon
dc.author.google: Lee, HyungJune
dc.contributor.scopusid: 이형준 (22834789100)
dc.date.modifydate: 20240322133709
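
The abstract above describes super-neuron-based model partitioning: each edge device holds a group of neurons spanning adjacent layers, so losing one device removes only a slice of each layer instead of severing the whole forward or backward path. The Python sketch below only illustrates that idea under assumed details (a toy MLP, contiguous neuron slices, three hypothetical devices, failure modeled by zeroing the lost slices); it is not the authors' EdgePipe implementation, which additionally includes the pipeline training mechanism not shown here.

```python
# Minimal sketch (not the authors' code) of super-neuron-style partitioning:
# each "super neuron" groups a slice of neurons across adjacent layers and is
# mapped to one edge device.  If a device drops out, only its slice is lost,
# so the forward pass degrades instead of breaking entirely.
# All names, sizes, and the device count below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# A toy MLP with layer widths 8 -> 6 -> 6 -> 4.
widths = [8, 6, 6, 4]
weights = [rng.standard_normal((widths[i], widths[i + 1])) * 0.1
           for i in range(len(widths) - 1)]

NUM_DEVICES = 3  # hypothetical number of edge devices

def super_neuron_slices(width, num_devices):
    """Split one hidden layer's neurons into contiguous per-device slices."""
    bounds = np.linspace(0, width, num_devices + 1, dtype=int)
    return [slice(bounds[d], bounds[d + 1]) for d in range(num_devices)]

# Each device owns the same slice index in every hidden layer, i.e. a group
# of neurons that spans adjacent layers (that device's "super neuron").
partitions = [super_neuron_slices(w, NUM_DEVICES) for w in widths[1:-1]]

def forward(x, alive):
    """Forward pass that skips the neuron slices held by failed devices."""
    h = x
    for layer, W in enumerate(weights):
        h = h @ W
        if layer < len(weights) - 1:
            h = np.maximum(h, 0.0)        # ReLU on hidden layers
        if layer < len(partitions):       # hidden layers are partitioned
            mask = np.zeros(W.shape[1])
            for dev in alive:
                mask[partitions[layer][dev]] = 1.0
            h = h * mask                  # slices on failed devices contribute zeros
    return h

x = rng.standard_normal((2, widths[0]))
print(forward(x, alive=[0, 1, 2]))   # all devices up
print(forward(x, alive=[0, 2]))      # device 1 failed: degraded, not broken
```

Running the sketch with one device removed zeroes only part of each hidden layer, so the forward pass still produces (degraded) outputs rather than failing outright, which is the loss-resilience property the abstract highlights.
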
Appears in Collections:
College of Artificial Intelligence > Department of Computer Science and Engineering > Journal papers
Files in This Item:
There are no files associated with this item.