Full metadata record

dc.contributor.author: 오유란
dc.date.accessioned: 2019-10-02T02:48:04Z
dc.date.available: 2019-10-02T02:48:04Z
dc.date.issued: 2019
dc.identifier.issn: 1936-7228
dc.identifier.other: OAK-25402
dc.identifier.uri: https://dspace.ewha.ac.kr/handle/2015.oak/251564
dc.description.abstract: NavCog3 is a smartphone-based turn-by-turn navigation assistant we developed, designed specifically to enable independent navigation for people with visual impairments. Using off-the-shelf Bluetooth beacons installed in the surrounding environment and a commodity smartphone carried by the user, NavCog3 achieves unparalleled localization accuracy in real-world large-scale scenarios. By leveraging its accurate localization capabilities, NavCog3 guides the user through the environment and signals the presence of semantic features and points of interest in the vicinity (e.g., doorways, shops). To assess the capability of NavCog3 to promote independent mobility of individuals with visual impairments, we deployed and evaluated the system in two challenging real-world scenarios. The first scenario demonstrated the scalability of the system, which was permanently installed in a five-story shopping mall spanning three buildings and a public underground area. During the study, 10 participants traversed three fixed routes, and 43 participants traversed free-choice routes across the environment. The second scenario validated the system's usability in the wild in a hotel complex temporarily equipped with NavCog3 during a conference for individuals with visual impairments. In the hotel, almost 14.2 h of system usage data were collected from 37 unique users who made 280 trips across the environment, for a total of 30,200 m traversed. © 2019 Association for Computing Machinery. (See the illustrative localization sketch after this record.)
dc.language: English
dc.publisher: Association for Computing Machinery
dc.subject: Indoor navigation
dc.subject: Points of interest
dc.subject: User evaluation
dc.subject: Visual impairments
dc.subject: Voice interaction
dc.title: NavCog3 in the Wild: Large-scale Blind Indoor Navigation Assistant with Semantic Features
dc.type: Article
dc.relation.issue: 3
dc.relation.volume: 12
dc.relation.index: SCOPUS
dc.relation.journaltitle: ACM Transactions on Accessible Computing
dc.identifier.doi: 10.1145/3340319
dc.identifier.scopusid: 2-s2.0-85072012389
dc.author.google: Sato D.
dc.author.google: Oh U.
dc.author.google: Guerreiro J.
dc.author.google: Ahmetovic D.
dc.author.google: Naito K.
dc.author.google: Takagi H.
dc.author.google: Kitani K.M.
dc.author.google: Asakawa C.
dc.contributor.scopusid: 오유란 (55569327700)
dc.date.modifydate: 20240322133750
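
Illustrative localization sketch: the abstract above describes turning Bluetooth beacon signals received by a commodity smartphone into a position estimate. The Python sketch below shows one generic way such a scan can be converted into coordinates, using a log-distance path-loss model and a weighted centroid of beacon positions. The beacon IDs, positions, calibration values, and path-loss exponent are hypothetical, and the weighted-centroid approach is a stand-in for illustration only; it is not taken from the NavCog3 implementation.

# Minimal sketch of BLE-beacon proximity localization (illustrative, not the NavCog3 algorithm).
# All beacon positions, RSSI values, and calibration constants below are hypothetical.
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class Beacon:
    x: float          # beacon position on the floor plan, metres
    y: float
    tx_power: float   # expected RSSI (dBm) measured at 1 m, from calibration


def rssi_to_distance(rssi: float, tx_power: float, path_loss_exp: float = 2.0) -> float:
    """Estimate distance (m) from RSSI with the log-distance path-loss model."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))


def weighted_centroid(beacons: Dict[str, Beacon], scan: Dict[str, float]) -> Tuple[float, float]:
    """Estimate the phone's position as a centroid of beacon positions weighted by 1/distance."""
    wx = wy = wsum = 0.0
    for beacon_id, rssi in scan.items():
        b = beacons[beacon_id]
        d = max(rssi_to_distance(rssi, b.tx_power), 0.1)  # clamp to avoid division by zero
        w = 1.0 / d
        wx += w * b.x
        wy += w * b.y
        wsum += w
    return wx / wsum, wy / wsum


if __name__ == "__main__":
    beacons = {
        "B1": Beacon(0.0, 0.0, -59.0),
        "B2": Beacon(10.0, 0.0, -59.0),
        "B3": Beacon(5.0, 8.0, -59.0),
    }
    scan = {"B1": -68.0, "B2": -75.0, "B3": -80.0}  # one BLE scan, RSSI in dBm
    print("estimated position (m):", weighted_centroid(beacons, scan))

In practice, a deployed system would smooth successive estimates over time (e.g., with a probabilistic filter) rather than trust a single scan, but the sketch conveys the basic beacon-to-position step the abstract refers to.
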
Appears in Collections:
College of Artificial Intelligence > Department of Computer Science and Engineering > Journal papers
Files in This Item:
There are no files associated with this item.