Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 민동보 | * |
dc.date.accessioned | 2021-08-12T16:31:27Z | - |
dc.date.available | 2021-08-12T16:31:27Z | - |
dc.date.issued | 2021 | * |
dc.identifier.issn | 0162-8828 | * |
dc.identifier.issn | 1939-3539 | * |
dc.identifier.other | OAK-29541 | * |
dc.identifier.uri | https://dspace.ewha.ac.kr/handle/2015.oak/258663 | - |
dc.description.abstract | We present the deep self-correlation (DSC) descriptor for establishing dense correspondences between images taken under different imaging modalities, such as different spectral ranges or lighting conditions. We encode local self-similar structure in a pyramidal manner that yields both more precise localization ability and greater robustness to non-rigid image deformations. Specifically, DSC first computes multiple self-correlation surfaces with randomly sampled patches over a local support window, and then builds pyramidal self-correlation surfaces through average pooling on the surfaces. The feature responses on the self-correlation surfaces are then encoded through spatial pyramid pooling in a log-polar configuration. To better handle geometric variations such as scale and rotation, we additionally propose the geometry-invariant DSC (GI-DSC) that leverages multi-scale self-correlation computation and canonical orientation estimation. In contrast to descriptors based on deep convolutional neural networks (CNNs), DSC and GI-DSC are training-free (i.e., handcrafted descriptors), are robust to cross-modal variations, and generalize well across modalities. Extensive experiments demonstrate the state-of-the-art performance of DSC and GI-DSC on challenging cases of cross-modal image pairs having photometric and/or geometric variations. | * |
dc.language | English | * |
dc.publisher | IEEE COMPUTER SOC | * |
dc.subject | Strain | * |
dc.subject | Lighting | * |
dc.subject | Estimation | * |
dc.subject | Benchmark testing | * |
dc.subject | Imaging | * |
dc.subject | Robustness | * |
dc.subject | Visualization | * |
dc.subject | Cross-modal correspondence | * |
dc.subject | pyramidal structure | * |
dc.subject | self-correlation | * |
dc.subject | local self-similarity | * |
dc.subject | non-rigid deformation | * |
dc.title | Dense Cross-Modal Correspondence Estimation With the Deep Self-Correlation Descriptor | * |
dc.type | Article | * |
dc.relation.issue | 7 | * |
dc.relation.volume | 43 | * |
dc.relation.index | SCIE | * |
dc.relation.index | SCOPUS | * |
dc.relation.startpage | 2345 | * |
dc.relation.lastpage | 2359 | * |
dc.relation.journaltitle | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | * |
dc.identifier.doi | 10.1109/TPAMI.2020.2965528 | * |
dc.identifier.wosid | WOS:000692540900013 | * |
dc.identifier.scopusid | 2-s2.0-85108022643 | * |
dc.author.google | Kim, Seungryong | * |
dc.author.google | Min, Dongbo | * |
dc.author.google | Lin, Stephen | * |
dc.author.google | Sohn, Kwanghoon | * |
dc.contributor.scopusid | 민동보(7201669172) | * |
dc.date.modifydate | 20240322133757 | * |
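
The abstract above outlines the DSC pipeline in three steps: self-correlation surfaces computed from randomly sampled patches over a local support window, pyramidal average pooling on those surfaces, and spatial pyramid pooling in a log-polar configuration. The NumPy sketch below illustrates that pipeline only in broad strokes; the patch counts, NCC as the correlation measure, the 2x2 average pooling, and max-pooling within log-polar bins are illustrative assumptions, not the paper's exact formulation or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def ncc_surface(window, patch):
    """Correlate one sampled patch against every position of the support
    window using zero-mean normalized cross-correlation (an assumed choice
    of correlation measure for this sketch)."""
    ph, pw = patch.shape
    h, w = window.shape
    p = patch - patch.mean()
    pn = np.linalg.norm(p) + 1e-8
    out = np.zeros((h - ph + 1, w - pw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            q = window[y:y + ph, x:x + pw]
            q = q - q.mean()
            out[y, x] = (p * q).sum() / (pn * (np.linalg.norm(q) + 1e-8))
    return out

def avg_pool2(surfaces):
    """2x2 average pooling over a stack of surfaces (crops odd borders)."""
    s = surfaces[:, : surfaces.shape[1] // 2 * 2, : surfaces.shape[2] // 2 * 2]
    return 0.25 * (s[:, ::2, ::2] + s[:, 1::2, ::2]
                   + s[:, ::2, 1::2] + s[:, 1::2, 1::2])

def log_polar_pool(surface, n_rad=2, n_ang=4):
    """Pool surface responses within log-polar bins around the center."""
    h, w = surface.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    theta = np.mod(np.arctan2(yy - cy, xx - cx), 2 * np.pi)
    r_edges = np.logspace(0, np.log10(r.max() + 1), n_rad + 1) - 1
    r_edges[-1] = r.max() + 1e-6  # include the outermost pixels
    feats = []
    for i in range(n_rad):
        for j in range(n_ang):
            mask = ((r >= r_edges[i]) & (r < r_edges[i + 1])
                    & (theta >= j * 2 * np.pi / n_ang)
                    & (theta < (j + 1) * 2 * np.pi / n_ang))
            feats.append(surface[mask].max() if mask.any() else 0.0)
    return np.array(feats)

def dsc_descriptor(window, patch_size=5, n_patches=8,
                   levels=2, n_rad=2, n_ang=4):
    """Sketch of a DSC-like descriptor for one support window:
    random patches -> self-correlation surfaces -> pyramidal average
    pooling -> log-polar pooling -> L2-normalized feature vector."""
    h, w = window.shape
    ys = rng.integers(0, h - patch_size + 1, n_patches)
    xs = rng.integers(0, w - patch_size + 1, n_patches)
    surfaces = np.stack([
        ncc_surface(window, window[y:y + patch_size, x:x + patch_size])
        for y, x in zip(ys, xs)])
    feats = []
    for lvl in range(levels):
        for s in surfaces:
            feats.append(log_polar_pool(s, n_rad, n_ang))
        if lvl < levels - 1:
            surfaces = avg_pool2(surfaces)
    d = np.concatenate(feats)
    return d / (np.linalg.norm(d) + 1e-8)
```

Because the descriptor is built from correlations of the window with its own patches rather than from raw intensities, it depends only on local self-similar structure, which is the property the abstract credits for robustness across spectral ranges and lighting conditions.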