<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://dspace.ewha.ac.kr/handle/2015.oak/267951">
    <title>DSpace Community:</title>
    <link>https://dspace.ewha.ac.kr/handle/2015.oak/267951</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://dspace.ewha.ac.kr/handle/2015.oak/274658" />
        <rdf:li rdf:resource="https://dspace.ewha.ac.kr/handle/2015.oak/274605" />
        <rdf:li rdf:resource="https://dspace.ewha.ac.kr/handle/2015.oak/273744" />
        <rdf:li rdf:resource="https://dspace.ewha.ac.kr/handle/2015.oak/273682" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T09:49:12Z</dc:date>
  </channel>
  <item rdf:about="https://dspace.ewha.ac.kr/handle/2015.oak/274658">
    <title>SeqDA-HLA: Language Model and Dual Attention-Based Network to Predict Peptide-HLA Class I Binding</title>
    <link>https://dspace.ewha.ac.kr/handle/2015.oak/274658</link>
    <description>Title: SeqDA-HLA: Language Model and Dual Attention-Based Network to Predict Peptide-HLA Class I Binding
Ewha Authors: 최장환
Abstract: Accurate prediction of peptide-HLA class I binding is crucial for immunotherapy and vaccine development, but existing methods often struggle to capture the intricate biological relationships between peptides and diverse HLA alleles. Here, we introduce SeqDA-HLA, a pan-specific prediction model that combines language model-based embeddings (ELMo) with a dual attention mechanism-self-aligned cross-attention and self-attention-to capture rich contextual features and pairwise interactions. Evaluations against 14 state-of-the-art methods on multiple benchmark datasets demonstrate that SeqDA-HLA consistently outperforms competing approaches, achieving an AUC value up to 0.9856 and accuracy as high as 0.9408. Notably, SeqDA-HLA maintains robust performance across peptide lengths (8-14) and HLA alleles, showcasing its generalizability. Beyond predictive accuracy, SeqDA-HLA offers interpretability by highlighting essential anchor residues and revealing key binding motifs, thereby aligning with experimentally validated biological insights. As a further demonstration of practical impact, we fine-tune SeqDA-HLA on an Influenza virus dataset, successfully predicting binding changes induced by single amino acid mutations. Overall, SeqDA-HLA serves as a powerful and interpretable tool for peptide-HLA binding prediction, with potential applications in epitope-based vaccine design and precision immunotherapy.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://dspace.ewha.ac.kr/handle/2015.oak/274605">
    <title>UniTT-Stereo: Unified Training of Transformer for Enhanced Stereo Matching</title>
    <link>https://dspace.ewha.ac.kr/handle/2015.oak/274605</link>
    <description>Title: UniTT-Stereo: Unified Training of Transformer for Enhanced Stereo Matching
Ewha Authors: 민동보
Abstract: Unlike other vision tasks where Transformer-based approaches are becoming increasingly common, stereo depth estimation is still dominated by convolution-based models. This is mainly due to the limited availability of real-world ground truth for stereo matching, which hinders the performance improvement of transformer-based stereo approaches. In this paper, we propose UniTT-Stereo, a method to maximize the potential of Transformer-based stereo architectures by unifying self-supervised learning for pre-training with stereo matching framework based on supervised learning. Specifically, we design a dual-task learning scheme that reconstructs masked regions of an input image while simultaneously predicting corresponding points in the paired image. We demonstrate that this approach encourages the model to learn locality-aware representations, which are critical to overcoming the data inefficiency of Transformers. Moreover, to address these challenging tasks of reconstruction-and-prediction, we propose a variable masking ratio strategy that promotes robustness to varying levels of visual information. Additionally, we introduce losses that exploit stereo geometry and correspondence at the appearance, feature, and disparity levels. To further validate the effectiveness of our design, we conduct frequency decomposition and attention map visualization, which reveal how the model effectively captures fine-grained structures and cross-view correspondences. State-of-the-art performance of UniTT-Stereo is validated on various benchmarks such as the ETH3D, KITTI 2012, and KITTI 2015 datasets. Code is available at: https://github.com/00kim/UniTT-Stereo</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://dspace.ewha.ac.kr/handle/2015.oak/273744">
    <title>Enhancing Vulnerability Reports With Automated and Augmented Description Summarization</title>
    <link>https://dspace.ewha.ac.kr/handle/2015.oak/273744</link>
    <description>Title: Enhancing Vulnerability Reports With Automated and Augmented Description Summarization
Ewha Authors: 양대헌
Abstract: Public vulnerability databases, such as the National Vulnerability Database (NVD), document vulnerabilities and facilitate threat information sharing. However, they often suffer from short descriptions and outdated or insufficient information. In this paper, we introduce Zad, a system designed to enrich NVD vulnerability descriptions by leveraging external resources. Zad consists of two pipelines: one collects and filters supplementary data using two encoders to build a detailed dataset, while the other fine-tunes a pre-trained model on this dataset to generate enriched descriptions. By addressing brevity and improving content quality, Zad produces more comprehensive and cohesive vulnerability descriptions. We evaluate Zad using standard summarization metrics and human assessments, demonstrating its effectiveness in enhancing vulnerability information.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://dspace.ewha.ac.kr/handle/2015.oak/273682">
    <title>Rethinking I/O Caching for Large Language Model Inference on Resource-Constrained Mobile Platforms</title>
    <link>https://dspace.ewha.ac.kr/handle/2015.oak/273682</link>
    <description>Title: Rethinking I/O Caching for Large Language Model Inference on Resource-Constrained Mobile Platforms
Ewha Authors: 반효경
Abstract: Large language models (LLMs) have traditionally relegated inference to remote servers, leaving mobile devices as thin clients. Recently, advances in mobile GPUs and NPUs have made on-device inference increasingly feasible, particularly for privacy-sensitive and personalized applications. However, executing LLMs directly on resource-constrained devices exposes severe I/O bottlenecks, as repeated accesses to large weight files can overwhelm limited memory and storage bandwidth. Prior studies have focused on internal mechanisms such as KV caching, while the role of the host OS buffer cache remains underexplored. This paper closes that gap with file-level trace analysis of real-world mobile LLM applications, and identifies three characteristic access patterns: (1) one-time sequential scans during initialization, (2) persistent hot sets (e.g., tokenizers, metadata, indices), and (3) recurring loop accesses to model weight files. Guided by these observations, we propose LLM-aware buffer cache strategies and derive cache-sizing guidelines that relate loop size, host-set coverage, and storage bandwidth. We further compare smartwatch-class and smartphone-class platforms to clarify feasible model sizes and practical hardware prerequisites for local inference. Our results provide system-level guidance for I/O subsystem design that enables practical on-device LLM inference in future mobile and IoT devices.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

