
The realtime method based on audio scenegraph for 3D sound rendering

Title
The realtime method based on audio scenegraph for 3D sound rendering
Authors
Yi J.-S.; Seong S.-J.; Nam Y.-H.
Ewha Authors
Nam Y.-H. (남양희)
SCOPUS Author ID
Nam Y.-H. (남양희)
Issue Date
2005
Journal Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
ISSN
0302-9743
Citation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) vol. 3767 LNCS, pp. 720 - 730
Indexed
SCOPUS; WOS
Document Type
Conference Paper
Abstract
Recent studies have shown that combining auditory and visual cues enhances the sense of immersion in virtual reality and interactive entertainment applications. However, realtime 3D audiovisual rendering is computationally expensive. To reduce the realtime computation, this paper proposes a framework for optimized 3D sound rendering built around an Audio Scenegraph, a structure that holds reduced 3D scene information together with the parameters needed to compute early reflections of sound. In the pre-computation phase, graphic reduction and sound-source reduction are performed for an environment consisting of a complex 3D scene, multiple sound sources, and a listener: the complex scene is reduced to the set of facets that are significant for sound rendering, and the result is represented as the Audio Scenegraph we define. The graph is then transmitted to the sound engine, which clusters sound sources to reduce the realtime cost of computing sound propagation. Source reduction requires estimating early-reflection times for perceptual culling and, based on those estimates, clustering the sounds that can reach the facets of each subspace. In the realtime phase, given the listener's position, orientation, and subspace index, sounds inside the listener's subspace are rendered with the image method, while sounds outside it are played by assigning the precomputed clusters to audio buffers. Because most per-source computation is performed offline, the realtime cost remains stable even as the number of sound sources grows; 3D sound rendering took nearly constant time regardless of scene complexity, even with hundreds of sound sources. As future work, the perceptual acceptability of the grouping algorithm should be evaluated through user tests. © Springer-Verlag Berlin Heidelberg 2005.
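To make the two-phase pipeline in the abstract concrete, here is a minimal sketch: an Audio Scenegraph of subspaces holding significant facets, an offline pass that keeps an outside source in a subspace's cluster only if its estimated early reflection would arrive soon enough to matter, and a realtime pass that renders inside sources with a (simplified) image method and plays clustered sources from buffers. All class and function names, the 80 ms threshold, and the crude delay estimate are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative sketch only; names and thresholds are assumptions, not the paper's API.
import math
from dataclasses import dataclass, field

SPEED_OF_SOUND = 343.0  # m/s

@dataclass
class Facet:
    center: tuple              # representative point of a significant reflective surface
    absorption: float = 0.1

@dataclass
class SoundSource:
    position: tuple
    buffer_id: int

@dataclass
class SubSpace:
    facets: list = field(default_factory=list)
    inside_sources: list = field(default_factory=list)
    outside_sources: list = field(default_factory=list)
    cluster: list = field(default_factory=list)   # filled during pre-computation

@dataclass
class AudioScenegraph:
    subspaces: list = field(default_factory=list)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def precompute(graph, threshold_ms=80.0):
    """Offline phase: keep an outside source in a subspace's cluster only if a
    first-order reflection via some facet would arrive within the threshold."""
    for space in graph.subspaces:
        space.cluster = [
            src for src in space.outside_sources
            if any(1000.0 * dist(src.position, f.center) / SPEED_OF_SOUND <= threshold_ms
                   for f in space.facets)
        ]

def render_frame(graph, listener_pos, space_index):
    """Realtime phase: image-method playback for sources inside the listener's
    subspace, buffer playback for the precomputed cluster of outside sources."""
    space = graph.subspaces[space_index]
    for src in space.inside_sources:
        # Direct-path delay only; a full image method would also mirror the source
        # across each facet to add early reflections.
        delay_ms = 1000.0 * dist(src.position, listener_pos) / SPEED_OF_SOUND
        print(f"image-method source (buffer {src.buffer_id}): delay {delay_ms:.1f} ms")
    for src in space.cluster:
        print(f"clustered source (buffer {src.buffer_id}): play pre-mixed buffer")

# Tiny usage example
scene = AudioScenegraph([SubSpace(
    facets=[Facet(center=(0.0, 0.0, 5.0))],
    inside_sources=[SoundSource((1.0, 0.0, 2.0), buffer_id=0)],
    outside_sources=[SoundSource((3.0, 0.0, 12.0), buffer_id=1)],
)])
precompute(scene)
render_frame(scene, listener_pos=(0.0, 0.0, 0.0), space_index=0)
```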
DOI
10.1007/11581772_63
ISBN
3540300279 (ISBN-10); 9783540300274 (ISBN-13)
Appears in Collections:
College of Science & Industry Convergence > Department of Content Convergence > Journal papers
Files in This Item:
There are no files associated with this item.