Liu Yang (刘扬), Zheng Fengbin, Zuo Xianyu. [J]. High Technology Letters, 2016, 22(1): 90-98
CSMCCVA: Framework of cross-modal semantic mapping based on cognitive computing of visual and auditory sensations
|
DOI: 10.3772/j.issn.1006-6748.2016.01.013
Keywords: multimedia neural cognitive computing (MNCC), brain-like computing, cross-modal semantic mapping (CSM), selective attention, limbic system, multisensory integration, memory-enhancing mechanism
Authors: Liu Yang (刘扬), Zheng Fengbin, Zuo Xianyu
Abstract:
Cross-modal semantic mapping and cross-media retrieval are key problems for multimedia search engines. This study analyzes the hierarchy, functionality, and structure of the visual and auditory sensations of the cognitive system, and establishes a brain-like cross-modal semantic mapping framework based on cognitive computing of visual and auditory sensations. The framework takes into account the mechanisms of visual-auditory multisensory integration, selective attention in the thalamo-cortical circuit, emotional control in the limbic system, and memory enhancement in the hippocampus. Algorithms for cross-modal semantic mapping are then given. Experimental results show that the framework can be effectively applied to cross-modal semantic mapping, and it is also of significance for brain-like computing with non-von Neumann architectures.
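The abstract describes mapping visual and auditory content into a common semantic space so that a query in one modality can retrieve items in the other. As a generic illustration only (this is not the paper's CSMCCVA algorithm), the sketch below projects hypothetical visual and audio feature vectors into a shared space through placeholder random linear maps, then ranks audio items by cosine similarity to a visual query; every name, dimension, and weight here is invented for demonstration.

```python
import math
import random

random.seed(0)

def project(vec, weights):
    """Linearly project a feature vector into the shared semantic space."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def cosine(a, b):
    """Cosine similarity between two vectors in the shared space."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical feature dimensions: 4-dim visual, 3-dim auditory, 2-dim shared space.
# In a real system these projections would be learned, not random.
W_visual = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
W_audio = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

visual_query = [0.9, 0.1, 0.4, 0.7]           # an invented image feature vector
audio_items = {"dog_bark": [0.8, 0.2, 0.5],   # invented candidate audio vectors
               "car_horn": [0.1, 0.9, 0.3]}

# Cross-media retrieval: rank audio items by similarity to the visual query
# after both are embedded in the shared semantic space.
q = project(visual_query, W_visual)
ranked = sorted(audio_items,
                key=lambda k: cosine(q, project(audio_items[k], W_audio)),
                reverse=True)
print(ranked[0])  # the audio item closest to the visual query in shared space
```

The essential design point, common to most cross-modal mapping schemes, is that retrieval happens in the shared space rather than in either raw feature space, so heterogeneous modalities become directly comparable.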
|
|
|