<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Mixture-of-Experts on Speech/Audio Paper Digest</title>
    <link>https://nanless.github.io/audio-paper-digest-blog/tags/%E6%B7%B7%E5%90%88%E4%B8%93%E5%AE%B6/</link>
    <description>Recent content in Mixture-of-Experts on Speech/Audio Paper Digest</description>
    <generator>Hugo</generator>
    <language>zh-cn</language>
    <lastBuildDate>Wed, 29 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://nanless.github.io/audio-paper-digest-blog/tags/%E6%B7%B7%E5%90%88%E4%B8%93%E5%AE%B6/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Adaptive Task-Incremental Learning For Underwater Acoustic Recognition Based on Mixture-of-Experts Adapter</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-adaptive-task-incremental-learning-for-underwater/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-adaptive-task-incremental-learning-for-underwater/</guid>
      <description>Underwater acoustic target recognition | 7.0/10</description>
    </item>
    <item>
      <title>Dual-Perspective Multimodal Sentiment Analysis with MoE Fusion: Representation Learning via Semantic Resonance and Divergence</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-dual-perspective-multimodal-sentiment-analysis/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-dual-perspective-multimodal-sentiment-analysis/</guid>
      <description>Multimodal sentiment analysis | 7.0/10</description>
    </item>
    <item>
      <title>Improving Multimodal Brain Encoding Model with Dynamic Subject-Awareness Routing</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-improving-multimodal-brain-encoding-model-with/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-improving-multimodal-brain-encoding-model-with/</guid>
      <description>Brain signal encoding | 8.0/10</description>
    </item>
    <item>
      <title>Nemotron 3 Nano Omni: Efficient and Open Multimodal Intelligence</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-nemotron-3-nano-omni-efficient-and-open/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-nemotron-3-nano-omni-efficient-and-open/</guid>
      <description>Multimodal models | 8.5/10</description>
    </item>
    <item>
      <title>Selective Hub Fusion with Modality-Heterogeneous Experts for Multimodal Emotion Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-selective-hub-fusion-with-modality-heterogeneous/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-selective-hub-fusion-with-modality-heterogeneous/</guid>
      <description>Multimodal models | 6.5/10</description>
    </item>
    <item>
      <title>SURE: Synergistic Uncertainty-Aware Reasoning for Multimodal Emotion Recognition in Conversations</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sure-synergistic-uncertainty-aware-reasoning-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sure-synergistic-uncertainty-aware-reasoning-for/</guid>
      <description>Speech emotion recognition | 7.5/10</description>
    </item>
    <item>
      <title>SwitchCodec: Adaptive Residual-Expert Sparse Quantization for High-Fidelity Neural Audio Coding</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-switchcodec-adaptive-residual-expert-sparse/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-switchcodec-adaptive-residual-expert-sparse/</guid>
      <description>Audio generation | 8.5/10</description>
    </item>
    <item>
      <title>MoVE: Translating Laughter and Tears via Mixture of Vocalization Experts in Speech-to-Speech Translation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-move-translating-laughter-and-tears-via-mixture/</link>
      <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-move-translating-laughter-and-tears-via-mixture/</guid>
      <description>This paper addresses the problem that speech-to-speech translation (S2ST) systems commonly lose the non-linguistic vocalizations (such as laughter and crying) and emotional information present in the source speech, which severely degrades the naturalness and accuracy of cross-lingual communication. To this end, the authors make three core contributions: first, they design a scalable, automated data synthesis pipeline for generating a large-scale, high-quality English-Chinese expressive S2ST parallel corpus, overcoming the bottleneck of scarce training data</description>
    </item>
  </channel>
</rss>
