<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Large Audio Models on Speech/Audio Paper Digest</title>
    <link>https://nanless.github.io/audio-paper-digest-blog/tags/%E9%9F%B3%E9%A2%91%E5%A4%A7%E6%A8%A1%E5%9E%8B/</link>
    <description>Recent content in Large Audio Models on Speech/Audio Paper Digest</description>
    <generator>Hugo</generator>
    <language>zh-cn</language>
    <lastBuildDate>Wed, 29 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://nanless.github.io/audio-paper-digest-blog/tags/%E9%9F%B3%E9%A2%91%E5%A4%A7%E6%A8%A1%E5%9E%8B/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>A Generative-First Neural Audio Autoencoder</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-generative-first-neural-audio-autoencoder/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-generative-first-neural-audio-autoencoder/</guid>
      <description>Music Generation | 8.5/10</description>
    </item>
    <item>
      <title>AR&amp;D: A Framework for Retrieving and Describing Concepts for Interpreting AudioLLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-ard-a-framework-for-retrieving-and-describing/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-ard-a-framework-for-retrieving-and-describing/</guid>
      <description>Large Audio Models | 6.5/10</description>
    </item>
    <item>
      <title>Auditory Illusion Benchmark for Large Audio Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-auditory-illusion-benchmark-for-large-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-auditory-illusion-benchmark-for-large-audio/</guid>
      <description>Model Evaluation | 7.0/10</description>
    </item>
    <item>
      <title>Can Large Audio Language Models Understand Audio Well? Speech, Scene and Events Understanding Benchmark for LALMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-can-large-audio-language-models-understand-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-can-large-audio-language-models-understand-audio/</guid>
      <description>Benchmarking | 7.0/10</description>
    </item>
    <item>
      <title>Deepaq: A Perceptual Audio Quality Metric Based on Foundational Models and Weakly Supervised Learning</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-deepaq-a-perceptual-audio-quality-metric-based-on/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-deepaq-a-perceptual-audio-quality-metric-based-on/</guid>
      <description>Audio Quality Assessment | 7.5/10</description>
    </item>
    <item>
      <title>DisContSE: Single-Step Diffusion Speech Enhancement based on Joint Discrete and Continuous Embeddings</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-discontse-single-step-diffusion-speech/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-discontse-single-step-diffusion-speech/</guid>
      <description>Speech Enhancement | 8.5/10</description>
    </item>
    <item>
      <title>DSpAST: Disentangled Representations for Spatial Audio Reasoning with Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-dspast-disentangled-representations-for-spatial/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-dspast-disentangled-representations-for-spatial/</guid>
      <description>Audio Question Answering | 8.0/10</description>
    </item>
    <item>
      <title>ECHO: Frequency-Aware Hierarchical Encoding for Variable-Length Signals</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-echo-frequency-aware-hierarchical-encoding-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-echo-frequency-aware-hierarchical-encoding-for/</guid>
      <description>Audio Classification | 9.5/10</description>
    </item>
    <item>
      <title>Emo-TTA: Improving Test-Time Adaptation of Audio-Language Models for Speech Emotion Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emo-tta-improving-test-time-adaptation-of-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emo-tta-improving-test-time-adaptation-of-audio/</guid>
      <description>Speech Emotion Recognition | 7.0/10</description>
    </item>
    <item>
      <title>Emotional Damage: Investigating Safety Vulnerabilities of Large Audio-Language Models Under Speaker Emotional Variations</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emotional-damage-investigating-safety/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emotional-damage-investigating-safety/</guid>
      <description>Audio Safety | 7.5/10</description>
    </item>
    <item>
      <title>Evaluating Compositional Structure in Audio Representations</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-evaluating-compositional-structure-in-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-evaluating-compositional-structure-in-audio/</guid>
      <description>Model Evaluation | 7.0/10</description>
    </item>
    <item>
      <title>Exploring How Audio Effects Alter Emotion with Foundation Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-exploring-how-audio-effects-alter-emotion-with/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-exploring-how-audio-effects-alter-emotion-with/</guid>
      <description>Music Understanding | 7.0/10</description>
    </item>
    <item>
      <title>From Contrast to Commonality: Audio Commonality Captioning for Enhanced Audio-Text Cross-Modal Understanding in Multimodal LLMS</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-from-contrast-to-commonality-audio-commonality/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-from-contrast-to-commonality-audio-commonality/</guid>
      <description>Audio Scene Understanding | 7.5/10</description>
    </item>
    <item>
      <title>ICASSP 2026 - Large Audio Models Paper List</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/icassp2026-task-123/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/icassp2026-task-123/</guid>
      <description>A total of 1 ICASSP 2026 paper in the Large Audio Models area</description>
    </item>
    <item>
      <title>Improving Audio Question Answering with Variational Inference</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-improving-audio-question-answering-with/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-improving-audio-question-answering-with/</guid>
      <description>Audio Question Answering | 7.5/10</description>
    </item>
    <item>
      <title>Investigating Modality Contribution in Audio LLMs for Music</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-investigating-modality-contribution-in-audio-llms/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-investigating-modality-contribution-in-audio-llms/</guid>
      <description>Model Evaluation | 6.5/10</description>
    </item>
    <item>
      <title>Keeping Models Listening: Segment- and time-aware attention rescaling at decoding time</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-keeping-models-listening-segment-and-time-aware/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-keeping-models-listening-segment-and-time-aware/</guid>
      <description>Audio Question Answering | 7.5/10</description>
    </item>
    <item>
      <title>Modeling Inter-Segment Relationships in Speech for Dementia Detection with Audio Spectrogram Transformers and Graph Attention Networks</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-modeling-inter-segment-relationships-in-speech/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-modeling-inter-segment-relationships-in-speech/</guid>
      <description>Speech Biomarkers | 7.0/10</description>
    </item>
    <item>
      <title>MSF-SER: Enriching Acoustic Modeling with Multi-Granularity Semantics for Speech Emotion Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-msf-ser-enriching-acoustic-modeling-with-multi/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-msf-ser-enriching-acoustic-modeling-with-multi/</guid>
      <description>Speech Emotion Recognition | 7.5/10</description>
    </item>
    <item>
      <title>Multimodal LLMs as Expert Speech Annotators: Acoustic Macro-Descriptors for Parkinson&#39;s Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-multimodal-llms-as-expert-speech-annotators/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-multimodal-llms-as-expert-speech-annotators/</guid>
      <description>Speech Biomarkers | 6.5/10</description>
    </item>
    <item>
      <title>Scaling Ambiguity: Augmenting Human Annotation in Speech Emotion Recognition with Audio-Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-scaling-ambiguity-augmenting-human-annotation-in/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-scaling-ambiguity-augmenting-human-annotation-in/</guid>
      <description>Speech Emotion Recognition | 6.5/10</description>
    </item>
    <item>
      <title>Segmentwise Pruning in Audio-Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-segmentwise-pruning-in-audio-language-models/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-segmentwise-pruning-in-audio-language-models/</guid>
      <description>Audio Question Answering | 7.0/10</description>
    </item>
    <item>
      <title>SONAR: Self-Distilled Continual Pre-Training for Domain Adaptive Audio Representation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sonar-self-distilled-continual-pre-training-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sonar-self-distilled-continual-pre-training-for/</guid>
      <description>Audio Event Detection | 7.0/10</description>
    </item>
    <item>
      <title>Sparse Autoencoders Make Audio Foundation Models More Explainable</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sparse-autoencoders-make-audio-foundation-models/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sparse-autoencoders-make-audio-foundation-models/</guid>
      <description>Model Evaluation | 6.5/10</description>
    </item>
    <item>
      <title>Teaching Audio Models to Reason: A Unified Framework for Source- and Layer-Wise Distillation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-teaching-audio-models-to-reason-a-unified/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-teaching-audio-models-to-reason-a-unified/</guid>
      <description>Audio Question Answering | 7.0/10</description>
    </item>
    <item>
      <title>Temporal Distillation for Music Representation Learning</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-temporal-distillation-for-music-representation/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-temporal-distillation-for-music-representation/</guid>
      <description>Music Information Retrieval | 7.5/10</description>
    </item>
    <item>
      <title>Test-Time Scaling for Auditory Cognition in Audio Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-test-time-scaling-for-auditory-cognition-in-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-test-time-scaling-for-auditory-cognition-in-audio/</guid>
      <description>Audio Question Answering | 7.0/10</description>
    </item>
    <item>
      <title>The Muse Benchmark: Probing Music Perception and Auditory Relational Reasoning in Audio LLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-the-muse-benchmark-probing-music-perception-and/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-the-muse-benchmark-probing-music-perception-and/</guid>
      <description>Music Understanding | 8.5/10</description>
    </item>
    <item>
      <title>Walking Through Uncertainty: An Empirical Study of Uncertainty Estimation for Audio-Aware Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-walking-through-uncertainty-an-empirical-study-of/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-walking-through-uncertainty-an-empirical-study-of/</guid>
      <description>Audio Question Answering | 7.5/10</description>
    </item>
    <item>
      <title>When Noise Lowers the Loss: Rethinking Likelihood-Based Evaluation in Music Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-when-noise-lowers-the-loss-rethinking-likelihood/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-when-noise-lowers-the-loss-rethinking-likelihood/</guid>
      <description>Music Generation | 7.0/10</description>
    </item>
    <item>
      <title>When Silence Matters: The Impact of Irrelevant Audio on Text Reasoning in Large Audio-Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-when-silence-matters-the-impact-of-irrelevant/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-when-silence-matters-the-impact-of-irrelevant/</guid>
      <description>Model Evaluation | 7.0/10</description>
    </item>
    <item>
      <title>When Voice Matters: A Controlled Study of Audio LLM Behavior in Clinical Decision-Making</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-when-voice-matters-a-controlled-study-of-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-when-voice-matters-a-controlled-study-of-audio/</guid>
      <description>Model Evaluation | 7.0/10</description>
    </item>
    <item>
      <title>All That Glitters Is Not Audio: Rethinking Text Priors and Audio Reliance in Audio-Language Evaluation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-all-that-glitters-is-not-audio-rethinking-text/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-all-that-glitters-is-not-audio-rethinking-text/</guid>
      <description>Audio Question Answering | 6.5/10</description>
    </item>
    <item>
      <title>HeadRouter: Dynamic Head-Weight Routing for Task-Adaptive Audio Token Pruning in Large Audio Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-headrouter-dynamic-head-weight-routing-for-task/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-headrouter-dynamic-head-weight-routing-for-task/</guid>
      <description>Large Audio Models | 8.0/10</description>
    </item>
    <item>
      <title>Listening with Time: Precise Temporal Awareness for Long-Form Audio Understanding</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-27-listening-with-time-precise-temporal-awareness/</link>
      <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-27-listening-with-time-precise-temporal-awareness/</guid>
      <description>Audio Scene Understanding | 8.0/10</description>
    </item>
    <item>
      <title>Benign Fine-Tuning Breaks Safety Alignment in Audio LLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-benign-fine-tuning-breaks-safety-alignment-in/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-benign-fine-tuning-breaks-safety-alignment-in/</guid>
      <description>This paper presents the first systematic study of how fine-tuning on benign (harmless) audio data damages the safety alignment of large audio models. The problem addressed: does the routine fine-tuning users perform to improve model performance inadvertently break the model's safety guardrails? As for the method, the authors propose a filtering framework based on embedding-space proximity that, along semantic, acoustic, and hybrid dimensions, selectively uses benign data that lies close to harmful content in representation space…</description>
    </item>
    <item>
      <title>HalluAudio: A Comprehensive Benchmark for Hallucination Detection in Large Audio-Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-halluaudio-a-comprehensive-benchmark-for/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-halluaudio-a-comprehensive-benchmark-for/</guid>
      <description>This paper addresses the lack of systematic evaluation tools for the pervasive "hallucination" problem in large audio-language models (LALMs), i.e., generating content that contradicts the audio evidence. The authors build and release HalluAudio, the first large-scale, multi-domain (speech, environmental sound, music), multi-task (binary classification, multiple choice, attribute verification, open-ended generation) human-verified benchmark for audio hallucination detection, containing more than…</description>
    </item>
    <item>
      <title>Qwen3.5-Omni Technical Report</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-qwen35-omni-technical-report/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-qwen35-omni-technical-report/</guid>
      <description>This technical report gives a comprehensive introduction to Qwen3.5-Omni, an omni-modal large language model that unifies understanding and generation of text, images, audio, and audio-visual content. The problem addressed is the limitations of existing models in real-time interaction, cross-modal reasoning, and autonomous agent behavior. The approach builds on a "Thinker-Talker" architecture and introduces several key innovations: 1) both the Thinker and the Talker adopt hybrid attention…</description>
    </item>
    <item>
      <title>Audio-Cogito: Towards Deep Audio Reasoning in Large Audio Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-audio-cogito-towards-deep-audio-reasoning-in/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-audio-cogito-towards-deep-audio-reasoning-in/</guid>
      <description>This paper addresses the limited capability and opaque reasoning of large audio language models (LALMs) on complex audio reasoning tasks. The core contribution is a fully open-source solution named Audio-Cogito, whose centerpiece is Cogito-Pipe, a four-stage automated data-construction pipeline for generating high-quality, diverse audio chain-of-thought (CoT) reasoning data…</description>
    </item>
    <item>
      <title>Audio-DeepThinker: Progressive Reasoning-Aware Reinforcement Learning for High-Quality Chain-of-Thought Emergence in Audio Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-audio-deepthinker-progressive-reasoning-aware/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-audio-deepthinker-progressive-reasoning-aware/</guid>
      <description>This paper addresses the lack of explicit, high-quality reasoning in large audio language models (LALMs). Existing methods are either constrained by the quality of supervised data or rely on coarse rewards, yielding chains of thought that are well-formed but lack acoustic grounding. The authors propose the Audio-DeepThinker framework with three core contributions: 1) a hybrid reasoning-similarity reward that combines LLM evaluation…</description>
    </item>
    <item>
      <title>Benign Fine-Tuning Breaks Safety Alignment in Audio LLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-benign-fine-tuning-breaks-safety-alignment-in/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-benign-fine-tuning-breaks-safety-alignment-in/</guid>
      <description>This paper is the first systematic study of the damaging effect of benign audio fine-tuning on the safety alignment of large audio models. The central question: does fine-tuning a model on entirely harmless audio data, purely to improve performance, unexpectedly weaken its ability to refuse harmful instructions? The authors propose a filtering framework based on embedding-space proximity, computing the distance between benign and harmful audio in the space of the model's internal or an external reference encoder…</description>
    </item>
    <item>
      <title>From Reactive to Proactive: Assessing the Proactivity of Voice Agents via ProVoice-Bench</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-from-reactive-to-proactive-assessing-the/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-from-reactive-to-proactive-assessing-the/</guid>
      <description>This paper addresses the fact that existing voice-agent benchmarks focus mainly on reactive responses while ignoring proactive perception and intervention. The authors propose ProVoice-Bench, the first benchmark framework dedicated to evaluating proactive voice agents. Through a multi-stage data-synthesis pipeline covering digital-state construction, scenario synthesis, dialogue generation, acoustic simulation, and dialogue assembly, it builds 1,182 high-quality…</description>
    </item>
    <item>
      <title>ICLAD: In-Context Learning with Comparison-Guidance for Audio Deepfake Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-iclad-in-context-learning-with-comparison/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-iclad-in-context-learning-with-comparison/</guid>
      <description>Targeting the poor in-the-wild generalization of audio deepfake detection models, this paper proposes a new paradigm named ICLAD. The framework leverages the in-context learning ability of audio language models (ALMs) to achieve training-free rapid adaptation. At its core is a novel pairwise comparison reasoning strategy: in an offline stage, the ALM is guided to generate, for each sample, both a "real" and a "fake"…</description>
    </item>
    <item>
      <title>LLM-Codec: Neural Audio Codec Meets Language Model Objectives</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-llm-codec-neural-audio-codec-meets-language-model/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-llm-codec-neural-audio-codec-meets-language-model/</guid>
      <description>This paper tackles a fundamental tension in speech language models (SLMs): neural audio codecs are optimized for waveform reconstruction while language models are optimized for sequence prediction, and this objective mismatch produces discrete speech tokens that have high entropy and are hard to predict. The authors propose the LLM-Codec training framework, which, without altering the codec or language-model architecture, introduces two LM-oriented regularization objectives…</description>
    </item>
    <item>
      <title>MoVE: Translating Laughter and Tears via Mixture of Vocalization Experts in Speech-to-Speech Translation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-move-translating-laughter-and-tears-via-mixture/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-move-translating-laughter-and-tears-via-mixture/</guid>
      <description>This paper addresses the widespread loss of non-verbal vocalizations (e.g., laughter, crying) and emotional prosody in speech-to-speech translation (S2ST) systems, which severely limits the naturalness and pragmatic accuracy of cross-lingual communication. The authors make three contributions: 1) a scalable expressive data-synthesis pipeline that automatically generates high-quality, emotion-annotated S2ST training pairs, overcoming the data-scarcity bottleneck; 2) MoVE (Mixture of Vocalization Experts)…</description>
    </item>
    <item>
      <title>VIBE: Voice-Induced open-ended Bias Evaluation for Large Audio-Language Models via Real-World Speech</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-vibe-voice-induced-open-ended-bias-evaluation-for/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-vibe-voice-induced-open-ended-bias-evaluation-for/</guid>
      <description>This paper addresses the under-evaluation of social bias in large audio-language models (LALMs) on open-ended generation tasks. Existing benchmarks rely mostly on synthetic speech and multiple-choice questions (MCQs) and cannot capture the stereotypes that models naturally express in realistic interaction. The authors propose the VIBE framework, whose core is feeding real human voice recordings into the model and probing it with open-ended generation tasks (e.g., story writing, personalized recommendation…</description>
    </item>
    <item>
      <title>Discrete Token Modeling for Multi-Stem Music Source Separation with Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-discrete-token-modeling-for-multi-stem-music/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-discrete-token-modeling-for-multi-stem-music/</guid>
      <description>This paper proposes a generative framework for multi-stem music source separation whose key innovation is recasting separation as conditional discrete token generation. Whereas traditional methods estimate continuous signals directly in the time-frequency domain, this method first uses the HCodec neural audio codec to convert waveforms into sequences of discrete acoustic and semantic tokens. Then a Conformer-based conditional encoder, from the mixture audio…</description>
    </item>
    <item>
      <title>NaijaS2ST: A Multi-Accent Benchmark for Speech-to-Speech Translation in Low-Resource Nigerian Languages</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-naijas2st-a-multi-accent-benchmark-for-speech-to/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-naijas2st-a-multi-accent-benchmark-for-speech-to/</guid>
      <description>This paper targets the core bottleneck for African low-resource languages in speech translation (S2ST and S2TT) research: a severe shortage of high-quality, multi-accent parallel speech data. The authors build the NaijaS2ST dataset, covering parallel speech between English and Hausa, Igbo, Yoruba, and Nigerian Pidgin, with about 50 hours per language and authentic speaker and accent diversity. Building on this dataset, the paper…</description>
    </item>
    <item>
      <title>TinyMU: A Compact Audio-Language Model for Music Understanding</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-tinymu-a-compact-audio-language-model-for-music/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-tinymu-a-compact-audio-language-model-for-music/</guid>
      <description>Addressing the huge parameter counts (billions), high training and inference cost, and poor edge-device deployability of existing large audio-language models (LALMs), this paper proposes TinyMU, a compact music language model with only 229M parameters. The authors build the MusicSkills-3.5M dataset of 3.5 million music question-answering samples spanning multiple-choice, binary-judgment, and open-ended formats, combined with…</description>
    </item>
    <item>
      <title>Audio-Cogito: Towards Deep Audio Reasoning in Large Audio Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-audio-cogito-towards-deep-audio-reasoning-in/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-audio-cogito-towards-deep-audio-reasoning-in/</guid>
      <description>This paper addresses large audio language models' (LALMs) limited ability on complex audio reasoning tasks and their reliance on expensive closed-source data. The authors propose a fully open-source solution named Audio-Cogito, whose core is Cogito-Pipe…</description>
    </item>
    <item>
      <title>AVID: A Benchmark for Omni-Modal Audio-Visual Inconsistency Understanding via Agent-Driven Construction</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-avid-a-benchmark-for-omni-modal-audio-visual/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-avid-a-benchmark-for-omni-modal-audio-visual/</guid>
      <description>This paper addresses the lack of systematic evaluation of omni-modal models' ability to understand audio-visual inconsistencies. Existing benchmarks either focus only on audio-visual aligned events or are limited to detecting low-level artifacts in deepfakes, and cannot assess a model's understanding of semantic-level contradictions in long videos. To this end, the authors…</description>
    </item>
    <item>
      <title>Beyond Transcription: Unified Audio Schema for Perception-Aware AudioLLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-beyond-transcription-unified-audio-schema-for/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-beyond-transcription-unified-audio-schema-for/</guid>
      <description>This paper tackles the core problem that current audio large language models (AudioLLMs) perform poorly on fine-grained acoustic perception tasks. The authors point out that the dominant ASR-centric training paradigm, by mapping audio to plain-text transcripts, systematically discards paralinguistic…</description>
    </item>
    <item>
      <title>Hijacking Large Audio-Language Models via Context-Agnostic and Imperceptible Auditory Prompt Injection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-hijacking-large-audio-language-models-via-context/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-hijacking-large-audio-language-models-via-context/</guid>
      <description>This paper exposes a new class of security threat against large audio-language models (LALMs): context-agnostic and imperceptible auditory prompt-injection attacks. By merely tampering with input audio (e.g., meeting recordings or music clips), an attacker can, without the user's knowledge, hijack the model's behavior…</description>
    </item>
    <item>
      <title>Listen, Pause, and Reason: Toward Perception-Grounded Hybrid Reasoning for Audio Understanding</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-listen-pause-and-reason-toward-perception/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-listen-pause-and-reason-toward-perception/</guid>
      <description>This paper addresses reasoning failures of large audio language models in complex audio scenes caused by perception errors. Inspired by auditory scene analysis, the authors propose a perception-grounded hybrid reasoning framework. First, they construct a new dataset named PAQA using a hierarchical decoupling strategy (distinguishing…</description>
    </item>
    <item>
      <title>MoshiRAG: Asynchronous Knowledge Retrieval for Full-Duplex Speech Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-moshirag-asynchronous-knowledge-retrieval-for/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-moshirag-asynchronous-knowledge-retrieval-for/</guid>
      <description>This paper presents MoshiRAG, the first full-duplex speech language model with integrated retrieval-augmented generation. The problem addressed is the limited factual accuracy of full-duplex speech models while they maintain real-time interactivity. The core method builds on the Moshi model, designing…</description>
    </item>
    <item>
      <title>SpotSound: Enhancing Large Audio-Language Models with Fine-Grained Temporal Grounding</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-spotsound-enhancing-large-audio-language-models/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-spotsound-enhancing-large-audio-language-models/</guid>
      <description>This paper addresses large audio-language models' weakness in fine-grained temporal grounding of audio events. Because training data lack precise timestamps and existing benchmarks are overly simple, current models are unreliable when locating brief events in long audio (a "needle in a haystack" setting). To this end, the authors propose SpotSound…</description>
    </item>
    <item>
      <title>Towards Fine-grained Temporal Perception: Post-Training Large Audio-Language Models with Audio-Side Time Prompt</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-towards-fine-grained-temporal-perception-post/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-towards-fine-grained-temporal-perception-post/</guid>
      <description>This paper addresses large audio-language models' (LALMs) weakness in fine-grained temporal perception, such as precisely locating the start and end times of sound events. The authors propose the TimePro-RL framework, built around a two-step strategy: first, an Audio-Side Time Prompt (AS…</description>
    </item>
    <item>
      <title>Why Your Tokenizer Fails in Information Fusion: A Timing-Aware Pre-Quantization Fusion for Video-Enhanced Audio Tokenization</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-why-your-tokenizer-fails-in-information-fusion-a/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-why-your-tokenizer-fails-in-information-fusion-a/</guid>
      <description>This paper examines a core tension that arises when fusing visual information into audio tokenizers for end-to-end audio language models: understanding improves while reconstruction quality degrades. Through systematic experiments, the authors reveal three key findings: the fusion position (before vs. after quantization) is critical; in the discrete…</description>
    </item>
  </channel>
</rss>
