<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Large Language Models on Speech/Audio Paper Digest</title>
    <link>https://nanless.github.io/audio-paper-digest-blog/tags/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/</link>
    <description>Recent content in Large Language Models on Speech/Audio Paper Digest</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Wed, 29 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://nanless.github.io/audio-paper-digest-blog/tags/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>A Dataset of Robot-Patient and Doctor-Patient Medical Dialogues for Spoken Language Processing Tasks</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-dataset-of-robot-patient-and-doctor-patient/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-dataset-of-robot-patient-and-doctor-patient/</guid>
      <description>Spoken Dialogue Systems | 7.5/10</description>
    </item>
    <item>
      <title>A LLM-Driven Acoustic Semantic Enriched Framework for Underwater Acoustic Target Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-llm-driven-acoustic-semantic-enriched-framework/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-llm-driven-acoustic-semantic-enriched-framework/</guid>
      <description>Audio Classification | 7.0/10</description>
    </item>
    <item>
      <title>A Personalized Real-Time Proactive Voice Memory Assistant</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-personalized-real-time-proactive-voice-memory/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-personalized-real-time-proactive-voice-memory/</guid>
      <description>Real-Time Processing | 7.0/10</description>
    </item>
    <item>
      <title>Advancing Speech Understanding in Speech-Aware Language Models with GRPO</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-advancing-speech-understanding-in-speech-aware/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-advancing-speech-understanding-in-speech-aware/</guid>
      <description>Speech Question Answering | 7.0/10</description>
    </item>
    <item>
      <title>Aligning Language Models for Lyric-to-Melody Generation with Rule-Based Musical Constraints</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-aligning-language-models-for-lyric-to-melody/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-aligning-language-models-for-lyric-to-melody/</guid>
      <description>Music Generation | 7.5/10</description>
    </item>
    <item>
      <title>An Anomaly-Aware and Audio-Enhanced Dual-Pathway Framework for Alzheimer’s Disease Progression Classification</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-an-anomaly-aware-and-audio-enhanced-dual-pathway/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-an-anomaly-aware-and-audio-enhanced-dual-pathway/</guid>
      <description>Speech Biomarkers | 7.0/10</description>
    </item>
    <item>
      <title>AUDIOGENIE-Reasoner: A Training-Free Multi-Agent Framework for Coarse-to-Fine Audio Deep Reasoning</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audiogenie-reasoner-a-training-free-multi-agent/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audiogenie-reasoner-a-training-free-multi-agent/</guid>
      <description>Audio Question Answering | 7.0/10</description>
    </item>
    <item>
      <title>ClawMark: A Living-World Benchmark for Multi-Turn, Multi-Day, Multimodal Coworker Agents</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-clawmark-a-living-world-benchmark-for-multi-turn/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-clawmark-a-living-world-benchmark-for-multi-turn/</guid>
      <description>Benchmarking | 7.0/10</description>
    </item>
    <item>
      <title>Clue2Emo: A Brain-Inspired Framework for Open-Vocabulary Multimodal Emotion Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-clue2emo-a-brain-inspired-framework-for-open/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-clue2emo-a-brain-inspired-framework-for-open/</guid>
      <description>Speech Emotion Recognition | 8.5/10</description>
    </item>
    <item>
      <title>Confidence-Guided Error Correction for Disordered Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-confidence-guided-error-correction-for-disordered/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-confidence-guided-error-correction-for-disordered/</guid>
      <description>Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>Content Anonymization for Privacy in Long-Form Audio</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-content-anonymization-for-privacy-in-long-form/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-content-anonymization-for-privacy-in-long-form/</guid>
      <description>Speech Anonymization | 7.5/10</description>
    </item>
    <item>
      <title>Context-Aware Dynamic Graph Learning for Multimodal Emotion Recognition with Missing Modalities</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-context-aware-dynamic-graph-learning-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-context-aware-dynamic-graph-learning-for/</guid>
      <description>Speech Emotion Recognition | 8.8/10</description>
    </item>
    <item>
      <title>Cutscene Agent: An LLM Agent Framework for Automated 3D Cutscene Generation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cutscene-agent-an-llm-agent-framework-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cutscene-agent-an-llm-agent-framework-for/</guid>
      <description>Generative Models | 8.5/10</description>
    </item>
    <item>
      <title>Easy Turn: Integrating Acoustic and Linguistic Modalities for Robust Turn-Taking in Full-Duplex Spoken Dialogue Systems</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-easy-turn-integrating-acoustic-and-linguistic/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-easy-turn-integrating-acoustic-and-linguistic/</guid>
      <description>Spoken Dialogue Systems | 7.0/10</description>
    </item>
    <item>
      <title>EMORL-TTS: Reinforcement Learning for Fine-Grained Emotion Control in LLM-based TTS</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emorl-tts-reinforcement-learning-for-fine-grained/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emorl-tts-reinforcement-learning-for-fine-grained/</guid>
      <description>Speech Synthesis | 8.5/10</description>
    </item>
    <item>
      <title>EmoShift: Lightweight Activation Steering for Enhanced Emotion-Aware Speech Synthesis</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emoshift-lightweight-activation-steering-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emoshift-lightweight-activation-steering-for/</guid>
      <description>Speech Synthesis | 7.0/10</description>
    </item>
    <item>
      <title>Equipping Large Language Model with Directional Speech Understanding Capabilities</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-equipping-large-language-model-with-directional/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-equipping-large-language-model-with-directional/</guid>
      <description>Speech Recognition, Speech Translation | 7.0/10</description>
    </item>
    <item>
      <title>Flexi-LoRA with Input-Adaptive Ranks: Efficient Finetuning for Speech and Reasoning Tasks</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-flexi-lora-with-input-adaptive-ranks-efficient/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-flexi-lora-with-input-adaptive-ranks-efficient/</guid>
      <description>Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>Generative UI as an Accessibility Bridge: Lessons from C2C E-Commerce</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-generative-ui-as-an-accessibility-bridge-lessons/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-generative-ui-as-an-accessibility-bridge-lessons/</guid>
      <description>Accessibility | 6.5/10</description>
    </item>
    <item>
      <title>HD-PPT: Hierarchical Decoding of Content- and Prompt-Preference Tokens for Instruction-Based TTS</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hd-ppt-hierarchical-decoding-of-content-and/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hd-ppt-hierarchical-decoding-of-content-and/</guid>
      <description>Speech Synthesis | 8.0/10</description>
    </item>
    <item>
      <title>Hierarchical Tokenization of Multimodal Music Data for Generative Music Retrieval</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hierarchical-tokenization-of-multimodal-music/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hierarchical-tokenization-of-multimodal-music/</guid>
      <description>Music Retrieval | 7.0/10</description>
    </item>
    <item>
      <title>Improving Contextual ASR via Multi-Grained Fusion with Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-improving-contextual-asr-via-multi-grained-fusion/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-improving-contextual-asr-via-multi-grained-fusion/</guid>
      <description>Speech Recognition | 8.5/10</description>
    </item>
    <item>
      <title>Integrating Speaker Embeddings and LLM-Derived Semantic Representations for Streaming Speaker Diarization</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-integrating-speaker-embeddings-and-llm-derived/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-integrating-speaker-embeddings-and-llm-derived/</guid>
      <description>Speaker Diarization | 6.5/10</description>
    </item>
    <item>
      <title>K-Function: Joint Pronunciation Transcription and Feedback for Evaluating Kids Language Function</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-k-function-joint-pronunciation-transcription-and/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-k-function-joint-pronunciation-transcription-and/</guid>
      <description>Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>LAMB: LLM-Based Audio Captioning with Modality Gap Bridging Via Cauchy-Schwarz Divergence</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-lamb-llm-based-audio-captioning-with-modality-gap/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-lamb-llm-based-audio-captioning-with-modality-gap/</guid>
      <description>Audio Captioning | 7.0/10</description>
    </item>
    <item>
      <title>LESS: Large Language Model Enhanced Semi-Supervised Learning for Speech Foundational Models Using in-the-wild Data</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-less-large-language-model-enhanced-semi/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-less-large-language-model-enhanced-semi/</guid>
      <description>Speech Recognition, Speech Translation | 7.5/10</description>
    </item>
    <item>
      <title>Leveraging Segment-Level Speech Representations for LLM-Based Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-leveraging-segment-level-speech-representations/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-leveraging-segment-level-speech-representations/</guid>
      <description>Speech Recognition | 7.0/10</description>
    </item>
    <item>
      <title>LLM-Based Post-ASR Error Correction for Disordered Speech</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-llm-based-post-asr-error-correction-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-llm-based-post-asr-error-correction-for/</guid>
      <description>Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>MAGE: A Coarse-to-Fine Speech Enhancer with Masked Generative Model</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-mage-a-coarse-to-fine-speech-enhancer-with-masked/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-mage-a-coarse-to-fine-speech-enhancer-with-masked/</guid>
      <description>Speech Enhancement | 8.0/10</description>
    </item>
    <item>
      <title>MCF: Text LLMs for Multimodal Emotional Causality</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-mcf-text-llms-for-multimodal-emotional-causality/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-mcf-text-llms-for-multimodal-emotional-causality/</guid>
      <description>Emotion Analysis | 8.0/10</description>
    </item>
    <item>
      <title>Medical ASR Enhancement by Domain-Specific Reinforcement Fine-Tuning</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-medical-asr-enhancement-by-domain-specific/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-medical-asr-enhancement-by-domain-specific/</guid>
      <description>Speech Recognition | 6.5/10</description>
    </item>
    <item>
      <title>MIDI-LLaMA: An Instruction-Following Multimodal LLM for Symbolic Music Understanding</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-midi-llama-an-instruction-following-multimodal/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-midi-llama-an-instruction-following-multimodal/</guid>
      <description>Music Understanding | 7.5/10</description>
    </item>
    <item>
      <title>Modeling Both Intra- And Inter-Utterance Variability for Conversational Emotion Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-modeling-both-intra-and-inter-utterance/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-modeling-both-intra-and-inter-utterance/</guid>
      <description>Speech Emotion Recognition | 6.5/10</description>
    </item>
    <item>
      <title>OMNI-AVSR: Towards Unified Multimodal Speech Recognition With Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-omni-avsr-towards-unified-multimodal-speech/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-omni-avsr-towards-unified-multimodal-speech/</guid>
      <description>Speech Recognition | 8.5/10</description>
    </item>
    <item>
      <title>OV-INSTRUCTTTS: Towards Open-Vocabulary Instruct Text-to-Speech</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-ov-instructtts-towards-open-vocabulary-instruct/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-ov-instructtts-towards-open-vocabulary-instruct/</guid>
      <description>Speech Synthesis | 8.0/10</description>
    </item>
    <item>
      <title>PAC: Pronunciation-Aware Contextualized Large Language Model-Based Automatic Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-pac-pronunciation-aware-contextualized-large/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-pac-pronunciation-aware-contextualized-large/</guid>
      <description>Speech Recognition | 7.0/10</description>
    </item>
    <item>
      <title>PhoenixDSR: Phoneme-Guided and LLM-Enhanced Dysarthric Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-phoenixdsr-phoneme-guided-and-llm-enhanced/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-phoenixdsr-phoneme-guided-and-llm-enhanced/</guid>
      <description>Speech Recognition | 7.0/10</description>
    </item>
    <item>
      <title>Phoneme-Level Visual Speech Recognition via Point-Visual Fusion and Language Model Reconstruction</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-phoneme-level-visual-speech-recognition-via-point/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-phoneme-level-visual-speech-recognition-via-point/</guid>
      <description>Visual Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>PROST-LLM: Progressively Enhancing the Speech-to-Speech Translation Capability in LLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-prost-llm-progressively-enhancing-the-speech-to/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-prost-llm-progressively-enhancing-the-speech-to/</guid>
      <description>Speech Translation | 7.5/10</description>
    </item>
    <item>
      <title>Relative Time Intervals Representation For Word-Level Timestamping With Masked Training</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-relative-time-intervals-representation-for-word/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-relative-time-intervals-representation-for-word/</guid>
      <description>Speech Recognition | 8.0/10</description>
    </item>
    <item>
      <title>Rethinking Music Captioning with Music Metadata LLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rethinking-music-captioning-with-music-metadata/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rethinking-music-captioning-with-music-metadata/</guid>
      <description>Music Understanding | 7.0/10</description>
    </item>
    <item>
      <title>RRPO: Robust Reward Policy Optimization for LLM-Based Emotional TTS</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rrpo-robust-reward-policy-optimization-for-llm/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rrpo-robust-reward-policy-optimization-for-llm/</guid>
      <description>Speech Synthesis | 7.5/10</description>
    </item>
    <item>
      <title>SEP-ST: Incorporating Speech Entity Prompt Into Large Language Models for Speech Translation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sep-st-incorporating-speech-entity-prompt-into/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sep-st-incorporating-speech-entity-prompt-into/</guid>
      <description>Speech Translation | 7.5/10</description>
    </item>
    <item>
      <title>SPADE: Structured Pruning and Adaptive Distillation for Efficient LLM-TTS</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-spade-structured-pruning-and-adaptive/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-spade-structured-pruning-and-adaptive/</guid>
      <description>Speech Synthesis | 7.5/10</description>
    </item>
    <item>
      <title>SPAM: Style Prompt Adherence Metric for Prompt-Based TTS</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-spam-style-prompt-adherence-metric-for-prompt/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-spam-style-prompt-adherence-metric-for-prompt/</guid>
      <description>Speech Synthesis | 7.0/10</description>
    </item>
    <item>
      <title>SpeechMapper: Speech-To-Text Embedding Projector for LLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-speechmapper-speech-to-text-embedding-projector/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-speechmapper-speech-to-text-embedding-projector/</guid>
      <description>Speech LLMs | 7.0/10</description>
    </item>
    <item>
      <title>Still Thinking or Stopped Talking? Dialogue Silence Intention Classification Using Multimodal Large Language Model</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-still-thinking-or-stopped-talking-dialogue/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-still-thinking-or-stopped-talking-dialogue/</guid>
      <description>Spoken Dialogue Systems | 6.5/10</description>
    </item>
    <item>
      <title>Style Attack Disguise: When Fonts Become a Camouflage for Adversarial Intent</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-style-attack-disguise-when-fonts-become-a/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-style-attack-disguise-when-fonts-become-a/</guid>
      <description>Adversarial Examples | 7.0/10</description>
    </item>
    <item>
      <title>Synthetic Data Domain Adaptation for ASR via LLM-Based Text and Phonetic Respelling Augmentation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-synthetic-data-domain-adaptation-for-asr-via-llm/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-synthetic-data-domain-adaptation-for-asr-via-llm/</guid>
      <description>Speech Recognition | 8.0/10</description>
    </item>
    <item>
      <title>TAG: Structured Temporal Audio Generation via LLM-Guided Manual Scription and Control</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-tag-structured-temporal-audio-generation-via-llm/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-tag-structured-temporal-audio-generation-via-llm/</guid>
      <description>Audio Generation | 7.5/10</description>
    </item>
    <item>
      <title>Target-Speaker LLM-ASR with Speaker-Aware Speech Encoder</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-target-speaker-llm-asr-with-speaker-aware-speech/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-target-speaker-llm-asr-with-speaker-aware-speech/</guid>
      <description>Speech Recognition | 8.8/10</description>
    </item>
    <item>
      <title>Test-Time Scaling for Auditory Cognition in Audio Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-test-time-scaling-for-auditory-cognition-in-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-test-time-scaling-for-auditory-cognition-in-audio/</guid>
      <description>Audio Question Answering | 7.0/10</description>
    </item>
    <item>
      <title>Text2midi-InferAlign: Improving Symbolic Music Generation with Inference-Time Alignment</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-text2midi-inferalign-improving-symbolic-music/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-text2midi-inferalign-improving-symbolic-music/</guid>
      <description>Music Generation | 7.5/10</description>
    </item>
    <item>
      <title>The Structured Output Benchmark: A Multi-Source Benchmark for Evaluating Structured Output Quality in Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-the-structured-output-benchmark-a-multi-source/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-the-structured-output-benchmark-a-multi-source/</guid>
      <description>Benchmarking | 7.0/10</description>
    </item>
    <item>
      <title>Thinking While Listening: Simple Test Time Scaling for Audio Classification</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-thinking-while-listening-simple-test-time-scaling/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-thinking-while-listening-simple-test-time-scaling/</guid>
      <description>Audio Classification | 6.5/10</description>
    </item>
    <item>
      <title>Towards Distance-Aware Synthetic Audio Mixtures for Universal Sound Separation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-towards-distance-aware-synthetic-audio-mixtures/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-towards-distance-aware-synthetic-audio-mixtures/</guid>
      <description>Speech Separation | 6.5/10</description>
    </item>
    <item>
      <title>Towards Orthographically-Informed Evaluation of Speech Recognition Systems for Indian Languages</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-towards-orthographically-informed-evaluation-of/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-towards-orthographically-informed-evaluation-of/</guid>
      <description>Speech Recognition | 7.0/10</description>
    </item>
    <item>
      <title>Towards Robust Dysarthric Speech Recognition: LLM-Agent Post-ASR Correction Beyond WER</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-towards-robust-dysarthric-speech-recognition-llm/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-towards-robust-dysarthric-speech-recognition-llm/</guid>
      <description>Speech Recognition | 9.0/10</description>
    </item>
    <item>
      <title>UVT-LM: Unifying Visual and Tactile Perception with Language Model</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-uvt-lm-unifying-visual-and-tactile-perception/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-uvt-lm-unifying-visual-and-tactile-perception/</guid>
      <description>Cross-Modal | 7.0/10</description>
    </item>
    <item>
      <title>Whisper: Courtside Edition - Enhancing ASR Performance through LLM-Driven Context Generation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-whisper-courtside-edition-enhancing-asr/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-whisper-courtside-edition-enhancing-asr/</guid>
      <description>Speech Recognition | 6.5/10</description>
    </item>
    <item>
      <title>Z-Scores: A Metric for Linguistically Assessing Disfluency Removal</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-z-scores-a-metric-for-linguistically-assessing/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-z-scores-a-metric-for-linguistically-assessing/</guid>
      <description>Model Evaluation | 6.5/10</description>
    </item>
    <item>
      <title>All That Glitters Is Not Audio: Rethinking Text Priors and Audio Reliance in Audio-Language Evaluation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-all-that-glitters-is-not-audio-rethinking-text/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-all-that-glitters-is-not-audio-rethinking-text/</guid>
      <description>Audio Question Answering | 6.5/10</description>
    </item>
    <item>
      <title>CineAGI: Character-Consistent Movie Creation through LLM-Orchestrated Multi-Modal Generation and Cross-Scene Integration</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-cineagi-character-consistent-movie-creation/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-cineagi-character-consistent-movie-creation/</guid>
      <description>Cross-Modal | 8.0/10</description>
    </item>
    <item>
      <title>DM-ASR: Diarization-aware Multi-speaker ASR with Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-27-dm-asr-diarization-aware-multi-speaker-asr-with/</link>
      <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-27-dm-asr-diarization-aware-multi-speaker-asr-with/</guid>
      <description>Speaker Recognition | 8.0/10</description>
    </item>
    <item>
      <title>Speech/Audio Paper Digest 2026-04-27</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-27/</link>
      <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-27/</guid>
      <description>13 speech/AI papers analyzed in total</description>
    </item>
    <item>
      <title>MOMO: A framework for seamless physical, verbal, and graphical robot skill learning and adaptation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-25-momo-a-framework-for-seamless-physical-verbal-and/</link>
      <pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-25-momo-a-framework-for-seamless-physical-verbal-and/</guid>
      <description>Robot Skill Learning | 7.5/10</description>
    </item>
    <item>
      <title>Speech/Audio Paper Digest 2026-04-25</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-25/</link>
      <pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-25/</guid>
      <description>2 speech/AI papers analyzed in total</description>
    </item>
    <item>
      <title>ATRIE: Adaptive Tuning for Robust Inference and Emotion in Persona-Driven Speech Synthesis</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-atrie-adaptive-tuning-for-robust-inference-and/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-atrie-adaptive-tuning-for-robust-inference-and/</guid>
      <description>Speech Synthesis | 7.0/10</description>
    </item>
    <item>
      <title>Evaluation of Automatic Speech Recognition Using Generative Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-evaluation-of-automatic-speech-recognition-using/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-evaluation-of-automatic-speech-recognition-using/</guid>
      <description>Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>Hierarchical Policy Optimization for Simultaneous Translation of Unbounded Speech</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-hierarchical-policy-optimization-for-simultaneous/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-hierarchical-policy-optimization-for-simultaneous/</guid>
      <description>Speech Translation | 7.5/10</description>
    </item>
    <item>
      <title>Low-Rank Adaptation Redux for Large Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-low-rank-adaptation-redux-for-large-models/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-low-rank-adaptation-redux-for-large-models/</guid>
      <description>Large Language Models | 5.5/10</description>
    </item>
    <item>
      <title>Speech/Audio Paper Digest 2026-04-24</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24/</guid>
      <description>21 speech/AI papers analyzed in total</description>
    </item>
    <item>
      <title>FastTurn: Unifying Acoustic and Streaming Semantic Cues for Low-Latency and Robust Turn Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-fastturn-unifying-acoustic-and-streaming-semantic/</link>
      <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-fastturn-unifying-acoustic-and-streaming-semantic/</guid>
      <description>This paper addresses the challenge in full-duplex spoken dialogue systems of judging, with low latency and high accuracy, whether the user has finished speaking (turn detection), and proposes the unified FastTurn framework. Its core method feeds the fast partial semantic information from streaming CTC decoding, together with acoustic features extracted by a Conformer encoder, through adapters into a large language model (LLM) for reasoning, and finally fuses the acoustic and semantic features for turn prediction.</description>
    </item>
    <item>
      <title>MOMO: A framework for seamless physical, verbal, and graphical robot skill learning and adaptation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-momo-a-framework-for-seamless-physical-verbal-and/</link>
      <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-momo-a-framework-for-seamless-physical-verbal-and/</guid>
      <description>1. **Problem**: Industrial robots must frequently adapt to new tasks and environments, but existing skill-adjustment methods (e.g., manual reprogramming) are unfriendly to non-expert users, and no single interaction modality can efficiently handle every type of adjustment. 2. **Core method**: The MOMO framework integrates three complementary interaction modalities: kinesthetic contact (for precise spatial correction), natural language (for high-level semantic modification), and a graphical interface (for parameter…</description>
    </item>
    <item>
      <title>Detecting Hallucinations in SpeechLLMs at Inference Time Using Attention Maps</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-detecting-hallucinations-in-speechllms-at/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-detecting-hallucinations-in-speechllms-at/</guid>
      <description>This paper targets the "hallucination" problem of speech LLMs (SpeechLLMs) at inference time, i.e., generating fluent text that does not match the input audio. Existing methods rely on expensive gold-standard outputs, while text-LLM approaches cannot capture audio-specific signals. The authors therefore propose four lightweight attention-map-based metrics (AudioRatio, AudioConsistency, AudioEnt…</description>
    </item>
    <item>
      <title>NVBench: A Benchmark for Speech Synthesis with Non-Verbal Vocalizations</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-nvbench-a-benchmark-for-speech-synthesis-with-non/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-nvbench-a-benchmark-for-speech-synthesis-with-non/</guid>
      <description>This paper addresses a key but overlooked problem in text-to-speech (TTS): how to evaluate, in a standardized way, a system's ability to generate non-verbal vocalizations (NVVs, such as laughter and sighs). The authors propose **NVBench**, a bilingual (English/Chinese) benchmark with a **unified taxonomy of 45 NVV classes**. Its core method includes: 1) building a high-quality, balanced evaluation dataset of 50 examples per class, 4,500 in total…</description>
    </item>
    <item>
      <title>Towards Streaming Target Speaker Extraction via Chunk-wise Interleaved Splicing of Autoregressive Language Model</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-towards-streaming-target-speaker-extraction-via/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-towards-streaming-target-speaker-extraction-via/</guid>
      <description>This paper addresses the core problem that generative target speaker extraction (TSE) models degrade severely in streaming real-time applications because they rely on global context. The authors propose the first streaming TSE framework based on an autoregressive language model (LauraGPT). Its core innovation is a "chunk-wise interleaved splicing paradigm": mixed-audio chunks are interleaved with the corresponding discrete target-speech token chunks as model input, strictly guaranteeing that inference…</description>
    </item>
    <item>
      <title>Speech/Audio Paper Digest 2026-04-22</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22/</guid>
      <description>21 speech/AI papers analyzed in total</description>
    </item>
    <item>
      <title>SELF-EMO: Emotional Self-Evolution from Recognition to Consistent Expression</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-self-emo-emotional-self-evolution-from/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-self-emo-emotional-self-evolution-from/</guid>
      <description>This paper addresses the problem that emotion recognition in conversation (ERC) and emotional expression in dialogue systems are limited by scarce, static, high-quality annotated data. The **core contribution** is **SELF-EMO**, a psychologically motivated self-evolution framework. The **key method** builds a role-playing self-play paradigm in which the model acts as both "emotion recognizer" and "dialogue responder", driven by a "generate-filter-reuse"…</description>
    </item>
    <item>
      <title>ActorMind: Emulating Human Actor Reasoning for Speech Role-Playing</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-actormind-emulating-human-actor-reasoning-for/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-actormind-emulating-human-actor-reasoning-for/</guid>
      <description>This paper addresses the limitation that existing role-playing research is confined to the text modality, overlooking the speech modality that dominates everyday communication. The authors first **define the "speech role-playing" task**, which requires a model to generate spontaneous responses with personalized vocal characteristics (e.g., specific emotion and intonation) based on the character, scene, and dialogue history. They then build **ActorMindBench**, a benchmark based on…</description>
    </item>
    <item>
      <title>Full-Duplex-Bench-v3: Benchmarking Tool Use for Full-Duplex Voice Agents Under Real-World Disfluency</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-full-duplex-bench-v3-benchmarking-tool-use-for/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-full-duplex-bench-v3-benchmarking-tool-use-for/</guid>
      <description>This paper targets the lack of realism (reliance on synthetic speech) and task simplicity (single-step calls) of current full-duplex voice-agent evaluation, and proposes the **Full-Duplex-Bench-v3 (FDB-v3)** benchmark. Its core innovation is the use of **100 real human recordings** (annotated with five types of disfluency) and, across four task domains, tasks that require **multi-step chained API calls**…</description>
    </item>
    <item>
      <title>Interactive ASR: Towards Human-Like Interaction and Semantic Coherence Evaluation for Agentic Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-interactive-asr-towards-human-like-interaction/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-interactive-asr-towards-human-like-interaction/</guid>
      <description>This paper proposes the Interactive ASR framework to address two blind spots of conventional ASR: the WER metric's insensitivity to semantic errors, and systems' inability to correct errors through natural interaction. The authors first introduce S²ER (Sentence-level Semantic Error Rate), which uses an LLM-as-a-Judge binary decision on whether the recognition output and the reference text…</description>
    </item>
    <item>
      <title>PS-TTS: Phonetic Synchronization in Text-to-Speech for Achieving Natural Automated Dubbing</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-ps-tts-phonetic-synchronization-in-text-to-speech/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-ps-tts-phonetic-synchronization-in-text-to-speech/</guid>
      <description>This paper addresses the difficulty in automated dubbing (AD) of synchronizing target speech with source speech in both duration and lip shape. Its core contribution is a two-stage text-rewriting method integrated into a TTS system: first, a language model performs **isochrony** rewriting to ensure the target speech duration matches the source; second, **phonetic synchronization (PS)** uses dynamic time warping (DTW) and vowel distances learned from training data…</description>
    </item>
    <item>
      <title>Adaptive Test-Time Scaling for Zero-Shot Respiratory Audio Classification</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-adaptive-test-time-scaling-for-zero-shot/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-adaptive-test-time-scaling-for-zero-shot/</guid>
      <description>This paper addresses the computational waste of "one-size-fits-all" inference in zero-shot respiratory audio classification. It proposes TRIAGE, a three-tier adaptive inference pipeline: the first tier (Tier-L) performs fast label-text similarity matching; if confidence is insufficient, the input escalates to the second tier…</description>
    </item>
    <item>
      <title>Diffusion Language Models for Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-diffusion-language-models-for-speech-recognition/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-diffusion-language-models-for-speech-recognition/</guid>
      <description>This paper explores applying diffusion language models (DLMs) to automatic speech recognition (ASR). Its core goal is to exploit diffusion models' bidirectional attention and parallel generation to improve the accuracy of ASR candidate hypotheses produced by conventional encoders (e.g., CTC). The paper mainly…</description>
    </item>
    <item>
      <title>Few-Shot and Pseudo-Label Guided Speech Quality Evaluation with Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-few-shot-and-pseudo-label-guided-speech-quality/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-few-shot-and-pseudo-label-guided-speech-quality/</guid>
      <description>This paper addresses the performance bottleneck of non-intrusive speech quality assessment when labeled data is limited. The authors propose the GatherMOS framework, whose core is to use a large language model (e.g., GPT-5) as a meta-evaluator that fuses multiple heterogeneous signals through carefully designed text prompts, including…</description>
    </item>
    <item>
      <title>Listen, Pause, and Reason: Toward Perception-Grounded Hybrid Reasoning for Audio Understanding</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-listen-pause-and-reason-toward-perception/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-listen-pause-and-reason-toward-perception/</guid>
      <description>This paper addresses reasoning failures of large audio-language models in complex audio scenes caused by perception errors. Inspired by auditory scene analysis, the authors propose a perception-grounded hybrid reasoning framework. They first construct a new dataset named PAQA via a hierarchical decoupling strategy (distinguishing…</description>
    </item>
    <item>
      <title>MoshiRAG: Asynchronous Knowledge Retrieval for Full-Duplex Speech Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-moshirag-asynchronous-knowledge-retrieval-for/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-moshirag-asynchronous-knowledge-retrieval-for/</guid>
      <description>This paper presents MoshiRAG, the first full-duplex speech language model with integrated retrieval-augmented generation. **Problem addressed**: full-duplex speech models lack factual accuracy while maintaining real-time interactivity. **Core method**: building on the Moshi model…</description>
    </item>
  </channel>
</rss>
