<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Audio-Visual on Speech/Audio Paper Digest</title>
    <link>https://nanless.github.io/audio-paper-digest-blog/tags/%E9%9F%B3%E8%A7%86%E9%A2%91/</link>
    <description>Recent content in Audio-Visual on Speech/Audio Paper Digest</description>
    <generator>Hugo</generator>
    <language>zh-cn</language>
    <lastBuildDate>Wed, 29 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://nanless.github.io/audio-paper-digest-blog/tags/%E9%9F%B3%E8%A7%86%E9%A2%91/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>AISHELL6-Whisper: A Chinese Mandarin Audio-Visual Whisper Speech Dataset with Speech Recognition Baselines</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-aishell6-whisper-a-chinese-mandarin-audio-visual/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-aishell6-whisper-a-chinese-mandarin-audio-visual/</guid>
      <description>Speech Recognition | 8.3/10</description>
    </item>
    <item>
      <title>An Audio-Visual Speech Separation Network with Joint Cross-Attention and Iterative Modeling</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-an-audio-visual-speech-separation-network-with/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-an-audio-visual-speech-separation-network-with/</guid>
      <description>Speech Separation | 7.5/10</description>
    </item>
    <item>
      <title>Assessing Identity Leakage in Talking Face Generation: Metrics and Evaluation Framework</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-assessing-identity-leakage-in-talking-face/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-assessing-identity-leakage-in-talking-face/</guid>
      <description>Talking Face Generation | 7.5/10</description>
    </item>
    <item>
      <title>Asynchrony-Aware Decoupled Multimodal Control for Cued Speech Video Generation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-asynchrony-aware-decoupled-multimodal-control-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-asynchrony-aware-decoupled-multimodal-control-for/</guid>
      <description>Speech Synthesis | 7.5/10</description>
    </item>
    <item>
      <title>Attentive AV-Fusionnet: Audio-Visual Quality Prediction with Hybrid Attention</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-attentive-av-fusionnet-audio-visual-quality/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-attentive-av-fusionnet-audio-visual-quality/</guid>
      <description>Audio-Visual | 7.0/10</description>
    </item>
    <item>
      <title>Audio-Visual Feature Fusion for Calibrating Relevance Scores of Video Moment Retrieval</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audio-visual-feature-fusion-for-calibrating/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audio-visual-feature-fusion-for-calibrating/</guid>
      <description>Video Moment Retrieval | 7.0/10</description>
    </item>
    <item>
      <title>AVO-65: A Large-Scale Hierarchical Audio-Visual Object Dataset</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-avo-65-a-large-scale-hierarchical-audio-visual/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-avo-65-a-large-scale-hierarchical-audio-visual/</guid>
      <description>Audio-Visual | 7.0/10</description>
    </item>
    <item>
      <title>Bimodal Fusion Framework for Dynamic Facial Expression Recognition In-The-Wild</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-bimodal-fusion-framework-for-dynamic-facial/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-bimodal-fusion-framework-for-dynamic-facial/</guid>
      <description>Speech Emotion Recognition | 7.0/10</description>
    </item>
    <item>
      <title>Can Hierarchical Cross-Modal Fusion Predict Human Perception of AI Dubbed Content?</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-can-hierarchical-cross-modal-fusion-predict-human/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-can-hierarchical-cross-modal-fusion-predict-human/</guid>
      <description>Model Evaluation | 6.0/10</description>
    </item>
    <item>
      <title>CoVA: Text-Guided Composed Video Retrieval for Audio-Visual Content</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cova-text-guided-composed-video-retrieval-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cova-text-guided-composed-video-retrieval-for/</guid>
      <description>Cross-Modal Retrieval | 6.5/10</description>
    </item>
    <item>
      <title>Cross-Modal Bottleneck Fusion for Noise Robust Audio-Visual Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cross-modal-bottleneck-fusion-for-noise-robust/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cross-modal-bottleneck-fusion-for-noise-robust/</guid>
      <description>Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>DepthTalk: Few-Shot Talking Head Generation with Depth-Aware 3D Gaussian Field Motion</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-depthtalk-few-shot-talking-head-generation-with/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-depthtalk-few-shot-talking-head-generation-with/</guid>
      <description>Talking Head Generation | 7.0/10</description>
    </item>
    <item>
      <title>Efficient Audio-Visual Inference Via Token Clustering And Modality Fusion</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-efficient-audio-visual-inference-via-token/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-efficient-audio-visual-inference-via-token/</guid>
      <description>Audio Question Answering | 7.5/10</description>
    </item>
    <item>
      <title>FastAV: Efficient Token Pruning for Audio-Visual Large Language Model Inference</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-fastav-efficient-token-pruning-for-audio-visual/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-fastav-efficient-token-pruning-for-audio-visual/</guid>
      <description>Audio Question Answering | 7.0/10</description>
    </item>
    <item>
      <title>FoleyBench: A Benchmark for Video-to-Audio Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-foleybench-a-benchmark-for-video-to-audio-models/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-foleybench-a-benchmark-for-video-to-audio-models/</guid>
      <description>Audio Generation | 7.5/10</description>
    </item>
    <item>
      <title>GMS-CAVP: Improving Audio-Video Correspondence with Multi-Scale Contrastive and Generative Pretraining</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-gms-cavp-improving-audio-video-correspondence/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-gms-cavp-improving-audio-video-correspondence/</guid>
      <description>Audio Generation | 7.5/10</description>
    </item>
    <item>
      <title>ICASSP 2026 - Audio-Visual Paper List</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/icassp2026-task-112/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/icassp2026-task-112/</guid>
      <description>A total of 6 ICASSP 2026 papers in the Audio-Visual track</description>
    </item>
    <item>
      <title>Meanflow-Accelerated Multimodal Video-to-Audio Synthesis Via One-Step Generation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-meanflow-accelerated-multimodal-video-to-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-meanflow-accelerated-multimodal-video-to-audio/</guid>
      <description>Audio Generation | 7.5/10</description>
    </item>
    <item>
      <title>Mitigating Attention Sinks and Massive Activations in Audio-Visual Speech Recognition with LLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-mitigating-attention-sinks-and-massive/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-mitigating-attention-sinks-and-massive/</guid>
      <description>Speech Recognition | 7.0/10</description>
    </item>
    <item>
      <title>Motionbeat: Motion-Aligned Music Representation via Embodied Contrastive Learning and Bar-Equivariant Contact-Aware Encoding</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-motionbeat-motion-aligned-music-representation/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-motionbeat-motion-aligned-music-representation/</guid>
      <description>Dance Generation | 7.5/10</description>
    </item>
    <item>
      <title>MSCT: Differential Cross-Modal Attention for Deepfake Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-msct-differential-cross-modal-attention-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-msct-differential-cross-modal-attention-for/</guid>
      <description>Audio Deepfake Detection | 6.5/10</description>
    </item>
    <item>
      <title>Multimodal Self-Attention Network with Temporal Alignment for Audio-Visual Emotion Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-multimodal-self-attention-network-with-temporal/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-multimodal-self-attention-network-with-temporal/</guid>
      <description>Speech Emotion Recognition | 8.0/10</description>
    </item>
    <item>
      <title>Noise-Robust AV-ASR Using Visual Features both in the Whisper Encoder and Decoder</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-noise-robust-av-asr-using-visual-features-both-in/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-noise-robust-av-asr-using-visual-features-both-in/</guid>
      <description>Speech Recognition | 8.0/10</description>
    </item>
    <item>
      <title>OMNI-AVSR: Towards Unified Multimodal Speech Recognition With Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-omni-avsr-towards-unified-multimodal-speech/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-omni-avsr-towards-unified-multimodal-speech/</guid>
      <description>Speech Recognition | 8.5/10</description>
    </item>
    <item>
      <title>PerformSinger: Multimodal Singing Voice Synthesis Leveraging Synchronized Lip Cues from Singing Performance Videos</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-performsinger-multimodal-singing-voice-synthesis/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-performsinger-multimodal-singing-voice-synthesis/</guid>
      <description>Singing Voice Synthesis | 4.5/10</description>
    </item>
    <item>
      <title>Prototype-Guided Cross-Modal Contrastive Learning for Continual Audio-Visual Sound Separation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-prototype-guided-cross-modal-contrastive-learning/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-prototype-guided-cross-modal-contrastive-learning/</guid>
      <description>Speech Separation | 7.5/10</description>
    </item>
    <item>
      <title>PSTalker: Realistic 3D Talking Head Synthesis via a Semantic-Aware Audio-Driven Point-Based Shape</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-pstalker-realistic-3d-talking-head-synthesis-via/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-pstalker-realistic-3d-talking-head-synthesis-via/</guid>
      <description>Talking Head Synthesis | 7.5/10</description>
    </item>
    <item>
      <title>Purification Before Fusion: Toward Mask-Free Speech Enhancement for Robust Audio-Visual Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-purification-before-fusion-toward-mask-free/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-purification-before-fusion-toward-mask-free/</guid>
      <description>Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>RAP: Real-Time Audio-Driven Portrait Animation with Video Diffusion Transformer</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rap-real-time-audio-driven-portrait-animation/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rap-real-time-audio-driven-portrait-animation/</guid>
      <description>Audio-Visual | 7.0/10</description>
    </item>
    <item>
      <title>Rethinking Entity Disambiguation in Complex Modalities</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rethinking-entity-disambiguation-in-complex/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rethinking-entity-disambiguation-in-complex/</guid>
      <description>Entity Disambiguation | 8.0/10</description>
    </item>
    <item>
      <title>Semantic-Guided Pseudo-Feature Attention Network for Audio-Visual Zero-Shot Learning</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-semantic-guided-pseudo-feature-attention-network/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-semantic-guided-pseudo-feature-attention-network/</guid>
      <description>Audio Classification, Zero-Shot Learning | 7.0/10</description>
    </item>
    <item>
      <title>SightSound-R1: Cross-Modal Reasoning Distillation from Vision to Audio Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sightsound-r1-cross-modal-reasoning-distillation/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sightsound-r1-cross-modal-reasoning-distillation/</guid>
      <description>Audio Question Answering | 7.5/10</description>
    </item>
    <item>
      <title>SIREN: Spatially-Informed Reconstruction of Binaural Audio with Vision</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-siren-spatially-informed-reconstruction-of/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-siren-spatially-informed-reconstruction-of/</guid>
      <description>Spatial Audio | 7.0/10</description>
    </item>
    <item>
      <title>Sounding Highlights: Dual-Pathway Audio Encoders for Audio-Visual Video Highlight Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sounding-highlights-dual-pathway-audio-encoders/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sounding-highlights-dual-pathway-audio-encoders/</guid>
      <description>Video Highlight Detection | 8.5/10</description>
    </item>
    <item>
      <title>Sparse-View Visual-Acoustic Latent Learning for Novel-View Audio Synthesis</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sparse-view-visual-acoustic-latent-learning-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-sparse-view-visual-acoustic-latent-learning-for/</guid>
      <description>Spatial Audio | 7.5/10</description>
    </item>
    <item>
      <title>Spiking Temporal-Enhanced Network for Zero-Shot Audio-Visual Learning</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-spiking-temporal-enhanced-network-for-zero-shot/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-spiking-temporal-enhanced-network-for-zero-shot/</guid>
      <description>Audio Classification | 7.0/10</description>
    </item>
    <item>
      <title>StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-streamingbench-assessing-the-gap-for-mllms-to/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-streamingbench-assessing-the-gap-for-mllms-to/</guid>
      <description>Benchmark | 7.5/10</description>
    </item>
    <item>
      <title>Teacher-Guided Pseudo Supervision and Cross-Modal Alignment for Audio-Visual Video Parsing</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-teacher-guided-pseudo-supervision-and-cross-modal/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-teacher-guided-pseudo-supervision-and-cross-modal/</guid>
      <description>Audio-Visual | 7.0/10</description>
    </item>
    <item>
      <title>Temporal-Spatial Decouple Before Act: Disentangled Representation Learning for Multimodal Sentiment Analysis</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-temporal-spatial-decouple-before-act-disentangled/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-temporal-spatial-decouple-before-act-disentangled/</guid>
      <description>Sentiment Analysis | 7.5/10</description>
    </item>
    <item>
      <title>The Synergistic Role of Audio and Large Video-Language Model in Source-Free Video Domain Adaptation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-the-synergistic-role-of-audio-and-large-video/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-the-synergistic-role-of-audio-and-large-video/</guid>
      <description>Domain Adaptation | 7.0/10</description>
    </item>
    <item>
      <title>Training-Free Multimodal Guidance for Video to Audio Generation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-training-free-multimodal-guidance-for-video-to/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-training-free-multimodal-guidance-for-video-to/</guid>
      <description>Audio Generation | 8.0/10</description>
    </item>
    <item>
      <title>Uncertainty-Aware 3D Emotional Talking Face Synthesis with Emotion Prior Distillation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-uncertainty-aware-3d-emotional-talking-face/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-uncertainty-aware-3d-emotional-talking-face/</guid>
      <description>Audio-Visual | 8.0/10</description>
    </item>
    <item>
      <title>V2A-DPO: Omni-Preference Optimization for Video-To-Audio Generation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-v2a-dpo-omni-preference-optimization-for-video-to/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-v2a-dpo-omni-preference-optimization-for-video-to/</guid>
      <description>Video-to-Audio Generation | 7.5/10</description>
    </item>
    <item>
      <title>VividTalker: A Modular Framework for Expressive 3D Talking Avatars with Controllable Gaze and Blink</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-vividtalker-a-modular-framework-for-expressive-3d/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-vividtalker-a-modular-framework-for-expressive-3d/</guid>
      <description>Speech Synthesis | 7.5/10</description>
    </item>
    <item>
      <title>β-AVSDNET: A Novel End-To-End Neural Network Architecture For Audio-Visual Speaker Diarization</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-avsdnet-a-novel-end-to-end-neural-network/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-avsdnet-a-novel-end-to-end-neural-network/</guid>
      <description>Speaker Diarization | 7.5/10</description>
    </item>
    <item>
      <title>Hallo-Live: Real-Time Streaming Joint Audio-Video Avatar Generation with Asynchronous Dual-Stream and Human-Centric Preference Distillation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-hallo-live-real-time-streaming-joint-audio-video/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-hallo-live-real-time-streaming-joint-audio-video/</guid>
      <description>Audio-Visual | 8.5/10</description>
    </item>
    <item>
      <title>Talker-T2AV: Joint Talking Audio-Video Generation with Autoregressive Diffusion Modeling</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-talker-t2av-joint-talking-audio-video-generation/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-talker-t2av-joint-talking-audio-video-generation/</guid>
      <description>Speech Synthesis | 7.5/10</description>
    </item>
    <item>
      <title>Audio Video Verbal Analysis (AVVA) for Capturing Classroom Dialogues</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-27-audio-video-verbal-analysis-avva-for-capturing/</link>
      <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-27-audio-video-verbal-analysis-avva-for-capturing/</guid>
      <description>Audio Question Answering | 6.0/10</description>
    </item>
    <item>
      <title>Misinformation Span Detection in Videos via Audio Transcripts</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-misinformation-span-detection-in-videos-via-audio/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-misinformation-span-detection-in-videos-via-audio/</guid>
      <description>Audio Security | 7.5/10</description>
    </item>
    <item>
      <title>Video-Robin: Autoregressive Diffusion Planning for Intent-Grounded Video-to-Music Generation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-video-robin-autoregressive-diffusion-planning-for/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-24-video-robin-autoregressive-diffusion-planning-for/</guid>
      <description>Music Generation | 7.0/10</description>
    </item>
    <item>
      <title>APRVOS: 1st Place Winner of 5th PVUW MeViS-Audio Track</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-aprvos-1st-place-winner-of-5th-pvuw-mevis-audio/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-aprvos-1st-place-winner-of-5th-pvuw-mevis-audio/</guid>
      <description>This paper reports the APRVOS system, a first-place solution designed for the MEVIS_Audio task (audio-conditioned referring video object segmentation). **Problem addressed**: conventional text-referring segmentation models cannot directly handle speech input that is noisy, incomplete, and may describe objects not present in the video. **Method**: a four-stage pipeline that first uses VibeVoice-ASR to convert the speech…</description>
    </item>
    <item>
      <title>UAF: A Unified Audio Front-end LLM for Full-Duplex Speech Interaction</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-uaf-a-unified-audio-front-end-llm-for-full-duplex/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-uaf-a-unified-audio-front-end-llm-for-full-duplex/</guid>
      <description>**Core contribution**: this paper proposes the first unified audio front-end LLM (UAF) designed for full-duplex speech interaction. It breaks with the traditional cascaded front-end paradigm, unifying voice activity detection (VAD), speaker recognition (SR), automatic speech recognition (ASR), turn detection (TD), and question answering (QA) into a single autoregressive sequence-prediction problem. **Key method**: the model adopts…</description>
    </item>
    <item>
      <title>AVRT: Audio-Visual Reasoning Transfer through Single-Modality Teachers</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-avrt-audio-visual-reasoning-transfer-through/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-avrt-audio-visual-reasoning-transfer-through/</guid>
      <description>This paper addresses the core challenge that multimodal LLMs lack high-quality training data for joint audio-visual reasoning. **Core contribution**: the AVRT framework, which synthesizes multimodal reasoning data by composing the abilities of single-modality expert models. **Key method**, in two steps: 1) **data generation**: a dedicated vision teacher (Kimi-VL-Thinking) and an audio teacher (Audio Flami…</description>
    </item>
    <item>
      <title>Video-Robin: Autoregressive Diffusion Planning for Intent-Grounded Video-to-Music Generation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-video-robin-autoregressive-diffusion-planning-for/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-video-robin-autoregressive-diffusion-planning-for/</guid>
      <description>Addressing existing video-to-music (V2M) generation models' lack of fine-grained control over creator intent such as style and theme, this paper proposes Video-Robin, a video scoring framework that incorporates text prompts. Its core method decouples generation into two stages: first, a multimodal autoregressive planning head (AR-Head) integrates video frames and text prompts via a semantic language model, finite scalar quantization (FSQ), and residual…</description>
    </item>
    <item>
      <title>Beyond Monologue: Interactive Talking-Listening Avatar Generation with Conversational Audio Context-Aware Kernels</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-beyond-monologue-interactive-talking-listening/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-beyond-monologue-interactive-talking-listening/</guid>
      <description>This paper tackles the core challenge of moving from one-way "monologue" avatar generation to natural "full-duplex" interactive generation. **Core problem**: existing methods either react rigidly due to strict frame alignment or break lip sync by introducing global attention. **Key method**: a unified attention architecture based on multi-head Gaussian kernels (MHGK), which assigns the attention heads Gaussian receptive fields ranging from narrow to wide, enabling…</description>
    </item>
    <item>
      <title>Generalizable Audio-Visual Navigation via Binaural Difference Attention and Action Transition Prediction</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-generalizable-audio-visual-navigation-via/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-generalizable-audio-visual-navigation-via/</guid>
      <description>This paper addresses the poor generalization of audio-visual navigation (AVN) agents in unseen environments and with unheard sound categories. The authors attribute the performance drop of existing methods to two factors: audio representations conflate semantic and spatial information, causing inaccurate localization of unheard sounds; and the reinforcement learning policy overfits the dynamics and layout of the training environments. The paper therefore proposes a plug-and-play framework named BDATP. At the perception level…</description>
    </item>
    <item>
      <title>PS-TTS: Phonetic Synchronization in Text-to-Speech for Achieving Natural Automated Dubbing</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-ps-tts-phonetic-synchronization-in-text-to-speech/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-ps-tts-phonetic-synchronization-in-text-to-speech/</guid>
      <description>This paper addresses the difficulty of synchronizing target speech with source speech in duration and lip shape for automated dubbing (AD). Its core contribution is a two-stage text-rewriting method integrated into a TTS system: first, **isochrony** rewriting with a language model ensures the target speech duration matches the source; second, **phonetic synchronization (PS)** uses dynamic time warping (DTW) and vowel distances learned from training data…</description>
    </item>
    <item>
      <title>AVID: A Benchmark for Omni-Modal Audio-Visual Inconsistency Understanding via Agent-Driven Construction</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-avid-a-benchmark-for-omni-modal-audio-visual/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-avid-a-benchmark-for-omni-modal-audio-visual/</guid>
      <description>This paper addresses the lack of systematic evaluation of current omni-modal LLMs' ability to understand audio-visual inconsistencies. Existing benchmarks either focus only on audio-visually aligned events or are limited to detecting low-level artifacts in deepfakes, and cannot assess a model's understanding of semantic-level contradictions in long videos. To this end, the authors…</description>
    </item>
    <item>
      <title>Listening Deepfake Detection: A New Perspective Beyond Speaking-Centric Forgery Analysis</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-listening-deepfake-detection-a-new-perspective/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-listening-deepfake-detection-a-new-perspective/</guid>
      <description>This paper is the first to propose "listening deepfake detection", a new task that identifies forged reactions of a person in a video while listening (not speaking), filling the gap left by existing research's focus on "speaking" scenarios. To address the scarcity of data for this task, the authors built the first dedicated dataset, Li…</description>
    </item>
    <item>
      <title>Tora3: Trajectory-Guided Audio-Video Generation with Physical Coherence</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-tora3-trajectory-guided-audio-video-generation/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-tora3-trajectory-guided-audio-video-generation/</guid>
      <description>Targeting the problems of unrealistic motion, desynchronization between sound and motion events, and mismatched sound and motion intensity in existing audio-video (AV) generation models, this paper proposes the Tora3 framework. Its core innovation is **treating object trajectories as a shared kinematic prior linking the visual and auditory modalities**…</description>
    </item>
  </channel>
</rss>
