<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Speech Deepfake Detection on Speech/Audio Paper Digest</title>
    <link>https://nanless.github.io/audio-paper-digest-blog/tags/%E8%AF%AD%E9%9F%B3%E4%BC%AA%E9%80%A0%E6%A3%80%E6%B5%8B/</link>
    <description>Recent content in Speech Deepfake Detection on Speech/Audio Paper Digest</description>
    <generator>Hugo</generator>
    <language>zh-cn</language>
    <lastBuildDate>Wed, 29 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://nanless.github.io/audio-paper-digest-blog/tags/%E8%AF%AD%E9%9F%B3%E4%BC%AA%E9%80%A0%E6%A3%80%E6%B5%8B/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>A Parameter-Efficient Multi-Scale Convolutional Adapter for Synthetic Speech Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-parameter-efficient-multi-scale-convolutional/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-a-parameter-efficient-multi-scale-convolutional/</guid>
      <description>A Parameter-Efficient Multi-Scale Convolutional Adapter for Synthetic Speech Detection</description>
    </item>
    <item>
      <title>Addressing Gradient Misalignment in Data-Augmented Training for Robust Speech Deepfake Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-addressing-gradient-misalignment-in-data/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-addressing-gradient-misalignment-in-data/</guid>
      <description>Speech Deepfake Detection | 7.0/10</description>
    </item>
    <item>
      <title>Audio-Visual Deepfake Generation and Detection: An Exploratory Survey</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audio-visual-deepfake-generation-and-detection-an/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audio-visual-deepfake-generation-and-detection-an/</guid>
      <description>Audio Deepfake Detection | 6.5/10</description>
    </item>
    <item>
      <title>Detecting and Attributing Synthetic Spanish Speech: The HISPASpoof Dataset</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-detecting-and-attributing-synthetic-spanish/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-detecting-and-attributing-synthetic-spanish/</guid>
      <description>Speech Deepfake Detection | 7.5/10</description>
    </item>
    <item>
      <title>Disentangled Authenticity Representation for Partially Deepfake Audio Localization</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-disentangled-authenticity-representation-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-disentangled-authenticity-representation-for/</guid>
      <description>Audio Deepfake Detection | 6.5/10</description>
    </item>
    <item>
      <title>EchoFake: A Replay-Aware Dataset For Practical Speech Deepfake Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-echofake-a-replay-aware-dataset-for-practical/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-echofake-a-replay-aware-dataset-for-practical/</guid>
      <description>Audio Deepfake Detection | 8.5/10</description>
    </item>
    <item>
      <title>Fake Speech Wild: Detecting Deepfake Speech on Social Media Platform</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-fake-speech-wild-detecting-deepfake-speech-on/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-fake-speech-wild-detecting-deepfake-speech-on/</guid>
      <description>Speech Deepfake Detection | 7.0/10</description>
    </item>
    <item>
      <title>Fine-Grained Frame Modeling in Multi-Head Self-Attention for Speech Deepfake Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-fine-grained-frame-modeling-in-multi-head-self/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-fine-grained-frame-modeling-in-multi-head-self/</guid>
      <description>Speech Deepfake Detection | 8.0/10</description>
    </item>
    <item>
      <title>Hybrid Pruning: In-Situ Compression of Self-Supervised Speech Models for Speaker Verification and Anti-Spoofing</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hybrid-pruning-in-situ-compression-of-self/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hybrid-pruning-in-situ-compression-of-self/</guid>
      <description>Speaker Verification | 8.0/10</description>
    </item>
    <item>
      <title>ICASSP 2026 - Speech Deepfake Detection Paper List</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/icassp2026-task-056/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/icassp2026-task-056/</guid>
      <description>8 ICASSP 2026 papers on speech deepfake detection</description>
    </item>
    <item>
      <title>Mind Your [m]S, Cross Your [t]S: a Large-Scale Phonetic Analysis of Speech Reproduction in Modern Speech Generators</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-mind-your-ms-cross-your-ts-a-large-scale-phonetic/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-mind-your-ms-cross-your-ts-a-large-scale-phonetic/</guid>
      <description>Speech Deepfake Detection | 7.0/10</description>
    </item>
    <item>
      <title>Multi-Task Transformer for Explainable Speech Deepfake Detection via Formant Modeling</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-multi-task-transformer-for-explainable-speech/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-multi-task-transformer-for-explainable-speech/</guid>
      <description>Speech Deepfake Detection | 7.5/10</description>
    </item>
    <item>
      <title>Speech Quality-Based Localization of Low-Quality Speech and Text-to-Speech Synthesis Artefacts</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-speech-quality-based-localization-of-low-quality/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-speech-quality-based-localization-of-low-quality/</guid>
      <description>Speech Quality Assessment | 7.0/10</description>
    </item>
    <item>
      <title>Tri-Attention Fusion: Joint Temporal-Spectral and Bidirectional Modeling for Speech Spoofing Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-tri-attention-fusion-joint-temporal-spectral-and/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-tri-attention-fusion-joint-temporal-spectral-and/</guid>
      <description>Speech Deepfake Detection | 7.0/10</description>
    </item>
    <item>
      <title>WaveSP-Net: Learnable Wavelet-Domain Sparse Prompt Tuning for Speech Deepfake Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-wavesp-net-learnable-wavelet-domain-sparse-prompt/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-wavesp-net-learnable-wavelet-domain-sparse-prompt/</guid>
      <description>Speech Deepfake Detection | 8.0/10</description>
    </item>
    <item>
      <title>RTCFake: Speech Deepfake Detection in Real-Time Communication</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-rtcfake-speech-deepfake-detection-in-real-time/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-rtcfake-speech-deepfake-detection-in-real-time/</guid>
      <description>Speech Deepfake Detection | 7.0/10</description>
    </item>
    <item>
      <title>Spectro-Temporal Modulation Representation Framework for Human-Imitated Speech Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-spectro-temporal-modulation-representation/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-28-spectro-temporal-modulation-representation/</guid>
      <description>Speech Deepfake Detection | 6.5/10</description>
    </item>
    <item>
      <title>Neural Encoding Detection is Not All You Need for Synthetic Speech Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-neural-encoding-detection-is-not-all-you-need-for/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-neural-encoding-detection-is-not-all-you-need-for/</guid>
      <description>The core contribution of this survey is to **expose and substantiate a key misconception in current synthetic speech detection: over-reliance on &#34;neural encoding detection&#34;**. The paper first systematically reviews three classes of data-driven methods, based on SincNet, self-supervised learning (SSL), and neural encoding detection, and argues that today&#39;s best-performing SSL models mainly capture traces introduced by the vocoder during waveform generation, rather than</description>
    </item>
    <item>
      <title>Listening Deepfake Detection: A New Perspective Beyond Speaking-Centric Forgery Analysis</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-listening-deepfake-detection-a-new-perspective/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-listening-deepfake-detection-a-new-perspective/</guid>
      <description>This paper is the first to propose the new task of &#34;listening deepfake detection&#34;, which aims to identify forged reactions of on-screen subjects while they are listening (rather than speaking), addressing a gap left by existing research that focuses mainly on &#34;speaking&#34; scenarios. To tackle the scarcity of data for this task, the authors built the first dedicated dataset, Li</description>
    </item>
    <item>
      <title>ProSDD: Learning Prosodic Representations for Speech Deepfake Detection against Expressive and Emotional Attacks</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-prosdd-learning-prosodic-representations-for/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-prosdd-learning-prosodic-representations-for/</guid>
      <description>This paper addresses a core problem of current speech deepfake detection (SDD) systems: poor generalization against expressive and emotional synthetic speech attacks. Existing methods rely too heavily on spoofed data and tend to learn dataset-specific artifacts rather than transferable characteristics of natural speech. To this end, the authors</description>
    </item>
  </channel>
</rss>
