<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Adversarial Examples on Speech/Audio Paper Digest</title>
    <link>https://nanless.github.io/audio-paper-digest-blog/tags/%E5%AF%B9%E6%8A%97%E6%A0%B7%E6%9C%AC/</link>
    <description>Recent content in Adversarial Examples on Speech/Audio Paper Digest</description>
    <generator>Hugo</generator>
    <language>zh-cn</language>
    <lastBuildDate>Wed, 29 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://nanless.github.io/audio-paper-digest-blog/tags/%E5%AF%B9%E6%8A%97%E6%A0%B7%E6%9C%AC/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Adversarial Fine-Tuning on Speech Foundation Model with Vulnerable Attention Consistency Regularization for Robust Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-adversarial-fine-tuning-on-speech-foundation/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-adversarial-fine-tuning-on-speech-foundation/</guid>
      <description>Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>Are Modern Speech Enhancement Systems Vulnerable to Adversarial Attacks?</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-are-modern-speech-enhancement-systems-vulnerable/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-are-modern-speech-enhancement-systems-vulnerable/</guid>
      <description>Speech Enhancement | 7.5/10</description>
    </item>
    <item>
      <title>Audio Classification Models are Vulnerable to Filter Perturbations</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audio-classification-models-are-vulnerable-to/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audio-classification-models-are-vulnerable-to/</guid>
      <description>Audio Classification | 7.5/10</description>
    </item>
    <item>
      <title>Audio-Text Jailbreak Attack on Large Audio-Language Models: Towards Generality and Stealthiness</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audio-text-jailbreak-attack-on-large-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-audio-text-jailbreak-attack-on-large-audio/</guid>
      <description>Audio Security | 7.0/10</description>
    </item>
    <item>
      <title>Cooperative Multi-Agent Reinforcement Learning for Adaptive Aggregation in Semi-Supervised Federated Learning with non-IID Data</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cooperative-multi-agent-reinforcement-learning/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cooperative-multi-agent-reinforcement-learning/</guid>
      <description>Federated Learning | 7.0/10</description>
    </item>
    <item>
      <title>Emotional Damage: Investigating Safety Vulnerabilities of Large Audio-Language Models Under Speaker Emotional Variations</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emotional-damage-investigating-safety/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-emotional-damage-investigating-safety/</guid>
      <description>Audio Security | 7.5/10</description>
    </item>
    <item>
      <title>ICASSP 2026 - Adversarial Examples Paper List</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/icassp2026-task-024/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/icassp2026-task-024/</guid>
      <description>1 ICASSP 2026 paper in the adversarial-examples area</description>
    </item>
    <item>
      <title>Impact of Phonetics on Speaker Identity in Adversarial Voice Attack</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-impact-of-phonetics-on-speaker-identity-in/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-impact-of-phonetics-on-speaker-identity-in/</guid>
      <description>Speaker Verification | 7.0/10</description>
    </item>
    <item>
      <title>Listen, But Don&#39;t Leak: Sensitive Data Protection for Privacy Aware Automatic Speech Recognition with Acoustic Triggers</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-listen-but-dont-leak-sensitive-data-protection/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-listen-but-dont-leak-sensitive-data-protection/</guid>
      <description>Speech Recognition | 7.5/10</description>
    </item>
    <item>
      <title>Membership Inference Attack against Music Diffusion Models via Generative Manifold Perturbation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-membership-inference-attack-against-music/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-membership-inference-attack-against-music/</guid>
      <description>Audio Security | 7.5/10</description>
    </item>
    <item>
      <title>PRSA: Preventing Malicious Speaker Recognition and Speech Synthesis Simultaneously with Adversarial Examples</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-prsa-preventing-malicious-speaker-recognition-and/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-prsa-preventing-malicious-speaker-recognition-and/</guid>
      <description>Speech Anonymization | 7.0/10</description>
    </item>
    <item>
      <title>RoCo: Robust Code for Fast and Effective Proactive Defense against Voice Cloning Attack</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-roco-robust-code-for-fast-and-effective-proactive/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-roco-robust-code-for-fast-and-effective-proactive/</guid>
      <description>Audio Security | 7.5/10</description>
    </item>
    <item>
      <title>Style Attack Disguise: When Fonts Become a Camouflage for Adversarial Intent</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-style-attack-disguise-when-fonts-become-a/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-style-attack-disguise-when-fonts-become-a/</guid>
      <description>Adversarial Examples | 7.0/10</description>
    </item>
    <item>
      <title>When Noise Lowers the Loss: Rethinking Likelihood-Based Evaluation in Music Large Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-when-noise-lowers-the-loss-rethinking-likelihood/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-when-noise-lowers-the-loss-rethinking-likelihood/</guid>
      <description>Music Generation | 7.0/10</description>
    </item>
    <item>
      <title>Benign Fine-Tuning Breaks Safety Alignment in Audio LLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-benign-fine-tuning-breaks-safety-alignment-in/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-benign-fine-tuning-breaks-safety-alignment-in/</guid>
      <description>This paper is the first systematic study of how fine-tuning on benign (harmless) audio data undermines the safety alignment of large audio models. **The problem**: can routine fine-tuning, performed by users simply to improve model performance, inadvertently break the model's safety guardrails? **The method**: the authors propose a filtering framework based on embedding-space proximity that, along semantic, acoustic, and hybrid dimensions, selectively uses benign audio lying close to harmful content in representation space…</description>
    </item>
    <item>
      <title>Speech/Audio Paper Digest 2026-04-22</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22/</guid>
      <description>21 speech/AI papers analyzed</description>
    </item>
    <item>
      <title>Benign Fine-Tuning Breaks Safety Alignment in Audio LLMs</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-benign-fine-tuning-breaks-safety-alignment-in/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-benign-fine-tuning-breaks-safety-alignment-in/</guid>
      <description>This paper is the first systematic study of **the damage that fine-tuning on benign audio data does to the safety alignment of large audio models**. The core question: when users fine-tune a model on entirely harmless audio data to improve performance, do they unintentionally weaken its ability to refuse harmful instructions? The authors propose a **filtering framework based on embedding-space proximity** that computes the distance between benign and harmful audio in the space of the model's internal encoder or an external reference encoder…</description>
    </item>
    <item>
      <title>Hijacking Large Audio-Language Models via Context-Agnostic and Imperceptible Auditory Prompt Injection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-hijacking-large-audio-language-models-via-context/</link>
      <pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-19-hijacking-large-audio-language-models-via-context/</guid>
      <description>This paper reveals a new class of security threat against large audio-language models (LALMs): **context-agnostic and imperceptible auditory prompt injection attacks**. By tampering only with the input audio (e.g. meeting recordings or music clips), an attacker can hijack the model's behavior without the user's knowledge…</description>
    </item>
  </channel>
</rss>
