<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>工业应用 on 语音/音频论文速递</title>
    <link>https://nanless.github.io/audio-paper-digest-blog/tags/%E5%B7%A5%E4%B8%9A%E5%BA%94%E7%94%A8/</link>
    <description>Recent content in 工业应用 on 语音/音频论文速递</description>
    <generator>Hugo</generator>
    <language>zh-cn</language>
    <lastBuildDate>Wed, 29 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://nanless.github.io/audio-paper-digest-blog/tags/%E5%B7%A5%E4%B8%9A%E5%BA%94%E7%94%A8/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>AI-Generated Music Detection in Broadcast Monitoring</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-ai-generated-music-detection-in-broadcast/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-ai-generated-music-detection-in-broadcast/</guid>
      <description>Audio deepfake detection | 7.0/10</description>
    </item>
    <item>
      <title>Attentive AV-Fusionnet: Audio-Visual Quality Prediction with Hybrid Attention</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-attentive-av-fusionnet-audio-visual-quality/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-attentive-av-fusionnet-audio-visual-quality/</guid>
      <description>Audio-visual | 7.0/10</description>
    </item>
    <item>
      <title>BBPE16: UTF-16-Based Byte-Level Byte-Pair Encoding for Improved Multilingual Speech Recognition</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-bbpe16-utf-16-based-byte-level-byte-pair-encoding/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-bbpe16-utf-16-based-byte-level-byte-pair-encoding/</guid>
      <description>Speech recognition | 7.0/10</description>
    </item>
    <item>
      <title>Cutscene Agent: An LLM Agent Framework for Automated 3D Cutscene Generation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cutscene-agent-an-llm-agent-framework-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-cutscene-agent-an-llm-agent-framework-for/</guid>
      <description>Generative models | 8.5/10</description>
    </item>
    <item>
      <title>ECHO: Frequency-Aware Hierarchical Encoding for Variable-Length Signals</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-echo-frequency-aware-hierarchical-encoding-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-echo-frequency-aware-hierarchical-encoding-for/</guid>
      <description>Audio classification | 9.5/10</description>
    </item>
    <item>
      <title>Generative UI as an Accessibility Bridge: Lessons from C2C E-Commerce</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-generative-ui-as-an-accessibility-bridge-lessons/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-generative-ui-as-an-accessibility-bridge-lessons/</guid>
      <description>Accessibility | 6.5/10</description>
    </item>
    <item>
      <title>Hierarchical Tokenization of Multimodal Music Data for Generative Music Retrieval</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hierarchical-tokenization-of-multimodal-music/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hierarchical-tokenization-of-multimodal-music/</guid>
      <description>Music retrieval | 7.0/10</description>
    </item>
    <item>
      <title>HVAC-EAR: Eavesdropping Human Speech Using HVAC Systems</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hvac-ear-eavesdropping-human-speech-using-hvac/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-hvac-ear-eavesdropping-human-speech-using-hvac/</guid>
      <description>Audio security | 8.5/10</description>
    </item>
    <item>
      <title>Improving Anomalous Sound Detection with Attribute-Aware Representation from Domain-Adaptive Pre-Training</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-improving-anomalous-sound-detection-with/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-improving-anomalous-sound-detection-with/</guid>
      <description>Audio event detection | 8.0/10</description>
    </item>
    <item>
      <title>Monitoring exposure-length variations in submarine power cables using distributed fiber-optic sensing</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-monitoring-exposure-length-variations-in/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-monitoring-exposure-length-variations-in/</guid>
      <description>Audio event detection | 6.5/10</description>
    </item>
    <item>
      <title>Multimodal Fusion-Based IPCLIP Network for Mixed Reality Surgical Assistance</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-multimodal-fusion-based-ipclip-network-for-mixed/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-multimodal-fusion-based-ipclip-network-for-mixed/</guid>
      <description>Multimodal models | 6.5/10</description>
    </item>
    <item>
      <title>Natural Language to Spatial Audio Parameters: Lightweight Deterministic Rendering for Creative Authoring</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-natural-language-to-spatial-audio-parameters/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-natural-language-to-spatial-audio-parameters/</guid>
      <description>Spatial audio | 7.5/10</description>
    </item>
    <item>
      <title>Peeking Into the Future for Contextual Biasing</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-peeking-into-the-future-for-contextual-biasing/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-peeking-into-the-future-for-contextual-biasing/</guid>
      <description>Speech recognition | 7.0/10</description>
    </item>
    <item>
      <title>Phase-Space Signal Processing of Acoustic Data for Advanced Manufacturing In-Situ Monitoring</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-phase-space-signal-processing-of-acoustic-data/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-phase-space-signal-processing-of-acoustic-data/</guid>
      <description>Audio event detection | 7.0/10</description>
    </item>
    <item>
      <title>Production-Scale Dynamic Vocabulary ASR Biasing with Word-Level FST and Robust Training</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-production-scale-dynamic-vocabulary-asr-biasing/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-production-scale-dynamic-vocabulary-asr-biasing/</guid>
      <description>Speech recognition | 7.5/10</description>
    </item>
    <item>
      <title>RCAL: Reinforced Cross-Modal Alignment for Multimodal Sentiment Analysis with Sparse Visual Frames</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rcal-reinforced-cross-modal-alignment-for/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-rcal-reinforced-cross-modal-alignment-for/</guid>
      <description>Multimodal models | 8.5/10</description>
    </item>
    <item>
      <title>Refgen: Reference-Guided Synthetic Data Generation for Anomalous Sound Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-refgen-reference-guided-synthetic-data-generation/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-refgen-reference-guided-synthetic-data-generation/</guid>
      <description>Audio event detection | 7.5/10</description>
    </item>
    <item>
      <title>Representation-Based Data Quality Audits for Audio</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-representation-based-data-quality-audits-for-audio/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-representation-based-data-quality-audits-for-audio/</guid>
      <description>Datasets | 7.5/10</description>
    </item>
    <item>
      <title>TextlessRAG: End-to-End Visual Document RAG by Speech without Text</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-textlessrag-end-to-end-visual-document-rag-by/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-textlessrag-end-to-end-visual-document-rag-by/</guid>
      <description>Spoken question answering | 8.5/10</description>
    </item>
    <item>
      <title>Toward Faithful Explanations in Acoustic Anomaly Detection</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-toward-faithful-explanations-in-acoustic-anomaly/</link>
      <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-29-toward-faithful-explanations-in-acoustic-anomaly/</guid>
      <description>Audio event detection | 7.5/10</description>
    </item>
    <item>
      <title>MOMO: A framework for seamless physical, verbal, and graphical robot skill learning and adaptation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-25-momo-a-framework-for-seamless-physical-verbal-and/</link>
      <pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-25-momo-a-framework-for-seamless-physical-verbal-and/</guid>
      <description>Robot skill learning | 7.5/10</description>
    </item>
    <item>
      <title>语音/音频论文速递 2026-04-25</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-25/</link>
      <pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-25/</guid>
      <description>2 speech/AI papers analyzed in total</description>
    </item>
    <item>
      <title>Deep Hierarchical Knowledge Loss for Fault Intensity Diagnosis</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-deep-hierarchical-knowledge-loss-for-fault/</link>
      <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-deep-hierarchical-knowledge-loss-for-fault/</guid>
      <description>1. **Problem addressed**: Traditional fault intensity diagnosis treats each fault class as an independent label, ignoring the inherent hierarchical dependencies between physical states (e.g., "cavitation" is the parent class of "incipient cavitation", "stable cavitation", and so on), which limits model performance and robustness. 2. **Core of the method**: Proposes a general framework named DHK, centered on two new loss functions: a **hierarchical tree loss</description>
    </item>
    <item>
      <title>MOMO: A framework for seamless physical, verbal, and graphical robot skill learning and adaptation</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-momo-a-framework-for-seamless-physical-verbal-and/</link>
      <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-23-momo-a-framework-for-seamless-physical-verbal-and/</guid>
      <description>1. **Problem**: Industrial robots must frequently adapt to new tasks and environments, but existing skill-adjustment methods (such as manual reprogramming) are unfriendly to non-expert users, and no single interaction modality can efficiently handle every kind of adjustment. 2. **Core of the method**: Proposes the MOMO framework, which integrates three complementary interaction modalities: kinesthetic contact (for precise spatial corrections), natural language (for high-level semantic modifications), and a graphical interface (for parameter</description>
    </item>
    <item>
      <title>Disentangling Damage from Operational Variability: A Label-Free Self-Supervised Representation Learning Framework for Output-Only Structural Damage Identification</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-disentangling-damage-from-operational-variability/</link>
      <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-22-disentangling-damage-from-operational-variability/</guid>
      <description>To address the core challenge in structural health monitoring that damage signals are easily masked by environmental and operational variability, this paper proposes a **label-free, self-supervised disentangled representation learning framework**. The framework adopts a dual-stream autoencoder architecture, ensuring information completeness through a **time-series reconstruction loss**, and applies a **VICReg self-supervised loss** (computed on baseline-period data assumed to have an unchanged damage state) to constrain the damage-sensitive representation (`z_dmg`) with respect to operational</description>
    </item>
    <item>
      <title>MimicLM: Zero-Shot Voice Imitation through Autoregressive Modeling of Pseudo-Parallel Speech Corpora</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-mimiclm-zero-shot-voice-imitation-through/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-21-mimiclm-zero-shot-voice-imitation-through/</guid>
      <description>This paper targets the core bottleneck of zero-shot voice imitation: the scarcity of high-quality parallel training data. Traditional methods either rely on complex disentanglement architectures or use synthetic speech as the training target, capping output quality at the capability of the synthesis system. The authors propose a new framework named **MimicLM**, whose core innovation is a **"role-swapping" data construction strategy**: TTS-generated speech serves as the</description>
    </item>
  </channel>
</rss>
