<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>音乐分离 on 语音/音频论文速递</title>
    <link>https://nanless.github.io/audio-paper-digest-blog/tags/%E9%9F%B3%E4%B9%90%E5%88%86%E7%A6%BB/</link>
    <description>Recent content in 音乐分离 on 语音/音频论文速递</description>
    <generator>Hugo</generator>
    <language>zh-cn</language>
    <lastBuildDate>Mon, 20 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://nanless.github.io/audio-paper-digest-blog/tags/%E9%9F%B3%E4%B9%90%E5%88%86%E7%A6%BB/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Discrete Token Modeling for Multi-Stem Music Source Separation with Language Models</title>
      <link>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-discrete-token-modeling-for-multi-stem-music/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://nanless.github.io/audio-paper-digest-blog/posts/2026-04-20-discrete-token-modeling-for-multi-stem-music/</guid>
      <description>This paper proposes a generative framework for multi-stem music source separation whose core innovation is to recast the separation task as a **conditional discrete token generation** problem. Whereas conventional methods estimate continuous signals directly in the time-frequency domain, this approach first uses the **HCodec** neural audio codec to convert audio waveforms into sequences of discrete acoustic and semantic tokens. Then, a **Conformer**-based conditional encoder, from the mixed au</description>
    </item>
  </channel>
</rss>
