Yang Qian (Associate Professor)

Master's Supervisor

Date of Birth: 1995-06-20

Date of Employment: 2021-01-07

Department: Department of Electronic Commerce

Education: Doctoral graduate

Gender: Male

Email: soberqian@hfut.edu.cn

Degree: Doctoral degree

Employment Status: Active


March 2026: a team paper was accepted and published in ACM Transactions on Information Systems (ACM TOIS), a CCF rank-A journal.


Basic Information

  • Title: Balancing Imperceptible and Aggressive Poisoning Attack for Recommender Systems: A Simple Multinomial Diffusion Model

  • Authors: Jun Zhu, Yuanchun Jiang*, Yidong Chai, Yang Qian, Yang Wang

  • Journal/Source: ACM Transactions on Information Systems

  • Link: https://dl.acm.org/doi/abs/10.1145/3797026


Abstract

Online platforms’ openness makes recommender systems (RSs) susceptible to data poisoning attacks, where malicious user profiles are injected into the training dataset to distort recommendation outcomes. However, existing poisoning attack methods often struggle to achieve optimal effectiveness in both imperceptibility and aggressiveness. To address this issue, we propose a novel poisoning attack method for RSs, named MDPAttack, which consists of three key modules focusing respectively on imperceptibility and aggressiveness. Specifically, we first train a Multinomial Diffusion Model (MDM) to model discrete rating data, effectively minimizing information loss during data processing and thereby enhancing the imperceptibility of the generated profiles. Then, we combine the influence function with the Fast Gradient Sign Method (FGSM) to iteratively improve the aggressiveness of poisoning profiles by leveraging template profiles. Finally, these two properties are seamlessly integrated within the MDPAttack framework. Extensive experiments on both classic and modern deep learning-based RSs demonstrate that MDPAttack generates highly imperceptible profiles while maintaining attack performance comparable to state-of-the-art methods.
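The imperceptibility side of the method rests on a multinomial diffusion process over discrete ratings. As a rough illustration only (not the paper's implementation; the one-hot rating encoding, the noise level `beta_t`, and the function name are assumptions), a single forward noising step, which keeps each rating with probability 1 − β_t and resamples it uniformly otherwise, can be sketched in NumPy as:

```python
import numpy as np

def multinomial_diffusion_step(x_onehot, beta_t, rng):
    """One forward noising step of a multinomial diffusion process.

    Each row of x_onehot is a one-hot rating over K categories. The step
    samples from q(x_t | x_{t-1}) = Cat((1 - beta_t) * x_{t-1} + beta_t / K):
    the rating is kept with probability (1 - beta_t), otherwise resampled
    uniformly over the K categories.
    """
    n, K = x_onehot.shape
    probs = (1.0 - beta_t) * x_onehot + beta_t / K  # (n, K) categorical params
    # inverse-CDF sampling: one category per row
    cum = np.cumsum(probs, axis=1)
    u = rng.random((n, 1))
    idx = (u < cum).argmax(axis=1)
    return np.eye(K)[idx]

rng = np.random.default_rng(0)
# toy "rating profile": 5 items, ratings in {0, ..., 4} encoded one-hot
x0 = np.eye(5)[[4, 4, 0, 3, 1]]
x1 = multinomial_diffusion_step(x0, beta_t=0.1, rng=rng)
print(x1.shape)  # (5, 5): each row remains a one-hot rating
```

Repeating this step drives profiles toward uniform noise; the trained model then learns the reverse process, which is what lets generated profiles stay close to the real discrete rating distribution.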


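The aggressiveness side pairs influence functions with FGSM-style sign-gradient updates on template profiles. A minimal toy sketch of the sign-gradient step (the surrogate objective, target pattern, and names below are illustrative assumptions, not the paper's influence-function-based attack loss):

```python
import numpy as np

def fgsm_step(x, grad_fn, epsilon):
    """One FGSM-style step: move x along the sign of the objective's gradient."""
    return x + epsilon * np.sign(grad_fn(x))

# toy surrogate: pull a template profile toward a target interaction pattern;
# the actual method would differentiate an influence-based attack objective
target = np.array([1.0, 0.0, 1.0, 0.0])
grad_fn = lambda x: target - x  # gradient of -0.5 * ||x - target||^2
x = np.zeros(4)
for _ in range(10):
    x = fgsm_step(x, grad_fn, epsilon=0.1)
print(np.round(x, 2))  # iterates converge to the target pattern
```

Using only the gradient's sign bounds each coordinate's change per step by ε, which is what makes FGSM-style updates easy to control when iteratively strengthening poisoning profiles.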


Next: February 2026: a team paper was accepted and published in Information Processing & Management (IPM), an FMS rank-B journal.