Collaborative denoised graph contrastive learning for multi-modal recommendation

Document Type

Article

Publication Date

9-1-2024

Abstract

Graph neural networks, with their capacity to capture complex hierarchical relations, are extensively employed in multi-modal recommendation. Previous graph-based multi-modal recommendation studies primarily focus on integrating multi-modal features that capture neighbor relations as auxiliary information. However, such methods rely heavily on graph structure properties to model collaborative relations. Furthermore, while massive implicit feedback alleviates the data sparsity issue, it does not reliably reflect users' true interests. We propose a Collaborative Denoised Graph Contrastive Learning framework, named CDGCL, for multi-modal recommendation. Specifically, we present a novel modality-aware item representation with contrastive learning to capture modality-aware collaborative relations. In addition, we develop a Multi-Policy Denoised module (MPD) to filter out irrelevant interactions. Extensive experiments covering both cold-start and warm-start scenarios demonstrate the superiority of CDGCL over baselines.
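
To make the abstract's notion of modality-aware contrastive learning concrete, the sketch below shows one plausible instantiation: an InfoNCE-style loss that pulls an item's collaborative (ID) embedding toward its projected modality features and pushes it away from other items in the batch. All names, shapes, and the temperature value are illustrative assumptions; this is not the paper's actual CDGCL implementation.

# Illustrative sketch only: a generic InfoNCE contrastive loss between an item's
# collaborative (ID) embedding and its modality feature embedding. Every name and
# shape here is an assumption, not taken from the CDGCL paper.
import torch
import torch.nn.functional as F

def modality_contrastive_loss(id_emb: torch.Tensor,
                              modal_emb: torch.Tensor,
                              temperature: float = 0.2) -> torch.Tensor:
    """InfoNCE loss treating (id_emb[i], modal_emb[i]) as the positive pair
    and all other items in the batch as negatives.

    id_emb:    [batch, dim] collaborative item embeddings (e.g., from a GNN)
    modal_emb: [batch, dim] projected visual/textual item features
    """
    id_emb = F.normalize(id_emb, dim=-1)
    modal_emb = F.normalize(modal_emb, dim=-1)
    logits = id_emb @ modal_emb.t() / temperature   # [batch, batch] similarities
    labels = torch.arange(id_emb.size(0), device=id_emb.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for learned embeddings.
if __name__ == "__main__":
    ids = torch.randn(64, 128)
    visual = torch.randn(64, 128)
    print(modality_contrastive_loss(ids, visual).item())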

Keywords

Recommendation, Multi-modal recommendation, Graph learning, Contrastive learning

Divisions

Education

Funders

National Social Science Foundation (19BYY076), Natural Science Foundation of Shandong Province (ZR2023QF006)

Publication Title

Information Sciences

Volume

679

Publisher

Elsevier

Publisher Location

STE 800, 230 PARK AVE, NEW YORK, NY 10169 USA
