
Multi-View Contrastive Learning for Sequential Recommendation

Publicly Deposited

Downloadable Content

Sequential recommendation has become an increasingly prominent topic in both academia and industry, particularly in the e-commerce domain. Its primary aim is to extract user preferences from a user's historical item list and to predict the next items the user is likely to purchase based on that history. Recent work has increasingly applied contrastive learning and graph neural networks to extract more expressive representations from users' historical item lists: the graph view captures relational information between nodes, while the ID-based sequential view carries more item-specific information. However, little attention has been paid to multi-view contrastive learning between the ID and graph views as a way to further improve the quality of user and item representation learning. In this study, we therefore propose a novel framework called Multi-View Contrastive learning for sequential recommendation (MVCrec), designed to combine information from both the sequential and graph views. It incorporates three facets of contrastive learning: one for the sequential view, one for the graph view, and one for the cross-view. To leverage the representations produced by contrastive learning, we propose a multi-view attention fusion module that integrates both global and local attention and estimates how likely a target user is to purchase a target item. Comprehensive experiments on four real-world benchmark datasets, together with an ablation study, demonstrate the superiority of our model.
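
As a rough illustration of the cross-view facet, the sketch below assumes an InfoNCE-style objective (a common formulation for contrastive learning, not confirmed as the exact loss used in MVCrec): the sequential-view and graph-view embeddings of the same user or item are treated as a positive pair, with the other in-batch embeddings as negatives. The use of PyTorch, the function name, and the temperature value are all illustrative assumptions.

import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(seq_emb, graph_emb, temperature=0.2):
    # seq_emb, graph_emb: (batch, dim) embeddings of the same users/items
    # from the sequential (ID-based) view and the graph view, respectively.
    # L2-normalize so that the dot product is cosine similarity.
    z_seq = F.normalize(seq_emb, dim=-1)
    z_graph = F.normalize(graph_emb, dim=-1)

    # Similarity of every sequential embedding with every graph embedding;
    # the diagonal holds the positive (same-entity) pairs.
    logits = z_seq @ z_graph.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)

    # Symmetric InfoNCE: sequential -> graph and graph -> sequential.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

In a full model this term would presumably be combined with the sequential-view and graph-view contrastive losses and the recommendation objective; the weighting among them is not specified here.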

Creator
Contributors
Degree
Unit
Publisher
Identifier
  • etd-119449
Keyword
Advisor
Defense date
Year
  • 2024
UN Sustainable Development Goals
Date created
  • 2024-03-26
Resource type
Source
  • etd-119449
Rights statement
License

Relationships

In Collection:

Items

Article

Permanent link to this page: https://digital.wpi.edu/show/rr172254d