Abstract
Many real-world datasets are both sequential and relational: each node
carries an event sequence while edges encode interactions. Existing methods in
sequence modeling and graph modeling often neglect one modality or the other.
We argue that sequences and graphs are not separate problems but complementary
facets of the same dataset, and should be learned jointly. We introduce BRIDGE,
a unified end-to-end architecture that couples a sequence encoder with a GNN
under a single objective, allowing gradients to flow across both modules and
yielding task-aligned representations. To enable fine-grained message
passing among neighbors, we add TOKENXATTN, a token-level cross-attention
layer that exchanges messages between individual events in neighboring
sequences.
sequences. Across two settings, friendship prediction (Brightkite) and fraud
detection (Amazon), BRIDGE consistently outperforms static GNNs, temporal graph
methods, and sequence-only baselines on ranking and classification metrics.
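The abstract does not spell out TOKENXATTN's internals. As a rough sketch under assumed shapes, token-level cross-attention between a node's event sequence and one neighbor's sequence could look like the following (the projection weights, dimensions, and function name are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def token_cross_attention(x, y, d_k=16):
    """One token-level cross-attention pass: each event in sequence `x`
    (shape L x d) attends over the events of a neighboring sequence `y`
    (shape M x d) and pulls back an aggregated message."""
    d = x.shape[1]
    # Illustrative random projections; a trained layer would learn these.
    w_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    w_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    w_v = rng.standard_normal((d, d_k)) / np.sqrt(d)

    q, k, v = x @ w_q, y @ w_k, y @ w_v
    scores = q @ k.T / np.sqrt(d_k)               # (L, M) event-to-event affinities
    scores -= scores.max(axis=1, keepdims=True)   # softmax numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # each row sums to 1
    return attn @ v                               # one message per event in x

# Node sequence of 5 events, neighbor sequence of 7 events, 32-dim embeddings.
x = rng.standard_normal((5, 32))
y = rng.standard_normal((7, 32))
msgs = token_cross_attention(x, y)
print(msgs.shape)  # (5, 16)
```

In a full model, such messages would be computed per neighbor and aggregated by the GNN, with the whole stack trained end to end under the single objective described above.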
Authors (8)
Yuen Chen
Yulun Wu
Samuel Sharpe
Igor Melnyk
Nam H. Nguyen
Furong Huang
+2 more
Submitted
October 29, 2025
Key Contributions
Introduces BRIDGE, a unified end-to-end architecture that jointly learns from sequential and relational (graph) data. It couples a sequence encoder with a GNN under a single objective and incorporates TOKENXATTN for fine-grained message passing between events in neighboring sequences, enabling gradients to flow across both modalities.
Business Value
Enables more accurate predictions and insights from complex datasets that exhibit both temporal and relational structures, improving applications like targeted advertising, risk assessment, and user behavior analysis.