Abstract
Motivated by the burgeoning interest in cross-domain learning, we present a
novel generative modeling challenge: generating counterfactual samples in a
target domain based on factual observations from a source domain. Our approach
operates within an unsupervised paradigm devoid of parallel or joint datasets,
relying exclusively on distinct observational samples and causal graphs for
each domain. This setting presents challenges that surpass those of
conventional counterfactual generation. Central to our methodology is the
disambiguation of exogenous causes into effect-intrinsic and domain-intrinsic
categories. This differentiation facilitates the integration of domain-specific
causal graphs into a unified joint causal graph via shared effect-intrinsic
exogenous variables. We propose leveraging Neural Causal Models within this
joint framework to enable accurate counterfactual generation under standard
identifiability assumptions. Furthermore, we introduce a novel loss function
that effectively segregates effect-intrinsic from domain-intrinsic variables
during model training. Given a factual observation, our framework combines the
posterior distribution of effect-intrinsic variables from the source domain
with the prior distribution of domain-intrinsic variables from the target
domain to synthesize the desired counterfactuals, adhering to Pearl's causal
hierarchy. Intriguingly, when domain shifts are restricted to alterations in
causal mechanisms without accompanying covariate shifts, our training regimen
parallels the resolution of a conditional optimal transport problem. Empirical
evaluations on a synthetic dataset show that our framework generates
counterfactuals in the target domain that closely match the ground truth.
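To make the generation procedure described above concrete, here is a minimal toy sketch of the three-step recipe: abduct the shared effect-intrinsic exogenous variable from a factual source-domain observation, sample the target domain's domain-intrinsic variable from its prior, and push both through the target mechanism. All mechanisms, distributions, and variable names here are hypothetical illustrations, not the paper's actual neural model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structural causal model (hypothetical). A shared effect-intrinsic
# exogenous variable u_e drives the outcome in both domains; the target
# domain additionally has its own domain-intrinsic noise u_d.

def source_mechanism(u_e):
    # Invertible source mechanism, so u_e can be abducted exactly.
    return 2.0 * u_e + 1.0

def target_mechanism(u_e, u_d):
    # The target mechanism differs (a shift in the causal mechanism).
    return -u_e + 0.5 * u_d

def abduct_source(x_factual):
    # Step 1 (abduction): recover the effect-intrinsic posterior.
    # With an invertible mechanism, the posterior is a point mass.
    return (x_factual - 1.0) / 2.0

def counterfactual_in_target(x_factual, n_samples=1000):
    # Steps 2-3: combine the source-domain posterior over u_e with the
    # target-domain prior over u_d (assumed standard normal here),
    # then generate through the target mechanism.
    u_e = abduct_source(x_factual)
    u_d = rng.standard_normal(n_samples)
    return target_mechanism(u_e, u_d)

samples = counterfactual_in_target(x_factual=3.0)
print(samples.mean())  # concentrates near -u_e = -1.0
```

In this linear toy, the counterfactual distribution in the target domain is centered at the abducted value pushed through the target mechanism, with spread coming only from the domain-intrinsic prior, mirroring the posterior/prior recombination the abstract describes.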