LEARNING CAUSAL BOUNDS USING MARGINAL INDEPENDENCE INFORMATION WITH APPLICATIONS TO GENE EXPRESSION ANALYSIS
Abstract
Discovering causal relations is a fundamental goal of science. Randomized controlled experiments have long been considered the only reliable means of tackling this task. In recent years, however, various causal discovery methods have been proposed that can identify causal relations from purely observational data. While these methods provide a theoretical framework for bridging the gap between statistical associations and causal conclusions, causal discovery remains challenging in practice, because many of the assumptions underlying these theoretical results are often not met. This is especially true for causal discovery in the landscape of genomic data. The critical challenge there is the marked contrast between the sample-size requirements of the aforementioned causal discovery algorithms and the sizes of the samples obtained through genomic experiments: these algorithms require at least thousands of samples to identify small causal networks over five to ten variables, whereas in genomic data we often face networks with hundreds of nodes while the available sample is limited to thousands at best. Beyond sample size, genomic studies must also contend with measurement error, averaging effects, and feedback loops, all of which undermine the theoretical assumptions that lie at the core of typical causal discovery algorithms. In this work, we propose a series of improvements to existing causal discovery algorithms, together with new causal discovery tools that overcome the aforementioned challenges, allowing one to learn causal relations from gene expression measurements.