DeepDive
Top 10% of 2017 papers (by citations)
Abstract
The dark data extraction, or knowledge base construction (KBC), problem is to populate a relational database with information from unstructured data sources such as emails, webpages, and PDFs. KBC is a long-standing problem in industry and research that encompasses data extraction, cleaning, and integration. We describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems. The key idea in DeepDive is to frame traditional extract-transform-load (ETL) style data management problems as a single large statistical inference task that is declaratively defined by the user. DeepDive leverages the effectiveness and efficiency of statistical inference and machine learning for difficult extraction tasks, while not requiring users to directly write any probabilistic inference algorithms. Instead, domain experts interact with DeepDive by defining features or rules about the domain. DeepDive has been successfully applied to domains such as pharmacogenomics, paleobiology, and anti-human trafficking enforcement, achieving human-caliber quality at machine-caliber scale. We present the applications, abstractions, and techniques used in DeepDive to accelerate the construction of such dark data extraction systems.
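The division of labor the abstract describes, in which domain experts write features or rules while the system handles probabilistic inference, can be illustrated with a minimal sketch. This is not DeepDive's actual API or DDlog syntax; all names (`Candidate`, `near_keyword`, the fixed weights) are illustrative, and a simple logistic model stands in for DeepDive's full factor-graph inference, in which feature weights are learned rather than hand-set.

```python
# Hedged sketch of a KBC pipeline in miniature: user-defined feature
# functions score candidate extractions, and a logistic model (a stand-in
# for factor-graph inference) turns the weighted feature sum into a
# probability that the candidate belongs in the knowledge base.
import math

class Candidate:
    """A candidate extraction: a mention span within a source sentence."""
    def __init__(self, sentence, span):
        self.sentence = sentence
        self.span = span

# Feature rules a domain expert might write; each maps a candidate to 0/1.
def near_keyword(c):
    return 1.0 if "interacts with" in c.sentence else 0.0

def all_caps_mention(c):
    return 1.0 if c.span.isupper() else 0.0

FEATURES = [near_keyword, all_caps_mention]
WEIGHTS = [2.0, 1.0]  # learned from data in a real system; fixed here

def probability(c):
    """P(candidate is a true fact) = sigmoid(sum_i w_i * f_i(c))."""
    score = sum(w * f(c) for w, f in zip(WEIGHTS, FEATURES))
    return 1.0 / (1.0 + math.exp(-score))

c = Candidate("Gene ABC1 interacts with DEF2.", "ABC1")
print(round(probability(c), 3))
```

The point of the sketch is the interface, not the model: the expert only writes small feature functions over candidates, never inference code, which mirrors the abstract's claim that users need not write probabilistic inference algorithms themselves.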