Here is a collection of linguistic data: parsed texts from Voice of America, Project Gutenberg, the Simple English Wikipedia, and a portion of the full English Wikipedia. This data is the result of many CPU-years of number-crunching, and is meant to provide pre-digested input for higher-order linguistic processing. Two types of data are provided: parsed and tagged texts, and large SQL tables of statistical correlations.
The texts were dependency-parsed with a combination of RelEx and Link Grammar, and are marked up with dependencies (subject, object, prepositional relations, etc.), features (part-of-speech tags, verb-tense and noun-number tags, etc.), Link Grammar linkage relations, and phrasal constituency structure. The data is in the RelEx compact output format, which captures all of the parser output in a form that is easy to process with basic Perl scripts. For example, these texts can be quickly loaded into OpenCog using the src/perl/cff-to-opencog.pl script from the RelEx package.
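As a minimal illustration of how simple such processing can be, here is a Python sketch that streams one sentence block at a time out of a compact-format file. The blank-line delimiter and the .cff/.gz file naming are assumptions made for the example only; consult the RelEx documentation for the actual layout of the format.

    # Sketch: iterate over sentence blocks in a (possibly gzipped) file.
    # Assumes, for illustration, that blank lines separate sentences.
    import gzip

    def sentence_blocks(path):
        """Yield one list of lines per sentence block."""
        opener = gzip.open if path.endswith(".gz") else open
        block = []
        with opener(path, "rt", encoding="utf-8") as fh:
            for line in fh:
                line = line.rstrip("\n")
                if line:            # accumulate lines of the current block
                    block.append(line)
                elif block:         # blank line closes the block (assumed)
                    yield block
                    block = []
        if block:                   # flush a trailing block with no blank line
            yield block

    # Example usage: count the sentence blocks in one (hypothetical) file.
    # n = sum(1 for _ in sentence_blocks("voa.cff.gz"))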
The Lexical Attraction package was used to compile tables of statistical correlations, including the mutual information of word pairs and the conditional probabilities of observing specific Link Grammar linkages. In particular, the Mihalcea word-sense disambiguation algorithm was used to tag the texts with likely word senses from WordNet 3.0, and correlations between these senses and Link Grammar linkages were compiled. The lexat directory contains database dumps of these tables.
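For reference, the mutual information of a word pair (a, b) is the standard quantity MI(a,b) = log2( p(a,b) / (p(a) p(b)) ), where the probabilities are estimated from observed counts. The Python sketch below computes it from raw co-occurrence counts; the exact counting windows, normalization, and smoothing used to build the tables in this dataset may differ, so treat this as a conceptual sketch rather than a reconstruction of the actual pipeline.

    # Sketch: pointwise mutual information from word-pair counts.
    import math
    from collections import Counter

    def pair_mi(pair_counts):
        """Return MI(a,b) = log2(p(a,b) / (p(a,*) p(*,b))) for each pair."""
        total = sum(pair_counts.values())
        left = Counter()    # marginal counts of the left word
        right = Counter()   # marginal counts of the right word
        for (a, b), n in pair_counts.items():
            left[a] += n
            right[b] += n
        # n/total over (left[a]/total)*(right[b]/total) simplifies to:
        return {(a, b): math.log2(n * total / (left[a] * right[b]))
                for (a, b), n in pair_counts.items()}

    # Example with made-up counts:
    # mi = pair_mi(Counter({("stock", "market"): 30, ("stock", "pot"): 2}))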
The full set of data files is in the data directory. Some highlights of what can be found:
Created June 2008, last updated January 2010. Contact Linas Vepstas at linasvepstas at gmail dot com for details. See also the affiliated OpenCog project.