Two other machine learning systems, Linguistic Profiling and TiMBL, come close to this result, at least when the input is first preprocessed with PCA.

Introduction

In the Netherlands, we have a rather unique resource in the form of the TwiNL data set: a daily updated collection that probably contains at least 30% of the Dutch public tweet production since 2011 (Tjong Kim Sang and van den Bosch 2013).

However, as with any automatically harvested collection, its usability is reduced by a lack of reliable metadata.

In this case, the Twitter profiles of the authors are available, but these consist of freeform text rather than fixed information fields.

And, obviously, it is unknown to what degree the information that is present is true.

Later, in 2004, the group collected a Blog Authorship Corpus (BAC; Schler et al. 2006), containing about 700,000 blog posts (in total about 140 million words) by almost 20,000 bloggers. Slightly more information seems to come from content (75.1% accuracy) than from style (72.0% accuracy). We see the women focusing on personal matters, leading to important content words like love and boyfriend, and important style words like I and other personal pronouns.

Gender recognition has also already been applied to Tweets. Rao et al. (2010) examined various traits of authors from India tweeting in English, combining character N-grams and sociolinguistic features like manner of laughing, honorifics, and smiley use. With lexical N-grams, they reached an accuracy of 67.7%, which increased to 72.33% when combined with the sociolinguistic features. Burger et al. (2011) attempted to recognize gender in tweets from a whole set of languages, using word and character N-grams as features for machine learning with Support Vector Machines (SVM), Naive Bayes and Balanced Winnow2. Their highest score when using just text features was 75.5%, testing on all the tweets by each author (with a train set of 3.3 million tweets and a test set of about 418,000 tweets). Fink et al. (2012) used SVMlight to classify gender on Nigerian Twitter accounts, with tweets in English, with a minimum of 50 tweets per account. Their features were hashtags, token unigrams and psychometric measurements provided by the Linguistic Inquiry and Word Count software (LIWC; Pennebaker et al.). Although LIWC appears a very interesting addition, it hardly adds anything to the classification.

Another system that predicts the gender of Dutch Twitter users is TweetGenie, which one can provide with a Twitter user name, after which the gender and age are estimated, based on the user's last 200 tweets. The general quality of the assignment is unknown, but in the (for this purpose) rather unrepresentative sample of users we considered for our own gender assignment corpus (see below), we find that about 44% of the users are assigned a gender, which is correct in about 87% of the cases.
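To make the general shape of these N-gram-based approaches concrete, the sketch below combines token unigrams and character N-grams as features for a linear SVM. It is only an illustration under assumed toy data and parameter choices, not the configuration of any of the cited studies.

```python
# Illustrative sketch only: token unigrams plus character N-grams feeding a linear SVM,
# in the spirit of the N-gram approaches cited above; the data, parameters and pipeline
# layout are assumptions, not the setup of any of the cited studies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

# Toy, made-up training material: tweets paired with author gender labels.
tweets = [
    "lekker weekend gehad met de meiden",
    "morgen weer vroeg trainen met het team",
    "net een mooi boek uitgelezen, aanrader",
    "vanavond de wedstrijd kijken op tv",
]
genders = ["F", "M", "F", "M"]

model = Pipeline([
    ("features", FeatureUnion([
        # word-level token unigrams
        ("word_unigrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 1))),
        # character N-grams of length 2-4, a common choice in author profiling
        ("char_ngrams", TfidfVectorizer(analyzer="char", ngram_range=(2, 4))),
    ])),
    ("classifier", LinearSVC()),  # linear-kernel support vector machine
])

model.fit(tweets, genders)
print(model.predict(["zaterdag weer gamen met de jongens"]))
```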

2004), with and without preprocessing the input vectors with Principal Component Analysis (PCA; Pearson 1901; Hotelling 1933).
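As a minimal sketch of such PCA preprocessing (the token-count features, the number of components and the downstream classifier are illustrative assumptions, not the systems evaluated here), the input vectors can be reduced before the learner sees them:

```python
# Minimal sketch of PCA preprocessing of the input vectors before classification;
# the token-count features, component count and classifier are illustrative assumptions.
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

docs = [
    "eerste voorbeeld tweet tekst",
    "tweede voorbeeld tweet tekst",
    "nog een andere tweet",
    "en nog een laatste voorbeeld",
]
labels = ["F", "M", "F", "M"]  # toy gender labels

counts = CountVectorizer().fit_transform(docs).toarray()  # dense token-count vectors
reduced = PCA(n_components=3).fit_transform(counts)       # keep the first 3 principal components

clf = LinearSVC().fit(reduced, labels)                    # classifier trained on PCA-transformed input
print(clf.predict(reduced[:1]))
```

In practice the PCA transform would be fitted on the training material only and then applied unchanged to held-out data, for instance by chaining both steps in a single Pipeline.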