TY  - JOUR
AB  - Purpose: The purpose of this paper is to investigate whether machine learning induces gender biases in the sense of results that are more accurate for male authors or for female authors. It also investigates whether training separate male and female variants could improve the accuracy of machine learning for sentiment analysis. Design/methodology/approach: This paper uses ratings-balanced sets of reviews of restaurants and hotels (3 sets) to train algorithms with and without gender selection. Findings: Accuracy is higher on female-authored reviews than on male-authored reviews for all data sets, so applications of sentiment analysis using mixed-gender data sets will over-represent the opinions of women. Training on same-gender data improves performance less than having additional data from both genders. Practical implications: End users of sentiment analysis should be aware that its small gender biases can affect the conclusions drawn from it, and should apply correction factors when necessary. Users of systems that incorporate sentiment analysis should be aware that performance will vary by author gender. Developers do not need to create gender-specific algorithms unless they have more training data than their system can cope with. Originality/value: This is the first demonstration of gender bias in machine learning sentiment analysis.
VL  - 42
IS  - 3
SN  - 1468-4527
DO  - 10.1108/OIR-05-2017-0153
UR  - https://doi.org/10.1108/OIR-05-2017-0153
AU  - Thelwall, Mike
PY  - 2018
Y1  - 2018/01/01
TI  - Gender bias in machine learning for sentiment analysis
T2  - Online Information Review
PB  - Emerald Publishing Limited
SP  - 343
EP  - 354
Y2  - 2024/04/19
ER  -