The R language offers many ways to analyze text data, including text mining, content analysis, topic modeling, semantic analysis, and analysis of style (stylometry), including forensic analysis to determine authorship. This full-day workshop covers those topics using three popular approaches.
Dictionary-based content analysis is the simplest: develop a “dictionary” of words and phrases that define various topics or concepts, and have the computer “code” your documents to uncover the topics they contain. In the end, you know the number or percent of documents that included each topic. This simple approach can succeed where more advanced methods fail, such as with smaller sets of documents or short-answer survey items that lack the co-occurrence of terms that other methods rely on. This approach also excels at preparing data for more complex methods.
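For example, a minimal dictionary-based coding sketch using the quanteda package might look like the following; the documents and the dictionary entries here are invented for illustration.

```r
library(quanteda)

# Two tiny example documents (hypothetical)
docs <- c(doc1 = "The judge asked the jury to rule on the case.",
          doc2 = "Farmers worried about crop prices and rainfall.")

# Define each topic as a set of words and wildcard patterns
dict <- dictionary(list(
  justice = c("judge", "jury", "court", "lawyer"),
  farming = c("crop*", "farm*", "rainfall")
))

# Tokenize, code the documents against the dictionary, and count matches
toks <- tokens(docs, remove_punct = TRUE)
dfm(tokens_lookup(toks, dictionary = dict))
```

From the resulting counts it is a short step to the number or percent of documents that contain each topic.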
Latent Semantic Analysis (LSA) identifies the topics in a set of documents by automatically detecting sets of words that define each topic. For example, if it sees the words: judge, jury, court, and lawyer frequently appearing near one another, it will create a numeric variable that measures the amount of that topic in each document. Seeing that combination, you might name the new variable “justice.” It essentially applies the method of factor analysis to text data.
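With the lsa package, that looks roughly like the sketch below. The tiny term-by-document matrix is invented for illustration; a real analysis would start from a weighted matrix built from your corpus.

```r
library(lsa)

# A toy term-by-document matrix (terms in rows, documents in columns)
tdm <- matrix(c(2, 0, 1,
                1, 0, 0,
                0, 3, 1,
                0, 2, 2),
              nrow = 4, byrow = TRUE,
              dimnames = list(c("judge", "jury", "crop", "farm"),
                              c("d1", "d2", "d3")))

# Apply local (log term frequency) and global (IDF) weights, then
# build a reduced LSA space via singular value decomposition
tdm_w <- lw_logtf(tdm) * gw_idf(tdm)
space <- lsa(tdm_w, dims = 2)
space$dk   # each document's score on the latent "topic" dimensions
```

The columns of space$dk are the numeric topic measures you would then name and interpret, much as you would the factors in a factor analysis.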
Latent Dirichlet Allocation (LDA) results in a similar set of numeric measures for each topic, but in this case the numeric values are probabilities that each document contains the topic.
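A minimal run with the topicmodels package might look like this; it uses the AssociatedPress document-term matrix that ships with that package, and k = 2 topics is an arbitrary choice made for illustration.

```r
library(topicmodels)

data("AssociatedPress")                   # a document-term matrix
ap_lda <- LDA(AssociatedPress[1:50, ],    # a small subset of documents
              k = 2, control = list(seed = 1234))

posterior(ap_lda)$topics[1:3, ]   # per-document topic probabilities
terms(ap_lda, 5)                  # the top five words in each topic
```

Each row of the posterior topic matrix sums to one, giving the probability that the document contains each topic.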
Most of our time will be spent working through examples that you may run simultaneously on your computer. You will see both the instructor’s screen and yours, as we run the examples and discuss the output. However, the handouts include each step and its output, so feel free to skip the computing; it’s easy to just relax and take notes. The slides and programming steps are numbered so you can easily switch from computing to slides and back again.
This workshop is available at your organization’s site, or via webinars.
The on-site version is the most engaging by far, generating much discussion and occasionally veering off briefly to cover topics specific to a particular organization. The instructor presents a topic for around twenty minutes. Then we switch to exercises, which are already open in another tabbed window. The exercises contain hints that show the general structure of the solution; you adapt those hints to get the final solution. The complete solutions are in a third tabbed window, so if you get stuck the answers are a click away. The typical on-site training schedule is located here.
A webinar version is also available. This approach saves travel expenses and is especially useful for organizations with branch offices. It’s offered as two half-day sessions, often with a day or two skipped in between to give participants a chance to do the exercises and catch up on other work. There is time for questions on the lecture topics (live) and the exercises (via email). However, webinar participants are typically much less engaged, and far less discussion takes place.
For further details or to arrange a webinar or site visit, contact the instructor, Bob Muenchen, at muenchen.bob@gmail.com.
Prerequisites
This workshop assumes a basic knowledge of R. Introductory knowledge of statistics is helpful, but not required.
Learning Outcomes
When finished, participants will be able to use R to import documents in a variety of formats and analyze them with regard to topics or style.
Presenter
Robert A. Muenchen is the author of R for SAS and SPSS Users and, with Joseph M. Hilbe, R for Stata Users. He is also the creator of r4stats.com, a popular web site devoted to analyzing trends in analytics software and helping people learn the R language. Bob is an ASA Accredited Professional Statistician™ with 35 years of experience and is currently the manager of OIT Research Computing Support (formerly the Statistical Consulting Center) at the University of Tennessee. He has taught workshops on research computing topics for more than 500 organizations and has offered training in partnership with the American Statistical Association, DataCamp.com, New Horizons Computer Learning Centers, Revolution Analytics, RStudio and Xerox Learning Services. Bob has written or coauthored over 70 articles published in scientific journals and conference proceedings, and has provided guidance on more than 1,000 graduate theses and dissertations.
Bob has served on the advisory boards of SAS Institute, SPSS Inc., StatAce OOD, Intuitics, the Statistical Graphics Corporation and PC Week Magazine (now eWeek). His suggested improvements have been incorporated into SAS, SPSS, JMP, STATGRAPHICS and several R packages. His research interests include statistical computing, data graphics and visualization, text analytics, and data mining.
Computer Requirements
On-site training is best done in a computer lab with a projector and, for large rooms, a PA system. The webinar version is delivered to your computer using Zoom (or a similar webinar system if your organization has a preference).
Course programs, data, and exercises will be sent to you a week before the workshop. The instructions include installing R, which you can download for free here: http://www.r-project.org/. We will also use RStudio, which you can download for free here: http://RStudio.com. If you already know a different R editor, that’s fine too.
Course Outline

Basic text analysis concepts
 How to define a document
 Corpus details
 Metadata details
 Terms / Tokens / Features
 Vocabulary lists
 Stop lists
 High / low frequency terms & their problems
 Term frequency weighted by inverse document frequency (TF/IDF)
 Bag of words
 N-grams
 Topics: detecting, coding, scoring
 Style & forensic analysis
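Several of the concepts above (tokens, the bag of words, TF/IDF weighting) can be sketched in a few lines of quanteda code; the two documents are invented for illustration.

```r
library(quanteda)

docs <- c(a = "the court ruled on the case today",
          b = "the farm expanded its fields today")

toks <- tokens(docs)   # split each document into tokens
m <- dfm(toks)         # bag-of-words counts (a document-feature matrix)
dfm_tfidf(m)           # up-weight terms that are rare across documents
```

Note how TF/IDF weighting drives the weight of terms appearing in every document (such as “the” and “today” here) toward zero, while terms unique to one document keep their weight.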

Dictionary-based Content Analysis
 The algorithm, pros, cons
 Overview of the quanteda and readtext packages
 Creating, viewing, and summarizing a corpus
 Keywords in Context (KWIC)
 Tokenizing words, sentences, punctuation, symbols, etc.
 Document-feature matrices
 Finding popular terms
 Taking advantage of text diversity measures
 Discovering useful phrases (n-grams)
 Creating & applying a phrase dictionary
 Advantages & dangers of using a stop list
 Creating and applying a thesaurus
 Finding words that differentiate documents (TF/IDF)
 Advantages & dangers of stemming & lemmatization
 Creating & applying a topic dictionary
 Adding scores back to main data set for mixedmethods analyses
 Finding & studying documents with zero topics
 Studying topic pairs for potential combinations
 Studying single topics for potential splits
 What we can learn from “big talkers”
 Summarizing topics using tidytext
 Visualizing topics using ggplot2
 Visualizing topics using word clouds
 Extracting and studying document subsets

Latent Semantic Analysis (LSA)
 Overview of the algorithm (detailed math optional)
 Pros & cons of this approach
 Converting quanteda’s document-feature matrices into the term-document matrices needed by the lsa package
 Applying local and global weights (TF/IDF, Entropy)
 Creating the maximum LSA space
 Plotting scores to estimate number of topics
 Creating a reduced LSA space
 Interpreting the topics (or factors)
 Adding factor scores to original data for mixedmethods analyses
 Scoring a new set of documents (careful!)

Latent Dirichlet Allocation (detailed math optional)
 The algorithm, pros, cons
 Converting quanteda’s document-feature matrix into the document-term matrix needed by the topicmodels package
 Performing the analysis
 Finding top words for each topic using tidytext
 Visualizing the topics using ggplot2
 Combining scores with original data for mixedmethods analyses

Analyzing the style of writing (stylometry)
 Developing a style guide
 Repeating above analyses based on style
 Determining authorship

Using Standard Dictionaries
 Importing popular formats such as WordStat
 Sentiment analysis – how happy were these people?
 The Lie Scale – are they telling the truth?
 Psychological scales for depression, etc.
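As one example of a standard dictionary, quanteda ships with the Lexicoder Sentiment Dictionary (data_dictionary_LSD2015), which can be applied with tokens_lookup() just like a custom dictionary; the two documents below are invented for illustration.

```r
library(quanteda)

toks <- tokens(c(r1 = "The service was wonderful and friendly.",
                 r2 = "A terrible, disappointing experience."))

# Count positive and negative terms in each document
dfm(tokens_lookup(toks, dictionary = data_dictionary_LSD2015))
```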

Comparison with commercial packages
 WordStat
 SAS Text Miner
 SPSS Text Analytics for Surveys
 Summary of topics learned
Here is a slide show of previous workshops.