The R language offers many ways to perform various analyses on text data such as text mining, content analysis, topic modeling, semantic analysis and analysis of style (stylometry), including forensic analysis to determine authorship. This full-day workshop covers those topics using three popular approaches.
Dictionary-based content analysis is the simplest: develop a “dictionary” of words and phrases that define various topics or concepts, and have the computer “code” your documents to uncover the topics they contain. In the end, you know the number or percent of documents that included each topic. This simple approach can succeed where more advanced methods fail, such as with smaller sets of documents, or short-answer survey items that lack the co-occurrence of terms that other methods rely on. This approach also excels at preparing data for more complex approaches.
Latent Semantic Analysis (LSA) identifies the topics in a set of documents by automatically detecting sets of words that define each topic. For example, if it sees the words judge, jury, court, and lawyer frequently appearing near one another, it will create a numeric variable that measures the amount of that topic in each document. Seeing that combination, you might name the new variable “justice.” It essentially applies the method of factor analysis to text data.
Latent Dirichlet Allocation (LDA) results in a similar set of numeric measures for each topic, but in this case the numeric values are probabilities that each document contains the topic.
Most of our time will be spent working through examples that you may run simultaneously on your computer. You will see both the instructor’s screen and yours, as we run the examples and discuss the output. However, the handouts include each step and its output, so feel free to skip the computing; it’s easy to just relax and take notes. The slides and programming steps are numbered so you can easily switch from computing to slides and back again.
This workshop is available at your organization’s site, or via webinars.
The on-site version is the most engaging by far, generating much discussion and occasionally veering off briefly to cover topics specific to a particular organization. The instructor presents a topic for around twenty minutes. Then we switch to exercises, which are already open in another tabbed window. The exercises contain hints that show the general structure of the solution; you adapt those hints to get the final solution. The complete solutions are in a third tabbed window, so if you get stuck the answers are a click away. The typical schedule for training on site is located here.
A webinar version is also available. This approach saves travel expenses and is especially useful for organizations with branch offices. It’s offered as two half-day sessions, often with a day or two skipped in between to give participants a chance to do the exercises and catch up on other work. There is time for questions on the lecture topics (live) and the exercises (via email). However, webinar participants are typically much less engaged, and far less discussion takes place.
For further details or to arrange a webinar or site visit, contact the instructor, Bob Muenchen, at email@example.com.
This workshop assumes a basic knowledge of R. Introductory knowledge of statistics is helpful, but not required.
When finished, participants will be able to use R to import documents in a variety of formats and analyze them with regard to topics or style.
Bob has served on the advisory boards of SAS Institute, SPSS Inc., StatAce OOD, Intuitics, the Statistical Graphics Corporation and PC Week Magazine (now eWeek). His suggested improvements have been incorporated into SAS, SPSS, JMP, STATGRAPHICS and several R packages. His research interests include statistical computing, data graphics and visualization, text analytics, and data mining.
On-site training is best done in a computer lab with a projector and, for large rooms, a PA system. The webinar version is delivered to your computer using Zoom (or a similar webinar system if your organization has a preference).
Course programs, data, and exercises will be sent to you a week before the workshop. The instructions include installing R, which you can download for free here: http://www.r-project.org/. We will also use RStudio, which you can download for free here: http://RStudio.com. If you already know a different R editor, that’s fine too.
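For example, the main add-on packages covered in the outline below could be installed ahead of time with a single command (the exact package list used in the workshop may differ; this set is assumed from the topics listed):

```r
# Assumed package list based on the workshop outline; the setup
# instructions sent before the workshop are the authoritative source.
install.packages(c("quanteda", "readtext", "lsa",
                   "topicmodels", "tidytext", "ggplot2"))
```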
Basic text analysis concepts
How to define a document
Terms / Tokens / Features
High / low frequency terms & their problems
Term frequency weighted by inverse document frequency (TF/IDF)
Bag of words
Topics: detecting, coding, scoring
Style & forensic analysis
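As a taste of these concepts, here is a minimal sketch using quanteda (the example documents are invented for illustration): tokenizing turns text into terms, the document-feature matrix is the bag-of-words representation, and TF/IDF weighting down-weights terms that appear in every document.

```r
library(quanteda)

# Two tiny invented documents
docs <- c(d1 = "The judge asked the jury to reach a verdict.",
          d2 = "The lawyer addressed the court and the jury.")

toks  <- tokens(docs, remove_punct = TRUE)  # terms / tokens / features
dfmat <- dfm(toks)                          # bag-of-words matrix

topfeatures(dfmat, 5)   # high-frequency terms
dfm_tfidf(dfmat)        # TF/IDF: down-weight terms common to all documents
```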
Dictionary-based Content Analysis
The algorithm, pros, cons
Overview of the quanteda and readtext packages
Creating, viewing, and summarizing a corpus
Keywords in Context (KWIC)
Tokenizing words, sentences, punctuation, symbols, etc.
Finding popular terms
Taking advantage of text diversity measures
Discovering useful phrases (n-grams)
Creating & applying a phrase dictionary
Advantages & dangers of using a stop list
Creating and applying a thesaurus
Finding words that differentiate documents (TF/IDF)
Advantages & dangers of stemming & lemmatization
Creating & applying a topic dictionary
Adding scores back to main data set for mixed-methods analyses
Finding & studying documents with zero topics
Studying topic pairs for potential combinations
Studying single topics for potential splits
What we can learn from “big talkers”
Summarizing topics using tidytext
Visualizing topics using ggplot2
Visualizing topics using word clouds
Extracting and studying document subsets
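The core of the dictionary-based workflow above can be sketched in a few lines of quanteda code. The two-topic dictionary and the documents here are hypothetical stand-ins; real dictionaries are far larger and are refined iteratively using tools like KWIC.

```r
library(quanteda)

docs <- c(d1 = "The judge and jury heard the case in court.",
          d2 = "Stocks rallied as the bank and the market gained ground.")

# A hypothetical two-topic dictionary; "*" matches word endings
dict <- dictionary(list(
  justice = c("judge", "jury", "court", "lawyer"),
  finance = c("stock*", "market", "bank*")
))

toks  <- tokens(docs, remove_punct = TRUE)
dfmat <- dfm(tokens_lookup(toks, dictionary = dict))

# Topic counts per document, ready to merge back into the main
# data set for mixed-methods analyses
convert(dfmat, to = "data.frame")
```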
Latent Semantic Analysis (LSA)
Overview of the algorithm (detailed math optional)
Pros & cons of this approach
Converting quanteda’s document-feature matrices into the term-document matrices needed by the lsa package
Applying local and global weights (TF/IDF, Entropy)
Creating the maximum LSA space
Plotting scores to estimate number of topics
Creating a reduced LSA space
Interpreting the topics (or factors)
Adding factor scores to original data for mixed-methods analyses
Scoring a new set of documents (careful!)
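The LSA steps above might look roughly like this sketch, which uses a corpus shipped with quanteda and the lsa package's built-in weighting functions (the choice of weights and dimension-selection rule here is one reasonable default, not the only option):

```r
library(quanteda)
library(lsa)

dfmat <- dfm(tokens(data_corpus_inaugural))  # example corpus from quanteda

# lsa expects a term-document matrix (terms in rows), so transpose
# quanteda's document-feature matrix
tdm <- t(as.matrix(dfmat))

# Local log weighting and a global IDF weight
tdm_w <- lw_logtf(tdm) * gw_idf(tdm)

space <- lsa(tdm_w, dims = dimcalc_share())  # reduced LSA space
dim(space$dk)  # documents by retained dimensions (topics/factors)
```

The columns of `space$dk` are the factor scores that can be added back to the original data.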
Latent Dirichlet Allocation (LDA; detailed math optional)
The algorithm, pros, cons
Converting quanteda’s document-feature matrix into the document-term matrix needed by the topicmodels package
Performing the analysis
Finding top words for each topic using tidytext
Visualizing the topics using ggplot2
Combining scores with original data for mixed-methods analyses
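The LDA workflow above can be sketched as follows; the number of topics `k = 4` is an arbitrary illustration value that would be tuned in practice:

```r
library(quanteda)
library(topicmodels)
library(tidytext)
library(dplyr)

dfmat <- dfm(tokens(data_corpus_inaugural, remove_punct = TRUE))
dfmat <- dfm_remove(dfmat, stopwords("en"))

# Convert quanteda's document-feature matrix to the document-term
# matrix format that topicmodels requires
dtm <- convert(dfmat, to = "topicmodels")

fit <- LDA(dtm, k = 4, control = list(seed = 1234))  # k chosen arbitrarily

# tidytext turns the fit into per-topic word probabilities ("beta")
tidy(fit, matrix = "beta") %>%
  group_by(topic) %>%
  slice_max(beta, n = 5)   # top words for each topic
```

The same tidy output feeds directly into ggplot2 for the visualization step.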