The Popularity of Data Science Software

by Robert A. Muenchen

Abstract

This article, formerly known as The Popularity of Data Analysis Software, presents various ways of measuring the popularity or market share of software for advanced analytics. Such software is also referred to as tools for data science, statistical analysis, machine learning, artificial intelligence, predictive analytics, or business analytics, and is also considered a subset of business intelligence. Software covered includes:

Actuate, Alpine, Alteryx, Angoss, Apache Flink, Apache Hive, Apache Mahout, Apache MXNet, Apache Pig, Apache Spark, BMDP, C, C++ or C#, Caffe, Cognos, DataRobot, Domino Data Labs, Enterprise Miner, FICO, FORTRAN, H2O, Hadoop, InfoCentricity or Xeno, Java, JMP, Julia, KNIME, Lavastorm, MATLAB, Megaputer or PolyAnalyst, Microsoft, Minitab, NCSS, Oracle Data Miner, Prognoz, Python, R, RapidMiner, Salford SPM, SAP, SAS, Scala, Spotfire, SPSS, SPSS Modeler, SQL, Stata, Statgraphics, Statistica, Systat, Tableau, Tensorflow, Teradata, Vowpal Wabbit, WEKA/Pentaho, and XGboost.

Updates: The most recent update was to the Scholarly Articles section, on 6/19/2017.
I announce the updates to this article on Twitter: http://twitter.com/BobMuenchen

 Introduction

When choosing a tool for data analysis, now more commonly referred to as analytics or data science, there are many factors to consider:

  • Does it run natively on your computer?
  • Does the software provide all the methods you need? If not, how extensible is it?
  • Does its extensibility use its own unique language, or an external one (e.g. Python, R) that is commonly accessible from many packages?
  • Does it fully support the style (programming, or menus and dialog boxes, or workflow diagrams) that you like?
  • Are its visualization options (e.g. static vs. interactive) adequate for your problems?
  • Does it provide output in the form you prefer (e.g. cut & paste into a word processor vs. LaTeX integration)?
  • Does it handle large enough data sets?
  • Do your colleagues use it so you can easily share data and programs?
  • Can you afford it?

 

There are many ways to measure popularity or market share and each has its advantages and disadvantages. In rough order of the quality of the data, these include:

  • Job Advertisements
  • Scholarly Articles
  • IT Research Firm Reports
  • Surveys of Use
  • Books
  • Blogs
  • Discussion Forum Activity
  • Programming Popularity Measures
  • Sales & Downloads
  • Competition Use
  • Growth in Capability

Let’s examine each of them in turn.

Job Advertisements

One of the best ways to measure the popularity or market share of software for data science is to count the number of job advertisements for each. Job advertisements are rich in information and are backed by money so they are perhaps the best measure of how popular each software is now. Plots of job trends give us a good idea of what is likely to become more popular in the future.

Indeed.com is the biggest job site in the U.S., making its collection the best around. As their  co-founder and former CEO Paul Forster stated, Indeed.com includes “all the jobs from over 1,000 unique sources, comprising the major job boards – Monster, Careerbuilder, Hotjobs, Craigslist – as well as hundreds of newspapers, associations, and company websites.” Indeed.com also has superb search capabilities and it includes a tool for tracking long-term trends.

Searching for jobs using Indeed.com is easy, but searching for software in a way that ensures fair comparisons across packages is tricky. Some software is used only for data science (e.g. SPSS, Apache Spark) while other software is used both in data science jobs and more broadly in report-writing jobs (e.g. SAS, Tableau). General-purpose languages (e.g. C, Java) are heavily used in data science jobs, but the vast majority of jobs that use them have nothing to do with data science. To level the playing field I developed a protocol to focus the search for each software within only jobs for data scientists. The details of this protocol are described in a separate article, How to Search for Data Science Jobs. All of the graphs in this section use those procedures to make the required queries.
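If you would like to experiment with this approach yourself, the minimal Python sketch below illustrates the general idea: pair each software term with data science terms so that general-purpose tools are counted only within data science jobs. The specific terms and the URL format shown here are illustrative assumptions, not the exact protocol, which is documented in How to Search for Data Science Jobs.

# A minimal sketch of the idea behind the job-search protocol. The data science
# terms and the URL format below are assumptions for illustration only; the
# report's actual queries are described in the companion article.
from urllib.parse import urlencode

DATA_SCIENCE_TERMS = '"data scientist" or "machine learning" or "predictive analytics"'

def job_query_url(software_term):
    """Build a hypothetical job-search URL restricting a software term to data science jobs."""
    query = '"{}" and ({})'.format(software_term, DATA_SCIENCE_TERMS)
    return "https://www.indeed.com/jobs?" + urlencode({"q": query})

for term in ["SAS", "SPSS", "Java"]:
    print(job_query_url(term))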

I collected the job counts discussed in this section on February 24, 2017. One might think that a sample taken on a single day might not be very stable, but the large number of job sources makes the counts in Indeed.com’s collection of jobs quite consistent. The last time I collected this data was February 20, 2014, and the software searched using the same protocol then (the general-purpose languages) yielded quite similar results: counts grew between 7% and 11%, and correlated r=.94, p=.002.

Figure 1a shows that SQL is in the lead with nearly 18,000 jobs, followed by Python and Java in the 13,000’s. Hadoop comes next with just over 10,000 jobs, then R, the C variants, and SAS. (C, C++, and C# are combined in a single search since job advertisements usually seek any of them.) This is the first time this report has shown more jobs for R than SAS, but keep in mind these are jobs specific to data science. If you open up the search to include jobs for report writing, you’ll find twice as many SAS jobs.

Next comes Apache Spark, which was too new to be included in the 2014 report. It has come a long way in an incredibly short time. For a detailed analysis of Spark’s status, see Spark is the Future of Analytics, by Thomas Dinsmore.

Tableau follows, with around 5,000 jobs. The 2014 report excluded Tableau due to its jobs being dominated by report writing. Including report writing will quadruple the number of jobs for Tableau expertise to just over 20,000.

Figure 1a. The number of data science jobs for the more popular software (those with 250 jobs or more, 2/2017).

Apache Hive is next, with around 3,900 jobs, then a very diverse set of software comes next, with Scala, SAP, MATLAB, and SPSS, each having just over 2,500 data science jobs. After those, we see a slow decline from Teradata on down.

Much of the software had fewer than 250 job listings. When displayed on the same graph as the industry leaders, their job counts appear to be zero; therefore I have plotted them separately in Figure 1b. Alteryx comes out as the leader of this group with 240 jobs. Microsoft was a difficult search since it appears in data science ads that mention other Microsoft products such as Windows or SQL Server. To eliminate such over-counting, I treated Microsoft differently from the rest by including product names such as Azure Machine Learning and Microsoft Cognitive Toolkit. So there’s a good chance I went from over-emphasizing Microsoft to under-emphasizing it, with only 157 jobs.

Figure 1b. The number of analytics jobs for the less popular software (under 250 jobs, 2/2017).

Next comes the fascinating new high-performance language Julia. I added FORTRAN just for fun and was surprised to see it still hanging in there after all these years. Apache Flink is also in this grouping; all three have around 125 jobs.

H2O follows, with just over 100 jobs.

I find it fascinating that SAS Enterprise Miner, RapidMiner, and KNIME appear with a similar number of jobs (around 90). Those three share a similar workflow user interface that makes them particularly easy to use. The companies advertise the software as not needing much training, so companies may feel little need to hire outside expertise if their existing staff can pick it up easily. SPSS Modeler also uses that type of interface, but its job count is about half that of the others, at 50 jobs.

Bringing up the rear is Statistica, which was sold to Dell, then sold to Quest. Its 36 jobs trail far behind its similar competitor, SPSS, which has a staggering 74-fold job advantage.

The open source MXNet deep learning framework shows up next with 34 jobs. Tensorflow is a similar project with a 12-fold job advantage, but both are young enough that I expect them to grow rapidly in the future.

In the final batch that has few, if any, jobs, we see a few newcomers such as DataRobot and Domino Data Labs. Others have been around for years, leaving us to wonder how they manage to stay afloat given all the competition.

It’s important to note that the values shown in Figures 1a and 1b are single points in time. The number of jobs for the more popular software does not change much from day to day. Therefore the relative rankings of the software shown in Figure 1a are unlikely to change much over the coming year. The less popular packages shown in Figure 1b have such low job counts that their rankings are more likely to shift from month to month, though their position relative to the major packages should remain more stable.

Each software has an overall trend that shows how the demand for jobs changes across the years. You can plot these trends using Indeed.com’s Job Trends tool. However, as before, focusing just on analytics jobs requires carefully constructed queries, and when comparing two trends at a time, they both have to fit in the same query limit. Those details are described here.

I’m particularly interested in trends involving R so let’s see how it compares to SAS. In Figure 1c we see that the number of data science jobs for SAS has remained relatively flat from 2012 until February 28, 2017 when I made this plot. During that same period, jobs for R grew steadily and finally surpassed jobs for SAS in early 2016. As noted in a blog post (and elsewhere in this report), use of R in scholarly publications surpassed those for SAS in 2015.

Figure 1c. Data science job trends for R (blue) and SAS (orange).

A long-standing debate has been taking place on the Internet regarding the relative place of Python and R. Ironically, this debate about data science software has involved very little actual data. However, it is possible now to at least study the job trends. Figure 1a showed us that Python is well out in front of R, at least on that single day the searches were run. What has the data looked like over time? The answer is shown in Figure 1d.

Figure 1d. Jobs trends for R (blue & lower) and Python (orange & upper).

As we see, Python surpassed R in terms of data science jobs back in 2013. These are, of course, very different languages and a quick scan of job descriptions will show that the R jobs are much more focused on the use of existing methods of analysis, while the Python jobs have more of a custom-programming angle to them.

 

Scholarly Articles

Scholarly articles provide a rich source of information about data science tools. Their creation requires significant amounts of effort, much more than is required to respond to a survey of tool usage. The more popular a software package is, the more likely it will appear in scholarly publications as an analysis tool, or even an object of study.

Since graduate students do the great majority of analysis in such articles, the software used can be a leading indicator of where things are headed. Google Scholar offers a way to measure such activity. However, no search of this magnitude is perfect; each will include some irrelevant articles and reject some relevant ones. Searching through concise job requirements (see previous section) is easier than searching through scholarly articles; however only software that has advanced analytical capabilities can be studied using this approach. The details of the search terms I used are complex enough to move to a companion article, How to Search For Data Science Articles.  Since Google regularly improves its search algorithm, each year I re-collect the data for the previous years.

Figure 2a shows the number of articles found for the more popular software packages (those with at least 750 articles) in the most recent complete year, 2016. To allow ample time for publication, insertion into online databases, and indexing, the data was collected on 6/8/2017.

SPSS is by far the most dominant package, as it has been for over 15 years. This may be due to its balance between power and ease-of-use. R is in second place with around half as many articles. SAS is in third place, still maintaining a substantial lead over Stata, MATLAB, and GraphPad Prism, which are nearly tied. This is the first year that I’ve tracked Prism, a package that emphasizes graphics but also includes statistical analysis capabilities. It is particularly popular in the medical research community where it is appreciated for its ease of use. However, it offers far fewer analytic methods than the other software at this level of popularity.

Note that the general-purpose languages C, C++, C#, FORTRAN, MATLAB, Java, and Python are included only when found in combination with data science terms, so view those counts as more of an approximation than the rest.

Figure 2a. Number of scholarly articles found in the most recent complete year (2016) for the more popular data science software. To be included, software must be used in at least 750 scholarly articles.

The next group of packages goes from Apache Hadoop through Python, Statistica, Java, and Minitab, slowly declining as they go.

Both Systat and JMP are packages that have been on the market for many years, but which have never made it into the “big leagues.”

From C through KNIME, the counts appear to be near zero, but keep in mind that each is used in at least 750 journal articles. However, compared to the 86,500 that used SPSS, they’re a drop in the bucket.

Toward the bottom of Fig. 2a are two similar packages, the open source Caffe and Google’s Tensorflow. These two focus on “deep learning” algorithms, an area that is fairly new (at least the term is) and growing rapidly.

The last two packages in Fig 2a are RapidMiner and KNIME. It has been quite interesting to watch the competition between them unfold for the past several years. They are both workflow-driven tools with very similar capabilities. The IT advisory firms Gartner and Forrester rate them as tools able to hold their own against the commercial titans, SPSS and SAS. Given that SPSS has roughly 75 times the usage in academia, that seems like quite a stretch. However, as we will soon see, usage of these newcomers is growing, while use of the older packages is shrinking quite rapidly. This plot shows RapidMiner with nearly twice the usage of KNIME, despite the fact that KNIME has a much more open source model.

Figure 2b shows the results for software used in fewer than 750 articles in 2016. This change in scale allows room for the “bars” to spread out, letting us make comparisons more effectively. This plot contains some fairly new software whose use is low but growing rapidly, such as Alteryx, Azure Machine Learning, H2O, Apache MXNet, Amazon Machine Learning, Scala, and Julia. It also contains some software that has either declined from one-time greatness, such as BMDP, or is stagnating at the bottom, such as Lavastorm, Megaputer, NCSS, SAS Enterprise Miner, and SPSS Modeler.

Figure 2b. The number of scholarly articles for the less popular data science software (those used in fewer than 750 scholarly articles in 2016).

While Figures 2a and 2b are useful for studying market share as it stands now, they don’t show how things are changing. It would be ideal to have long-term growth trend graphs for each of the analytics packages, but collecting that much data annually is too time consuming. What I’ve done instead is collect data only for the past two complete years, 2015 and 2016. This provides the data needed to study year-over-year changes.

Figure 2c shows the percent change across those years, with the “hot” packages whose use is growing shown in red (right side); those whose use is declining or “cooling” are shown in blue (left side). Since the number of articles tends to be in the thousands or tens of thousands, I have removed any software that had fewer than 500 articles in 2015. A package that grows from 1 article to 5 may demonstrate 400% growth, but is still of little interest.

 

Figure 2c. Change in the number of scholarly articles using each software in the most recent two complete years (2015 to 2016). Packages shown in red are “hot” and growing, while those shown in blue are “cooling down” or declining.
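If you would like to reproduce this kind of comparison, the calculation itself is simple: the percent change from 2015 to 2016, with low-count software removed. The short Python sketch below uses made-up article counts purely to illustrate the arithmetic; the real counts come from the Google Scholar searches described earlier.

# A sketch of the year-over-year comparison behind Figure 2c.
# The article counts here are hypothetical placeholders.
articles = {                       # software: (2015 count, 2016 count)
    "ExampleToolA": (2000, 2900),
    "ExampleToolB": (80000, 62000),
    "ExampleToolC": (300, 900),    # under 500 articles in 2015, so it is dropped
}

changes = {
    name: 100 * (y2016 - y2015) / y2015
    for name, (y2015, y2016) in articles.items()
    if y2015 >= 500                # remove software with fewer than 500 articles in 2015
}

for name, pct in sorted(changes.items(), key=lambda kv: kv[1], reverse=True):
    print("{}: {:+.1f}% ({})".format(name, pct, "hot" if pct > 0 else "cooling"))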

Caffe is the data science tool with the fastest growth, at just over 150%. This reflects the rapid growth in the use of deep learning models in the past few years. The similar products Apache MXNet and H2O also grew rapidly, but they were starting from a mere 12 and 31 articles respectively, and so are not shown.

IBM Watson grew 91%, which came as a surprise to me as I’m not quite sure what it does or how it does it, despite having read several of IBM’s descriptions about it. It’s awesome at Jeopardy though!

While R’s growth was a “mere” 14.7%, it was already so widely used that the percent translates into a very substantial count of 5,300 additional articles.

In the RapidMiner vs. KNIME contest, we saw previously that RapidMiner was ahead. From this plot we also see that it’s continuing to pull away from KNIME with quicker growth.

From Minitab on down, the software is losing market share, at least in academia. The variants of C and Java are probably losing out a bit to competition from several different types of software at once.

In just the past few years, Statistica was sold by Statsoft to Dell, then Quest Software, then Francisco Partners, then Tibco! Did its declining usage drive those sales? Did the game of musical chairs scare off potential users? If you’ve got an opinion, please comment below or send me an email.

The biggest losers are SPSS and SAS, both of which declined in use by 25% or more. Recall that Fig. 2a shows that despite recent years of decline, SPSS is still extremely dominant for scholarly use.

I’m particularly interested in the long-term trends of the classic statistics packages. So in Figure 2d I have plotted the same scholarly-use data for 1995 through 2016.

Figure 2d. The number of scholarly articles found in each year by Google Scholar. Only the top six “classic” statistics packages are shown.

As in Figure 2a, SPSS has a clear lead overall, but now you can see that its dominance peaked in 2009 and its use is in sharp decline. SAS never came close to SPSS’ level of dominance, and its use peaked around 2010. GraphPad Prism followed a similar pattern, though it peaked a bit later, around 2013.

Note that the decline in the number of articles that used SPSS, SAS, or Prism is not balanced by the increase in the other software shown in this particular graph. Even adding up all the other software shown in Figures 2a and 2b doesn’t account for the overall decline. However, I’m looking at only 46 out of over 100 data science tools. SQL and Microsoft Excel could be taking up some of the slack, but it is extremely difficult to focus Google Scholar’s search on articles that used either of those two specifically for data analysis.

Since SAS and SPSS dominate the vertical space in Figure 2d by such a wide margin, I removed those two curves, leaving only two points of SAS usage in 2015 and 2016. The result is shown in Figure 2e.

 

Figure 2e. The number of scholarly articles found in each year by Google Scholar for classic statistics packages after the curves for SPSS and SAS have been removed.

Freeing up so much space in the plot allows us to see that the growth in the use of R is quite rapid and is pulling away from the pack. If the current trends continue, R will overtake SPSS to become the #1 software for scholarly data science use by the end of 2018. Note however, that due to changes in Google’s search algorithm, the trend lines have shifted before as discussed here. Luckily, the overall trends on this plot have stayed fairly constant for many years.

The rapid growth in Stata use seems to be finally slowing down. Minitab’s growth also seems to have stalled in 2016, as has Systat’s. JMP appears to have had a bit of a dip in 2015, from which it is recovering.

These results apply to scholarly articles in general. The results in specific fields or journals are very likely to be different. Denis Haine extracted similar data for epidemiology journals and even focused on Bayesian software in a similar post.

 

IT Research Firms

IT research firms study software products and corporate strategies, survey customers regarding their satisfaction with the products and services, and then provide their analysis in reports they sell to their clients. Each company has its own criteria for rating companies, so they don’t always agree. However, I find the reports extremely interesting reading. While these reports are expensive, the companies that receive good ratings usually purchase copies to give away to potential customers. An Internet search of the report title will often reveal the companies that are distributing such copies.

Gartner, Inc. is one of the companies that provides such reports.  Out of the roughly 100 companies selling data science software, Gartner selected 16 which had either high revenue or lower revenue but high growth (see full report for details). After extensive input from both customers and company representatives, Gartner analysts rated the companies on their “completeness of vision” and their “ability to execute” that vision. Figure 3 shows the resulting plot. Note that purely open source software is not rated by Gartner, but nearly all the software in Figure 3 includes the ability to interact with R and Python.

The Leaders quadrant is the place for companies that have a future direction in line with their customers’ needs and the resources to execute that vision. The four companies in the Leaders quadrant have remained the same for the last three reports: IBM, KNIME, RapidMiner, and SAS. Of these, Gartner rates IBM as having slightly greater “completeness of vision” than SAS Institute due to the extensive integration it offers to open source software. KNIME and RapidMiner are quite similar, as both are driven by an easy-to-use workflow interface. Both offer free and open source versions, but RapidMiner’s is limited by a cap on the amount of data that it can analyze (10,000 cases). IBM and SAS are market leaders based on revenue and, as we have seen, KNIME and RapidMiner are the ones with high growth.

Figure 3a. Gartner Magic Quadrant for Data Science Platforms

The companies in the Visionaries quadrant are those that have good future plans but may not have the resources to execute that vision. Of these, Microsoft increased its ability to execute compared to the 2016 report, and Alpine Data, one of the smallest companies, declined sharply in its ability to execute. The remaining three companies in this quadrant have just been added: H2O.ai, Dataiku, and Domino Data Lab.

Those in the Challengers quadrant have ample resources but less customer confidence in their future plans. Mathworks, the maker of MATLAB, is new to the report. Quest purchased Statistica from Dell, and it appears in roughly the same position as Dell did last year.

Members of the Niche Players quadrant offer tools that are not as broadly applicable.

In 2017 Gartner dropped coverage of Accenture, Lavastorm, Megaputer, Predixion Software, and Prognoz.

For more details, see the original report and this analysis by Thomas Dinsmore.

Forrester Research, Inc. is another company that provides such reports. The conclusions from their 2017 report, Forrester Wave: Predictive Analytics and Machine Learning Solutions, are summarized in Figure 3b. On the x-axis they list the strength of each company’s strategy, while the y-axis measures the strength of their current offering. The size and shading of the circles around each data point indicate the strength of each vendor in the marketplace (70% vendor size, 30% ISV and service partners).

As with the Gartner 2017 report discussed above, IBM, SAS, KNIME, and RapidMiner are considered leaders. However, Forrester sees several more companies in this category: Angoss, FICO, and SAP. This is quite different from the Gartner analysis, which places Angoss and SAP in the middle of the pack, while FICO is considered a niche player.

Figure 3b. Forrester Wave plot of predictive analytics and machine learning software.

In their Strong Performers category, they have H2O.ai, Microsoft, Statistica, Alpine Data, Dataiku, and, just barely, Domino Data Labs. Gartner rates Dataiku quite a bit higher, but the two firms generally agree on the others. The exception is that Gartner dropped coverage of Alpine Data in 2017. Finally, Salford Systems is in the Contenders section. Salford was recently purchased by Minitab, a company that has never been rated by either Gartner or Forrester before, as it focused on being a statistics package rather than expanding into machine learning or artificial intelligence tools as most other statistics packages have (another notable exception: Stata). It will be interesting to see how they’re covered in future reports.

Compared to last year’s Forrester report, KNIME shot up from barely being a Strong Performer into the Leader’s segment. RapidMiner and FICO moved from the middle of the Strong Performers segment to join the Leaders. The only other major move was a lateral one for Statistica, whose score on Strategy went down while its score on Current Offering went up (last year Statistica belonged to Dell, this year it’s part of Quest Software.)

The size of the “market presence” circle for RapidMiner indicates that Forrester views its position in the marketplace to be as strong as that of IBM and SAS. I find that perspective quite a stretch indeed!

For a wonderfully detailed analysis of Forrester’s 2017 report, see Thomas Dinsmore’s blog.

Alteryx, Oracle, and Predixion were all dropped from this year’s Forrester report. They mention Alteryx and Oracle as having “capabilities embedded in other tools,” implying that that is not the focus of this report. No mention was made of why Predixion was dropped, but considering that Gartner also dropped coverage of them in 2017, it doesn’t bode well for the company.

Hurwitz & Associates released their Advanced Analytics: The Hurwitz Victory Index Report in mid 2014. Figure 3c shows their plot of strength of company strategy vs. viability. This is similar to the measures plotted in the Gartner Group’s Magic Quadrant plot (Fig. 3a). These two plots both show IBM and SAS in the best position, but after that there’s not much similarity. Gartner sees RapidMiner in the same ballpark as IBM and SAS, while Hurwitz shows it towards the opposite end. KNIME is also toward the top of Gartner’s plot and not covered by Hurwitz at all. This plot still has Predixion on it, but in dead last position (though labeled as a strong contender for Leaders status!)


Figure 3c. Hurwitz & Associates plot of corporate vision vs. viability.

Surveys of Use

Survey data add additional information regarding software popularity, but surveys are commonly done using “snowball sampling,” in which the survey provider tries to widely distribute the link and vendors then vie to see who can get the most of their users to participate. So long as they all do so with equal effect, the results can be useful. However, the information is often limited because the questions are short and precise (e.g. “tools for data mining” or “program languages for data mining”) and responding requires just a few mouse clicks, rather than the commitment required to place a job advertisement or publish a scholarly article, book, or blog post. As a result, it’s not unusual to see market share jump 100% or drop 50% in a single year, which is very unlikely to reflect changes in actual use.

Rexer Analytics conducts a survey of data scientists every other year, asking a wide range of questions regarding data science (previously referred to as data mining by the survey itself.) Figure 4a shows the tools that the 1,220 respondents reported using in 2015.


Figure 4a. Analytics tools used by respondents to the 2015 Rexer Analytics Survey. In this view, each respondent was free to check multiple tools.

We see that R has a more than 2-to-1 lead over the next most popular packages, SPSS Statistics and SAS. Microsoft’s Excel Data Mining software is slightly less popular, but note that it is rarely used as the primary tool. Tableau comes next, also rarely used as the primary tool. That’s to be expected as Tableau is principally a visualization tool with minimal capabilities for advanced analytics.

The next batch of software appears at first to be all in the 15% to 20% range, but KNIME and RapidMiner are listed both in their free versions and, much further down, in their commercial versions. These data come from a “check all that apply” type of question, so if we add the two amounts, we may be over counting. However, the survey also asked,  “What one (my emphasis) data mining / analytic software package did you use most frequently in the past year?”   Using these data, I combined the free and commercial versions and plotted the top 10 packages again in figure 4b. Since other software combinations are likely, e.g. SAS and Enterprise Miner; SPSS Statistics and SPSS Modeler; etc. I combined a few others as well.


Figure 4b. The percent of survey respondents who checked each package as their primary tool in 2015. Note that free and commercial versions of KNIME and RapidMiner are combined. Multiple tools from the same company are also combined. Only the top 10 are shown.

In this view we see R even more dominant, with a 3-to-1 advantage compared to the software from IBM SPSS and SAS Institute. However, the overall ranking of the top three didn’t change. KNIME however rises from 9th place to 4th. RapidMiner rises as well, from 10th place to 6th. KNIME has roughly a 2-to-1 lead over RapidMiner, even though these two packages have similar capabilities and both use a workflow user interface. This may be due to RapidMiner’s move to a more commercially oriented licensing approach. For free, you can still get an older version of RapidMiner or a version of the latest release that is quite limited in the types of data files it can read. Even the academic license for RapidMiner is constrained by the fact that the company views “funded activity” (e.g. research done on government grants) the same as commercial work. The KNIME license is much more generous as the company makes its money from add-ons that increase productivity, collaboration and performance, rather than limiting analytic features or access to popular data formats.

The results of a similar poll done by the KDnuggets.com web site in May of 2015 are shown in Figure 4c. This one shows R in first place with 46.9% of users reporting having used it for a “real project.” RapidMiner, SQL, and Python follow quite a bit lower with around 30% of users. Then at around 20% are Excel, KNIME, and Hadoop. It’s interesting to see that these survey results reverse the order seen in the previous one, showing RapidMiner as being more popular than KNIME. Both are still the top two “point-and-click” type packages generally used by non-programmers.


Figure 4c. Percent of respondents who used each software in KDnuggets’ 2015 poll. Only software with at least 5% market share is shown. The “alone” percentage is the percent of a tool’s users who used only that tool. For example, only 3.6% of R users used only R, while 13.7% of RapidMiner users indicated they used that tool alone. Years are color coded, with 2015, 2014, and 2013 from top to bottom.

O’Reilly Media conducts an annual Data Science Salary Survey, which also asks questions about analytics tools. As their report notes, “O’Reilly content—in books, online, and at conferences—is focused on technology, in particular new technology, so it makes sense that our audience would tend to be early adopters of some of the newer tools.” The results from their “over 600” respondents are shown in Figures 4d and 4e.


Figure 4d. Tools used by respondents to O’Reilly’s 2015 salary survey. The less popular tools among this audience are shown in the following figure.


Figure 4e. The less popular tools used by the respondents of O’Reilly’s 2015 salary survey.

The O’Reilly results have SQL in first place with 70% of users reporting it, followed closely by Excel. Python and R follow seemingly tied for third place with 55%. However, Python also appears in 6th place with its subroutine libraries numpy, etc., and R’s popular ggplot package appears in 7th place, with around 38% market share. The first commercial package with deep analytic capabilities is SAS in 23rd place!  This emphasizes that the O’Reilly sample is heavily weighted towards their usual open source audience. Hopefully in the future they will advertise the survey to a wide audience and do so as more than just a salary survey. Tool surveys gain additional respondents since they are advertised by advocates of the various tools (vendors, fans, etc.)

Lavastorm, Inc. conducted a survey of analytics communities, including LinkedIn’s Lavastorm Analytics Community Group, Data Science Central, and KDnuggets. The results were published in March 2013, and the bar chart of “self-service analytic tool” usage among their respondents is shown in Figure 4f. Excel comes out as the top tool, with 75.6% of respondents reporting its use.

R comes out as the top advanced analytics tool with 35.3% of respondents, followed closely by SAS. MS Access’ position in 4th place is a bit of an outlier, as no other surveys include it at all. Lavastorm comes out with 3.4%, while other surveys don’t show it at all. That’s hardly a surprise given that the survey was aimed at Lavastorm’s LinkedIn community group.


Figure 4f. Lavastorm survey of analytics tools.

Books

The number of books that include a software’s name in the title is a particularly useful measure, since it requires significant effort to write one and publishers do their own study of market share before taking the risk of publishing. However, it can be difficult to search for books that use general-purpose languages and also focus only on analytics. Amazon.com offers an advanced search method which works well for all the software except R and the general-purpose languages such as Java, C, and MATLAB. I did not find a way to easily search for books on analytics that used such general-purpose languages, so I’ve excluded them from this section.

The Amazon.com advanced search configuration that I used was (using SAS as an example):

Title: SAS -excerpt -chapter -changes -articles 
Subject: Computers & Technology
Condition: New
Format: All formats
Publication Date: After January, 2000

The “title” parameter allowed me to focus the search on books that included the software names in their titles. Other books may use a particular software in their examples, but they’re impossible to search for easily.  SAS has many manuals for sale as individual chapters or excerpts. They contain “chapter” or “excerpt” in their title so I excluded them using the minus sign, e.g. “-excerpt”. SAS also has short “changes and enhancements” booklets that the developers of other packages release only in the form of flyers and/or web pages, so I excluded “changes” as well. Some software listed brief “articles” which I also excluded. I did the search on June 1, 2015, and I excluded excerpts, chapters, changes, and articles from all searches.

“R” is a difficult term to search for since it’s used in book titles to indicate Registered Trademark as in “SAS(R)”. Therefore I verified all the R books manually.

The results are shown in the table immediately below, where it’s clear that a very small number of analytics software packages dominate the world of book publishing. SAS has a huge lead with 576 titles, followed by SPSS with 339 and R with 240. SAS and SPSS both have many versions of the same book or manual still for sale, so their numbers are both inflated as a result. JMP and Hadoop both had fewer than half of R’s count, and Minitab and Enterprise Miner had fewer than half again as many. Although I obtained counts on all 27 of the domain-specific (i.e. not general-purpose) analytics software packages or languages shown in Figure 2a, I cut the table off at software that had 8 or fewer books to save space.

Software        Number of Books 
SAS                  576
SPSS Statistics      339
R                    240    [Corrected from blog post: 172]
JMP                   97
Hadoop                89
Stata                 62
Minitab               33
Enterprise Miner      32

Table 1. The number of books whose titles contain the name of each software package.

 

Blogs

On Internet blogs, people write about software that interests them, showing how to solve problems and interpreting events in the field. Blog posts contain a great deal of information about their topic, and although it’s not as time consuming as a book to write, maintaining a blog certainly requires effort. Therefore, the number of bloggers writing about analytics software has potential as a measure of popularity or market share. Unfortunately, counting the number of relevant blogs is often a difficult task. General purpose software such as Java, Python, the C language variants and MATLAB have many more bloggers writing about general programming topics than just analytics. But separating them out isn’t easy. The name of a blog and the title of its latest post may not give you a clue that it routinely includes articles on analytics.

Another problem arises from the fact that what some companies would write up as a newsletter, others publish as a set of blogs, with several people in the company each contributing their own blog. Those individual blogs may also be combined into a single company blog, inflating the count further still. Statsoft and Minitab offer examples of this. So what’s really interesting is not company employees who are assigned to write blogs, but rather blogs written by outside volunteers. In a few lucky cases, lists of such blogs are maintained, usually by blog consolidators, who combine many blogs into a large “metablog.” All I have to do is find such lists and count the blogs. I don’t attempt to remove the few vendor employees that I know are blended into such lists; I only skip those lists that are exclusively employee-based (or very close to it). The results are shown here:

         Number
Software of Blogs Source
R         550     R-Bloggers.com
Python     60     SciPy.org
SAS        40     PROC-X.com, sasCommunity.org Planet
Stata      11     Stata-Bloggers.com

Table 2.  Number of blogs devoted to each software package on April 7, 2014,
and the source of the data.

R’s 550 blogs is quite an impressive number. For Python, I could only find a list of 60 blogs that were devoted to the SciPy subroutine library. Some of those likely cover topics besides analytics, but determining which never cover the topic would be quite time consuming. The 40 blogs about SAS is still an impressive figure given that Stata was the only other company that even garnered a list anywhere. That list is at the vendor itself, StataCorp, but it consists of non-employees except for one.

While searching for lists of blogs on other software, I did find individual blogs that at least occasionally covered a particular topic. However, keeping this list up to date is far too time consuming given the relative ease with which other popularity measures are collected.

If you know of other lists of relevant blogs, please let me know and I’ll add them. If you’re a software vendor employee reading this, and your company does not build a metablog or at least maintain a list of your bloggers, I recommend taking advantage of this important source of free publicity.

 

Discussion Forum Activity

Another way to measure software popularity is to see how many people are helping one another use each package or language. While such data is readily available, it too has its problems. Menu-driven software like SPSS or workflow-driven software such as KNIME is quite easy to use and tends to generate fewer questions. Software controlled by programming requires the memorization of many commands and so requires more support. Even within languages, some are harder to use than others, generating more questions (see Why R is Hard to Learn).

Another problem with this type of data is that there are many places to ask questions and each has its own focus. Some are interested in a classical statistics perspective while others have a broad view of software as general-purpose programming languages. In recent years, companies have set up support sites within their main corporate web site, further splintering the places you can go to get help. Usage data for such sites is not readily available.

Another problem is that it’s not as easy to use logic to focus in on specific types of questions as it was with the data from job advertisements and scholarly articles discussed earlier. It’s also not easy to get the data across time to allow us to study trends.  Finally, the things such sites measure include: software group members (a.k.a. followers), individual topics (a.k.a. questions or threads), and total comments across all topics (a.k.a. total posts). This makes combining counts across sites problematic.

Two of the biggest sites used to discuss software are LinkedIn and Quora. They both display the number of people who follow each software topic, so combining their figures makes sense. However, since the sites lack any focus on analytics, I have not collected their data on general purpose languages like Java, MATLAB, Python or variants of C. The results of data collected on 10/17/2015 are shown here:

LinkedIn_Quora_2015

Figure 7a. Number of people who follow each software on LinkedIn and Quora.

We see that R is the dominant software and that moving down through SAS, SPSS, and Stata results in a loss of roughly half the number of people in each step. Lavastorm follows Stata, but I find it odd that there was absolutely zero discussion of Lavastorm on Quora. The last bar that you can even see on this plot is the 62 people who follow Minitab. All the ones below that have tiny audiences of fewer than 10.

Next let’s examine two sites that focus only on statistical questions: Talk Stats and Cross Validated. They both report the number of questions (a.k.a. threads) for a given piece of software, allowing me to total their counts:

CrossValidated_TalkStats_2015

Figure 7b. Number of questions for each software on Talk Stats and Cross Validated. Those not shown had no questions.

We see that R has a 4-to-1 lead over the next most popular package, SPSS. Stata comes in at 3rd place, followed by SAS. The fact that SAS is in fourth place here may be because it is strong in data management and report writing, which are not the types of questions these two sites focus on. Although MATLAB and Python are general-purpose languages, I include them here because the questions on these sites are within the realm of analytics. Note that I collected data on as many packages as were shown in the previous graph, but those not shown have a count of zero. Julia appears to have a count of zero due to the scale of the graph, but it actually had 5 questions on Cross Validated.

Programming Popularity Measures

Several web sites rank the popularity of programming languages. Unfortunately, they don’t differentiate between general-purpose languages and application-specific ones used for analytics. However, it’s easy to pick the few analytics languages out of their results.

The most comprehensive of these sites is the IEEE Spectrum Ranking. This site combines 12 metrics from 10 different sites. These include some of the measures discussed above, such as popularity on job sites and search engines. They also include fascinating and useful measures such as how much new programming code was added to the popular GitHub repository in the last year. This figure shows their top 10 languages for 2015:


Figure 8a. IEEE Spectrum language popularity rankings. The left column (orange) shows the 2015 ranking, while the right (yellow) one shows the 2014 ranking.

We see that R is in 6th place and that it has risen from 9th place in 2014. Not shown in this figure is SAS, in 26th place. Python is ranked in 4th place, but that’s for all purposes, while the use of R is more focused on analytics. No other analytics-specific language makes it into their rankings at all. This ranking is based on a weighted composite score, and the site is interactive, allowing you to generate a ranking more suited to your needs.

The next most comprehensive analysis is provided by RedMonk. Their analysis is simple and objective. They plot the number of lines of code written in each language on the popular GitHub repository against the number of tagged comments on the discussion forum StackOverflow.com. Here is the result:


Figure 8b. RedMonk programming language popularity as measured by the number of projects on GitHub and the amount of discussion on StackOverflow.

Moving from the upper right corner down toward the lower left, we can see that RedMonk’s approach shows R as a very popular language, at around 12th place. Although a substantial amount of the metrics for Python, MATLAB, and Julia may be due to analytics use, we have no way of knowing how much.

The TIOBE Programming Community Index also ranks the popularity of programming languages. It extracts measurements from the 25 most popular search engines, including Google, YouTube, Wikipedia, and Amazon.com, and combines them into a single index. In their October 2015 rankings, they place R in 20th place and SAS in 23rd. Stata is in a bundle they call “the next 50” languages, whose popularity among general-purpose languages is so sparse that their relative rankings are too unstable to bother giving individual ranks. SPSS is a language they monitor, but it doesn’t make it into their top 100. This brings us to an important limitation of the TIOBE index: it searches for a single string, “X programming.” So if it didn’t find “SPSS programming,” then SPSS doesn’t count. The complex searches that I used for jobs and scholarly articles were far more useful in estimating each package’s popularity. Another limitation of the TIOBE index is that it measures what is on the Internet now, so it’s a lagging indicator. There’s no way to plot trends without purchasing their data, which is quite expensive.

A very similar popularity index is the PYPL PopularitY of Programming Language index. It tracks only the top 15 languages and, in October of 2015, it placed R in 11th place. It searches on the single string “X tutorial,” making it a leading indicator of what’s likely to be more popular in the future.

The Transparent Language Popularity Index is very similar to the TIOBE Index, except that its ranking software, algorithm, and data are published for all to see. Work on this index ceased as of July 2013.

 

Sales & Downloads

Sales figures reported by some commercial vendors include products that have little to do with analysis. Many vendors don’t release sales figures, or they release them in a form that combines many different products, making the examination of a particular product impossible. For open source software such as R, you could count downloads, but one confused person can download many copies, inflating the total. Conversely, many people can use a single download on a server, deflating it.

Download counts for the R-based Bioconductor project are located here. Similar figures for downloads of Stata add-ons (not Stata itself) are available here. A list of Stata repositories is available here. The many sources of downloads, both repositories and individuals’ web sites, make counting downloads a very difficult task.

 

Competition Use

Kaggle.com is a web site that sponsors data science contests. People post problems there along with the amount of money they are willing to pay the person or team who solves their problem the best. Both money and the competitors’ reputations are on the line, so there’s strong motivation to use the best possible tools. Figure 9 compares the usage of the top two tools chosen by the data scientists working on the problems. From April 2015 through July 2016, we see the usage of both R and Python growing at a similar rate. At the most recent time point, Python has pulled ahead slightly. Much more detail is available here.

Figure 9. Software used in data science competitions on Kaggle.com in 2015 and 2016.

Growth in Capability

The capability of analytics software has grown significantly over the years. It would be helpful to be able to plot the growth of each software package’s capabilities, but such data are hard to obtain. John Fox (2009) acquired such data for R’s main distribution site, http://cran.r-project.org/, for each version of R. To simplify ongoing data collection, I kept only the values for the last version of R released each year (usually in November or December), and collected data through the most recent complete year.

These data are displayed in Figure 10. The right-most point is for version 3.2.3, released 12/10/2015. The growth curve follows a rapid parabolic arc (quadratic fit with R-squared=.995).


Figure 10. Number of R packages available on its main distribution site for the last version released in each year.
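If you would like to check a fit like this yourself, the short Python sketch below shows one way to do it with numpy. The yearly package counts in it are rough placeholders rather than the actual CRAN data, so the resulting R-squared is only illustrative.

# A sketch of the quadratic (parabolic) fit reported for Figure 10.
# The package counts below are illustrative placeholders, not the real CRAN series.
import numpy as np

years  = np.array([2002, 2004, 2006, 2008, 2010, 2012, 2014, 2015])
counts = np.array([ 200,  400,  800, 1400, 2400, 4000, 6200, 7800])

coeffs = np.polyfit(years, counts, deg=2)   # fit a quadratic curve
fitted = np.polyval(coeffs, years)

ss_res = ((counts - fitted) ** 2).sum()
ss_tot = ((counts - counts.mean()) ** 2).sum()
print("R-squared of the quadratic fit:", round(1 - ss_res / ss_tot, 3))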

To put this astonishing growth in perspective, let us compare it to the most dominant commercial package, SAS. In version 9.3, SAS contained around 1,200 commands that are roughly equivalent to R functions (procs, functions, etc. in Base, Stat, ETS, HP Forecasting, Graph, IML, Macro, OR, and QC). In 2015, R added 1,357 packages, counting only CRAN, or approximately 27,642 functions. During 2015 alone, R added more functions/procs than SAS Institute has written in its entire history.

Of course while SAS and R commands solve many of the same problems, they are certainly not perfectly equivalent. Some SAS procedures have many more options to control their output than R functions do, so one SAS procedure may be equivalent to many R functions. On the other hand, R functions can nest inside one another, creating nearly infinite combinations. SAS is now out with version 9.4 and I have not repeated the arduous task of recounting its commands. If SAS Institute would provide the figure, I would include it here. While the comparison is far from perfect, it does provide an interesting perspective on the size and growth rate of R.

As rapid as R’s growth has been, these data represent only the main CRAN repository. R has eight other software repositories, such as Bioconductor, that are not included in Fig. 10. A program run on 4/19/2016 counted 11,531 R packages at all major repositories, 8,239 of which were at CRAN. (I excluded the GitHub repository since it contains duplicates to CRAN that I could not easily remove.) So the growth curve for the software at all repositories would be approximately 40% higher on the y-axis than the one shown in Figure 10.

As with any analysis software, individuals also maintain their own separate collections available on their web sites. However, those are not easily counted.

What’s the total number of R functions? The Rdocumentation site shows the latest counts of both packages and functions on CRAN, Bioconductor and GitHub. They indicate that there is an average of 19.78 functions per package. Given the package count of 11,531, as of 4/19/2016 there were approximately 228,103 total functions in R. In total, R has approximately 190 times as many commands as its main commercial competitor, SAS.
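For clarity, here is the back-of-the-envelope arithmetic behind those last two estimates, restated as a few lines of Python. The inputs are the figures quoted above, and the outputs are approximations rather than exact counts.

# The estimates from this section, restated as simple arithmetic.
packages_all_repos = 11531    # R packages across major repositories, 4/19/2016
functions_per_pkg  = 19.78    # average functions per package (Rdocumentation)
sas_commands       = 1200     # approximate procs/functions in SAS 9.3

total_r_functions = packages_all_repos * functions_per_pkg
print(round(total_r_functions))                  # roughly 228,000 R functions
print(round(total_r_functions / sas_commands))   # roughly 190 times the SAS count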

 

What’s Missing?

I previously included graphs from Google Trends. That site tracks not what’s actually on the Internet via searches, but rather the keywords and phrases that people are entering into their Google searches. That ended up being so variable as to be essentially worthless. For an interesting discussion of this topic, see this article by Rick Wicklin.

Website Popularity – in previous editions I have included measures of this. However, as the corporate landscape has consolidated, we end up comparing huge companies with interests far outside the field of analytics (e.g. IBM) with relatively small focused ones, which no longer makes sense.

 

Conclusion

Although the ranking of each package varies depending on the criteria used, we can still see major trends. Among the software that tends to be used as a collection of pre-written methods, R, SAS, SPSS, and Stata tend always to be at the top, with R and SAS occasionally swapping places depending on the criteria used. I don’t include Python in this group as I rarely see someone using it exclusively to call pre-written routines.

Among software that tends to be used as a language for analytics, C/C#/C++, Java, MATLAB, Python, R and SAS are always towards the top. I list those in alphabetical order since many of the measures cover not only use for analytics but for other uses as well. Among my colleagues, those who are more towards the computer science side of the data science field tend to prefer Python, while those who are more towards the statistics side tend to prefer R. A language worth mentioning is Julia, whose goal is to have syntax as clean as Python’s while maintaining the top speed reached by the C/C#/C++ group.

A trend that I find very interesting is the rise of software that uses the workflow (or flowchart) style of control. While menu-driven software is easy to learn, it’s not easy to re-use the work. Workflow-driven software is almost as easy to use (the dialog boxes that control each node are almost identical to those in menu-driven software), but you also get to save and re-use the work. Software that uses this approach includes KNIME, RapidMiner, SPSS Modeler (the first to popularize this approach), SAS Enterprise Miner, SAS Studio, and even two cloud-based systems that I have not been tracking, Dotplot Designer and Microsoft Azure Machine Learning. The wide use of this interface is allowing non-programmers to make use of advanced analytics.

I’m interested in other ways to measure software popularity.  If  you have any ideas on the subject, please contact me at muenchen.bob@gmail.com.

If you are a SAS or SPSS user interested in learning more about R, you might consider my book, R for SAS and SPSS Users. Stata users might want to consider reading R for Stata Users, which I wrote with Stata guru Joe Hilbe. I also teach workshops on these topics both online and with site visits.

 

Acknowledgments

I am grateful to the following people for their suggestions that improved this article: John Fox (2009) provided the data on R package growth; Marc Schwartz (2009) suggested plotting the amount of activity on e-mail discussion lists; Duncan Murdoch clarified the pitfalls of counting downloads; Martin Weiss pointed out how to query Statalist for its number of subscribers; Christopher Baum provided information regarding counting Stata downloads; John (Jiangtang) Hu suggested I add more detail from the TIOBE index; Andre Wielki suggested the addition of SAS Institute’s support forums; Kjetil Halvorsen provided the location of the expanded list of Internet R discussions; Dario Solari and Joris Meys suggested how to improve Google Insights searches; Keo Ormsby provided useful suggestions regarding Google Scholar; Karl Rexer provided his data mining survey data; Gregory Piatetsky-Shapiro provided his KDnuggets data mining poll; Tal Galili provided advice on blogs and consolidation, as well as Stack Exchange and Stack Overflow; Patrick Burns provided general advice; Nick Cox clarified the role of Stata’s software repositories and of popularity itself; Stas Kolenikov provided the list of known Stata repositories; Rick Wicklin convinced me to stop trying to get anything useful out of Google Insights; Drew Schmidt automated some of the data collection; Peter Hedström greatly improved my search string for Stata; Rudy Richardson pointed out that GraphPad Prism is widely used for statistical analysis; Josh Price and Janet Miles provided expert editorial advice.

 

Bibliography

J. Fox. Aspects of the Social Organization and Trajectory of the R Project. The R Journal, 1(2), 2009. http://journal.r-project.org/archive/2009-2/RJournal_2009-2_Fox.pdf

R. Ihaka and R. Gentleman. R: A language for data analysis and graphics. Journal of Computational and Graphical Statistics, 5:299–314, 1996.

R. Muenchen, R for SAS and SPSS Users, Springer, 2009

R. Muenchen, J. Hilbe, R for Stata Users, Springer, 2010

M. Schwartz, 1/7/2009, http://tolstoy.newcastle.edu.au/R/e6/help/09/01/0517.html

 

Trademarks

Alpine, Alteryx, Angoss, Microsoft C#, BMDP, IBM SPSS Statistics, IBM SPSS Modeler, InfoCentricity Xeno, Oracle’s Java, SAS Institute’s JMP, KNIME, Lavastorm, MathWorks’ MATLAB, Megaputer’s PolyAnalyst, Minitab, NCSS, Python, R, RapidMiner, SAS, SAS Enterprise Miner, Salford Predictive Modeler (SPM), etc., SAP’s KXEN, Stata, Statistica, Systat, and WEKA/Pentaho are registered trademarks of their respective companies.

Copyright 2010-2017 Robert A. Muenchen, all rights reserved.

113 Responses to The Popularity of Data Science Software

  1. bob mcconnaughey says:

    I’m not surprised that R, in particular, has done spectacularly well with respect to analytic use – it has, as best I can tell, virtually all the analytic tools one might need. I’ve been worried for decades now about the ever increasing use of Excel for both data management and analysis. So many projects, “data sets”, and analyses have come our way in Excel spreadsheets, only to reveal major problems in data integrity, difficulty tracing the flow of data changes that led to errors, and even analyses that were later found to be completely hosed because a user had done something as simple, and as deadly, as sorting a column instead of the records/rows.

    What SAS has, and really should concentrate on, is its data handling, manipulation, organization, and data validation features that are all built into Base SAS. I have, and appreciate, your R for SAS and SPSS Users – and I can’t help but think that organizations that rely on both “data integrity” (which, really, is SAS’s great strength) and analysis could profitably use SAS for complex data manipulations, then write out files in one of the many formats R reads, do the analytics in R, and pull the results back into Base SAS. A few months ago I helped out a friend who was analyzing generational data drawn from 80+ years of the complete medical birth registry of Norway. SPSS is the data manipulation software they use, and the task of linking families, sibs, and half sibs, with flags/subsets for individuals/families that had various birth defects over multiple generations, was seemingly intractable in SPSS, whereas it was a non-trivial but conceptually straightforward exercise in SAS. And the resulting files could be analyzed in either R or SPSS, of course (or SAS – which isn’t a package that they license because it has become increasingly pricey).

    • Bob Muenchen says:

      I’ve done quite a lot of complex data management in SAS, SPSS and R. To me they seem quite similar in capability except that R must fit the data into the computer’s main memory (unless you’re using Revolution Analytics’ version). Where SAS may have the edge is reading unusual files where you have to read some data and, based upon that data, decide what other data to continue reading. I see that type of data rarely and I’ve only read it in SAS. The others may be able to do it but I haven’t taken the time to see if they can or not.
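
      To give a rough idea of what I mean by that kind of file, here is a minimal R sketch (the file name and record layout are invented purely for illustration): it reads a hierarchical file one line at a time and uses a record-type code at the start of each line to decide how to parse what follows.

        # Illustrative only: a hypothetical file where the first character of
        # each line says what kind of record the rest of the line contains.
        con <- file("households.txt", open = "r")
        households <- list(); persons <- list()
        while (length(line <- readLines(con, n = 1)) > 0) {
          rec_type <- substr(line, 1, 1)
          if (rec_type == "H") {            # household record
            households[[length(households) + 1]] <- substring(line, 2)
          } else if (rec_type == "P") {     # person record, parsed differently
            persons[[length(persons) + 1]] <- strsplit(substring(line, 2), ",")[[1]]
          }                                 # other record types would branch here
        }
        close(con)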

      • Christian says:

        “To me they seem quite similar in capability except that R must fit the data into the computer’s main memory”

        I’ve been thinking about this lately, and I wonder if this might be a blessing in disguise? Every time our group hits memory constraints, we buy more RAM. It’s cheap, and it grows exponentially cheaper/larger over time. Of course, that doesn’t work for “very large problems”. But, on the other hand, there’s the MapReduce paradigm of divide-and-conquer. I don’t often encounter datasets that I can’t subdivide and process in chunks. Working with on-disc data is orders of magnitude slower (though SSD seems to help quite a bit), and so the dataset-in-RAM paradigm strikes me, after some thought, as a “good idea in disguise”.
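
        As a rough sketch of what I mean by processing in chunks (the file name and the "amount" column are invented), base R can do something like this without ever holding the full data set in RAM:

          # Sketch: compute a running total over a large CSV, 100,000 rows at a time.
          con <- file("big_file.csv", open = "r")
          header <- strsplit(readLines(con, n = 1), ",")[[1]]
          total <- 0
          repeat {
            chunk <- tryCatch(read.csv(con, header = FALSE, nrows = 100000,
                                       col.names = header),
                              error = function(e) NULL)   # no lines left to read
            if (is.null(chunk) || nrow(chunk) == 0) break
            total <- total + sum(chunk$amount, na.rm = TRUE)
          }
          close(con)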

  2. Jeremie says:

    Excellent summary, thank you very much. The exponential growth of R packages is impressive.

    I am trying to understand how you measured the statistical software packages on the job market.

    Indeed, a search with just “R” of course leads to nothing meaningful. I would search for expressions like these:
    “STATA (statistic OR statistical)” = 627
    “MINITAB (statistic OR statistical)” =1277
    “SPSS (statistic OR statistical)” = 2488
    “R (statistic OR statistical)” = 2957
    “SAS (statistic OR statistical)” = 7053

    which shows the prevalence of SAS, but to a lesser degree.

    • Bob Muenchen says:

      Many of the strings are easy:

      JMP, BMDP, Minitab, SPSS, Stata, Statistica, Systat

      And SAS isn’t too bad, but you have to exclude any hard drive interface references, for which SAS has another meaning:

      SAS (excluding SATA, storage, firmware)

      R is devilishly difficult to get. Since you found more jobs for R than for SPSS I’m pretty sure you’re getting mostly bad hits. You have to study a lot of the job descriptions to see what’s actually being found. Plain old “R” is found in many irrelevant situations. I use a Linux shell script that searches for:

      (“SAS or R” or “R or SAS”) and it repeats that pattern for the above packages and MATLAB, SQL, Java, Python, Perl

      After much study that is the only way I have found to locate “R” that is relevant. If you find another way, I’d love to hear it!
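
      If you want to experiment with that idea yourself, here is a rough R sketch of how the pairing can be generated (the real script differs in detail; this just shows the pattern):

        # Sketch: build disambiguating phrases that pair "R" with other
        # data science software, so that plain "R" hits are avoided.
        others <- c("SAS", "SPSS", "Stata", "MATLAB", "SQL", "Java", "Python", "Perl")
        pairs  <- c(sprintf('"%s or R"', others), sprintf('"R or %s"', others))
        cat(paste(pairs, collapse = " OR "), "\n")
        # "SAS or R" OR "SPSS or R" OR ... OR "R or SAS" OR "R or SPSS" OR ...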

      The whole thing is a Linux shell script written by a former research assistant. A variation of it which I used for figures 7a and 7b is described in detail at:

      http://librestats.com/2012/04/12/statistical-software-popularity-on-google-scholar/

  3. omar says:

    Thank you for this stats article; just what I needed.

  4. rjrich says:

    It would be interesting to include popular scientific plotting and statistics packages such as Origin Pro, SigmaPlot, and GraphPad Prism.

  5. Ken says:

    Where you say, “No other data analysis languages covered by this article even make their top 100,” that is not true. If you look at the portion that shows the next 50, covering 51-100, you will see S, S-PLUS, and SPSS, which are all data analysis languages. It is also arguable that MATLAB, PL/SQL, and Transact-SQL could be considered data analysis languages.

  6. Bob Muenchen says:

    Ken, thanks very much for pointing that out. One of the hardest things about tracking so many sources of information is noticing all the changes that are relevant! I’ll delete that sentence.

  7. Karup Pekar says:

    This is a very good article. I especially admire the way you have tried to quantify various measures. It’s worth reading just to learn that you can use “not” operators on Google and Amazon. Most illustrative of trends in stats packages and languages. Thank you!

  9. Ken says:

    One interesting thing to look at could be comparing trends from the KDnuggets polls. You have the current year, but there are also links to some of the prior years. For instance, the following show two very different perspectives from two different points in time.
    http://www.kdnuggets.com/polls/2011/tools-analytics-data-mining.html
    http://www.kdnuggets.com/polls/2008/data-mining-software-tools-used.htm

    I am not sure what all could be done with this but it would be interesting.

  10. Monica Lewis says:

    I’m curious about a review of tools used by non-statisticians for analysis in business. Do you know of products that help smooth some of the most basic data-related tasks that the masses are currently doing in Excel, such as pivot tables, commenting, and collaboration? I’ve been building one to try to answer this, and am curious about others!

    Thanks for all of the details on tool functionalities and preferences for true big data analysts!

  11. inundata says:

    Fantastic! Is this published somewhere peer-reviewed that I can cite? I’m working on a journal article (which strongly discourages citing webpages) and would love to cite this as a source.

  12. seo ranking tools says:

    Hello! Do you use Twitter? I’d like to follow you if that would be okay. I’m absolutely enjoying
    your blog and look forward to new posts.

    • Bob Muenchen says:

      I’m @BobMuenchen on Twitter and I do tweet when each new post or article is finished. It’s certainly OK to follow me. I don’t tweet a lot, so you won’t be bombarded with crazy messages about where I’m eating lunch!

  13. jergreen@gmail.com says:

    SAS just doesn’t seem affordable except for corporations. Do they even have a single user academic perpetual license?

    • Bob Muenchen says:

      SAS Institute never does perpetual licenses. A single user academic license is very expensive but they do make it very cheap per copy when you get an unlimited-copies license.

  14. Bob McConnaughey says:

    I have quite a wonderful “ANCIENT” book back in my office that has a comparison of stats/database packages circa 1980. I DO remember that back in the day the yearly license for the “Statistical Analysis Software” package was $1000.00 for a university. If I could attach a PDF here I would; I actually scanned the chapter on “General Statistical Packages.” The book was basically the result of a survey of users. My favorite line: “More importantly, SAS’s users think almost as highly of this program as its developer does”

  15. Dr.Az says:

    Lovely post.
    One tiny error: there are two captions with the same label, “7a”.
    Maybe you mean 7b in the latter one.

  16. Sue Briggs says:

    “quiet” under Fig. 1d should be “quite”

  18. gawbul says:

    I’d love to see how Julia (julialang.org) fares over the coming years 🙂

  19. Rosaria says:

    Do you think you can include more of KNIME in some of your graphs? I am curious to see how it compares. I use KNIME and I have seen it cited only in figure 3 and figure 4.

    • Bob Muenchen says:

      Hi Rosaria,

      I started out studying just classic statistics packages while the data mining software came from data collected by others. However I do hope to expand the graphs next year to include them. There’s little real difference between the two types of software other than the user interface, which is better on most data mining packages.

      Cheers,
      Bob

  20. Fred says:

    This is absolutely amazing. Given the passion that most scientists have towards their software packages and that you are a self-proclaimed Stata user, I’m amazed that you can have such an unbiased and rational approach to answering this question.

    1) There seem to be way too many stats packages.
    2) I was happy to see Number Cruncher Statistical Analysis in there. The copy I have is 10 years old, but I still use it for 3d graphing capabilities.
    3) I conducted a web search of “SAS vs Stata” because a coworker uses Stata and won’t shut up about it. I use SAS/Excel…and won’t shut up about it. My hypothesis was that my coworker is using an outdated stats package and he is stubbornly set in his outdated ways. This article mostly disproves that hypothesis, but does give me some ammo on the comparison. Thanks!

    • Bob Muenchen says:

      Hi Fred,

      I actually use Stata only occasionally, and then usually just to study how it does a particular thing. My co-author Joe Hilbe is the Stata guru. It is a beautiful system though. You can tell that a tiny number of people cared about making its structure consistent. SAS, SPSS and especially R were at the mercy of too many developers so their syntax is less consistent. All four are wonderful packages though, and each has an audience that thinks it’s the best by far. I like ’em all!

      Cheers,
      Bob

      • Wayne says:

        Bob,

        I’ve used R for years, and just bought Stata/IC 13 yesterday for several reasons. First, the company has a great attitude/culture and it’s always good to deal with a company where you like the people. Second, it seems to me to be the best option among SAS, SPSS, Minitab, et al, and it’s also a better deal for an individual purchaser. Third, it implements some algorithms that are more advanced than the R equivalents. And fourth, Stata 13 was just released and has a lot of nice new features.

        My first thought is that it reminds me a lot of Igor Pro, by Wavemetrics, which I used to use. They both have a great bunch of people (developers and users), a great culture, an interface that you can drive via commands or a GUI (though the GUI generates the command line equivalents so you can learn it or reuse it), and a consistent flavor. The difference being that Stata is statistics-oriented, while Igor Pro is scientific/experimental-oriented.

        I like Stata a lot, but it won’t replace R. I’d say that it’s much more elegant than SAS, et al. (SAS was developed for punched cards and influenced by IBM’s punched-card JCL, and has all kinds of obvious seams between its various parts. It’s definitely a Frankenstein.) I’d disagree with you that Stata’s syntax is more consistent than R’s, though. I believe you’re talking about how functions in R have been written by various people, so the function calls may have some inconsistent argument names or perhaps result formats. On the other hand, Stata suffers from the data (essentially a spreadsheet) versus free-form variable (r(), e(), _b, _se, etc) distinction, which itself sets up various inconsistencies and makes me feel claustrophobic.

        So I still think that R’s the best option, but have definitely added Stata to my toolbox and it will be there long-term. I’d definitely recommend it to others.

        To some degree, I think it makes a difference what direction you come to statistics from. If you’re used to programming and like having the full machinery and flexibility of a programming language, R makes a lot of sense. If you don’t really program — you just want to give commands and get results — though you want the option of automating some things or using programs that others have written, Stata makes a lot of sense.

        • Bob Muenchen says:

          Hi Wayne,

          Thanks for your interesting comments. I’ve talked to a couple of other people recently who make the point that R is better as a programming language, while Stata is easier to use as a way to control pre-written procedures. SAS certainly has some odd inconsistencies, but I’ve used it for so many years that they seem second nature to me.

          Cheers,
          Bob

      • Fred says:

        Hi Bob,

        As I mentioned before, my coworker uses Stata and I use SAS/Excel. Unfortunately, my coworker has retired and I have no way to validate my SAS/Excel code with her Stata output. If I were to provide the code that my coworker used, the datasets, and any other information required, could you refer me to somebody who can run a Stata program? I can’t seem to find anybody who runs Stata!

        Any information is helpful.
        Thanks!
        “Fred”

  21. Karl Rexer says:

    Great analysis, as always. This is a great resource for the entire analytics community. Thanks!

  22. Kamal says:

    Hi Bob,
    Thanks for providing an overall big picture of statistical packages. I have been using SAS for the past couple of years and am preparing for its certification too. As a beginner I always used to wonder about the differences among different statistical packages, but your article has answered a lot of my questions.
    Thanks.

  23. Ajay Ohri says:

    So why does SAS Institute still make $2.5 billion every year? Your data is overwhelmingly conclusive, but the SAS revenue is what makes me a hold-out believer.

    • Bob Muenchen says:

      Hi Ajay,

      As far as I know, SAS Institute is still the largest privately held software company in the world and I don’t see that changing anytime soon. They continue to innovate, especially by offering complete solutions to problems rather than just offering tools that let you come up with your own solutions. I think the whole analytics pie is getting much larger. While SAS gets a smaller slice of this pie each year, it still adds up to more revenue.

      Cheers,
      Bob

  24. Nice data on use of different packages. A couple of comments. It would be interesting to know who is using what software and for what purposes. As an experimental psychologist, for example, I very much like SPSS for its handling of analysis of variance (both GLM and the older Manova). When I was generating course evaluations on my campus for a number of years, I liked to use SAS because of its powerful relational database functions (SQL). The same things could be done in SPSS but not nearly as “elegantly.” Is it perhaps the case that different classes of users are finding the features they need in particular packages? Finally, we might like to believe that the “best” product wins out, but that is not always the case with respect to software (e.g., Word vs WordPerfect?), and that should perhaps warrant some caution with respect to usage statistics. Nice job!

    • Bob Muenchen says:

      Hi Jim,

      You make some good points. Different packages definitely dominate in different market segments. Our campus (University of Tennessee) has a large social science presence and SPSS dominates by far overall. However, among economists Stata is dominant, the agriculturalists and business analytics folks use SAS, and while R use is in the minority, it seems like every department has someone on the cutting edge of their field using R.

      I like all these packages for their various strengths and agree that it makes little sense to say which is “best” for everyone.

      Cheers,
      Bob

  25. Hypersphere says:

    Extremely engaging. Although Sage is much more than a statistical package, it encompasses statistics, and it would be interesting to include it in the mix.

  26. john painter says:

    Should the caption for Figure 1a say “MORE popular”? The caption appears to be the same as the one for Figure 1b:

    “Figure 1a. The number of analytics jobs for the less popular software”
    “Figure 1b. The number of analytics jobs for the less popular software”

    Your descriptions of the challenges faced when compiling these data, which are readily available but harder to interpret, show how much work you have put into this site. Thanks!

    • Bob Muenchen says:

      Hi John,

      Thanks for catching that! It’s fixed. Regarding the amount of work, I wish I had tracked it. I do know that the job search section alone took over 100 hours. Now that I understand the problem better, I can update the figures in about an hour, but determining the optimal searches was really difficult.

      Cheers,
      Bob

  27. Joseph Hilbe says:

    The primary reason I show either both Stata and R code or just R code for the examples in my books now is that the vast majority of statistics journal manuscripts that I referee or edit use R for examples. SAS and Stata seem to come in as the second most used stat packages. However, I realize that this may in part be due to the type of manuscripts I referee. I’m on the editorial board of six journals, and am asked to referee by a number of others. But these are generally related to biostatistics, econometrics, ecology, and recently astrostatistics (where Python and R are most common). It also seemed to me that most of the books I read or referenced when researching for my books also used R for examples, followed by Stata and SAS.

    The second reason is due to the students I teach with Statistics.com. I teach 5 courses (9 classes a year) with the company. These are month-long courses over the web with discussion pages which I use to interact with those enrolled in the courses. A good 95% (seriously) of enrollees are active researchers working in government, research institutions, hospitals, large corporations, and so forth, as well as university professors wanting to update their knowledge of the area, or learn about it if they knew little before. Students come from literally everywhere — the US, UK, Italy, Australia/NZ, Brazil, China, Japan, South Africa, Near Eastern nations, Nigeria, and even Mongolia. I always ask for their software preference, and have on average 15-30 students. Logistic Regression is the most popular course, followed by Modeling Count Data. R is by far the most used software package. I started teaching with Statistics.com their first year (2003), using Stata. I would accept submissions using SAS and SPSS, but the course text and handouts I used were in Stata. It is a very easy package to learn and it has a very large range of statistical capabilities. But I increasingly had more and more students wanting to use R. So I started to become more proficient in R, co-authored R for Stata Users with Bob Muenchen (2010), which really spiked my knowledge of the software, and now use the two packages equally. My “Methods of Statistical Model Estimation” book with Andrew Robinson (2013) is a book for R programmers, and “A Beginner’s Guide to GLM and GLMM using R” (2013) with Alain Zuur and Elena Ieno uses only R and JAGS – I am ever more becoming a Bayesian as well. Other books, like my “Modeling Count Data” (Cambridge University Press), which comes out in May, use both R and Stata in the text, with SAS code for the examples in the appendix. R, JAGS, and SAS are used for the Bayesian chapter.

    Look through the new books that are being authored and the journal articles being published by the major statistics journals. It’s mostly R, Stata, and SAS, with SPSS also used in books/journal articles specifically devoted to the social sciences. Minitab occasionally as well. Python and R almost exclusively for the physical sciences. For the many new books on Bayesian modeling, most use WinBUGS/OpenBUGS and R (and R with JAGS), and some SAS. I see Python becoming more popular though.

    For what it’s worth, I’ve seen a lot of software over the years. From 1997 to 2009 I was Software Reviews Editor for The American Statistician, and I received free stat software to review and use for those 12 years, and pretty much still do. I turn 70 this year, so I have watched the development of statistics and statistical software for quite a while. I would not purchase stock in SPSS, nor in SAS for that matter. SAS is ingrained in the pharmaceutical and healthcare industry, and in much of “big” business; folks have jobs as SAS programmers or SAS analysts. Too much is invested by business to simply drop it. But that’s not the case as much with SPSS. With more Revolution-like businesses developing in the next decade, I believe R will predominate as the lingua franca of statistical software. Stata will become ever more popular, but it needs to develop a strong Bayesian component. That’s not difficult to do given Stata’s excellent programming and matrix languages. Python, OpenBUGS (WinBUGS is not being developed any more), JAGS, and perhaps some other Bayesian software will grow fast in use as well. The Predictive Analytics movement is having an influence as well, and together with academia it is focused on bringing more Bayesian methods, basic sampling, and enlightened machine learning into the analysis community.

    • Bob Muenchen says:

      Hi Joe,

      It’s good to hear from you! I, too, have noticed the rapid growth of R used as code examples in journals and books. I only measure books that use the software name in their titles since they’re easy to find. However, I do think it would be much more indicative of R’s dominance to somehow count the books that used R in examples. I see some that use R and Stata, or R and SAS, etc. so R may already be the dominant software used across all stat books.

      Cheers,
      Bob

    • Chao says:

      Completely agree with your comment on Stata, and I am pleased to see that Stata now officially incorporates Bayesian/MCMC modelling, starting from Stata 14.

      R often boasts about the number of libraries available, but curiously there seems to be a lack of R packages for Bayesian modelling. There is MCMCpack, but it is very basic. R users, as you said, typically depend on other software such as OpenBUGS, JAGS, or Stan for fitting Bayesian models. This means yet another piece of software to install and another language to learn (some, such as BUGS, are similar to R, but some may not be).

      I have not got a copy of Stata 14 yet, but I find PROC MCMC in SAS to be very good, and it is my choice for Bayesian modelling at the moment.
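
      For what it’s worth, here is the sort of minimal MCMCpack example I have in mind (a rough sketch using R’s built-in mtcars data; basic, but serviceable):

        # A minimal Bayesian linear regression via MCMCpack, using the
        # built-in mtcars data, purely to illustrate the package's scope.
        library(MCMCpack)                    # install.packages("MCMCpack") if needed
        fit <- MCMCregress(mpg ~ wt + hp, data = mtcars,
                           burnin = 1000, mcmc = 10000)
        summary(fit)                         # posterior means, SDs, and quantiles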

  28. Partha says:

    I am getting addicted to your writings. I am a student of statistics and want to learn as much as possible from them.

  29. David says:

    Excellent article, very detailed presentation of data. Good to follow the analytics trend. Thank you for this article.

  30. Simon says:

    Nice article, and impressive thinking too!

    Just one query: In fig. 1a (2/2014), don’t you mean over 250 jobs, not under?

    • Bob Muenchen says:

      Hi Simon,

      Thanks for catching that typo! It’s fixed.

      Cheers,
      Bob

      • Isaac says:

        hello all,

        I am a graduate of statistics. I want to focus my career on customer insight analysis and building predictive models. I have learnt SQL and SAS programming for data extraction and manipulation. I am confused about which stat package to really learn for data mining. I know SAS EM, but I feel companies won’t employ based on point and click. What about SPSS? I would love to learn SAS programming for data mining in Base SAS. Please, any recommendations, as well as books to read? Thanks a lot.

  31. Jonathan Gezos says:

    This is great, but you’re missing out on a lot by only looking at tools that are 10+ years old. There is a lot of innovation in the industry right now with new players like Tableau and Looker in the mix.

    • Bob Muenchen says:

      Hi Jonathan,

      Tableau is shown in figures 2a, 6b, 6d and 6e. However, most of those are from people who collected their own data. I’m focusing on advanced analytics or predictive analytics. Tableau is more of a visualization package. It does a nice job with a small number of variables but you can only see perhaps 8 at a time on a graph (x, y, z, color, size, shape, small multiples, time in animation). Even with that many it’s hard to absorb. The other software can find patterns in hundreds or even thousands of variables. I just edited the paper to make the focus more clear.

      Cheers,
      Bob

  32. REmi says:

    I have found another rather popular data-analysis software called SCaVis (http://jwork.org/scavis/). It looks like it has had about 150 weekly downloads since 2005. Do you have any opinion about it?

  33. Arlin Stoltzfus says:

    That was fascinating. Thanks for all of the effort.

    I’m curious about the enormous decline in Fig 2b in scholarly articles citing SAS and SPSS, from a peak ca. 2007. Fig 5A also shows a dramatic reduction in references to SAS at around the same time. At one point in the article you mention competition from R, suggesting that free software edged out pricey packages. But this hypothesis isn’t really borne out by Fig 2b. The numbers for R are so much smaller that they don’t make up the loss; instead, the total number of Google Scholar hits citing analytics packages decreases dramatically. It appears to suggest a net loss of productivity. What is going on? I was thinking it might be a major drop in biomedical research funding from NIH, but that happened ca. 2000, which seems too early. When the grant money dries up, the license won’t get renewed. The lag time for that effect to show up in publications might be several years.

    • Bob Muenchen says:

      Hi Arlin,

      Good point. I’ve been asked this so many times that I should have modified the text by now. Here’s what I’ve added: “Note that the decline in the number of SPSS and SAS articles is not balanced by the increase in the other software shown. This is likely due to the fact that those two leaders faced increasing competition from many more software packages than I have time to track.”

      Cheers,
      Bob

  34. Hypersphere says:

    It could be argued that programs such as JMP emphasize data visualization or visual data exploration, but it is not excluded from the comparisons. With this in mind I would urge inclusion of GraphPad Prism and OriginPro — these programs certainly have excellent visualization and graphing capabilities, but they also offer considerable strength in data analysis and statistics. In any event, thank you for a most interesting and thorough comparison of other data analysis and statistics software.

    • Bob Muenchen says:

      Hi Hypersphere,

      That’s a good point. Just as the classic stats packages have added much better visualization, the viz packages have added statistics blurring the difference. However, I’m barely keeping this document up to date as it is. It’s a lot of work!

      Cheers,
      Bob

      • Hypersphere says:

        Hi Bob,

        Thanks for taking the time to reply. If you should decide to add a software package or two to your list, please keep Origin Pro and GraphPad Prism in mind. These packages are used by quite a few scientists and engineers.

        Best wishes,

        Hypersphere

  35. dragosh says:

    Thank you for the useful, empirically backed findings, and the practical R search query.

  36. Dear Bob,

    thank you for this informative article! I referenced it in my own blog (in german), also showing your figure 1a (with appropriate credits). Hope that’s ok. http://www.4falter.at/sk/2015/06/5-gruende-fuer-biologinnen-und-biologen-r-zu-lernen-und-5-gruende-es-nicht-zu-tun/
    Thanks!

    Cheers,
    Stefan

  37. David says:

    Dear Bob,

    I really appreciate your scholarly approach to this question.

    I have just started introducing R to Spanish speakers in Ecuador, many of whom regard it with great suspicion and resent it doubly for being code-based and in English. I am looking for an analysis of the popularity of R amongst its peer programs that considers users with first languages, or work contexts, other than English.

    My question is, are you aware of an analysis of this type that adjusts for the language “problem”?

    regards,
    David

    • Bob Muenchen says:

      Hi David,

      I’ve thought quite a bit about the “language problem” and I think it helps explain the dominance of SPSS. While its language is only usable in English, SPSS uses a graphical user interface that is available in many different languages. So it has the advantage of being easy to use in any popular language. All the programming-based packages like R, SAS & Stata face the same language barrier.

      Unfortunately, I’m unaware of any analysis that attempts to quantify this effect, so it’s all just guesswork on my part.

      Cheers,
      Bob

  38. krexer says:

    Earlier in August I remember seeing that Ajay Ohri posted some introductions to R, Python and SAS — these were in Spanish. In case they are helpful for you, here is the link: http://decisionstats.com/2015/08/21/more-hispanic-data-scientists-please/

  39. Jake W. says:

    Bob, awesome article. Thank you so much for the time/effort it took to produce such an in-depth study.

    I came across this article as I’ve slowly slipped into a mild depression (kidding . . . kind of) after I recently graduated with a Master’s in Statistics and found that virtually no job advertisement requires Stata, my statistical package of choice. There is some hope, however, as it looks like SAS may be on the decline relative to Stata. *fingers crossed*

    • Bob Muenchen says:

      Hi Jake,

      Stata is really interesting. The first chapter in my book, “R for Stata Users” talks about all of the similarities between R and Stata. If you’ve been programming in Stata – rather than pointing & clicking on the menus – you should be able to transition to R more easily than a SAS programmer would. As you can see, the use of Stata in scholarly work is growing as rapidly as R, but you’re right, in the corporate world, it’s rare to find a job looking for it. One of the attributes it shares with R is having to store its data in the computer’s main memory. There are a few ways to break that limit in R, but I don’t know offhand if there are similar ways to do that in Stata.

      Cheers,
      Bob

    • Hypersphere says:

      I really like Stata. It “feels” similar to R via R-Studio and offers a bit of hand-holding by providing some menus. Although Stata’s coverage of statistical procedures is superb, I think if they were to be more forward-looking by serving as an interface to R, then they would garner many more users. Unfortunately, thus far Stata has not embraced this concept.

  40. Jazz says:

    This is great. As a librarian, I’m trying to figure out which software packages our university should be investing in for our computer labs. I’m going to use this as reasoning for our choices. If you know of anyone doing similar things with qualitative data software (NVivo, etc.), let me know!

  41. Extremely informative. Thanks! Professor David Rice

  42. Thank you so much for this article! I’m a disgruntled SAS user/teacher and a grouchy SPSS user/teacher. I’m an R promoter, so these statistics are helpful for me to communicate to my students the cost/benefit of spending their precious time learning SAS vs. SPSS vs. R.

    Gorgeous data visualizations as well! Thanks again for this very extensive and neatly-presented analysis.

    • muenchen.bob@gmail.com says:

      Hi Monika,

      I’m glad you found it useful. It’s really hard to get the SPSS point-and-click crowd excited about R. I hope someone does a really good open source front end to R to help non-programmers. Several are working on it, but all seem to have a long way to go.

      Cheers,
      Bob

  43. Mcarson says:

    Hi Bob, do you have any official publications for these data? It looks like you occasionally update them, so I’m wondering, if I were to cite this, how things might change over time. Many thanks for putting this together.

    • muenchen.bob@gmail.com says:

      Hi Mcarson,

      I’ve kept the name the same even though “data analysis” is now referred to more accurately as “data science.” So you can find it in the long run at http://r4stats.com/articles/popularity. Unfortunately, I update it every year, so what might have been “Figure 2b” a couple of years ago is likely a different graph today! I’ve never tried it, but I’m told this site https://archive.org/web/ offers snapshots of history, so you might be able to find old versions there. If you try that, please let me know how well it works!

      Cheers,
      Bob

      P.S. stay tuned for a major update this week!

  44. Hypersphere says:

    I really like Stata, but I fear that its reluctance to interface with R could work against its continued acceptance. The same could be said of Matlab. To their credit, there are several programs that include interfaces to R — these include JMP Pro, Origin Pro, and Statistica. Not surprisingly, JMP Pro also has an interface to SAS. Origin Pro seems the most forward-thinking, with interfaces to C, Mathematica, Matlab, and R. Moreover, OriginLab has informed me that they are committed to developing the R console in Origin Pro into a more full-fledged GUI for R — this would be useful for enlarging the scope of statistical procedures available from Origin Pro and enhancing the interactive editing of plots in R.

  45. SPQR says:

    What about XLSTAT?

    • muenchen.bob@gmail.com says:

      Hi SPQR,

      I don’t cover packages that only work in combination with other spreadsheets, databases, etc. If you’d like to start tracking those, I’d be happy to put a pointer toward your blog!

      Cheers,
      Bob

      • SPQR says:

        Hi Bob,

        Why not cover these packages?
        You listed the different factors you used to exclude or include software without explaining the reasons.

        Thanks for your answers.

  46. Not sure what “Java” means in these charts. Java is the language. Speaking of Java, I think DMelt from http://jwork.org/dmelt/ should be at the top of this list. It is way more comprehensive than some of the listed packages.

    • Bob Muenchen says:

      Hi Jhepwork,

      The general purpose languages in this data are a bit confusing. To clarify, I’m not interested in Java for any use, only for data science. So I use the search string listed below. I document all the search strings here: http://r4stats.com/articles/how-to-search-for-data-science-articles/. The best Google Scholar search string I could develop for datamelt is:
      “datamelt” -“datamelt index”
      There’s something called a “datamelt index” that has nothing to do with the Datamelt software, and that minus sign gets rid of that. There are only 11 scholarly articles found for all years with that search, so I won’t start tracking it until its usage increases. It looks pretty cool though!

      I strongly recommend dropping the second name of “Dmelt”. Google Scholar finds 214 articles on that, and most have nothing to do with the Datamelt software. Datamelt is a nice descriptive name and one that’s easy to search on. It’s fine for people to refer to something by an abbreviated name when speaking, but you don’t want people citing it by anything but its proper name if you want to be able to accurately track its use. Thanks for bringing Datamelt to my attention, and please write me when its annual use puts it into 100+ articles per year.

      Cheers,
      Bob

      My current Java data science search:

      java -author:java -weka -“Practical Machine Learning”
      -indonesia (“statistical analysis” OR “t test” OR
      “regression analysis” OR “quantitative analysis” OR
      “data analytics” OR “machine learning” OR
      “artificial intelligence” OR “analysis of variance” OR
      “anova” OR “chi square” OR “data mining”)
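
      If you would like to run that search yourself, a small R sketch like this one will URL-encode it into a Google Scholar address restricted to a single year (the 2016 limits are just an example):

        # Sketch: turn the search string above into a year-restricted Google
        # Scholar URL; the hit count still has to be read off the results page.
        q <- paste('java -author:java -weka -"Practical Machine Learning" -indonesia',
                   '("statistical analysis" OR "t test" OR "regression analysis" OR',
                   '"quantitative analysis" OR "data analytics" OR "machine learning" OR',
                   '"artificial intelligence" OR "analysis of variance" OR "anova" OR',
                   '"chi square" OR "data mining")')
        url <- paste0("https://scholar.google.com/scholar?as_ylo=2016&as_yhi=2016&q=",
                      utils::URLencode(q, reserved = TRUE))
        cat(url, "\n")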

  47. R. Kahn says:

    Hi Bob,

    Very nice article, and I loved the facts & figures. I own a book you authored, “R for SAS and SPSS Users”; it helped me immensely on a project where I was translating SAS to R and implementing the R code on Hadoop via RHadoop. Thanks for your great service!!!

  48. asad says:

    This blog is awesome and I learn a lot about programming from here. The best thing about this blog is that you cover things from beginner to expert level.

  49. Luna De Ferrari says:

    As a popularity metric maybe you could use the number of MOOCs (massive open online courses)? But you would have to correct for the number of students registered or certificates awarded. I am not sure how open the data is. Just an idea, probably already suggested. Thank you for your hard work!

    • Bob Muenchen says:

      Hi Luna,

      That idea has been in my queue for a couple of years now, but I haven’t gotten around to tackling it. As you point out, it’s a messy one! Some sites, like DataCamp, have lots of great classes on R and Python, but little else. Companies like SAS and IBM/SPSS offer training, but I doubt they’d give out their attendance figure. MOOCs have a lot of people sign up, but far fewer complete the classes. I’d love to find a simple solution to this, even if it’s not a perfect one (all the other measures have their limitations).

      Cheers,
      Bob

  50. mlearn says:

    I’ve tried to do an update to your R versus Python plot for Kaggle by looking at the new scripts facility there. R and Python are looking much like parity. https://www.kaggle.com/mlearn/d/kaggle/meta-kaggle/python-vs-r-as-seen-in-kaggle-scripts/notebook

    • Bob Muenchen says:

      Hi mlearn,

      Very nice work! I’ll update my blog to reflect this & do a ping-back to your post. Btw, were you on stage at UseR! 2016 when I asked when this info would be updated?

      Cheers,
      Bob

  52. Peter says:

    Though it is UK-only, you can easily obtain data on the trend in demand for different stats packages for free at our website, http://www.itjobswatch.co.uk. Drop me a line if you need any help or have any comments. PeterH

  53. Rudy Richardson says:

    As an academic, I was particularly fascinated by your graphs showing the number of scholarly publications (in Google Scholar) mentioning specific data analysis packages versus publication year. In particular, the sharp decline since 2008 in papers mentioning SPSS was striking.

    It would be interesting to compare the trends in Google Scholar with those in PubMed. Although PubMed focuses on the biomedical literature, I have used it to find citations in physical sciences and engineering as well. In my own biomedical field, I have noticed anecdotally that many authors cite GraphPad Prism and/or Origin or OriginPro for their statistical analyses, but terms such as “Origin” and “Prism” are used in so many ways, it is difficult to search for their use in data analysis. Likewise, it is difficult to search for “R” as a specific data analysis term. “JMP” also presents difficulties, as it is used as an acronym for a variety of biomedical terms.

    In any event, I ran a quick and very imprecise PubMed search for terms that are relatively specific for statistical/data analysis packages and got the numbers of hits shown below. I was able to make the search for GraphPad Prism relatively specific by using both terms (this results in an underestimate, because some researchers only mention “Prism” and a version number).

    The first number for each package is for all records in PubMed; the second number is for the past five years; the third number is the 5-year/all-time ratio:

    SPSS: 22617; 14618; 0.646
    SAS: 12494; 5250; 0.420
    Matlab: 4930; 2593; 0.526
    Stata: 4569; 3686; 0.807
    Statistica: 1125; 510; 0.453
    GraphPad Prism: 303; 246; 0.812
    Minitab: 221; 106; 0.480
    Systat: 89; 32; 0.360

    It is interesting to see how GraphPad Prism and Stata differ from the others with respect to having the largest fractions of hits in the last five years.

    GraphPad Prism has been my “go to” package for routine statistics and graphing for many years — in my opinion, it is by far the most intuitive statistics package available for the bench scientist.

    For more advanced procedures, I’ve started using Stata relatively recently, and I have been very favorably impressed. From my perspective, it has excellent coverage of techniques, superb documentation, caring customer support, and an active user base. It is more difficult to learn than strictly menu-driven packages such as SPSS or Statistica, but much easier to learn than Matlab or R. Overall, there is an elegance about Stata that I appreciate, and I am glad to see that its apparent rate of growth in academic publications is quite respectable.
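
    For anyone who wants to reproduce or update counts like these, here is a rough R sketch using the rentrez package (the terms carry the same ambiguity problems I described above, and the 2012-2017 window is just an example; the numbers will have drifted since I ran mine):

      # Sketch: count PubMed records mentioning each package, all-time and for
      # a recent five-year window, then take the ratio. NCBI asks that such
      # queries be sent sparingly.
      library(rentrez)                      # install.packages("rentrez") if needed
      pkgs <- c("SPSS", "SAS", "Matlab", "Stata", "Statistica",
                "GraphPad Prism", "Minitab", "Systat")
      hits <- function(term)
        as.numeric(entrez_search(db = "pubmed", term = term, retmax = 0)$count)
      all_time  <- sapply(pkgs, hits)
      last_five <- sapply(paste0(pkgs, " AND 2012:2017[pdat]"), hits)
      round(last_five / all_time, 3)        # the 5-year/all-time ratio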

    • Bob Muenchen says:

      Hi Rudy,

      Thanks very much for providing that interesting data! I’ve heard of GraphPad Prism but I was unaware of how prevalent its use has become. I have noticed that both Origin and SigmaPlot have added statistical methods, but so far I haven’t followed them due to the difficulty of separating that out from general graphics use. What did you end up using as a search string for GraphPad Prism? Does PubMed include articles in Italian? If so, did you correct for “stata” being a common word in Italian? (If you haven’t seen my search details, they’re here: http://r4stats.com/articles/how-to-search-for-data-science-articles/).

      I am very impressed by Stata’s conciseness, internal consistency, extensibility, and general ease of use. Its ability to analyze complex samples across many of its procedures is particularly impressive.

      I’m sure that we could pick other fields, such as marketing, where SPSS might still dominate, or economics, where Stata might be the top one.

      Cheers,
      Bob

      • Hi Bob,

        Thanks for your reply to my comment.

        Yes, in my field, GraphPad Prism is quite popular. Most researchers I know use it for its combination of statistical procedures and graphics. Few if any appear to use it only for graphics (although I have not done a rigorous survey).

        In addition, my colleagues in physical sciences and engineering (and I) use Origin or OriginPro in like manner to those who use GraphPad Prism — data analysis in combination with graphics, but not graphics in isolation.

        Indeed, some software presents problems of categorization. OriginPro includes such things as signal processing, and Matlab has even broader coverage, especially including its various toolboxes and links to simulations via Simulink.

        Speaking of linkages, OriginPro has consoles/links to Origin C, Labtalk, Python, Mathematica, Matlab, and R. Statistica has a link to R, and Matlab has R-link through its file exchange. This makes it difficult to separate “pure” usage of the parent program from using it as a front-end for other programs such as R.

        Regarding search strings, for Prism, I used “Graphpad AND Prism”. This will result in some degree of under-representation, because some people cite the program as “Prism 6.03”, etc., without mentioning GraphPad. I didn’t attempt to find hits for Origin, which is variously cited as “Origin”, “OriginPro” “Origin Pro” or “OriginLab(s)”.

        As for Stata, I did not correct for the Italian use of “stata” (past tense of “to be”, i.e., “was”). It would be more common to pick up nouns, but the closest noun in Italian (as far as I know) is “stato”. I checked the total number of PubMed hits for “stata” for articles written in Italian — there were only 82, and a quick scan of them indicated that these hits were due to the software Stata rather than Italian for “was”.

        SigmaPlot includes some statistical procedures, but it also has a companion program, SigmaStat. My impression is that these programs have not been very actively developed for the past several years, and scientists have tended to look elsewhere for scientific data analysis and graphing. A quick PubMed search for SigmaStat turned up 51 hits; Sigmaplot had 56.

        I’ve been searching for a data analysis package to turn to when GraphPad Prism or OriginPro are insufficient, and thus far I’ve narrowed it down to Stata and R. Between these two, Stata seems more “natural” to me. On the other hand, I recognize the many advantages of R, and I feel an obligation to myself to overcome the problems with its steep learning curve.

  54. Christopher says:

    Great read. It took 2 coffees and several hours to absorb all of this yesterday, and I had no criticism whatsoever. Very well done. However, getting back into the work I do today, I had a realisation. In all of this research and analysis, did you encounter, either in your own practice or during your research, the topic of arbitrary precision arithmetic? In R I struggle with it consistently, every time I perform arithmetic on decimals such as money during visualisation and analysis, and most importantly during simple logistic regression!
    I tend to rely on R for the tools, but the same task in Python 2 consistently yields more accurate results than R. Verified using a basic calculator! Don’t underestimate the mere calculator. R has great tools but is fundamentally flawed without having implemented arbitrary-precision arithmetic. R may be “faster” due to its raw use of floating point calculations, but it lacks accuracy because of it, and many a time I’ve discredited my own findings with R using a calculator; more importantly, I’ve shown that 6 months of work others have done yielded inaccurate results simply because they didn’t know to account for arbitrary-precision arithmetic.
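
    As a tiny illustration of the kind of rounding issue involved (it comes from IEEE double precision, which R and Python both use for ordinary numbers), and one common workaround for money:

      # Doubles cannot represent most decimal fractions exactly:
      0.1 + 0.2 == 0.3                  # FALSE
      print(0.1 + 0.2, digits = 17)     # 0.30000000000000004
      # A simple workaround for money: keep the values as integer cents.
      prices_cents <- c(1999L, 2450L, 999L)
      sum(prices_cents) / 100           # 54.48, exact because the sum is done on integers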

  55. Augustina R says:

    Hi! You mentioned that “John Fox (2009)” analyzed capabilities of the R language, but I didn’t see the source in your bibliography. I was curious if you had that source available? Also, do you have a notebook, either on RPubs or on GitHub, that shows the data you used to generate the plots? I’m currently researching popularity metrics and would love to be able to reproduce some of your results in different contexts.
