Skeptics insist that those making claims (especially extraordinary ones) bring some data supporting said claims. As Josh Rosenau put it, skepticism means caring about evidence. It is only appropriate that we hold ourselves to similar standards when we make claims about our own work.

But measuring the effect of educational and outreach programs can be difficult. Surveying the public is expensive and time-consuming. Skeptics are sometimes forced to rely on work done by better-funded organizations like Pew Research, when their studies happen to intersect our interests.

Skeptics who concentrate their efforts on the Internet have a slightly easier time of it. Computer services and applications collect detailed statistics as a matter of course. By tapping into these numbers we can often learn more about our successes and failures and perhaps discover where we should be focusing our efforts.

There are, of course, simple measures like hit counts on web pages and videos that can give you an idea of the popularity of a piece of content. More advanced tools like Google Analytics and YouTube Insight can tell you where your readers or viewers are located, what searches they used to find you, and more. I encourage all skeptic webmasters to take advantage of these and other tools; many of them are free or included with your website hosting.

Each skeptic webmaster is privy to the full statistics for their own site, but sometimes useful statistics are publicly available for all to see. DoubleClick Ad Planner has interesting data for any site that runs ads placed by Google, including our competitors. With it you can discover that 80% of the readers of Watts Up With That are male, or that only about a third of the people who read Mike Adams’ NaturalNews have a college degree. These facts could be useful in targeting skeptic activism.

Those sites are all point-and-click, but sometimes it takes a little more work to gather the data. Back in May I did a survey of skeptic podcasts by aggregating the data in the RSS feeds used to subscribe to them. While no listener counts are available this way, some interesting data still emerged. For instance, I estimate that almost two and a half hours of new skeptic podcast content is now made available every day, and that figure is continuing to grow.
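To give a flavor of that kind of aggregation, here is a minimal sketch in Python using the feedparser library; it totals the running time advertised in each feed's itunes:duration tags. The feed URLs are placeholders, not my actual survey list:

```python
import feedparser  # third-party RSS/Atom parser: pip install feedparser

# Placeholder URLs -- substitute the actual podcast feeds you track.
FEEDS = [
    "http://example.com/podcast-one/rss",
    "http://example.com/podcast-two/rss",
]

def duration_seconds(text):
    """Convert an itunes:duration string (SS, MM:SS, or HH:MM:SS) to seconds."""
    seconds = 0
    for part in text.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

total = 0
for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        # feedparser exposes <itunes:duration> as entry.itunes_duration
        if "itunes_duration" in entry:
            total += duration_seconds(entry.itunes_duration)

print("Total archived content: %.1f hours" % (total / 3600.0))
```

Divide that total by the span between the oldest and newest publication dates in the feeds and you get an hours-per-day figure like the one quoted above.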

Traffic statistics are also publicly available for Wikipedia. I’ve used this data to compare page visits to What’s the Harm with the equivalent Wikipedia topic pages; the result is that Wikipedia gets between one and two orders of magnitude more traffic than my site. I’ve also measured the effect of getting skeptic topics listed in the “Did You Know” feature on Wikipedia’s main page. Susan Gerbic recently measured how interest in the SGU 24 podcast affected page views for the people featured in it.
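Pulling those numbers can be automated. Here is a minimal sketch against the Wikimedia pageviews REST API, one freely available interface to these counts; the article title and date range are just examples:

```python
import json
import urllib.request

# Wikimedia pageviews REST API: daily view counts for a single article.
URL = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "en.wikipedia/all-access/all-agents/{title}/daily/{start}/{end}")

def total_views(title, start, end):
    """Sum daily page views for an article between two YYYYMMDD dates."""
    req = urllib.request.Request(
        URL.format(title=title, start=start, end=end),
        headers={"User-Agent": "skeptic-metrics-sketch/0.1"})  # the API requires a UA
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return sum(item["views"] for item in data["items"])

print(total_views("Homeopathy", "20230101", "20230131"))  # example dates
```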

There are things unrelated to page views that can be measured as well. After Canadian skeptic Erik Davis set up the WOT Project to help protect the ratings of skeptic sites in the Internet safety tool Web of Trust, I realized that the change in safety ratings could be measured directly with some programming. I was able to confirm that Erik’s project was having the desired effect.
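The pattern is simple: poll each site's rating on a schedule and log it, so changes show up as a trend over time. A minimal sketch of that polling loop follows; the endpoint and response field are hypothetical stand-ins, since WOT's real API has its own URL scheme and requires a registered key:

```python
import csv
import json
import time
import urllib.request

# Hypothetical ratings endpoint -- WOT's actual API differs and needs an API key.
API = "https://api.example.com/rating?host={host}"

SITES = ["skeptic-site-one.example", "skeptic-site-two.example"]

def snapshot(path="ratings_log.csv"):
    """Append one timestamped rating per site to a growing CSV log."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for host in SITES:
            with urllib.request.urlopen(API.format(host=host)) as resp:
                rating = json.load(resp)["trustworthiness"]  # assumed field name
            writer.writerow([int(time.time()), host, rating])

if __name__ == "__main__":
    snapshot()  # schedule daily (e.g. via cron) and chart the logged trend
```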

Measurements can have predictive power as well. A physicist recently analyzed how the length of YouTube videos affects the number of views; the results seem to indicate that keeping a video below three minutes can significantly increase viewership.
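Anyone with a channel of their own could run a rough version of the same check. A minimal sketch, assuming you have exported a two-column CSV of duration (in seconds) and view count for your videos (the file name and format are assumptions):

```python
import csv
from statistics import median

# videos.csv: duration_seconds,views per row, no header -- an assumed export.
buckets = {}
with open("videos.csv") as f:
    for duration, views in csv.reader(f):
        key = "under 3 min" if int(duration) < 180 else "3 min or longer"
        buckets.setdefault(key, []).append(int(views))

for key, views in sorted(buckets.items()):
    # Medians resist the distortion of a single viral hit better than means.
    print("%s: median views %d" % (key, median(views)))
```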

There are clearly many potential confounding factors in that YouTube study, and they can crop up in many other online measurements too. For instance, measuring changes in your site’s search engine placement is always confounded by the continual changes search engine vendors make to their ranking algorithms.

Another problem is not understanding the significance of what you are actually measuring. For instance, some bloggers use the page views on their posts to decide which topics to cover. But a page view only indicates that someone was interested in the topic, perhaps from the title alone; it cannot tell you how many readers agreed with the post or even read it to the end. The often-skeptical technology site Ars Technica ran a clever experiment showing that many commenters on controversial articles clearly do not read to the end before weighing in.

On the topic of comments, some might treat the activity there as a better proxy for interest or agreement. But the number of commenters is invariably a tiny percentage of those who actually read the post. How do you know their opinions accurately represent your entire readership? They are a self-selected sample that might be biased in some way.

Perhaps a better way to measure public interest in various skeptic topics is through Google Insights for Search. This tool has data on the prevalence of search terms entered into the search engine. How has interest in homeopathy compared with acupuncture over the last few years? Who is best known among Sylvia Browne, John Edward, and James Van Praagh? These are all questions that can be quantified with freely available data.
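A sketch of one such comparison using pytrends, an unofficial third-party Python client for Google Trends, the service that grew out of Insights for Search; the keywords and timeframe are just the examples above:

```python
from pytrends.request import TrendReq  # unofficial client: pip install pytrends

pytrends = TrendReq()
pytrends.build_payload(["homeopathy", "acupuncture"], timeframe="today 5-y")
df = pytrends.interest_over_time()  # weekly relative interest on a 0-100 scale

# Average relative search interest over the whole period
print(df[["homeopathy", "acupuncture"]].mean())
```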

Caveats apply to any data gathered online, and doubly so for brand-new sources of data. Last year Google introduced the Books Ngram Viewer, a tool that can answer questions about the relative prevalence of different words or phrases in books over time. Some have used it to investigate skeptical questions, but others have raised serious doubts about whether the data underlying the tool is accurate enough to generate meaningful answers.

But with those caveats in mind, I encourage all skeptics who work online to seek out methods to measure their efforts, and to take advantage of them. Be sure to share what you find!

Tim Farley is a JREF Research Fellow in electronic media. He is the creator of the website What's the Harm and also blogs at Skeptical Software Tools. He researched the information in JREF's Today in Skeptic History iPhone app and has given presentations at TAM 6, 7 and 9. You can follow him on Twitter here.