“42.7 percent of all statistics are made up on the spot.” – Steven Wright

25 Jun

How To Spot A Crappy “Trend”

In my business, we have a saying: “the trend is your friend.” Snapshot data is occasionally interesting, but if you are going to be in the business of consumer behavior, you kinda need to know where things have been before you can speculate about where they are going, or how they are changing. After all, I could produce a snapshot that says Pinterest is the 55th most popular website in America, but MySpace is ranked #42 (which is true, according to Compete.com today.) What would you make of that stat, devoid of context?

Which brings me to this cringeworthy headline, shared by fellow market researcher Eric Swayne: “Teens turn from Facebook to fresher social-media [sic] sites.” This article commits two equally grievous sins. First, it compares Facebook’s growth rate from a year ago with its growth rate today, and concludes that Facebook is losing its momentum. Here’s a fact: the percentage of teens who have a Facebook profile (over 80% of all teens in America) is nearly as high as the percentage of Americans 12+ with internet access of any kind (85%). Is growth slowing? Ummm…yeah. It will probably, in fact, stop soon…once every freakin’ person has a Facebook page.
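To put some back-of-the-envelope arithmetic behind that, here is a minimal sketch. The 80% and 85% figures are the ones cited above; the earlier-year penetration number is purely hypothetical, and the point is simply that growth rates have to shrink as penetration approaches the ceiling.

```python
# Back-of-the-envelope: growth must slow as penetration nears its ceiling.
# The 80% (teen Facebook penetration) and 85% (internet access) figures come
# from the post above; the 40% "earlier year" penetration is hypothetical.

def max_yoy_growth(current_penetration: float, ceiling: float) -> float:
    """Largest possible year-over-year growth rate, as a fraction."""
    return (ceiling - current_penetration) / current_penetration

early_days = max_yoy_growth(0.40, 0.85)   # plenty of headroom: >100% growth still possible
today      = max_yoy_growth(0.80, 0.85)   # today: at most ~6% growth, no matter what

print(f"Max possible growth at 40% penetration: {early_days:.0%}")
print(f"Max possible growth at 80% penetration: {today:.0%}")
```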

The second crime against numeracy occurs in the very headline of the article: “Teens turn from Facebook…” If you are going to posit that a group of consumers is turning from one thing to another, it’s incumbent upon you to show a change in state. That means, from a research perspective, that you need to show movement from x to y—from “popular” to “unpopular,” or vice versa. And that, axiomatically, means you need at least two data points. In short: a trend.

What this article presents is a snapshot, and a badly-framed one, at that. Here, by the way, is what smells right about this article: a study from YPulse showing that 18% of teens prefer to “check in” on Foursquare instead of Facebook. That number seems right to me, given what the USA Today article doesn’t tell you: that number is amongst teens who use both services. Given that the usage of Foursquare is around 3-4% of the general population, to represent this 18% figure as amongst “all teens” and not amongst “all teens who use both services” is a violent crime against common sense.

Here is a trend: in our nationally representative Social Habit research series, we show Foursquare usage as generally flat between 2011 and 2012. That doesn’t mean I’m bearish on Foursquare, since more people use Foursquare than drive Hyundais, and the latter seems to be a going concern. But the trend is flat, and amongst that 3% who do use “check-in” services, the percentage who say they post location-tagged status updates “nearly every time they go out” is 18%. Sound familiar? That’s why the YPulse number smells right. But the USA Today article conflates it with 18% of all teens (demonstrably wrong) and draws a trend based upon one data point.
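If you want to see just how different “18% of teens who use both services” is from “18% of all teens,” here is a minimal base-rate sketch, using the general-population check-in figure cited above as a rough (and deliberately generous) stand-in for teens.

```python
# A minimal base-rate check. The 3-4% check-in usage figure comes from the
# research cited above; the true teen overlap is unknown, so this is a
# deliberately generous upper bound.

checkin_share = 0.04        # upper end of the 3-4% check-in usage figure
prefer_foursquare = 0.18    # YPulse: 18% of teens *who use both services*

# "Teens who use both services" cannot be larger than the check-in-using share
both_services = checkin_share
share_of_all_teens = prefer_foursquare * both_services

print(f"As a share of ALL teens: {share_of_all_teens:.1%}")   # roughly 0.7%
print(f"As the headline implies: {prefer_foursquare:.0%}")    # 18%
```

Less than one percent of all teens versus eighteen percent of all teens: that is the size of the hole in the headline.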

If you are going to write that a given consumer group is turning from one thing to another, it is incumbent upon you to provide at least two data points (and preferably three or more) that quantify a change in state. When you see headlines that imply a change, trend or other “movement,” look for that. If those data aren’t present, you know what to do.

31 May

Using Outliers As Examples, or Why I’m Mailing Dead Fish

A DataSnob reader (thank you, Josh Franklin!) sent me a link today to an article claiming that Market Research was useless and ineffective—after all, as we can read in Walter Isaacson’s biography of Steve Jobs, the iconic Apple CEO didn’t believe in it.

I’m not going to link to the article, because this blog is steadfastly about ideas, not individuals. I only mention it here because it is emblematic of literally dozens of similar articles I have read that treat Jobs’ disdain for market research as proof of its demise. And every time I read one of those articles, I want to send the author a nicely wrapped fish, or the head of a horse.

I don’t do that, of course, so I’ll just say it here—proclaiming that market research is ineffective and citing Steve Jobs as your “example” is intellectually dishonest for three reasons:

1. Apple has done, and continues to do, enormous amounts of market research. I’ve gotten at least three surveys about my experience in the Apple Store alone.

2. There is an enormous survivor bias around our campfire tales of the Visionary CEO And Their Golden Gut. For every Steve Jobs there are a hundred (maybe more) visionaries who also introduced a new product or service without the benefit of market research, and failed miserably. And in some of those cases, we even mythologize the failures as “grand experiments” that future successes learned from. In my book, that’s also market research—just a lot more expensive than the kind you do ahead of time. Ahem.

3. Finally, look at your CEO. Now look at Steve Jobs. Now look at your CEO…NOW BACK TO STEVE. For 99% of you, you should notice at least a modest difference. Steve Jobs was a generational outlier, not a representative example. I laud his ability to predict, and make, markets based upon vision, intuition and genius. For the rest of us, a little market research couldn’t hurt.

While I’m at it, I’m also tired of the famous Henry Ford anecdote, paraphrased by Steve Jobs: if Ford had asked people what they wanted when he was designing the Model T, they’d have said “faster horses.” Nice story. I think if I, or any moderately competent market researcher, had asked that question, the answer we would have reported back to Ford would have been that people want to get places faster. Which seems like the right answer to me.

Datasnob OUT.

24 May

On Correlations

Rand Fishkin posted an article on SEOMoz a couple of days ago on why the marketing world needs more correlation research. I don’t disagree with this. Correlation studies are a valuable first step in the scientific method, and I wouldn’t be typing this on my spanky new laptop without correlation studies, I can assure you. So, let’s dispose of this particular straw man directly and stipulate that I don’t think anyone is saying we need less research—certainly not someone like me who puts food on his family by doing research.

Fishkin is right to point out that “correlation is not causation” is often trotted out by detractors of this sort of research to repudiate it. The axiom is true, but it is admittedly sometimes used as a crutch by those who do not accept or understand the results of such a study.

There is an equally sinister use of this phrase, however: when it is used by the author(s) of a biased correlation study to disarm the reader by giving a false sense of disinterest. In trial law, a skilled prosecutor often brings up their weakest point themselves, and early in a trial, to “own” the story and put the jury at ease about later testimony. Sometimes, when I see a correlation study insert that phrase, I get the same sense—that the author is putting on a front of sorts: establishing that “correlation does not imply causation” makes the author appear reasonable, right before he or she does indeed go about the business of implying causality.

That aside, I’ll note again that I think Fishkin is beating a straw man. I completely agree that there should be more correlation studies. My issue is that some of the works he names as good correlation studies are not, in fact, correlation studies. They are eyeball comparisons of two lines on a chart. If that’s a correlation study, then I’d better start eating more ice cream so I can raise the temperature outside.

What we need are more good correlation studies. We need more work that shows us how multiple variables affect a given target. And I’d kinda like to see the stats, if you wouldn’t mind. I fully cop to being snobbish on this one (hence the name of this tumblog.) I expect my doctor to understand how to read a CBC test before ordering one. I expect my mechanic to understand my car’s computer box before disassembling it. And I expect correlations to be demonstrated statistically, not visually.
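For the record, here is roughly what I mean by “demonstrated statistically” rather than visually: a coefficient, a p-value, and a confidence interval. This is just a sketch on made-up data, not a reproduction of anyone’s actual study.

```python
# A minimal sketch of a correlation reported statistically rather than eyeballed.
# All data below is simulated for illustration only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
post_length = rng.uniform(20, 400, size=200)               # hypothetical post lengths
shares = 50 - 0.05 * post_length + rng.normal(0, 10, 200)  # hypothetical share counts

r, p_value = pearsonr(post_length, shares)

# 95% confidence interval via the Fisher z-transformation
n = len(post_length)
z = np.arctanh(r)
se = 1 / np.sqrt(n - 3)
ci_low, ci_high = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

print(f"r = {r:.2f}, p = {p_value:.4f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```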

More than good correlation studies, though, we need better questions. Correlation is not causality, but as Edward Tufte famously said, it’s a pretty good clue. That clue should then be used to ask a better question. “What’s the optimal length of a Facebook post to facilitate sharing?” is a shitty question. “What impact does length have on the ‘sharability’ of a Facebook post amidst all the other variables (the content, for instance)?” is a good question. And seeing an eyeball correlation between post length and amplification might in fact be the trigger for asking that better question, and that’s what correlation analyses are really good for.
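And here is a sketch of that better question in practice: estimate length’s effect alongside at least one other variable instead of eyeballing it alone. The variables and numbers below are invented for illustration; a real analysis would use your own post data.

```python
# A sketch of the multivariate framing: length's effect on shares, controlling
# for a second (hypothetical) variable. Simulated data, illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
length = rng.uniform(20, 400, n)     # hypothetical post length in characters
has_photo = rng.integers(0, 2, n)    # hypothetical content variable
shares = 30 - 0.03 * length + 25 * has_photo + rng.normal(0, 8, n)

X = sm.add_constant(np.column_stack([length, has_photo]))
model = sm.OLS(shares, X).fit()

# The length coefficient is its marginal effect with photo presence held constant
print(model.params)    # [intercept, length, has_photo]
print(model.pvalues)
```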

But Fishkin is right—we need more good work. Correlation analyses produced for the sole purpose of driving web traffic or generating leads are inherently incurious, and should never be taken as law for your business. But they do trigger good questions, and answering those questions will make us better marketers. Let’s just not treat these studies as the answers. Causality is difficult to prove—and it’s an altogether easy trap for a “data snob” to dismiss a correlation—but we don’t need to design a causality study to make use of a correlation. We just need better correlation studies, with more variables and moving parts, and better questions.

<drops the mic on the floor and dismounts soapbox>

11 May


An Infographic Best Practice

I don’t hate infographics. Honest. I hate BAD infographics. And I hate them BAD. Here’s one that wouldn’t be so bad, except that it violates one of Webster’s Cardinal Rules—list the damn sample base somewhere. When I saw this infographic here, I immediately questioned the stat. There’s NO WAY 47% of any kind of general population sample own a tablet—that’s higher than smartphone ownership! In our most recent representative study of Americans 12+, we recorded tablet ownership at 17% (with 12% for the iPad alone.) So my radar on this one immediately started pinging.

When stats like this one raise my hackles, I immediately do what you should do—find the original study, download it, and make sure you know the fine print. In the case of this infographic, there were only three possibilities: the number is from some subset of the general public (but not the general public), convenience sampling, or crappy sampling. In this case, as in most cases, it wasn’t crappy sampling, but a combination of the other two options. As I often say in this space, there is value in almost any kind of data, as long as you know who was asked, and how they were asked.

In the report from which this graphic originated, there is a caption to this graphic that is not seen on the website or indeed anywhere else that published this image: the sample for this was smartphone owners. So, right off the bat, it isn’t 47% of “people” that own a tablet, it’s 47% of the 44% of Americans who own smartphones who also own a tablet. That’s the “subset” part. The convenience sampling part is this—the sample was obtained from the database of JiWire users, who probably lean slightly more towards the early adopter end of the spectrum than your smartphone-owning Aunt Ethel.
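Here is the same stat restated against the right base. The 44% smartphone-ownership figure is the one above; the JiWire skew cannot be quantified from the infographic, so even this is an upper bound for “Americans overall.”

```python
# Restating the infographic's 47% against the general population.
# 44% smartphone ownership is cited in the post above; the JiWire panel's
# early-adopter skew is acknowledged but not quantifiable here.

tablet_among_smartphone_owners = 0.47   # the infographic's stat
smartphone_owners = 0.44                # share of Americans owning smartphones

implied_overall = tablet_among_smartphone_owners * smartphone_owners
print(f"Implied tablet ownership among all Americans: {implied_overall:.0%}")  # ~21%
# ...still above the 17% we measured in a representative 12+ sample, which is
# consistent with the panel leaning toward early adopters.
```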

So, taking those two things into account, I totally believe this number in its context—in other words, I completely buy that nearly half of JiWire-using smartphone owners also own a tablet. Now I know what to do with this stat. But every time infographics like this are released without any kind of information on the provenance of the data or the sample, a unicorn gores a puppy. Right there, in the fine print next to “Source,” put the sample base. Save a puppy.

7 May

There’s No Such Thing As Too Much Data

In the spirit of Datasnob, I’m going to get a little snotty about an article my friend Jason Konopinski alerted me to today, entitled Too Much Data Is Too Much Data. Now, I can certainly understand the sentiment here, and I’ve given a few keynotes this year with the title “Drowning in Data,” so I can’t get too uppity here.

But allow me to get uppity here. Reading a columnist complaining about “too much data” is like hearing a non-doctor say “there’s too many veins!” or a non-pilot say “there’s too many buttons!” In this case, it isn’t data that’s the problem. It’s reliable discernment and synthesis that are the problems. It’s not a numbers issue, it’s a people issue.

We are only scratching the surface of what Big Data is going to reveal to us in the future. Right now, it’s a little murky. For many, it’s more information than you need. But data is data. What we lack, here in the nascent days of all this data mining, are enough talented miners. We’ll get there. But let’s light a candle, rather than curse the darkness.

We need more creatives to stop saying “I’m not a numbers person,” which in 2012 sounds to me exactly like “I’m not a words person.” Welcome to the future, folks. Innumeracy = illiteracy. And we need more statisticians to train as marketers and hone their strategic thinking AND communications skills. We need fewer data dumps, and more concise, three-page summaries that distill all that data down to the actionable big picture. We need both sides of this farmers-and-cowmen dispute to learn from each other, cross-pollinate, and assimilate the skills needed to make the next leap.

That leap will lead us to a place, I hope, where we never say something so ludicrous as “there’s too much data” again. If your marketing department dumps too much data on you, criticize your marketing department. It’s a poor potter that blames the clay.

17 Apr

Social Sewage

I had a conversation with Nichole Kelly last week over lunch about sewage. We were both speaking at Explore Nashville, and kibitzing over just how much—or little—humans really want to connect with brands on social networks like Facebook. Nichole’s example was a little extreme: “I certainly don’t want to friend my local sewage treatment company!” I had to agree, especially as I had a mouth full of ham sandwich at the time. 

This got me thinking, though. If social media is the “world’s largest focus group,” who are your respondents, really, and how representative are they? If your brand is a highly engaging one (say, Zappos), then lots of mainstream social users might be talking about you online. This means that the data you get back from social has a better chance of being representative of the folks who aren’t talking about your brand. But if your brand is Roto-Rooter, there may be fewer people talking about you, and those that are might be significantly, well, let’s say “off the mark” relative to your average users.

What I am talking about here has little to do with raw numbers, and everything to do with who is talking, and how “normal” (relative to your average users) they are. If you “like” a page for Nike to participate in a fitness promotion, you may be more representative of Nike’s customer base than someone who “likes” Charmin is of toilet paper users (which, I hope, is everyone. Cleanliness is next to godliness.)

Now, if you are a brand manager, you need to do the work. You need to use some other form of research (survey, generally) to determine how close or far from “normal” your social fans are. But, in the spirit of Datasnob, we are all consumers of research. Asking yourself how engaged the “fans” of a brand are likely to be is an essential step in providing context for any research you see based upon social monitoring of those fans. In other words, to bring this post to its scatological extreme, if you see a study about America’s septic systems derived from “fans” of a sewage treatment brand, it could be useful, but it’s likely to be crap. 

And yes, I meant to do that. 
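Joking aside: if you want to put a number on how far from “normal” your fans are (that survey work I mentioned above), a chi-square test on a simple contingency table is a reasonable first pass. The counts below are entirely hypothetical.

```python
# Comparing the age mix of a brand's social fans against a representative
# survey of its customers. Counts are hypothetical; the test is the point.
from scipy.stats import chi2_contingency

#                    18-24  25-44  45+
fans_by_age      = [  420,   310,  70]   # hypothetical fan counts
customers_by_age = [  180,   410, 210]   # hypothetical survey counts

chi2, p_value, dof, expected = chi2_contingency([fans_by_age, customers_by_age])
print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}")
# A tiny p-value says the fans' age mix differs meaningfully from the customer
# base; weight what those fans tell you accordingly.
```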

4 Apr

Knowing “Who” Is Half The Battle

One of Datasnob’s readers passed a study along to me last week to get my take on its findings. The report is listed as a CEO, Social Media and Leadership Survey, and purports to be a study of “C-Suite engagement on social media channels and attitudes of customers and employees toward that brand.”

I can’t really comment on the findings of the study—and again, this space will steadfastly not be used to rip on people’s research (which is why I’m not linking to it here—this isn’t about them.) But the reasons I can’t comment on the study’s findings might be useful to you, dear readers, as a way to think about reading reports like these.

I have a very sanguine take on most research I encounter. My bias is this: there is almost always value in asking someone questions. The key to understanding the answers, however, is being clear about who you asked. The most common sin here is to survey the readers of your marketing blog and then present the respondents as “marketers.” They aren’t marketers. They are people of varying professions who read your blog and responded to your survey. That isn’t a negative - it’s about clarity. The results of your survey may or may not be projectable to the universe of marketers, but they are at least potentially descriptive of your audience, and that has value. Knowing what I know about your blog/audience, I can contextualize the data and put it to use.

It is tempting to think of the myriad surveys and studies we encounter in binary fashion—they’re either great or crap. But the truth, like people, is far messier. I can find value in almost any form of study, as long as I know who you asked, and how you asked them. Just those two pieces of information go a long way towards providing context. I’m an information synthesist. Every piece of data has its place—and knowing who you asked, and how you asked them lets me place that data in the right spot on the sticky-note-encrusted, spinning cork ball of my brain. Without that information, the sticky note has no place to stick. It falls to the ground in a heap. There’s a lot of that in my brain, too.

Back to our CEO study. In order for me to process a study of C-Suite opinions regarding social media, I need to know who was interviewed, and how. Here is the complete methodology section for this report:

[The study] surveyed several hundred employees of diverse companies, spanning in size from startups to Fortune 500 companies, and working at all levels of their respective organizations. Respondents representing a wide selection of industries, professions and regions were asked to answer questions pertaining to social media participation by their organization and executive leadership team. Respondents were also asked about their perception of other companies and brands, based on executive participation in social media channels.

Before I continue, let me reiterate that I am not going to denigrate this survey. That’s not what this blog is about. I will, however, point out what else I need to know to find its kernel of value.

First, let’s start with sample size: several hundred. Now, I don’t expect “people polls” like this to necessarily adhere to the same standards that our election exit polls do, but I would like to know exactly how many people were interviewed. Again, that isn’t to “judge” the survey, but it does help me get a sense of how deeply the data can really be dissected before subgroups become too small to be stable. Webster’s (please buy one—my family gets a nickel) defines “several” as “more than two but fewer than many.” So we can surmise that the sample size here is north of 200, but not many hundred. The Many Hundred would be a great straight-to-VCR ripoff of a gladiator movie, though. SPARTA!
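Here, roughly, is why the exact n matters: the worst-case margin of error balloons as you slice a “several hundred” sample into subgroups. The sample and subgroup sizes below are hypothetical, since the report does not give us any.

```python
# Worst-case (p=0.5) margin of error for a simple random sample, at 95% confidence.
# The n values are hypothetical stand-ins for "several hundred" and its subgroups.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case 95% margin of error, as a fraction."""
    return z * math.sqrt(p * (1 - p) / n)

for label, n in [("full sample", 300), ("C-suite subgroup", 60), ("CEOs only", 15)]:
    print(f"{label:18s} n={n:4d}  MOE = ±{margin_of_error(n):.1%}")
# full sample ±5.7%, C-suite subgroup ±12.7%, CEOs only ±25.3%
```

That plus-or-minus 25 points on the smallest slice is exactly why I want the real number before anyone dissects the data.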

The title and purview of the study would lead one to believe that the sample comprised CEOs and other C-Suite executives, but this information is also not provided. Instead, the sample is described—and I’m simplifying here—as employees of companies. That’s pretty broad. What I need to know is the exact makeup of that sample—the sample composition of C-level employees, managers, line and staff. If I know that, then I can contextualize the findings, which again are meant to portray C-suite engagement with social media. I’d love to know what C-level employees think, vs. what line and staff employees think. This sort of study won’t tell you the objective truth, but it could provide valuable insights into the varying perceptions of the truth at various levels of the organization.

Finally, I don’t know how the questions were asked, or how the sample was recruited/obtained, which again I need to know to put the data in its proper box. This data is not useless, nor are the study’s conclusions necessarily bad. But, as I said at the start, I don’t know what to do with them.

The lessons here are two-fold: for the creators of such data, please find a way to tell me who you asked and how you asked them. Sometimes, I think (and not necessarily in this case) people fear posting that information, because it will expose some weakness or flaw in the study. All studies have weaknesses and flaws. I’ll take the ones I can quantify over the ones I can’t any day. And for the consumers of such data: if you are genuinely curious, ask for this information from the providers of these sorts of studies. Insist on knowing who was asked before you post their infographic.

Because knowing who is half the battle. The rest of the battle, of course, consists of red and blue lasers.

26 Mar

What The Newest comScore Data Teaches Us About Impressions

A new study from comScore was just released that examines just what an “impression” really means in online advertising. In the spirit of Datasnob as a source for the discriminating data connoisseur, let me start by saying this is a fine study, and one that has implications beyond the banner ad.

The comScore study examined 18 online ad campaigns, with 3,000 placements and over 1.8 billion ad impressions. The big picture finding: 31% of “impressions” are actually never seen—they are scrolled past before they can load, for instance—and “below the fold” ads are actually more valuable than heretofore thought.

I’ll let the study speak for itself—it’s a good piece of work—but I’ll add here that the temptation has been to report this as a negative about the effectiveness of the banner ad (“Nearly one-third of impressions are wasted!!!”). I think the fact that 69% of online display ads are actually seen is a more impressive stat. The key is not to consider this stat in a vacuum. What percentage of any advertising is actually seen?

This brings to mind some of the horribly flawed “impression calculators” I’ve seen in social media that examine who retweets your messages and how many followers they have to calculate an “impressions” number for the potential eyeballs who could have seen your message. These are useless. With all the clickstream data at our fingertips, the simple fact remains that when I tweet this post to my followers on Twitter, I have *no idea* how many people will actually see this tweet, or even could have seen this tweet. All I know is that it’s probably higher than the number of people who retweeted it, and probably a LOT lower than 69% of the unduplicated follower counts of those who retweet it.
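If you want to see why those calculators flatter you, here is a minimal sketch: summing the follower counts of everyone who retweeted double-counts shared followers, and even the deduplicated number is only potential reach, not actual views. The follower IDs below are made up; getting real ones would require API access this sketch does not attempt.

```python
# Why naive "impression calculators" overstate reach. Follower-ID sets are
# hypothetical; real data would come from an API and be far larger.

followers_of_retweeters = {
    "@alice": {101, 102, 103, 104, 105},
    "@bob":   {103, 104, 105, 106},
    "@carol": {104, 105, 106, 107, 108},
}

naive_impressions = sum(len(f) for f in followers_of_retweeters.values())  # 14
unduplicated_reach = len(set().union(*followers_of_retweeters.values()))   # 8

print(f"Naive 'impressions': {naive_impressions}")
print(f"Unduplicated potential reach: {unduplicated_reach}")
# Actual views will be lower still: most followers never see any given tweet.
```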

The “impression” remains exactly what it is—a visual impression of a brand or message right between the retinas. Impressions remain an important part of brand advertising, and what this comScore report tells me is that display advertising remains a remarkably effective way to make “an impression” at the right phase in a campaign.

15 Mar

What SXSW Conversations Teach Us About Outliers

Mashable recently ran an infographic from Meltwater (below) that characterized the social media chatter around the South by Southwest Interactive festival in Austin, TX this past week. They monitored 483,000 conversations directly related to or emanating from SxSW, and revealed the people and topics that were trending during the event.

The study is a good one, and does what it says on the tin: Meltwater isolated SxSW-related content, and presented a descriptive summary, which Mashable reported correctly. What this reminded me of, however, are the dangers of “black swan” events in mining for social data, especially for things like mentions, which are often as likely to be a random walk as they are indicative of any type of buzz. 

Let’s take my doppelganger, Matthew McConaughey, for example. Meltwater reports that he received the fourth-highest number of mentions amongst celebrities in SxSW-related social chatter, and I completely believe that stat. McConaughey recently moved to Austin and was seen numerous times, both beshirted and not, about town during the event. Now, Meltwater did the work here to isolate SxSW chatter from “normal” chatter (is there such a thing on Twitter?), which presents us with a believable frame for this data and a context for its use.

Imagine, however, that you are working for a studio, and you are looking to cast a part for a remake of Citizen Kane. You fire up your Twitter Crapulator, and start trolling for tweets to see what celebrities have the most buzz. After rejecting the Bieber as too young, and the Gaga as too…uhh…much, you hit upon an intriguing name - McConaughey. Seduced by his apparent Twitter popularity, you imagine him drawling “Rosebud” to packed theaters, and call his agent. If you didn’t investigate further the reasons for this apparent Twitter resurgence, then you just got Black Swan’d (yes, I made that into a verb.) I wrote about this some time ago when I investigated why Chicago became such a hot positive topic on Twitter for a while - turns out, Justin Bieber tweeted “I love Chicago,” which then generated roughly 6 BGillion retweets. In other words, a massive outlier.

SxSW Twitter traffic is a similar outlier. With the flood of tweets emanating from the festival, it’s easy to see how some topics could spike in, let’s say, an unrepresentative fashion. If you are monitoring for topics during that period, it’s important to look at the traffic around those topics both before and after the anomaly (SxSW) to determine if that traffic is merely an outlier, or if there is some aspect of that buzz that had lasting effect.
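A minimal version of that before-and-after check might look like this: build a baseline from the surrounding days and flag anything that towers over it, then see whether the topic settles back down afterward. The daily counts are hypothetical.

```python
# Flagging an event-driven spike against a baseline built from surrounding days.
# Daily mention counts are hypothetical.
import statistics

daily_mentions = [120, 135, 110, 128, 142,   # week before SxSW
                  2400, 3100, 2800,          # during SxSW
                  150, 160, 145, 138, 155]   # week after

baseline = daily_mentions[:5] + daily_mentions[8:]
baseline_median = statistics.median(baseline)

for day, count in enumerate(daily_mentions, start=1):
    ratio = count / baseline_median
    flag = "  <-- outlier" if ratio > 3 else ""
    print(f"day {day:2d}: {count:5d} mentions ({ratio:5.1f}x baseline){flag}")
# Mentions that fall back to baseline afterward were the event, not a trend.
```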

Being aware of the context for data is crucial. It’s why we don’t do surveys in Europe during some of the prime vacation weeks (people are out of their normal patterns) or extrapolate monthly game console sales from December data (THANKS, SANTA!) If you are interested in what people had to say about your brand at SxSW, you need to screen for those conversations, as Meltwater did. But if you are interested in what social consumers in general are saying about your brand, then you might want to screen SxSW-tagged conversations out. What’s left may still be influenced by what happened at SxSW—nothing happens in a vacuum—but not dominated by it. Context is everything - and to make sense of social data, you need to understand what people are doing before you can characterize what they are saying.

Rosebud. It’s a sled, y’all.

8 Mar

Pinterest Drives More Traffic Than Google+, YouTube and LinkedIn - Or Does It?

Most of the social media studies I see propagated on sites like Mashable are basically fine for what they are - the problem is what the people who report these studies purport them to be.

Case in point: this Pinterest traffic data that was all the rage back in February (ahh, I remember those halcyon days like it was only a month ago…) The headline was almost brazen: “Pinterest Drives More Traffic Than Google+, YouTube and LinkedIn Combined [STUDY].” The data reported here showed that yes, Pinterest accounted for 3.6% of referral traffic, and that this figure is in fact more than those other three sites combined…

…amongst users of Shareaholic.

Now, the only accurate headline anyone could have written here was “Pinterest Drives More Traffic Than Google+, YouTube and LinkedIn Amongst Shareaholic Users.” I’m not in the headline business, but I’ll submit that this is probably not as compelling a headline as the one Mashable came up with.

So the issue here is one of scope. While the study is fine as an indicator of the behavior of Shareaholic users, it is, let’s say, unclear that it’s a useful proxy for Internet behavior in general. None of this is to denigrate Shareaholic’s methodology - I take their data at face value. However, there are waaaaaay too many facts not in evidence here to draw many conclusions about this data beyond the scope of Shareaholic users. Chief among those facts: how representative are Shareaholic users of Internet users in general? Or social media users? Or even Pinterest users?

We cannot know this, really - all that we can do is make assumptions. One might, for instance, assume that Shareaholic users are closer to early adopters than mainstream users. Or that Shareaholic users (from their very name!) tend to share more than mainstream social media users. But these are assumptions. Without a study of Shareaholic users (and for this, you need a survey - not clickstream data) we really can’t know much of anything - except that Shareaholics pin the hell out of things on Pinterest.

Again, there is nothing wrong, per se, with the methodology of the Shareaholic study - it is what it is. What it isn’t, though, is what legions of retweeters said it was.

For more, I’ll refer you to this post from the archives of Brandsavant on Social Media Data Analysis 101. One of the five main things you need to know about any study you see on the web is who was sampled (and the complement to that, who wasn’t sampled.) In this case, Shareaholic users were sampled, and non-Shareaholic users were not sampled. This does not invalidate the study; it contextualizes it. That’s the crucial point.

Helpful? If you’d like to see more studies, data points and other sources of social media information discussed here on Datasnob, do visit my contact page and send those links my way. Or, hit me up on the Twitter. Remember, my goal here is not to scold, and most of the data you see on the social web has some redeeming value. In this space, I’ll root that value out, and also give you some tips on how to frame this information so that it ends up being helpful to you.

Thanks for reading.