Posts Tagged ‘measurement’
I was shocked today to get pointed to a post on the Hootsuite blog by my friend Kami Huyse. The post “What is the most sought-after selfie?” looked at recent famous selfies. What galled me was this paragraph:
2014 was the year of the first billion-dollar selfie. During the 2014 Oscars, Ellen DeGeneres snapped a group selfie, rumored to be sponsored by Samsung, with the likes of Brad Pitt, Angelina Jolie, Bradley Cooper, Julia Roberts and Meryl Streep. She then uploaded the photo to her Twitter account and ended up getting millions of retweets from people around the globe. Maurice Levy, CEO of advertising firm Publicis, said that the Oscar selfie was worth between $800 million to $1 billion to its client Samsung.
I immediately shared some inappropriate words, then I left a comment on the post. But apparently I still have more to say.
Lévy is the CEO of a gigantic conglomerate of agencies lumped together as Publicis Groupe, and he was doing a talk at the MIPTV summit in April, just after the Academy Awards. Here’s the crucial quote:
“The earned media — all the buzz which had been done around the Oscars — represents roughly a value between $800 million and a billion US dollars, because it has been mentioned all around the world, and the Samsung phone has been either mentioned or seen.”
M. Lévy has, no doubt, achieved great things. His group of companies generated $2.3 billion in revenue (US dollars) in the first quarter of 2014. Compared to me, he’s a top predator, and I’m an amoeba. So I am shocked to see a man of his stature, in his position, use a metric that has been so thoroughly discredited — Advertising Value Equivalency, or AVE.
AVEs have been around for a long time. And despite the efforts of many professional groups and individuals, they remain. Why are they problematic? I can’t state the reasons much better than this 2003 paper from the Institute for Public Relations. I’ll turn the paper’s objections into bullet points for brevity:
- There’s no factual basis for assuming that an “editorial” mention is equivalent to an advertisement
- The credibility of media varies from one topic and one outlet to another, so using a single “multiplier” is impossible
- AVEs only measure what APPEARS, while PR folk often work to minimize coverage or to keep something from appearing at all. That work is not measurable by AVE
- Advertisements depend on repetitive mentions to build awareness. “Earned media” cannot do the same
- Not everything is relatable to advertising. If there are no ads on the front page of a magazine, what’s the value of a cover mention?
- If a story tangentially mentions a brand or an organization, does the equivalency relate to the entire story or the portion of the story mentioning the specific brand?
In 2010, a coalition of leading communication organizations agreed upon what came to be known as the “Barcelona Principles.” Principle number five of the seven principles states: “AVEs are not the value of public relations.” Yet, according to PR News earlier this spring, the principles are not being adopted as quickly as might have been expected. Or hoped. And when you have people in the position of Maurice Lévy using these discredited and disavowed numbers, while it remains disappointing, it becomes less surprising.
The lesson for us here? I could simply and flippantly say “Don’t follow leaders.” But there’s a slightly deeper lesson here. Even if you’re working with a “top agency”, even if you’re hiring “the best” — you owe it to yourself and your business to be ready to call BS on what they tell you. Don’t simply assume they know best, that their advice should be taken. If you can’t understand the strategy, or the method of evaluation; if you can’t relate the tactics to your business goals: speak up. Ask for better.
And if you’re a communicator — find a way to help push our industry out of the bad habits that we’ve developed. We can do better. And we know how.
Earlier this month I wrote about taking public stands as a business. One of the elements of that post was that you want to be listening to the conversations taking place around the issue, and around your business. Ideally, you should be doing that on an ongoing basis.
I also wrote about developing a “listening strategy.” Maybe you took those posts to heart. But, you say, you don’t regularly monitor social media? Too difficult? Too expensive? Pshaw.
Yes, you can spend money on a commercial social media monitoring service. There are lots out there. But maybe you don’t have the budget for that. Well, in a few steps, you can have a listening post set up that might not be as exhaustive as some giant corporate operation, but is certainly going to be better than ignoring conversations.
Step one: Get your Google on.
There’s more to Google than just searching for that store that sells those gadgets you need. You can use tools like Google News and Google Blogsearch, in tandem with RSS feeds and/or Google Alerts, to know exactly what is happening in your industry, when someone writes about your competition, or when a blog covers a topic of interest to you or your business. Don’t forget about YouTube searches as well.
Step two: Say yes to RSS.
The geekosphere mourned the loss of Google Reader when it was shut down on July 1, 2013. But there are alternatives, like Feedly. What are these things? Here’s my simple description: websites, Google searches, and all sorts of web-based tools generate something called an RSS feed. That feed gets updated every time the site is updated. Feedly and other RSS readers grab all the feeds you want and create a newsstand on your screen. You can skim through hundreds of websites in a couple of minutes, save the articles worth keeping, and forget about the rest. Visiting an equivalent number of sites would take HOURS. This is a huge timesaver.
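For the curious, here’s a minimal sketch of what an RSS reader does under the hood, using only Python’s standard library. The feed content, blog names, and URLs below are made up for illustration; a real reader would fetch the XML from each site’s feed URL instead of using an inlined sample.

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 document, inlined so the sketch needs no network access.
# All titles and links here are hypothetical.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Industry Blog</title>
    <item>
      <title>Competitor launches new gadget</title>
      <link>http://example.com/gadget</link>
    </item>
    <item>
      <title>Trends to watch this quarter</title>
      <link>http://example.com/trends</link>
    </item>
  </channel>
</rss>"""

def list_headlines(rss_text):
    """Return (title, link) pairs for every item in an RSS feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in list_headlines(SAMPLE_RSS):
    print(title, "->", link)
```

A reader like Feedly simply does this across all your subscribed feeds on a schedule, then presents the new items in one place.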
Step three: Make it a nest-y habit.
Make checking this part of your daily routine. My recommendation: First thing in the morning, when you turn on your computer or tablet, you check your e-mail, right? Then you do the same thing with your RSS Reader. You then flag anything that’s of importance and act on it — give it to an employee, respond, make phone calls, put it in your follow-up file — whatever works.
If you do this? You’ll be further ahead than the majority of businesses: a late-2012 study found that TWO THIRDS of companies aren’t monitoring social media for business purposes.
Got a question about setting up your listening post? Leave a comment. Like this kind of post? Click on the “SMB101” or “Tips” tags just below! Need a little help or support setting things up? No problem – contact me.
(photo: Creative Commons licenced by Flickr user Elliott Phillips.)
If you’re here regularly, you’ll know I love me some measurement. So when I saw a recommendation to read a paper from Katie Paine, I pretty much immediately went to the site to download it.
“Social Media Measurement: A Step-by-Step Approach” by Angela Jeffrey, a Texas-based communications consultant with Measurement Match, is exactly what the title implies — a no-BS guide to doing solid measurement of social media initiatives for organizations, published by the excellent Institute for Public Relations. When I saw a thanks to Kami Huyse, a communicator who I like and respect a great deal, that made me even more positively disposed to the paper.
And the content does not disappoint.
She starts with the depressing information that measurement is NOT being embedded in organizations’ social media campaigns and points to three different surveys with disturbing numbers. Perhaps the worst? An eConsultancy survey that reported only 22% of communicators had a strategy that linked data and analysis to business objectives.
So perhaps you’re among the other three-quarters of that sample. Drop the shame, and read the rest of the paper. In under 20 pages (before the appendices), she lays out an eight-step approach to a solid — and achievable — social media evaluation process.
Here’s my paraphrase of her steps. And if any of this is shocking, you really need to brush up.
- Identify goals
- Research and prioritize your stakeholders for each goal
- Set objectives
- Set key performance indicators
- Choose your tools and benchmark
- Analyze your results and compare them to your costs
- Present to your management
- Lather, rinse, repeat.
If you’re holding back, or haven’t built a measurement component into your social media activities, read this paper and then tell me why you can’t.
Hell, you don’t even have to pay for her paper. So… get to it. And if you want some more support, feel free to contact me for a consultation, or to take the next social media measurement course I’m teaching later this month.
Not so long ago, my friend Dennis posted an infographic about the misuse (accidental or wilful) of data in infographics. In a handy infographic format. I’m going to take the opportunity to embed it below. It’s worth keeping.
But Dennis’s nifty graphic only tells us about one place where we can be led into temptation — the infographic.
I happened upon a newsletter today that made me think of how easy it is to make marketing and communication decisions or take action based on information that should be questioned.
Mobile Commerce Daily reported on May 29 that “44pc of shoppers will never return to sites that are not mobile friendly: report.” The story is based entirely on a survey carried out by US software company Kentico, which makes content management systems. Kentico issued a news release about the survey on May 28, but it could be that the newsletter had an embargoed copy of the release.
The information is interesting. For example, it says that nearly 9 in 10 people with smartphones use them to compare products to competitors. And 45% do it right in the store, underlining the practice of “showrooming.”
But… in the newsletter story, there’s no information at all about the survey data. Even more frustrating is the lack of a link to the source data. I tracked down Kentico’s press centre, where the news release about the survey sits. There you discover that the data-gathering part of this survey consisted of “More than 300 US residents 18 years old and over participated in the Kentico Mobile Experience Survey, conducted online during the month of April, 2013.”
Now, a survey sample is neither good nor bad. The point is to understand that sample. Was it a random sample? Did the participants self-select? I couldn’t tell anything more than what I just said, because Kentico didn’t link to the survey itself or a more detailed report of its findings.
I contacted Kentico’s PR company, and Chris Blake of MSR Communications was prompt, open and detailed in his responses to my questions. He sent me the demographic information provided by SurveyMonkey, the tool they used to do the research, along with a copy of the questionnaire. After a brief perusal of some US census data, I learned that their sample of 300 people skewed only slightly more male, somewhat older, and far more educated than the US general population. And the data provided on their sample gives me a sense of the potential sampling error rate (while Chris Blake suggests a ±5% margin of error, I’m thinking more like ±10%).
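As a sanity check on those two figures, here is the textbook margin-of-error formula for a simple random sample, sketched in Python. I’m assuming the worst case (p = 0.5) and 95% confidence; this is not necessarily how SurveyMonkey computes its numbers.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a simple random sample of size n,
    at 95% confidence (z = 1.96), assuming proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(300)
print(f"±{moe * 100:.1f}%")  # roughly ±5.7% for n = 300
```

That lands close to the ±5% Blake cited, but only under the assumption of a truly random sample. Doubts about self-selection in an online panel are exactly what justify padding the estimate out toward something like ±10%.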
I don’t think there’s ANYTHING bogus about the survey results here. But I needed to take a fair amount of time to convince myself of that. And there are many occasions on which I find the data or survey results so problematic that I forget about using them.
There’s a flood of survey results and other materials that get published by the originators of the information, by newsletters, and by people like me every minute of every day. It’s easy to take everything at face value. But think twice. As a teacher of social media, I’m constantly looking for good data to share with students. As a consultant, I’m looking for information that I can use to help clients make sound decisions. But it is dangerous to see a newsletter article and use it to tell students or clients to base their actions on the data it contains.
Back in the days when ink and paper cost money, I understood the need for brevity and concision. But these newsletters are electronic. Pixels don’t cost anything but the time to write. And if you’re not going to disclose proprietary or competitive information, why not make as much information as you can readily available?
The more easily people like me can peruse your research, the more likely we’ll be to accept its conclusions. The more difficulty we have understanding the process behind the numbers, the more skeptical we become (or at least the more skeptical we SHOULD become).
And if you’re in business and trying to grapple with the challenges of communicating using social media, either desktop-style or mobile, make sure to ask questions EVERY time you see statistics and survey results. You don’t want to have to explain to your boss why you made a bad marketing or sales decision based on data you found in a press release and didn’t vet.
It’s too generous to assume that just because someone writes a newsletter, they’re doing your due diligence for you.
Here’s Dennis’s great graphic:
Today is Maclean’s Day in Canada. For six years, I did media relations at a Canadian university, which meant Maclean’s Day was … well, sort of the inverse of a national holiday. It was the day on which you could be assured that all the local media outlets would be calling for comment from the president on the university’s standing in the league tables of Canada’s universities, published by our only national weekly newsmagazine.
This was BIG news in Canada, and some universities used their Maclean’s ranking as part of their marketing and recruitment activities. At the university I worked at, we were much less comfortable doing that. Was that because we were never at the top of our class? I can tell you no, but I don’t know if you’ll believe me or not.
It’s worth noting that at one point, a consortium of universities tried to organize a boycott of Maclean’s, refusing to provide them with data. The goal was to either stop the rankings entirely, or to influence how the data were crunched and presented.
So our standard lines went something like this:
- “The Maclean’s rankings are one measure of some aspects of performance of universities.”
- “We have issues with the way Maclean’s collects and analyzes data.”
- “We don’t celebrate when we rise or mourn when we fall.”
- “We will look at the numbers and see if there are things we can focus on to improve the university experience for our students.”
Humans have a natural desire to rank and to rate. Who’s best? Who’s worst? And one truth that is sometimes forgotten in the quest to rank is this: even if everyone’s excellent, SOMEONE has to be worst. One of the big problems with the Maclean’s rankings (from the universities’ perspective) was that the rankings weren’t accompanied by scores. As a result, you don’t know whether the point-spread between #1 and #20 was 5 “points” or 50.
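A toy illustration of that point, with entirely made-up institutions and scores: two score distributions that produce identical rank tables but tell very different stories about the gap between first and last place.

```python
# Two hypothetical score distributions with the same ranking
# but very different spreads between first and last place.
tight_field = {"U. of A": 90, "U. of B": 89, "U. of C": 88}
wide_field = {"U. of A": 90, "U. of B": 70, "U. of C": 40}

def rank(scores):
    """Return institution names ordered from highest to lowest score."""
    return sorted(scores, key=scores.get, reverse=True)

print(rank(tight_field))  # ['U. of A', 'U. of B', 'U. of C']
print(rank(wide_field))   # the same order; the rank table looks identical

# The spread is what the published ranking hides:
print(max(tight_field.values()) - min(tight_field.values()))  # 2
print(max(wide_field.values()) - min(wide_field.values()))    # 50
```

Publish only the ranks, and a two-point field and a fifty-point field become indistinguishable.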
Education rankings, whether in Canada or the US, are big business. For Maclean’s, it means an extra 500,000 readers for their Universities issue. I’ve no doubt advertising is charged at premium rates too. Maclean’s also packages the material into an annual guide that sells for $20. In the States, U.S. News & World Report has just announced it’s killing its print editions… except for the university rankings and other special issues.
And the ranking mania has moved to public schools. Here in Ottawa, the local paper publishes a ranking of public schools that is prepared by the Fraser Institute. And in Los Angeles, you can even get rankings for your teacher, thanks to the Los Angeles Times. The Times hired a consultant who crunched achievement test scores, then produced reports on how each teacher and school did on “value-added” — essentially, how much each teacher or school improved a child’s achievement test scores. Each teacher’s or school’s performance was charted like this:
So what’s wrong with this? Shouldn’t people be assessed and evaluated? In Washington, DC, the “Schools Chancellor” fired more than 200 teachers in July based on their effectiveness rankings. The Times coverage suggests that this is a step up from, or at least an add-on to, traditional teacher assessment methods. This is how the Times describes those methods:
teachers’ performance reviews, which are overwhelmingly based on short, prearranged classroom visits by administrators and other subjective measures.
All of this would be — pardon the pun — academic, but for the fact that some people believe a teacher who received poor rankings committed suicide. There’s no way of knowing what brought the teacher to kill himself.
But I guess the questions going around in my mind are these:
- When is it appropriate to identify and assess the performance of people publicly?
- Where does performance assessment end and shaming begin?
- Can ratings and aggregate scores be trusted to make firing decisions?