You might wonder: with great anti-biasing technology, why wouldn’t the idea with the most votes always be the best idea? There are all sorts of reasons. A great idea might have been entered relatively late in the crowdsourcing process, or the submitter might have given it an uncompelling title, for example. But by looking at implicit data as well as explicit data (that is, at how the crowd interacts with ideas, not just at hard data like votes), you can identify other indicators of ideas that are truly meritorious despite not getting the most votes, or even many votes. You may not be able to tell immediately whether the “underdog” idea is in fact better, but you can give it more visibility within the crowd so that you can make an apples-to-apples comparison with the big vote-getters.
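To make that concrete, here is a minimal sketch of what blending implicit and explicit data might look like. The signal names (views, comments, watchers), the weights, and the cutoffs are all hypothetical illustrations, not a description of any particular platform:

```python
# A minimal sketch of blending explicit and implicit signals to surface
# "underdog" ideas. The signals and weights below are hypothetical --
# tune them to whatever your platform actually records.
from dataclasses import dataclass

@dataclass
class Idea:
    title: str
    votes: int      # explicit signal
    views: int      # implicit signals below
    comments: int
    watchers: int

def engagement_score(idea: Idea) -> float:
    """Implicit-engagement score, independent of raw vote count."""
    # A high ratio of comments and watchers to views suggests the idea
    # provokes real discussion, a hint of merit despite low votes.
    if idea.views == 0:
        return 0.0
    return (idea.comments * 3 + idea.watchers * 2) / idea.views

def underdogs(ideas: list[Idea], vote_cutoff: int, top_n: int = 5) -> list[Idea]:
    """Low-vote ideas with unusually strong implicit engagement."""
    low_vote = [i for i in ideas if i.votes < vote_cutoff]
    return sorted(low_vote, key=engagement_score, reverse=True)[:top_n]

ideas = [
    Idea("Late entry, lively discussion", votes=12, views=80, comments=25, watchers=10),
    Idea("Early entry, many votes", votes=240, views=3000, comments=15, watchers=8),
]
for idea in underdogs(ideas, vote_cutoff=50):
    print(idea.title)  # candidates to promote for an apples-to-apples look
```

The point isn’t the particular formula; it’s that discussion-heavy, low-vote ideas become visible instead of sinking to the bottom of a vote-sorted list.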
Here are some of the things we do, and suggest others do, to ensure a reliable, accurate outcome, and avoid the “popularity contest” syndrome:
* Multiple idea order display – Display ideas in a variety of ways, such as most recent, most discussed, and most active, rather than defaulting to a list of the top-voted ideas.
* Zero-start finalist round – Use a finalist round to let the entire crowd focus on just a few ideas that all show signs of being superior, and start every finalist at zero votes.
* Weighted voting – Give insider experts, your panel, or long-time active members more vote weight; you’ll find these people are highly motivated to filter the best ideas, not just the popular ones, to the top (see the sketch after this list).
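As a rough illustration of the weighted-voting item above, here is a minimal sketch; the roles and weights are hypothetical and would need tuning for a real community:

```python
# A minimal sketch of weighted voting. The role names and weights are
# hypothetical; the point is simply that an expert's or veteran member's
# vote can count for more than a brand-new member's.
from collections import defaultdict

ROLE_WEIGHTS = {"expert": 3.0, "panel": 2.0, "veteran": 1.5, "member": 1.0}

def tally(votes: list[tuple[str, str]]) -> dict[str, float]:
    """votes is a list of (idea_id, voter_role) pairs."""
    totals: dict[str, float] = defaultdict(float)
    for idea_id, role in votes:
        totals[idea_id] += ROLE_WEIGHTS.get(role, 1.0)
    return dict(totals)

votes = [("idea-7", "member"), ("idea-7", "member"), ("idea-9", "expert")]
print(tally(votes))  # idea-9's single expert vote outweighs two member votes
```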
I’ve just finished reading Intangible Capital by Mary Adams (more on that in another post). The book does a good job describing the value and importance of knowledge, intellectual property, and other intangible assets, and why innovation is key to the creation of those assets.
But that’s not the subject of today’s post. Today’s post deals with the fallacy that innovation “starts” with idea generation. I’m picking on Mary’s book because it was at hand and the latest to suggest as much; it says so on page 85. But Mary’s writing hardly stands alone: far too often I hear or read the claim that innovation starts with idea generation. Sorry, no – and my apologies in advance to Mary for calling out this small problem in what was otherwise a very good book.
A common question we hear is “how is the quality of information, ideas and data derived from crowdsourcing better than what you might get from traditional research?” Here are a few answers:
More ideas: With a traditional survey, each recipient answers the questions based on their thinking right then. Once they have submitted the survey, they usually can’t go back to add thoughts that come to them later. And since they can’t see other respondents’ replies (by design), their own thinking isn’t triggered by the thoughts of others. How many times has a good idea come to you because of something someone else said? Crowdsourcing not only provides a way to capture ideas both now and later, since most crowdsourcing sites live on for weeks if not months; it also enables the sharing of responses that can trigger more thoughts and ideas.
Better ideas: With traditional surveys, each respondent puts in their own ideas, and those ideas are then rolled up and analyzed, but at no point is there collaboration that enables the improvement of those ideas. Sometimes this is desirable and intended, but if you are looking for innovation, what you really want are the best ideas, shaped and enhanced by the collective intelligence, experience, and viewpoints of the community. In some crowdsourcing models, the submitters or “owners” of ideas can revise and enhance them based on feedback and comments from the crowd. In addition, through ranking or voting, you get a relative rating of how the crowd feels about a particular idea compared with the other ideas submitted. This can result in both better input and a clearer read on market preference.
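The post doesn’t prescribe a particular rating method, but one common approach to relative rating with up/down votes (not necessarily the author’s) is to rank by the lower bound of the Wilson score interval, which keeps heavily exposed ideas from winning on raw vote volume alone. A minimal sketch:

```python
# Rank ideas by the 95% lower bound of the Wilson score interval on the
# true upvote proportion. An idea with 9 ups out of 10 votes can outrank
# one with 200 ups out of 500, despite far fewer total votes.
from math import sqrt

def wilson_lower_bound(ups: int, total: int, z: float = 1.96) -> float:
    """Lower confidence bound on the fraction of voters who'd vote up."""
    if total == 0:
        return 0.0
    phat = ups / total
    return (phat + z * z / (2 * total)
            - z * sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
            ) / (1 + z * z / total)

print(wilson_lower_bound(9, 10))     # ~0.60
print(wilson_lower_bound(200, 500))  # ~0.36
```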
With the plethora of market research techniques out there, some people might question using crowdsourcing to get information from the market. What with surveys, panels, focus groups, Nielsen, Ipsos, MyPoints, suggestion boxes, and the rest, we should be able to get all the input we need, right? After all, if over 50% of Fortune 500 firms use focus groups, they’ve gotta be good, right?*
Well, yes and no. The issue isn’t getting input; it’s getting reliable, accurate, unbiased input. Getting market input isn’t all that hard. Ensuring that the feedback accurately represents what the market truly wants, and then sifting all of that information to pull out only the most salient points, is very hard to do well. And that’s where crowdsourcing differs significantly from traditional research.