The crowdsourcing dilemma: the idea with the most votes isn’t always the best idea

You might wonder: with great anti-biasing technology, why wouldn’t the idea with the most votes always be the best idea? There are all sorts of reasons: a great idea might have been entered relatively late in the crowdsourcing process, or the submitter might have given it an uncompelling title, for example. But by looking at implicit data as well as explicit data (that is, at how the crowd interacts with ideas, not just at hard data like votes), you can identify other indicators of ideas that are truly merit-worthy despite not getting the most votes, or even a lot of votes. You may not be able to tell immediately whether the “underdog” idea is in fact better, but you can give it more visibility within the crowd so you can make an apples-to-apples comparison with the big vote-getting ideas.
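To make that concrete, here’s a minimal sketch of what blending implicit and explicit signals could look like. The field names, weights, and numbers are illustrative assumptions, not Chaordix’s actual scoring model:

```python
# A toy "engagement score" that blends explicit votes with implicit
# signals, so an underdog with strong per-view engagement can surface
# above a raw vote-count leader. All weights are illustrative.
from dataclasses import dataclass

@dataclass
class Idea:
    title: str
    votes: int          # explicit signal
    views: int          # implicit: how often the idea was opened
    comments: int       # implicit: how much discussion it generated
    unique_voters: int  # implicit: breadth of support

def engagement_score(idea: Idea) -> float:
    """Blend implicit signals; the 0.5/0.3/0.2 weights are assumptions."""
    if idea.views == 0:
        return 0.0
    vote_rate = idea.votes / idea.views        # conversion, not raw count
    discussion = idea.comments / idea.views    # debate it sparks per view
    breadth = idea.unique_voters / max(idea.votes, 1)
    return 0.5 * vote_rate + 0.3 * discussion + 0.2 * breadth

ideas = [
    Idea("Big vote-getter", votes=900, views=20000, comments=40, unique_voters=850),
    Idea("Underdog", votes=60, views=700, comments=35, unique_voters=60),
]
for idea in sorted(ideas, key=engagement_score, reverse=True):
    print(f"{idea.title}: {engagement_score(idea):.3f}")
# The underdog scores higher (~0.26 vs ~0.21) despite far fewer votes.
```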

Here are some of the things we do, and suggest others do, to ensure a reliable, accurate outcome, and avoid the “popularity contest” syndrome:

* Multiple idea order display – Display ideas in a variety of ways (most recent, most discussed, and most active, for example), and don’t just default to listing the top-voted ideas.
* Zero-start finalist round – Use a finalist round to let the entire crowd focus on just a few ideas that all show signs of being superior, and start every finalist at zero votes.
* Weighted voting – Give insider experts, your panel, or long-time active members more vote weight (see the sketch after this list); you’ll find these people are highly motivated to filter the best ideas, not just the popular ones, to the top.
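Here’s a minimal sketch of the weighted-voting item above, assuming a simple role-based weight table; the roles and weights are hypothetical, not a prescribed scheme:

```python
# Weighted vote tally: each voter's role maps to a multiplier, so an
# expert's vote counts more than a casual member's. Roles and weights
# here are illustrative assumptions.
WEIGHTS = {"member": 1.0, "long_time_member": 1.5, "panel": 2.0, "expert": 3.0}

def weighted_tally(votes: list[tuple[str, str]]) -> dict[str, float]:
    """votes is a list of (idea_id, voter_role) pairs."""
    totals: dict[str, float] = {}
    for idea_id, role in votes:
        totals[idea_id] = totals.get(idea_id, 0.0) + WEIGHTS.get(role, 1.0)
    return totals

votes = [("idea-7", "member"), ("idea-7", "member"), ("idea-12", "expert")]
print(weighted_tally(votes))  # {'idea-7': 2.0, 'idea-12': 3.0}
```

One expert vote outweighing two member votes is, of course, a tuning decision; the point is that the tally need not treat every ballot identically.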


Randy Corke | http://www.chaordix.com/blog

One of the common complaints about crowdsourcing is that it can become a popularity contest: the idea that gets the most early votes rises to the top of the list, gets more views as a result, and therefore attracts still more votes and becomes the winner. And, unfortunately, for many so-called “crowdsourcing” sites this is true. You see it on sites like Digg – get enough early “diggs” for your submission to reach the “top news” list and your submission can keep that visibility for a long time.
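To see how quickly that feedback loop compounds, here’s a toy simulation (my own illustration, not from the original post) under the assumption that an idea’s visibility, and hence its chance of attracting the next vote, is proportional to its current vote count:

```python
# Rich-get-richer dynamics: four equal-quality ideas, one with a small
# early lead. Each new vote goes to an idea with probability proportional
# to its current votes (a stand-in for vote-ranked display order).
import random

random.seed(42)
votes = [5, 1, 1, 1]  # idea 0 happened to get a few early votes

for _ in range(1000):
    r = random.uniform(0, sum(votes))
    cumulative = 0
    for i, v in enumerate(votes):
        cumulative += v
        if r <= cumulative:
            votes[i] += 1
            break

print(votes)  # idea 0's early lead typically compounds into a blowout
```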

We work hard to surface the best-quality results for our clients from their crowdsourcing projects, so as you would expect, we have developed ways to avoid this “early vote” bias and other forms of bias. But even with great design and planning, the best technology, and the right methodology, you can’t completely eliminate the possibility of a less-worthy idea getting the most votes. However, it IS possible to use analysis and crowd-management techniques to identify other highly worthy ideas, maximizing the chances of truly finding the best one.


Marketing Optimization Technology: Be careful of shooting yourself (and your test) in the foot


(…) I had the pleasure of learning about an experiment devised by my colleague, Jon Powell, that illustrates why we must never assume we test in a vacuum, free of external factors that can skew the data in our tests (including external factors we create ourselves).

If you’d like to learn more about this experiment in its entirety, you can hear it firsthand from Jon on the web clinic replay. SPOILER ALERT: If you choose to keep reading, be warned that I am now giving away the ending.

According to the testing platform Jon was using, the aggregate results came up inconclusive: none of the treatments outperformed the control with any significant difference. What was interesting, however, was that the data indicated a pretty large difference in performance for a couple of the treatments.

So after reanalyzing the data and adjusting the test duration to exclude the results from the period when an unintended (by our researchers, at least) promotional email had been sent out, Jon saw that each of the treatments significantly outperformed the control with conclusive validity.
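Here’s a minimal sketch of that re-analysis step, assuming illustrative dates, session counts, and conversion counts, and a hand-rolled two-proportion z-test; this is not the testing platform’s actual method or Jon’s actual data:

```python
# Exclude the window skewed by the promotional email, then re-run a
# two-proportion z-test of treatment vs. control conversion rate.
# All dates and counts below are made-up illustrations.
import math
from datetime import date

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test; |z| > 1.96 is significant at the 95% level."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (conv_a / n_a - conv_b / n_b) / se

# (day, arm, sessions, conversions) -- the email day floods both arms
# with similar-converting traffic, diluting the real difference.
rows = [
    (date(2024, 5, 1), "control", 1000, 30), (date(2024, 5, 1), "treatment", 1000, 42),
    (date(2024, 5, 2), "control", 5000, 260), (date(2024, 5, 2), "treatment", 5000, 255),
    (date(2024, 5, 3), "control", 1000, 31), (date(2024, 5, 3), "treatment", 1000, 44),
]
email_day = date(2024, 5, 2)
clean = [r for r in rows if r[0] != email_day]  # drop the skewed window

def totals(data, arm):
    return (sum(r[3] for r in data if r[1] == arm),
            sum(r[2] for r in data if r[1] == arm))

for label, data in (("all days", rows), ("email day excluded", clean)):
    t_conv, t_n = totals(data, "treatment")
    c_conv, c_n = totals(data, "control")
    print(f"{label}: z = {z_test(t_conv, t_n, c_conv, c_n):+.2f}")
# all days: z = +0.80 (inconclusive); email day excluded: z = +2.10 (significant)
```

With the email traffic included, the treatment effect washes out; excluding that window, the same test comes back conclusive, which is exactly the pattern Jon saw.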