5 ways to assess crowdsourced ideas
Crowdsourcing is a great way of inviting ideas from outside your organisation. Hopefully you will receive ideas of sufficient quality to be implementable, and they will eventually deliver real value to the organisation.
Before you can get to that point, however, you need a clear idea of how you will assess the various ideas that flood in. There are five distinct approaches you can take:
- Peer assessment – Here the participants themselves decide which ideas are eventually chosen. Think, for instance, of the t-shirt designs voted on at Threadless or the ideas posted to Dell Idea Storm. This approach is often valuable when there is a large pool of potential ideas and the community is a good judge of success.
- Expert assessment – The opposite approach has one or more experts decide which ideas succeed. Platforms such as Innocentive employ this approach to select the best solutions to problems. It is appropriate when the idea pool is small or when suitable experts can be clearly identified.
- Customer testing – This approach differs from the others in that customers or participants often get to actually use the ideas before casting their vote. Kickstarter is an obvious example. It's most appropriate when ideas can easily be shared with customers.
- Direct assessment – This approach does not rely on judges at all: the winner is simply the first participant to achieve the desired outcome. Platforms such as the X-Prize or the Netflix Prize fit into this category. It's the best approach when the ideas you seek can be objectively tested against a desired outcome.
- A hybrid approach – Finally, you can use a combination of the aforementioned approaches. For instance, the crowd could generate a shortlist that is then analysed by experts or by end customers. This combines the benefits of the previous approaches, so it can be an attractive option to choose.
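The hybrid approach can be sketched as a simple two-stage pipeline. The sketch below is purely illustrative: the idea names, vote counts, and expert scores are all invented, and real platforms will have far richer data. It assumes the crowd stage picks the top-k ideas by votes and the expert stage picks the shortlisted idea with the highest average score.

```python
# Hypothetical two-stage hybrid assessment: crowd votes build a
# shortlist, then expert scores pick the winner from that shortlist.

def crowd_shortlist(votes, k):
    """Return the k ideas with the most crowd votes."""
    return sorted(votes, key=votes.get, reverse=True)[:k]

def expert_pick(shortlist, expert_scores):
    """Return the shortlisted idea with the highest average expert score."""
    def avg(idea):
        scores = expert_scores[idea]
        return sum(scores) / len(scores)
    return max(shortlist, key=avg)

# Invented example data for illustration only.
votes = {"idea-A": 120, "idea-B": 95, "idea-C": 40, "idea-D": 88}
expert_scores = {
    "idea-A": [6, 7, 5],   # popular with the crowd, experts lukewarm
    "idea-B": [9, 8, 9],   # experts rate feasibility and fit highly
    "idea-D": [7, 6, 8],
}

shortlist = crowd_shortlist(votes, k=3)
winner = expert_pick(shortlist, expert_scores)
print(shortlist, winner)
```

Note the trade-off the sketch makes visible: the most-voted idea need not win, because the expert stage re-ranks the shortlist on a different criterion than raw popularity.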