Entries in Wisdom of Crowds (12)

Sunday
May092010

Polling for facts; or, an idiotic pandering to crowd wisdom

Sean Hannity has an innovative approach to intelligence analysis, applying cutting-edge crowdsourcing principles: ask a bunch of uninformed viewers for their answers to critical and sensitive national security questions, and present the results as meaningful.

On his homepage today, readers are invited to respond to the poll featured to the left. I just voted "with the Taliban," but alas, I was in the minority at only 41%. Instead, 53% of Hannity viewers believe that Faisal Shahzad acted "with al Qaeda," despite Attorney General Eric Holder's announcement this afternoon that there is actual evidence that Shahzad was funded by the Pakistani Taliban.

But who needs facts when you have crowd opinions?

Monday
Mar012010

Book Review: You Are Not a Gadget by Jaron Lanier

The ascendant tribe is composed of the folks from the open culture/Creative Commons world, the Linux community, folks associated with the artificial intelligence approach to computer science, the web 2.0 people, the anticontext file sharers and remashers and a variety of others. Their capital is Silicon Valley...their favorite blogs include Boing Boing, TechCrunch, and Slashdot, and their embassy in the old country is Wired.

Thus Jaron Lanier describes the "cybernetic totalists" or "digital Maoists" whose rising influence he fears is leading us down a path of online culture in which appreciation for humanity is displaced by blind trust in technology. In You Are Not a Gadget, Lanier laments recent trends in the online world - belief in the wisdom of crowds, reliance on algorithms rather than people for recommendations, mashups and other piecemeal appropriation of others' content, templated web 2.0 designs - and argues that this failure to appreciate individual expression on the web may have grave consequences for creativity and culture.


Saturday
Aug082009

A faux prediction market bites the dust -- Predictify closes

Predictify, a prediction platform that claimed to be "like fantasy sports for everything else," has cited the tough economic climate and announced its decision to close. Capitalizing on the public obsession with news, the site offered a "forward-looking dimension" to news reporting -- allowing users to predict events before they happened (!). The site's failure is not surprising.

Predictify asked a wide range of questions. A recent look at the site shows the following questions as among the most popular:

  • Will a cure for Alzheimer's be discovered by 2015?
  • Will the national drinking age be lowered to 18 in the next two years?
  • By the end of 2009, what will be Texas' status in the union?

Predictify also layered demographic information onto the predictions in an attempt to make the site more fun. I learned, for example, that 25% more Muslims than Mormons expect Hannah Montana to become a billionaire by age 18. I'm not sure what great insight can be gleaned from this observation, quite apart from the fact that in a pool of 1,200 total bets, there were surely not enough Muslim and Mormon bettors to make that result statistically significant.

Predictify was one of a whole host of "prediction market" news sites that appeared online amid the buzz surrounding James Surowiecki's The Wisdom of Crowds. They suffered from some common weaknesses:

  • Weak incentives -- a leaderboard among thousands of anonymous people who don't know or care about each other is not a particularly effective motivator
  • No market-making mechanism -- unlike in a real market, where if you want to buy something, someone has to be willing to sell it to you, these sites allowed all transactions to occur, making them more akin to a poll than a market (see the sketch after this list)
  • Stupid questions -- it's partially the fault of the user base that suggested the questions, but there were so many poorly worded serious questions and frivolous pop culture questions that the sites felt more like a cheap, relatively amusing place to waste time than a serious place to think about the probabilities of future events that matter
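
On the market-making point, here is a minimal sketch (hypothetical code, not Predictify's) of the difference between a poll, which records every submission, and a market, which executes a trade only when a bid finds a willing seller:

    # Hypothetical sketch of the poll-vs-market distinction; nothing here
    # is Predictify's actual code.

    def poll_submit(responses, prediction):
        """A poll accepts every submission unconditionally."""
        responses.append(prediction)
        return True

    def market_submit(asks, bid_price):
        """A market executes only if some seller's ask is at or below the bid."""
        matching = [ask for ask in asks if ask <= bid_price]
        if not matching:
            return False            # no counterparty: no trade, no price signal
        asks.remove(min(matching))  # trade at the best (lowest) available ask
        return True

    responses = []
    print(poll_submit(responses, 0.70))  # True: a poll always "succeeds"

    asks = [0.65, 0.80]                  # sellers' asking prices for a contract
    print(market_submit(asks, 0.60))     # False: bid is below every ask
    print(market_submit(asks, 0.70))     # True: matched against the 0.65 ask

The refusal to trade in the second call is the point: in a real market, the price only moves when someone with a different view puts something at stake.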

The greatest fault of all, however, cuts most directly at the legitimate ability of a site like Predictify to generate accurate forecasts: there is zero reason to believe that the site's user base has any insight into the questions at hand. What level of wisdom across pop culture, political, sports, and business events can be expected from the same anonymous crowd?

This is not to say that collective intelligence is not a powerful tool. The famous opening example of Surowiecki's book -- fairgoers collectively estimating the weight of an ox better than any individual could (the average of their guesses was nearly exactly right when no single guess was) -- is powerful. The difference? Those people actually knew something about livestock.
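
A toy simulation makes the mechanism clear (simulated numbers below, not Galton's actual data): when every guess is the true value plus independent noise, the errors largely cancel in the average.

    import random

    random.seed(0)
    true_weight = 1198                   # pounds, the ox of the famous story
    guesses = [true_weight + random.gauss(0, 60) for _ in range(800)]

    crowd_estimate = sum(guesses) / len(guesses)
    typical_error = sum(abs(g - true_weight) for g in guesses) / len(guesses)

    print(f"crowd error:            {abs(crowd_estimate - true_weight):.1f} lbs")
    print(f"typical individual err: {typical_error:.1f} lbs")
    # The crowd lands within a couple of pounds while the average individual
    # is off by ~50 lbs. Bias the noise (an uninformed crowd) and the average
    # inherits the bias instead of cancelling it.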

By contrast, I have no insight into Jon and Kate's marriage -- there is no reason to think that asking 2,000 people like me whether they will divorce in 2009 would produce a more accurate prediction than, say, asking a writer at Us Weekly who follows them around. A crowd in itself does not produce wisdom.

So the novelty of Predictify has worn off and the site is closing. I hope this signals a change in the public perception of the prediction market industry -- away from silly news forecasting sites with vapid promises of wisdom and toward serious enterprise prediction markets where the market players possess unique information that, effectively pooled, can produce insights into real company issues.

Friday
Jul172009

Prediction Market for CFOs Launches

It's like a fantasy sports league that might actually matter. CFO Magazine has announced a partnership with Crowdcast, the enterprise prediction market software provider that we profiled last week, to launch a series of markets on issues of relevance to corporate CFOs.

The site will use a traditional leaderboard system, rather than real money, with winners receiving prizes ranging from subscriptions to The Economist (CFO Magazine is part of the Economist Group) to Amazon gift cards to a ticket to the CFO Rising East Conference.

The initiative likely stemmed from the results of the quarterly Duke University/CFO Magazine Global Business Outlook Survey, in which "the ability to forecast results" consistently tops the list of CFOs' concerns. On the platform, CFOs can bet on economic indicators such as business inventory levels, the rate of inflation, oil prices, consumer spending, CFO turnover, and accounting rules convergence.

More accurate predictions on these economic indicators and regulatory decisions would no doubt be of great value to CFOs and the collective wisdom of this particular crowd would likely be impressive. But will CFOs want to share their wisdom with all of their competitors and collaborate in this way? Will the thrill of the competition be enough to spur participation (I doubt that an Economist subscription is much of an incentive to this crowd)?

Unfortunately, you need to be a CFO to be let onto the site, so outsiders can't see the market's penetration. Alas, we also can't see the results. [Update: Crowdcast has clarified that the site is open (clicking "other" occupation lets you in fully) and I'm now in.] Unlike many prediction market platforms out there, this one is actually asking questions that matter -- and I would love to see the world's corporate financial leaders' forecasts.

Friday
Jun262009

Netflix prize is (nearly) awarded! A model in crowdsourcing

The Netflix $1,000,000 prize, offered to the first team able to improve the accuracy of the company's recommendation system by 10%, is nearly certain to be awarded to BellKor's Pragmatic Chaos, a super-team of four teams that today claimed to have reached the 10.5% mark.

Tracking the Netflix prize has been fascinating. Back in November 2008, one of the leaders, whose model was at the time 8.8% better than Netflix's own Cinematch, estimated that the movie Napoleon Dynamite alone accounted for 15% of his remaining error. Why? Because people either love it or hate it; it has received almost exclusively 1-star or 5-star ratings, and it is nearly impossible to predict whether someone will like it based on his or her rating history.
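
For context on what the percentages mean: contest scores were root-mean-squared error (RMSE) on held-out ratings, and "X% better" meant a relative reduction in RMSE against the Cinematch baseline (roughly 0.9514 on the contest's quiz set). A quick sketch:

    import math

    def rmse(predicted, actual):
        """Root-mean-squared error over a set of held-out ratings."""
        n = len(actual)
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

    def improvement(candidate_rmse, baseline_rmse=0.9514):
        """Percent reduction in RMSE relative to the Cinematch baseline."""
        return 100 * (baseline_rmse - candidate_rmse) / baseline_rmse

    # A 10.5% improvement corresponds to an RMSE of about:
    print(0.9514 * (1 - 0.105))           # ~0.8515
    print(f"{improvement(0.8515):.1f}%")  # ~10.5% better than Cinematch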

Netflix did right by opening the competition to the public. It inspired teams around the country -- from AT&T engineers to father-son teams -- and the 10% improvement will be worth well more than $1M to the company.

This is one of the best examples of crowdsourced innovation and problem-solving out there. The winning team began as four disparate pairs and individuals who realized that they had complementary skills -- machine learning, computer science, engineering -- and decided to collaborate. Throughout the competition, as the leaderboard tracked the top performers, teams would routinely share lessons learned. Even with $1M at stake, the market can indeed be collaborative and come to a better solution than a single company alone.

Friday
Jun262009

Upstart corporate prediction market success story

Corporate prediction market startup Crowdcast in the New York Times:

During a pilot period, five large companies, including Warner Brothers and General Motors, have been using Crowdcast to predict revenue, ship dates or new products from competitors. About 4,000 bets have been placed, and predictions have been about 50 percent more accurate than official forecasts, Ms. Fine said.

At a media company with a new product to ship, 1,200 employees predicted a ship date and sales figures that resulted in 61 percent less error than executives’ previous prediction, according to Crowdcast. A pharmaceutical company asked a panel of scientists and doctors to predict regulatory decisions and new drug sales using Crowdcast, and they were more accurate than the company’s original prediction 86 percent of the time.

Pretty impressive. Crowdcast has debuted a new type of collective forecasting mechanism that you can read about here.

Saturday
Jun062009

Intrade was right -- "worst case" unemployment projections too rosy

The latest unemployment figures put the U.S. rate at 9.4%. That is already above the peak of the government's "baseline scenario" used to "stress test" the nation's banks.

Here's a great chart from Calculated Risk comparing the government's scenarios for unemployment rate and the observed rate.

The Intrade market was more pessimistic. As we wrote back in early May, participants gave a 60% chance that unemployment would pass the 10% mark by December 2009. The prediction market saw through the government's softball projections.

Friday
May292009

Ancillary benefits of corporate prediction markets

Adam Siegel, CEO of Inkling Markets, made an interesting comment today during our phone conversation: the ancillary benefits of prediction markets can often be more important than the predictions themselves.

Inkling, a prediction markets platform for companies, promises to help you gain "business intelligence from your human network." They agree with many of the commonly cited benefits, such as increasing collaboration across boundaries and gaining the "wisdom of crowds," but Adam mentioned a few additional ones. He cited a common challenge in talking about prediction markets (one that I have faced as well): the primary question for many people is "did the prediction market get it right?" Did Intrade get the 2008 election right? Did the HP market accurately predict printer sales? The truth is that prediction markets offer probabilities, not absolutes, so even if Intrade priced a certain event at 52% and that event occurred, a coin toss would have done nearly as well.
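
One standard way to honor that point is to grade forecasts with a proper scoring rule rather than a right/wrong tally. A small sketch using the Brier score (my illustration with made-up numbers, not Inkling's or Intrade's methodology):

    def brier(forecasts, outcomes):
        """Mean squared error between stated probabilities and 0/1 outcomes.
        Lower is better; always saying 50% scores exactly 0.25."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

    outcomes  = [1, 0, 1, 1, 0]                 # what actually happened
    market    = [0.52, 0.48, 0.55, 0.51, 0.49]  # barely-informative forecasts
    coin_toss = [0.50] * 5

    print(brier(market, outcomes))     # ~0.229: only a hair better than...
    print(brier(coin_toss, outcomes))  # 0.250: ...the coin toss

A market priced at 52% that "gets it right" earns almost no credit over a coin toss, which is exactly Adam's point.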

So, while the numbers are very important, the right/wrong dichotomy isn't necessarily. Here are some of the ancillary, non-forecasting benefits that came out of our conversation:

  • Building social aspects into prediction markets, such as discussion threads, can lead the market framers (i.e., management) to begin to ask the right questions
  • The purely quantitative results can drive increased emphasis on transparent metrics within an organization
  • Market results can be an "eyebrow raising exercise" or gut-check to the expectations of management
  • When markets are on-going, they can constantly adjust to changing events, rather than providing just a snapshot in time

I was also intrigued by Adam's observation that Inkling's clients are overwhelmingly the "hell raisers within the company," i.e., managers who sense a real disconnect between their bosses and the employees who report to them and want to shake things up. He cited an example at Procter & Gamble where market results proved that executives were consistently over-optimistic in their forecasts across the spectrum of future events, from product roll-outs to competitor behaviors. The hard proof of this behavior helped them stop overworking their employees and bring some balance between executive expectations and the realistic expectations of the folks on the ground.

Tuesday
May052009

A new decision-making tool enters the stage: Let Simon Decide

Stumbling upon a TechCrunch article about a new online decision-making platform, LetSimonDecide, I felt nearly transported back to 1999, when my favorite search engine was Ask Jeeves. Like Jeeves, Simon is supposed to be your guide as you sort through overwhelming data -- although this time, rather than helping you with search, your guide will help you arrive at answers to your most pressing life decisions.

Unlike its main competitor, Hunch (which we profiled favorably earlier), LSD does not attempt to use collective intelligence to help you reach your answers. Rather, it provides a structured, checklist-style approach to looking at your problem. I tried out the site today on a question that took me months of deliberation many years ago: what should I major in during college?

First, I was asked which two majors I was considering. I entered my choices -- Economics and International Relations -- but was immediately disappointed. Simon could offer only a binary evaluation; how much use is that? I was hoping that perhaps my eyes would be opened to a whole new major, meant for me, that I had somehow overlooked.

Next, I was asked to rank the five factors most important to me in this decision. Then I scored each of those factors from 1 to 5 stars for each major, hit submit, and...calculate, calculate, wait for it...my answer is Economics!
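
My guess at the arithmetic behind that pause (hypothetical -- Simon doesn't publish its formula) is a plain weighted decision matrix: each option's score is the rank-weighted sum of its star ratings.

    # Hypothetical reconstruction of a weighted decision matrix; the factor
    # names, weights, and star ratings below are illustrative.

    factors = ["career prospects", "interest", "difficulty",
               "professors", "flexibility"]          # my five factors, ranked
    weights = [5, 4, 3, 2, 1]                        # rank 1 counts the most
    stars = {
        "Economics":               [5, 4, 3, 4, 4],  # 1-5 stars per factor
        "International Relations": [3, 5, 4, 4, 3],
    }

    for major, ratings in stars.items():
        score = sum(w * r for w, r in zip(weights, ratings))
        print(major, score)   # Economics 62, International Relations 58

Which is precisely the kind of sum a pen and paper handle just fine.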

What a letdown. Does the site really think that I'm unable to keep track of FIVE factors as I evaluate a decision? This is the weakness of avoiding a community-based approach in a decision-making platform: the outputs are only as good as my own personal inputs. If my inputs were that great, I wouldn't need the help! Contrast this with my Hunch experience, where I never mentioned the bank Chase but still was led to my card of choice.

For those wary of turning to an anonymous crowd to help evaluate their decisions, perhaps the structured approach that Simon offers is the right fit. From my perspective, if I am just looking to sort through the options to a big question in my own head, I'll stick to my standard pen-and-paper checklist.

...For the record, I never was able to make a decision on what to study in college, so I ended up with a double-major.

Sunday
Apr192009

Hunch, a website to help you make decisions on...anything

If you have a big decision to make, whether it's who you should vote for or whether you should ditch your facial hair, there are many tried and true ways of coming to an answer. My favorites include reading the opinions of "experts" who I trust, informally polling my friends, and asking my mom. My mom and my friends are great, but may not always be the best sources of advice. How to get a broader opinion to inform my decision making?

Enter Hunch, an internet platform that aims to improve your decision making by getting to know you generally ("do you believe that alien abductions are real?") and by asking you questions related to your main question of choice. The site builds a decision tree, informed by the results of all of its users, to harness the "wisdom of crowds" and give you an informed, personalized recommendation.
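
A minimal sketch of the decision-tree idea (an illustrative structure I made up -- Hunch's actual tree and weighting are its own): each node is a question, each branch an answer, and the leaves are recommendations whose placement the real site would tune using aggregate user results.

    # Illustrative decision tree; the questions and leaves are invented.
    tree = {
        "question": "Are you willing to pay an annual fee?",
        "answers": {
            "no": {
                "question": "Rewards in travel, cash back, or points?",
                "answers": {
                    "cash back": "Chase Freedom Card",
                    "travel":    "a no-fee travel card",
                    "points":    "a no-fee points card",
                },
            },
            "yes": "a premium rewards card",
        },
    }

    def recommend(node, answer_for):
        """Walk from the root to a leaf, looking up each question's answer."""
        while isinstance(node, dict):
            node = node["answers"][answer_for[node["question"]]]
        return node

    my_answers = {"Are you willing to pay an annual fee?": "no",
                  "Rewards in travel, cash back, or points?": "cash back"}
    print(recommend(tree, my_answers))    # -> Chase Freedom Card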

From founder Caterina Fake (of Flickr fame):

Hunch is a decision-making site, customized for you. Which means Hunch gets to know you, then asks you 10 questions about a topic (usually fewer!), and provides a result -- a Hunch, if you will. It gives you results it wouldn't give other people...On Hunch, people can create a Topic (as we call it) that acts like a human expert, getting to a decision by asking relevant follow up questions and weighing trade offs. We think that it can ultimately save people lots of strenuous cognitive labor: not everyone who buys a computer needs to become a computer expert.

The site is currently open to those who request an invitation, and as someone who is consistently struggling to be more decisive, I am glad to have tried it. The first question that I explored, "which credit card should I own?", led me to an answer via seven related questions: are you willing to pay an annual fee? do you want your rewards in travel, cash back, or points? what is your bank preference, if any? what is your credit score? etc. It took me about 30 seconds to answer these, and then the site produced a #1 recommendation: the Chase Freedom Card. That IS my credit card of choice! A decision that took me hours of online research, informal polling of my friends, and time spent making sure this was the right card for me was answered on Hunch in 30 seconds. Wow.

This success made me excited to delve into my source of eternal pondering: "what city should I live in?" This time I answered only four questions, related to the size of the city I would prefer, the amount of cold weather I can stand, my regional preference, and whether I would mind living in a high-cost area. I gave pretty open answers to all, and my #1 result was...Philadelphia. #2 was Portland, Oregon. #3 was Wilmington. Why these three? Mystery.

The site, like most recommendation platforms, will only get stronger as the user base expands. Ms. Fake announced recently on her blog that users have answered 4.3 million questions since the site was launched (users could begin requesting invitations to use the site as of March 27). The recommendation algorithm, developed by MIT machine learning experts, is pretty powerful. Its strength also comes from offering cross-domain recommendations. I rely heavily on Amazon recommendations for books and iTunes recommendations for music, and I would love to see my preferences in those different spheres interact. For now, the site is pretty bare bones and only 500 topics are offered. I'll still be calling up my mom for those big decisions in the short term, but maybe Hunch will eventually supplant her wisdom.

Tuesday
Mar032009

Are you a hedgehog or a fox?

Philip Tetlock's book Expert Political Judgment seeks to explore what constitutes good judgment in predicting future events. His 20 years of research finds that, when asked to forecast political phenomena, experts fare only slightly better than informed dilettantes and worse than simple extrapolation models based upon current trends.

This is bad news for experts, pundits, and expensive consultants paid to give their opinions on what is going to happen. Indeed, Dr. Tetlock found that the more famous the expert, the less accurately he or she forecasted. He also found that education and experience had little bearing on a forecaster's skill. Forecasters systematically over-predicted unlikely events (longshot bias), exaggerated the degree to which they "saw it coming all along" (hindsight bias), and committed a number of other mistakes, relying too often on intuition over logic.

This does not mean that expert opinions are only as reliable as a chimp throwing random darts (although it is close...). One thing does make a real impact on forecasting skill: how an expert thinks.

To explore this idea, Dr. Tetlock borrows the famous line from Isaiah Berlin (who in turn had borrowed it from a Greek poet): "The fox knows many things, but the hedgehog knows one big thing." According to the book, the better forecasters were foxes: self-critical, eclectic thinkers willing to update their beliefs when faced with contrary evidence, doubtful of grand schemes, and modest about their predictive ability. The less successful forecasters were hedgehogs: thinkers who tended to have one big idea that they loved to stretch, sometimes to the breaking point. They tended to be articulate and very persuasive as to why their idea explained everything.

Unfortunately, because they often offer simpler, easily digestible messages, hedgehogs are far more likely to be the faces that we see interviewed on the major news circuits with theories of the future of global finance, American decline and other complex topics that often fit a little too neatly with past analogies.

Do prediction markets help at all with this problem? Can they reduce forecasting bias? One challenge I see is that many of the incentive structures offered on current platforms are leaderboards, which appeal to the ego of the participants and, I would assume, increase longshot bias in turn. Experts make their names from BIG predictions that no one else has the "courage" to make. Sometimes they are right, but usually not, and the payoff for eventually being right is far greater than the penalty for continuously being wrong. If the same lopsided incentives exist in prediction markets, then the predictions made there will presumably also tend toward the extremes.
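
The lopsided payoff is easy to see in expected-value terms (illustrative numbers, not data from any platform):

    def expected_payoff(p_right, reward, penalty):
        """Expected reputational payoff of a public prediction."""
        return p_right * reward - (1 - p_right) * penalty

    # A longshot call: rarely right, but famous when it is.
    bold = expected_payoff(p_right=0.10, reward=100, penalty=5)
    # A consensus call: usually right, but nobody remembers it.
    consensus = expected_payoff(p_right=0.70, reward=10, penalty=5)

    print(bold, consensus)   # 5.5 vs 5.5: the 10% longshot pays as well as
                             # the safe 70% call, so pundits keep making them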

Here is some good advice from Dr. Tetlock on how amateurs like you and me can test our own hunches:

"Listen to yourself talk to yourself. If you're being swept away with enthusiasm for some particular course of action, take a deep breath and ask: Can I see anything wrong with this? And if you can't, start worrying; you are about to go over a cliff."

Thursday
Feb262009

The future of corporate prediction markets

The Economist today discusses the "uncertain future" of prediction markets in corporate decision making. As the article states, "although they have spread beyond early-adopting companies in the technology industry, they have still not become mainstream management tools." Several challenges are raised:

  • Getting enough people to keep trading once the novelty wears off. While this is an important point (a prediction market, like all markets, can only thrive and reach equilibrium if there are enough players), a good incentive structure (e.g., money, prizes) should both encourage participation and improve results.
  • Keeping people interested if they can't see how the results are being used. If a company isn't planning to use the results of the prediction market to make real decisions, then why create it to begin with? Wells Fargo, for example, said that its most effective trials took place in areas where managers could "do something with their findings." This should be the standard, not the exception. Plenty of prediction markets out there now are simply fun-and-games, brag-to-the-community ventures with no practical value (e.g., a current question on Hubdub: "Will Nadya Suleman's ex-boyfriend be confirmed as the father of her children?"). Corporate markets run by businesses with a bottom line should have no tolerance for employees wasting time on silly bets. If the company is using the results but that is simply not clear to employees, then it has a straightforward internal communication problem.
  • The wariness of bosses to rely on the recommendations of non-experts. This challenge seems to miss the point. Corporate prediction markets should be asking questions on which the employees are the experts. Perhaps the junior staff does not have all of the program management knowledge of the managers who make the decisions, but they do have more insight into day-to-day operations. Pooling the collective judgment of employees who each have a window into a small piece of the overall problem should, according to the wisdom-of-crowds argument, be worth more than the judgment of any single expert.

The most successful corporate prediction markets ask specific questions on which the employee pool can offer diverse, informed opinions. The Koch Industries example cited in the article, then, seems to be a poor one. In Koch's prediction markets, employees can bet on the future prices of raw materials and the likelihood of bank nationalization. The truth is that an internal Koch market is probably not the best place to answer these big questions. There is no reason to believe that the collective wisdom of a chemical conglomerate on the future of banks should beat the judgment of economic experts.

HP, by contrast, had great success focusing on something its employees did have unique insight into: projecting future sales of printers. Their internal market is quite complex: "We want to reduce the wisdom of crowds to the wisdom of 12 or 13 people," said Bernardo A. Huberman, director of the social computing lab at Hewlett-Packard. Among the techniques, he said, are preliminary tests to assess the "behavioral risk characteristics" of participants, used to shade predictions from people who are inherently risk seeking or risk averse.
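
Here is one way such shading could work (a hypothetical sketch -- the article doesn't spell out HP's actual algorithm): rescale each trader's reported probability toward or away from 50% according to a measured risk coefficient, then aggregate.

    def shade(p, risk_coefficient):
        """Pull a risk-seeker's extreme report (coefficient > 1) back toward
        50%, and push a risk-averse trader's timid report (coefficient < 1)
        outward, before the reports are averaged."""
        return 0.5 + (p - 0.5) / risk_coefficient

    reports = [(0.95, 2.0),   # risk-seeker overshooting
               (0.60, 0.8),   # risk-averse trader understating
               (0.70, 1.0)]   # well-calibrated trader

    shaded = [shade(p, c) for p, c in reports]
    print(shaded)                      # [0.725, 0.625, 0.7]
    print(sum(shaded) / len(shaded))   # group forecast ~0.683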

Google has created the largest corporate prediction market in the world. Google economic analyst Bo Cowgill explains that the trading system lets Google management discover its employees' uncensored opinions: "If you let people bet on things anonymously, they will tell you what they really believe because they have money at stake," Cowgill said. "This is a conversation that's happening without politics. Nobody knows who each other is, and nobody has any incentive to kiss up." Traders can bet on questions such as: Will a project be finished on time? How many users will Gmail have? (in addition to some unrelated political questions).

The primary benefit of a prediction market is that it allows information to be shared efficiently and at little cost. Improved information in turn leads to better decision making by management. The important thing is to ask the right questions.

For more on the subject of corporate prediction markets, see Robin Hanson and Tyler Cowen.