And many of the solutions are surprisingly simple. Perhaps the most serious mistake people make is failing to distinguish known risk from uncertainty. We are dealing with known risk when all of the possible outcomes are known and the probability of each outcome can be computed. Otherwise, we are dealing with uncertainty: a very different kind of thing.
For example, most people want to get the most useful products at the lowest price; accordingly, they will judge the benefits of a given object (for example, how useful or attractive it is) against those of similar objects, and then compare prices (or costs). In general, people choose the object that provides the greatest reward at the lowest cost. The model of rational decision making assumes that the decision maker has full, or perfect, information about the alternatives; it also assumes they have the time, cognitive ability, and resources to evaluate each choice against the others.
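The fully rational model described above can be sketched as an exhaustive comparison. This is a minimal illustrative sketch, not taken from the source; the option names, scores, and the simple "benefit minus cost" reward function are all hypothetical assumptions:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    benefit: float  # subjective usefulness/attractiveness score (hypothetical units)
    cost: float     # price or other cost

def rational_choice(options):
    """Fully rational model: evaluate every alternative and pick
    the one offering the greatest net reward (benefit - cost).
    Assumes perfect information and unlimited evaluation capacity."""
    return max(options, key=lambda o: o.benefit - o.cost)

options = [
    Option("A", benefit=8.0, cost=5.0),   # net 3.0
    Option("B", benefit=9.0, cost=7.5),   # net 1.5
    Option("C", benefit=6.0, cost=2.0),   # net 4.0
]
print(rational_choice(options).name)  # -> C
```

Note that the model must score every alternative before choosing; the cost of that exhaustive evaluation is exactly what bounded rationality, discussed below, calls into question.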
With a higher risk threshold (e.g., a 10 percent significance level), some research hypotheses would have been supported. Nonetheless, this study contributes to credibility research concerning Wikipedia in the following respects. First, the study revealed that students did indeed use various peripheral cues when they were uncertain about the believability of Wikipedia articles. Second, this study was among the first to examine non-verification behavior, offering new insights into it by employing the theory of bounded rationality, even though the theory was only partially confirmed.
Although the percentage of self-reports exceeded that of actual actions (79.7 percent versus 57.6 percent), the difference was not statistically significant. The experiment shows that over half of the respondents took the additional action of verifying information. In other words, to some degree, this study's finding is consistent with other studies that demonstrate a discrepancy between what Internet users say and what they actually do regarding information verification (Flanagin and Metzger, 2007; Iding, et al., 2009). However, this study did not confirm those studies statistically.
And the discussion above does nothing to convince me further. Note that I am not saying that humans cannot or will not be able to assess these risks (though I am a glass-one-third-full kind of person), but I am saying that I don't think our heuristics are up to this task.
Gerd Gigerenzer is an unapologetic German who seems to be on a mission to expose the unethical use of language and misleading statistics that confuse not only ordinary people but also the professionals and “experts” who unwittingly end up peddling profitable narratives for big industries, mainly pharmaceutical/medical and financial. At the same time, he comes across as unbiased, which I think adds to his credibility and illustrates his desire to educate for the greater good.
I particularly loved the ending of the book, where he gently admonishes the concept of nudging (whose most prominent faces are Richard Thaler and Cass Sunstein) by writing, “As a general policy, coercing and nudging people like a herd of sheep instead of making them competent is not a promising vision of democracy.” This is a book that should be revered and shared by self-directed learners who savor the opportunity to become more informed and appreciate acquiring knowledge that makes life just a little bit easier. Plus, if you are a fan of schadenfreude, you can see financiers, physicians, and lawyers taken down a few notches, as many of them are naive about how to interpret risk. This book is hard to rate objectively.
Given these findings, it would be useful to know whether similar patterns can be observed in Wikipedia, a user-generated encyclopedia that has become a popular information source among college students (Head and Eisenberg, 2010; Lim, 2009). Currently, however, little is known about what peripheral cues college students use in judging the credibility of Wikipedia articles, and whether certain peripheral cues influence their credibility judgments of Wikipedia, especially when they do not have sufficient knowledge of the topic of the article.
Instead, humans examine alternatives sequentially and continue searching until they find a satisfactory alternative that meets or exceeds their aspiration level. These aspiration levels are not fixed but are adjusted to the situation over a sequence of trials: they rise if satisfactory alternatives are easy to find, and fall if they are difficult to acquire (Simon, 1955). In other words, humans operate within the limits of bounded rationality and simplify the choices available in making decisions (Todd, 2002), displaying a stimulus-response pattern more than a choice among alternatives (Simon, 1997a).
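The sequential search with an adjustable aspiration level can be sketched in a few lines. This is a hedged illustration of Simon's idea, not an implementation from the source; the values, the fixed adjustment step, and the downward-only adjustment (modeling a difficult search) are simplifying assumptions:

```python
def satisfice(alternatives, aspiration, step=0.5):
    """Examine alternatives one at a time and stop at the first whose
    value meets the current aspiration level. After each unsatisfactory
    trial the aspiration level falls a little (a hard search lowers the
    bar); in an easy search it would rise instead. Returns the chosen
    value and the final aspiration level, or (None, level) if the
    sequence is exhausted."""
    for value in alternatives:
        if value >= aspiration:
            return value, aspiration
        aspiration -= step  # satisfactory options are hard to find: lower the bar
    return None, aspiration

# With aspiration 7.0, the searcher rejects 4.0 and 5.5 (lowering the
# bar each time) and settles for 6.0 once the bar has dropped to 6.0,
# even though a better option (9.0) lies further along the sequence.
choice, final_level = satisfice([4.0, 5.5, 6.0, 9.0], aspiration=7.0)
print(choice, final_level)  # -> 6.0 6.0
```

The contrast with the rational-choice model is that the 9.0 option is never even examined: the search stops at the first "good enough" alternative.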
An intermediate amount of knowledge about a set of objects can result in the highest proportion of correct decisions, an example of the “less-is-more effect”. In other words, fast and frugal heuristics explain human behavior in terms of the rationality of the decision process itself. In a similar vein, Gigerenzer (2002) explains why fast and frugal heuristics work: humans use heuristics that are matched to particular environments and make adaptive decisions, balancing accuracy, speed, and frugality. In other words, humans are ecologically rational.
According to Simon (1979), this decision-making process leads humans to pursue a “satisficing” path instead of an optimal one. In fact, Simon coined the term “satisficing” (Selten, 2002), a blend of satisfying and sufficing (Agosto, 2002). A satisficing strategy seeks a satisfactory choice that is “good enough” to suit an individual's purpose. It is a rational rule that reduces the informational and computational requirements of a rational choice (Byron, 1998), requiring less time and less cognitive exertion.
In the end, I feel a little more “risk savvy” after reading the book. Gigerenzer's book serves well in stretching one's understanding of risk and probability. According to the common view, we humans are probability-blind and predictably irrational. The author provides some useful tools for dealing with risk and uncertainty, arguing that it is perfectly possible to remove our seemingly hardwired cognitive biases. The book discusses three important angles on probability.