1. The basic definition of probability
Probability is a way of expressing knowledge or belief that an event will occur or has occurred. The concept has been given an exact mathematical meaning in probability theory, which is used extensively in such areas of study as mathematics, statistics, finance, gambling, science, and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems.
2. The definitions of probability in research
There are two main interpretations of probability, one that could be termed “objective” and the other “subjective.” The first is the interpretation of a probability as a limit of relative frequencies; the second, as a degree of belief.
(Peter Olofsson. Probability, Statistics, and Stochastic Processes. Wiley-Interscience, John Wiley & Sons, Inc.)
A probabilistic situation is a situation in which we are interested in the fraction of the number of repetitions of a particular process that produces a particular result when repeated under identical circumstances a large number of times. The process itself, together with noting the results, is often called an experiment. An outcome is a result of an experiment. An event is an outcome or the set of all outcomes of a designated type. An event’s probability is the fraction of the times an event will occur as the outcome of some repeatable process when that process is repeated a large number of times.
(Judith Sowder, Larry Sowder, Susan Nickerson. Reasoning about Chance and Data Part IV of Reconceptualizing Mathematics for Elementary and Middle School Teachers. San Diego State University)
The classical interpretation of probability is a theoretical probability based on the physics of the experiment; it does not require the experiment to actually be performed. For example, we know that the probability of a balanced coin turning up heads is 0.5 without ever performing trials of the experiment. Under the classical interpretation, the probability of an event is defined as the ratio of the number of outcomes favorable to the event to the total number of possible outcomes.
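As a minimal sketch of the classical definition (the function name here is my own, not from the sources cited in this section), the ratio of favorable to possible outcomes can be computed exactly with rational arithmetic:

```python
from fractions import Fraction

def classical_probability(favorable, total):
    """Classical probability: favorable outcomes over total equally likely outcomes."""
    return Fraction(favorable, total)

# A balanced coin: 1 favorable outcome (heads) out of 2 possible outcomes.
print(classical_probability(1, 2))   # 1/2

# A fair six-sided die: 3 even faces out of 6 faces.
print(classical_probability(3, 6))   # 1/2
```

Using Fraction rather than floating point keeps the result an exact ratio, which matches the classical definition.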
Sometimes a situation is too complex for its physical nature to be understood well enough to calculate probabilities. However, by running a large number of trials and observing the outcomes, we can estimate the probability. This is the empirical probability, based on long-run relative frequencies, and it is defined as the ratio of the number of observed outcomes favorable to the event to the total number of observed outcomes. The larger the number of trials, the more accurate the estimate of the probability will be. If the system can be modeled by computer, simulations can be performed in place of physical trials.
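A computer simulation of this kind can be sketched as follows (an illustrative example, with names of my own choosing): repeat an experiment many times and report the observed relative frequency of the event.

```python
import random

def empirical_probability(trials, experiment, event):
    """Estimate P(event) as observed favorable outcomes / total observed outcomes."""
    favorable = sum(1 for _ in range(trials) if event(experiment()))
    return favorable / trials

random.seed(0)  # fixed seed so the simulation is repeatable
# Simulate tossing a balanced coin; the estimate approaches 0.5 as trials grow.
flip = lambda: random.choice(["heads", "tails"])
estimate = empirical_probability(100_000, flip, lambda outcome: outcome == "heads")
print(estimate)  # close to 0.5
```

Running the simulation with 100 trials instead of 100,000 gives a noticeably rougher estimate, illustrating the point that more trials yield a more accurate estimate.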
A manager frequently faces situations in which neither classical nor empirical probabilities are useful. For example, in a one-shot situation such as the launch of a unique product, the probability of success can neither be calculated nor estimated from repeated trials. However, the manager may make an educated guess of the probability. This subjective probability can be thought of as a person's degree of confidence that the event will occur. In the absence of better information, subjective probability may be used to make logically consistent decisions, but the quality of those decisions depends on the accuracy of the subjective estimate.
a. A priori probability
The a priori method of computing probability is also known as the classical method. It might help to think of it as an expected probability (like the expected frequencies used in calculating the chi-square statistic).
b. A posteriori probability
The a posteriori method is sometimes called the empirical method. Whereas the a priori method corresponds to expected frequencies, the empirical method corresponds to observed frequencies.
(B. Weaver (31-Oct-2005) Probability & Hypothesis Testing 1)
c. Conditional probability
Conditional probability is the probability of some event A given the occurrence of some other event B. It is written P(A|B) and read "the (conditional) probability of A, given B" or "the probability of A under the condition B". When, in a random experiment, the event B is known to have occurred, the possible outcomes of the experiment are reduced to B, and the probability of A changes from the unconditional probability P(A) to the conditional probability P(A|B) = P(A and B) / P(B), provided P(B) > 0.
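The reduction of the sample space to B can be demonstrated directly by enumeration (a sketch with a function name of my own choosing):

```python
from fractions import Fraction

def conditional_probability(sample_space, A, B):
    """P(A|B): restrict the sample space to B, then count outcomes also in A."""
    reduced = [o for o in sample_space if B(o)]      # outcomes consistent with B
    favorable = [o for o in reduced if A(o)]         # of those, the ones in A
    return Fraction(len(favorable), len(reduced))

die = range(1, 7)
# P(roll a 4 | roll is even) = 1/3, versus the unconditional P(roll a 4) = 1/6.
print(conditional_probability(die, lambda o: o == 4, lambda o: o % 2 == 0))  # 1/3
```

Knowing the roll is even shrinks the sample space from six outcomes to three, which is why the probability of a 4 doubles from 1/6 to 1/3.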
II. Simplified example of probability
Question ID: 20400221100 and 20400222100. Square and round pieces of three colors.
Three red square pieces of wood, four yellow square pieces, and five blue square pieces are put into a cloth bag. Four red round pieces, two yellow round pieces, and three blue round pieces are also put into the bag. All the pieces are then mixed about. Suppose someone reaches into the bag (without looking and without feeling for a particular shape piece) and pulls out one piece. What are the chances that the piece is a red round or blue round piece?
a. cannot be determined
b. 1 chance out of 3
c. 1 chance out of 21
d. 15 chances out of 21
e. 1 chance out of 2
Because:
a. 1 of the 2 shapes is round.
b. 15 of the 21 pieces are red or blue.
c. There is no way to tell which piece will be picked.
d. only 1 of the 21 pieces is picked out of the bag.
e. 1 of every 3 pieces is a red or blue round piece.
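The question above can be checked by enumerating the bag (an illustrative sketch; the variable names are my own): 4 red round pieces plus 3 blue round pieces give 7 favorable outcomes out of 21 pieces, i.e. 1 chance out of 3.

```python
from fractions import Fraction

# Build the bag described above as (color, shape) pairs.
squares = [("red", "square")] * 3 + [("yellow", "square")] * 4 + [("blue", "square")] * 5
rounds  = [("red", "round")] * 4 + [("yellow", "round")] * 2 + [("blue", "round")] * 3
bag = squares + rounds

favorable = [p for p in bag if p in (("red", "round"), ("blue", "round"))]
print(len(bag))                             # 21 pieces in total
print(Fraction(len(favorable), len(bag)))   # 7/21 reduces to 1/3 -> answer (b)
```

This is the classical calculation again: the ratio of favorable outcomes (7) to equally likely outcomes (21), matching reason (e).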
III. Importance of probability
Conditional probability and Bayesian reasoning are important components of undergraduate statistics, since they underpin the understanding of classical and Bayesian inference, linear regression and correlation models, multivariate analysis, and other statistical procedures that are frequently used in professional work and empirical research. Conditional probability reasoning is also a crucial part of statistical literacy, since it helps people make accurate decisions and inferences in everyday life.
(Díaz, Carmen, Batanero, Carmen. Students’ Biases in Conditional Probability Reasoning.)
A probability heuristic model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability-based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference.
(Nick Chater, Mike Oaksford. The Probability Heuristics Model of Syllogistic Reasoning. Cognitive Psychology, 38, 191–258, 1999)
Two major applications of probability theory in everyday life are risk assessment and trade on commodity markets. Governments typically apply probabilistic methods in environmental regulation, where it is called "pathway analysis", often measuring well-being using methods that are stochastic in nature and choosing projects to undertake based on statistical analyses of their probable effect on the population as a whole. A good example is the effect of the perceived probability of a widespread Middle East conflict on oil prices, which has ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more rather than less likely sends prices up or down and signals that opinion to other traders. Accordingly, the probabilities are neither assessed independently nor necessarily very rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.
Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, utilize reliability theory in the design of the product in order to reduce the probability of failure. The probability of failure may be closely associated with the product's warranty.
IV. Research on probability
One group analyzed rule usage on probability reasoning items by fuzzy partition with multiple rule scores. Four rules correspond to strategies used in solving problems on a probability reasoning test, and the score on the test depends on these four rules, so each subject receives four scores, one per rule.
(Yuan-Horng Lin, Min-Ning Yu, Berlin Wu. Fuzzy Classification Analysis of Rules Usage on Probability Reasoning Test with Multiple Raw Rule Score. Proceedings of the 2nd WSEAS/IASME International Conference on Education Technologies, Bucharest, Romania, October 16–17, 2006)
Another group provided evidence that people typically evaluate conditional probabilities by subjectively partitioning the sample space into n interchangeable events, editing out events that can be eliminated on the basis of conditioning information, counting remaining events, then reporting probabilities as a ratio of the number of focal to total events. Participants’ responses to conditional probability problems were influenced by irrelevant information.
(Craig R. Fox, Jonathan Levav. Partition–Edit–Count: Naïve Extensional Reasoning in Judgment of Conditional Probability. Journal of Experimental Psychology: General, 2004, Vol. 133, No. 4, 626–642)
This group reported on the Probability Inquiry Environment (PIE), which facilitates the development of probabilistic reasoning by making available collaborative inquiry activities and student-controlled simulations. These activities guide middle school students toward a deeper understanding of probability, a domain that is becoming increasingly important in the K-12 mathematics curricula of the United States but which is notoriously difficult to learn.
(Phil Vahey. Learning probability through the use of a collaborative, inquiry-based simulation environment. Journal of Interactive Learning Research, Vol. 11, No. 1, 2000)
This group provided a model for reasoning about knowledge and probability together, allowing explicit mention of probabilities in formulas.
(Ronald Fagin and Joseph Y. Halpern. Reasoning About Knowledge and Probability. Journal of the Association for Computing Machinery, Vol. 41, No. 2, March 1994)