Probability and Statistics


Probability Theory


Probability theory is a branch of mathematics that deals with the likelihood of various outcomes. It is used to predict future events based on patterns, and aids in decision making by quantifying uncertainty. Through models and axioms, probability theory finds applications in various fields such as finance, gambling, science, and engineering.

Basic concepts of probability

The fundamental building blocks of probability include experiments, outcomes, sample spaces, and events. Let's explore each of these components:

  • Experiment: An action or process that leads to one or more specific outcomes. For example, rolling a die or tossing a coin are experiments.
  • Outcome: A possible result of an experiment. For example, rolling a '4' is an outcome.
  • Sample space: The set of all possible outcomes of an experiment. When rolling a six-sided die, the sample space is {1, 2, 3, 4, 5, 6}.
  • Event: A subset of the sample space. An event might be getting an even number when rolling a die, which can be represented as {2, 4, 6}.
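These concepts map naturally onto Python sets; a minimal sketch of the die-rolling example (the variable names are purely illustrative):

```python
# Sample space for rolling a single six-sided die
sample_space = {1, 2, 3, 4, 5, 6}

# Event: rolling an even number (a subset of the sample space)
even_event = {2, 4, 6}

# Every event is, by definition, a subset of the sample space
assert even_event <= sample_space
```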

Probability axioms

The axiomatic foundations of probability theory were laid by Kolmogorov in 1933. There are three basic axioms:

  1. Non-negativity: The probability of any event is a non-negative real number. Symbolically, if A is an event, then P(A) ≥ 0.
  2. Normalization: The probability of the entire sample space is 1. This means that something from the sample space will definitely occur. Formally, P(S) = 1, where S is the sample space.
  3. Additivity: If two events A and B are mutually exclusive, then the probability of either event occurring is the sum of their individual probabilities: P(A ∪ B) = P(A) + P(B)
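The three axioms can be checked directly on the die-rolling sample space. Here is a sketch assuming all six outcomes are equally likely (the helper `P` and the events `A`, `B` are illustrative):

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event (a subset of the sample space),
    assuming all outcomes are equally likely."""
    return Fraction(len(event), len(sample_space))

A = {1, 2}   # rolling a 1 or a 2
B = {5, 6}   # rolling a 5 or a 6 (mutually exclusive with A)

assert P(A) >= 0                    # 1. Non-negativity
assert P(sample_space) == 1         # 2. Normalization
assert P(A | B) == P(A) + P(B)      # 3. Additivity for disjoint events
```

Note that `A | B` is Python's set union, mirroring A ∪ B in the axiom.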

Classical definition of probability

The classical probability definition, which applies when all outcomes are equally likely, is given as:

P(A) = (Number of favorable outcomes) / (Total number of outcomes in the sample space)

To illustrate, consider the experiment of rolling a fair, six-sided die. The probability of the event "a 3 comes up" is:

P(3) = 1/6

since there is only one favourable outcome (rolling a 3) out of 6 possible outcomes in total.
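This calculation can be checked with exact fractions; a small sketch:

```python
from fractions import Fraction

# Classical probability of rolling a 3 with a fair six-sided die
sample_space = {1, 2, 3, 4, 5, 6}
favorable = {3}

p_three = Fraction(len(favorable), len(sample_space))
print(p_three)  # 1/6
```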

Visualization of probability

Let us take a simple example of tossing a coin. We define the probability of getting heads when tossing a fair coin as 0.5, since the sample space is {Heads, Tails}, with each outcome having equal probability.

[Pie chart: Heads 50%, Tails 50%]
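A quick simulation makes this concrete: the relative frequency of heads over many tosses should be close to the defined probability of 0.5. A sketch (the seed is fixed only so the run is reproducible):

```python
import random

random.seed(42)  # reproducible run

n = 100_000
heads = sum(random.choice(["Heads", "Tails"]) == "Heads" for _ in range(n))
frequency = heads / n

# With this many tosses, the frequency lands very close to 0.5
assert abs(frequency - 0.5) < 0.01
```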

Conditional probability

Conditional probability is the probability of an event occurring, provided that another event has already occurred. It is calculated using the formula:

P(A | B) = P(A ∩ B) / P(B)

Here, P(A | B) is the conditional probability of event A occurring, P(A ∩ B) is the probability of both events occurring, and P(B) is the probability of B.
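Applying the formula to the running die example: the probability of rolling a 2, given that the roll is even, is 1/3. A sketch with exact fractions:

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def P(event):
    return Fraction(len(event), len(sample_space))

A = {2}          # rolling a 2
B = {2, 4, 6}    # rolling an even number

# P(A | B) = P(A ∩ B) / P(B);  '&' is set intersection in Python
p_A_given_B = P(A & B) / P(B)
print(p_A_given_B)  # 1/3
```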

Bayes' theorem

Bayes' theorem is a useful result in probability that allows us to update the probability estimate for a hypothesis as additional evidence is obtained. It is formulated as follows:

P(A | B) = P(B | A) · P(A) / P(B)

Where:

  • P(A | B) is the probability of hypothesis A given data B.
  • P(B | A) is the probability of observing data B under hypothesis A.
  • P(A) is the probability (prior probability) of hypothesis A being true.
  • P(B) is the probability of observing data B.
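A classic illustration is a diagnostic test: even a fairly accurate test can yield a surprisingly low posterior probability when the condition is rare. The sketch below uses entirely made-up numbers for illustration:

```python
# Hypothetical numbers, chosen only to illustrate Bayes' theorem
p_disease = 0.01            # P(A): prior probability of having the disease
p_pos_given_disease = 0.95  # P(B | A): test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

# P(B): total probability of a positive test (law of total probability)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# P(A | B): probability of disease given a positive test
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

# Despite the 95% sensitivity, the posterior is only about 16%,
# because the disease itself is rare.
assert 0.15 < p_disease_given_pos < 0.17
```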

Independence

Two events, A and B, are said to be independent if the occurrence of one does not affect the occurrence of the other. This can be expressed mathematically as follows:

P(A ∩ B) = P(A) · P(B)

In simple terms, if knowing that event B occurred tells you nothing about whether event A occurred or not, then the two events are independent.
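Independence can be verified by enumeration. For two fair dice rolled together, the result of the first die is independent of the result of the second; a sketch:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely ordered pairs (first die, second die)
sample_space = list(product(range(1, 7), range(1, 7)))

def P(event):
    return Fraction(len(event), len(sample_space))

A = [(d1, d2) for (d1, d2) in sample_space if d1 == 6]  # first die shows 6
B = [(d1, d2) for (d1, d2) in sample_space if d2 == 6]  # second die shows 6
A_and_B = [pair for pair in A if pair in B]

# P(A ∩ B) = P(A) · P(B), i.e. 1/36 = 1/6 · 1/6
assert P(A_and_B) == P(A) * P(B)
```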

Probability distributions

Probability distributions describe how probabilities are distributed over the values of a random variable. Here are some of the main ones:

Discrete distributions

  • Bernoulli distribution: Models a single experiment with a yes-no outcome, where one of the two answers ("success") occurs with probability p.
  • Binomial distribution: A generalized form of the Bernoulli distribution for a scenario where an experiment is performed multiple times.
  • Poisson distribution: Describes the probability of a certain number of events occurring in a given interval of time or space.
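As an example of the binomial case, its probability mass function can be written in a few lines (a sketch, not a substitute for a statistics library):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent
    Bernoulli trials, each with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 3 heads in 5 fair coin tosses:
# C(5, 3) * 0.5**5 = 10/32 = 0.3125
p = binomial_pmf(3, 5, 0.5)
```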

Continuous distributions

  • Normal distribution: Often known as the bell curve, it is characterized by its symmetrical shape and is defined by two parameters: mean and variance.
  • Exponential distribution: Models the time between events in a Poisson process, where the events occur continuously and independently.
  • Uniform distribution: All outcomes within a defined range are equally likely.
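For the normal distribution, the bell-curve density is fully determined by the mean and variance mentioned above; a sketch:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution with mean mu and
    standard deviation sigma (variance sigma**2)."""
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# The curve is symmetric about its mean
assert math.isclose(normal_pdf(1.0), normal_pdf(-1.0))
```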

Law of large numbers

The law of large numbers states that as the number of experiments increases, the average of the results obtained will be closer to the expected value. Formally, if X_1, X_2, ..., X_n are independent and identically distributed random variables with expected value E(X), then:

(X_1 + X_2 + ... + X_n) / n → E(X) as n → ∞
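A quick simulation illustrates the convergence for a fair die, whose expected value is (1+2+...+6)/6 = 3.5 (the seed is fixed only for reproducibility):

```python
import random

random.seed(0)  # reproducible run

n = 200_000
total = sum(random.randint(1, 6) for _ in range(n))
average = total / n

# With n this large, the running average sits very close to E(X) = 3.5
assert abs(average - 3.5) < 0.02
```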

Central limit theorem

The central limit theorem (CLT) states that, for a sufficiently large sample size, the sampling distribution of the sample mean will be approximately normally distributed, regardless of the original distribution of the population. This is a cornerstone concept in statistics that helps justify the use of the normal distribution in practical problems.
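The CLT can also be seen empirically: draw many samples from a decidedly non-normal distribution (a fair die is flat, not bell-shaped), and the sample means cluster around 3.5 with a roughly normal spread of σ/√n. A simulation sketch:

```python
import random
import statistics

random.seed(1)  # reproducible run

sample_size = 30
n_samples = 5_000
means = [statistics.mean(random.randint(1, 6) for _ in range(sample_size))
         for _ in range(n_samples)]

# The sample means concentrate near E(X) = 3.5 ...
assert abs(statistics.mean(means) - 3.5) < 0.05

# ... with spread close to sigma / sqrt(n) = sqrt(35/12) / sqrt(30) ≈ 0.31
spread = statistics.stdev(means)
assert 0.25 < spread < 0.37
```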

Applications of probability theory

The relevance of probability theory extends across a variety of areas:

  • In finance: for risk assessment and forecasting markets.
  • In medicine: for estimating disease prognosis and the effectiveness of treatments.
  • In computer science: Algorithms, especially in machine learning and artificial intelligence, often rely on probability models.
  • In science: Probability is used to estimate the distribution of elements, particles, etc.

Conclusion

Probability theory provides the formal basis on which concepts of uncertainty can be studied mathematically and logically. It combines intuition with quantitative analysis, making it an indispensable tool across the scientific spectrum. By understanding its terminology, axioms, and applications, predictions can be made with greater confidence.

