Unlocking Probability: From Basics to Conditional Distributions
This article walks through the fundamentals of probability: conditional probability, independent events, the law of total probability, and discrete and continuous probability distributions, including joint densities, marginal densities, and conditional distributions of random variables.
Probability and Conditional Probability
Probability
Probability is the stable value that the relative frequency of an event approaches over many repeated trials. For example, the forecast "30% chance of rain tomorrow" means that, in the long run, it rains on about 30% of days with similar conditions.
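This long-run stabilization of relative frequency can be seen in a quick simulation. The sketch below assumes a hypothetical event with true probability 0.3 (like the rain forecast) and shows the observed frequency settling toward that value as the number of trials grows:

```python
import random

random.seed(0)

# Hypothetical event with true probability 0.3 (e.g., "rain tomorrow").
# Watch the relative frequency stabilize as trials accumulate.
p_rain = 0.3
for n in (100, 10_000, 1_000_000):
    hits = sum(random.random() < p_rain for _ in range(n))
    print(f"n={n:>9}: frequency = {hits / n:.4f}")
```

With a small n the frequency can wander noticeably; by a million trials it sits very close to 0.3, which is exactly the stabilization the definition describes.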
Conditional Probability
Example: given that today is sunny, what is the probability that it rains tomorrow? Let A be the event "today is sunny"; the probability of rain under condition A is the conditional probability P(rain | A). Formally, P(B | A) = P(A ∩ B) / P(A) whenever P(A) > 0. Similarly, the unconditional probability of a stock market crash is the overall chance of a crash, while the probability of a crash given a severe recession is a conditional probability.
Independent Events
If the conditional probability equals the unconditional probability, i.e., P(B | A) = P(B), then the occurrence of A does not affect the occurrence of B, and A and B are called independent events. Equivalently, P(A ∩ B) = P(A)·P(B).
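Independence can be checked by exhaustive enumeration in a small example of my own choosing (not from the article): two fair dice, with A = "first die shows 6" and B = "second die shows 6". Counting outcomes shows P(B | A) = P(B):

```python
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

A = {o for o in outcomes if o[0] == 6}   # first die shows 6
B = {o for o in outcomes if o[1] == 6}   # second die shows 6

p_B = len(B) / len(outcomes)             # unconditional: 6/36 = 1/6
p_B_given_A = len(A & B) / len(A)        # conditional on A: 1/6
print(p_B, p_B_given_A)                  # equal, so A and B are independent
```

Conditioning on the first die tells us nothing about the second, and the counts confirm it: restricting attention to the 6 outcomes in A, exactly 1 also lies in B, reproducing the unconditional ratio 6/36.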
Law of Total Probability
When a set of events {A_i} is mutually exclusive and exhaustive (a partition of the sample space), any event B satisfies P(B) = Σ_i P(B | A_i)·P(A_i). The formula partitions the world into possible scenarios and sums the conditional probabilities, weighted by each scenario's probability, to obtain the unconditional probability.
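Returning to the stock-crash example, the law of total probability can be worked through numerically. The scenario probabilities and conditional crash probabilities below are illustrative assumptions, not figures from the article:

```python
# Hypothetical partition of the economy into three scenarios, P(A_i):
scenarios = {
    "recession": 0.2,
    "normal":    0.6,
    "boom":      0.2,
}
# Hypothetical conditional crash probabilities, P(B | A_i):
p_crash_given = {
    "recession": 0.30,
    "normal":    0.05,
    "boom":      0.02,
}

# Law of total probability: P(B) = sum of P(B | A_i) * P(A_i)
p_crash = sum(p_crash_given[s] * scenarios[s] for s in scenarios)
print(p_crash)  # 0.30*0.2 + 0.05*0.6 + 0.02*0.2 = 0.094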
Distributions and Conditional Distributions
Discrete Probability Distribution
For a discrete random variable X that can take values x₁, x₂, … with corresponding probabilities p₁, p₂, …, the distribution is described by P(X = x_i) = p_i, where each p_i ≥ 0 and Σ_i p_i = 1.
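A minimal concrete case, chosen here for illustration, is a fair six-sided die: x_i = 1, …, 6 with p_i = 1/6. Exact rational arithmetic makes it easy to verify that the probabilities sum to one:

```python
from fractions import Fraction

# Distribution of a fair die: P(X = x_i) = p_i with x_i = 1..6, p_i = 1/6.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

assert sum(pmf.values()) == 1               # probabilities sum to one
mean = sum(x * p for x, p in pmf.items())   # expected value Σ x_i p_i
print(mean)                                 # 7/2
```

The dictionary {x_i: p_i} is the entire distribution; any probability statement about X, such as its mean, reduces to arithmetic over these pairs.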
Continuous Probability Distribution
The cumulative distribution function (CDF) of a continuous random variable X is F(x) = ∫_{-∞}^{x} f(t) dt, where f(t) ≥ 0 is the probability density function; wherever f is continuous, F′(x) = f(x).
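The integral definition can be made concrete by computing a CDF numerically. As an assumed example (not from the article), take the exponential density f(t) = e^{−t} for t ≥ 0, whose exact CDF is F(x) = 1 − e^{−x}:

```python
import math

def f(t):
    # Exponential density with rate 1 (illustrative choice).
    return math.exp(-t) if t >= 0 else 0.0

def cdf(x, steps=100_000):
    # F(x) = integral of f from -inf to x; the density is zero below 0,
    # so integrate on [0, x] with the trapezoidal rule.
    if x <= 0:
        return 0.0
    h = x / steps
    total = 0.5 * (f(0) + f(x)) + sum(f(i * h) for i in range(1, steps))
    return total * h

print(cdf(1.0))  # close to 1 - e^{-1}, about 0.6321
```

The numerical value agrees with the closed form to several decimal places, which is a useful sanity check whenever a density has no elementary antiderivative.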
For multivariate random variables, we consider random vectors. For a two‑dimensional continuous random vector (X, Y), the joint density function f_{X,Y}(x,y) satisfies:
(i) f_{X,Y}(x,y) ≥ 0 for all (x,y).
(ii) ∫∫_{ℝ²} f_{X,Y}(x,y) dx dy = 1.
(iii) The probability that (X,Y) falls into a region R is ∫∫_{R} f_{X,Y}(x,y) dx dy.
From the joint density we can obtain marginal densities, e.g., the marginal density of X is f_X(x) = ∫ f_{X,Y}(x,y) dy, and similarly for Y.
The cumulative distribution function of the vector (X, Y) is F(x, y) = P(X ≤ x, Y ≤ y) = ∫_{-∞}^{x} ∫_{-∞}^{y} f_{X,Y}(t, s) ds dt.
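These properties can be checked numerically for a concrete joint density. The density f(x, y) = x + y on the unit square is an illustrative assumption: it is nonnegative and integrates to 1, and its marginal is f_X(x) = ∫ f(x, y) dy = x + 1/2:

```python
def f_joint(x, y):
    # Joint density f(x, y) = x + y on [0,1]^2 (illustrative choice).
    return x + y if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

def integrate(g, a, b, steps=500):
    # Midpoint-rule numerical integration of g on [a, b].
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# Property (ii): the joint density integrates to 1 over the plane.
total = integrate(lambda x: integrate(lambda y: f_joint(x, y), 0, 1), 0, 1)

# Marginal density of X at x = 0.3: should equal 0.3 + 0.5 = 0.8.
f_X_at_03 = integrate(lambda y: f_joint(0.3, y), 0, 1)
print(total, f_X_at_03)
```

Integrating out y collapses the two-dimensional description to a one-dimensional density for X alone, which is exactly what "marginal" means.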
Conditional Distribution
Given a condition on one random variable, the conditional distribution of another can be defined. If Y is continuous, then P(Y = y) = 0, so the elementary conditional-probability formula does not apply directly; instead, the conditional density of X given Y = y is defined as f_{X|Y}(x|y) = f_{X,Y}(x,y) / f_Y(y), provided f_Y(y) > 0. This ratio of densities mirrors the conditional probability formula P(B | A) = P(A ∩ B) / P(A).
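Continuing with the assumed joint density f(x, y) = x + y on the unit square: its marginal is f_Y(y) = y + 1/2, so the conditional density is f_{X|Y}(x|y) = (x + y) / (y + 1/2). A quick numerical check confirms that, for each fixed y, this is a genuine density in x:

```python
def f_cond(x, y):
    # Conditional density f_{X|Y}(x|y) = f_{X,Y}(x,y) / f_Y(y)
    # for the illustrative joint density f(x, y) = x + y on [0,1]^2,
    # whose marginal is f_Y(y) = y + 1/2.
    return (x + y) / (y + 0.5)

def integrate(g, a, b, steps=1000):
    # Midpoint-rule numerical integration of g on [a, b].
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# For any fixed y, the conditional density integrates to 1 in x.
for y in (0.1, 0.5, 0.9):
    print(y, integrate(lambda x: f_cond(x, y), 0, 1))
```

Dividing by f_Y(y) renormalizes the slice of the joint density at height y so that it has total mass 1, just as dividing by P(A) renormalizes conditional probabilities.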
Source: Chen Qiang, Advanced Econometrics and Stata Applications, Higher Education Press, 2010.
Model Perspective
Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".