Lecture on 01/28/2026 - Probability Basics

Scribes: Allen Singleton and Hanan Latiff

  • What is Big Data?
  • Basic Probability Concepts
  • Discrete Random Variables

In modern computing, many problems involve big data, meaning datasets that are extremely large in size, often too large to store entirely in memory or to process using traditional deterministic algorithms.

Big data problems are characterized by:

  • Massive input sizes (millions or billions of elements)
  • Limited memory and time constraints
  • Data arriving in streams rather than all at once

Because of these constraints, classical algorithms that require multiple passes over the data or exact computations may be infeasible.

Randomized algorithms provide a powerful tool for dealing with big data. These algorithms use randomness in their logic, typically by making random choices during execution.

The key idea is that randomness allows us to:

  • Process only a small portion of the data
  • Approximate answers instead of computing exact results
  • Achieve good performance with high probability

Instead of guaranteeing correctness in every execution, randomized algorithms guarantee correctness with high probability. This tradeoff is acceptable in many big data applications where speed and scalability are more important than absolute precision.

Randomized algorithms are especially useful when:

  • Exact solutions are too slow or memory-intensive
  • Approximate answers are sufficient
  • The data is noisy or inherently uncertain

Before analyzing randomized algorithms, it is important to review basic probability concepts that will be used throughout the course. These concepts help us reason about uncertainty and quantify the likelihood of different outcomes.

A probability space consists of:

  • A sample space, which is the set of all possible outcomes
  • Events, which are subsets of the sample space
  • A probability measure that assigns a value between 0 and 1 to each event

The probability of an event represents how likely it is to occur. Probabilities satisfy the following basic properties:

  • The probability of any event is between 0 and 1
  • The probability of the entire sample space is 1
  • The probability of an impossible event is 0

In the context of randomized algorithms, probabilities are used to analyze:

  • The likelihood that an algorithm produces a correct result
  • The expected behavior of an algorithm over random choices
  • The chance that a rare or undesirable outcome occurs

Rather than guaranteeing deterministic outcomes, randomized algorithms rely on probabilistic guarantees. This means we analyze how often an algorithm succeeds or fails over many possible random executions.

Let $A$ be an event in a probability space. The complement of $A$, denoted by $\bar{A}$, represents the event that $A$ does not occur.

The complement rule states that the probability of an event not occurring is equal to one minus the probability that the event occurs:

$$P(\bar{A}) = 1 - P(A)$$

This rule follows directly from the fact that an event and its complement together cover the entire sample space. Since the probability of the sample space is $1$, the probabilities of an event and its complement must add up to $1$.

In practice, the complement rule is often useful when computing $P(A)$ directly is difficult, but computing $P(\bar{A})$ is easier. In such cases, we compute the probability of the complement first and subtract it from $1$.
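
As a small illustration (a fair-die example chosen here for concreteness, not taken from the lecture), the probability of seeing at least one six in four rolls is easiest to compute through the complement:

```python
# Complement rule: P(at least one six in 4 rolls of a fair die).
# Summing directly over the ways to see "at least one six" is tedious;
# the complement, "no six in any of the 4 rolls", is a single product.
p_no_six = (5 / 6) ** 4           # each independent roll avoids a six
p_at_least_one = 1 - p_no_six     # complement rule: 1 - P(complement)
print(p_at_least_one)             # about 0.5177
```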

In the analysis of randomized algorithms, the complement rule is frequently used to:

  • Bound the probability that an algorithm fails
  • Analyze rare or undesirable events
  • Convert success probabilities into failure probabilities, or vice versa

Two events $A$ and $B$ are said to be independent if the occurrence of one event does not affect the probability of the other event.

Formally, events $A$ and $B$ are independent if:

$$P(A \cap B) = P(A) \cdot P(B)$$

This definition captures the idea that knowing whether $A$ occurs gives no information about whether $B$ occurs, and vice versa. If the equality above does not hold, then the events are dependent.

Independence is a fundamental assumption in many randomized algorithms. Random choices made by an algorithm are often designed to be independent so that probabilities can be multiplied and analyzed more easily.

It is important to note that independence is a strong condition. Even if two events seem unrelated, they may still be dependent unless the product rule above holds exactly.

In algorithm analysis, independence allows us to:

  • Compute probabilities of multiple events occurring together
  • Analyze repeated random trials
  • Simplify probability calculations by separating events
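
As a concrete sketch (dice events chosen here for illustration, not from the lecture), the product rule can be checked by exact enumeration over two fair dice. Perhaps surprisingly, "first die is even" and "the sum is 7" turn out to be independent:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Exact probability of an event, given as a predicate on outcomes."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

first_even = lambda o: o[0] % 2 == 0    # P = 1/2
sum_is_7 = lambda o: o[0] + o[1] == 7   # P = 1/6
both = lambda o: first_even(o) and sum_is_7(o)

# The product rule holds exactly: P(A and B) = P(A) * P(B) = 1/12.
print(prob(both) == prob(first_even) * prob(sum_is_7))  # True
```

This also illustrates the warning above in reverse: intuition is a poor guide, and only the product rule itself decides independence.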

The binomial coefficient is used to count the number of ways to choose a fixed number of elements from a larger set, without regard to order.

For integers $n$ and $k$ with $0 \le k \le n$, the binomial coefficient is denoted by:

$$\binom{n}{k}$$

and represents the number of ways to choose $k$ elements from a set of $n$ elements.

The binomial coefficient is defined as:

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$

In probability, binomial coefficients arise naturally when analyzing repeated independent trials, where each trial has two possible outcomes, such as success or failure.
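
As a quick sanity check of the factorial formula (a sketch using Python's standard `math.comb`, with $n = 10$, $k = 3$ chosen arbitrarily):

```python
from math import comb, factorial

n, k = 10, 3
# Definition: C(n, k) = n! / (k! * (n - k)!)
by_formula = factorial(n) // (factorial(k) * factorial(n - k))
print(by_formula, comb(n, k))  # 120 120
```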

They are especially useful when computing probabilities involving:

  • The number of ways a certain outcome can occur
  • Multiple independent random choices
  • Counting events before assigning probabilities

In the context of randomized algorithms, binomial coefficients help quantify how many different ways an algorithm’s random decisions can lead to the same result. This allows us to combine counting arguments with probability calculations.

A random variable is a function $X : \Omega \to \mathbb{R}$, where $\Omega$ is the sample space consisting of all the possible outcomes of the experiment that $X$ models.

A random variable can be discrete, meaning that its support is finite or countably infinite, or continuous, meaning that its support is uncountable. In this course, we will primarily focus on discrete random variables, as they arise naturally in the analysis of randomized algorithms.

A discrete random variable takes on a finite or countably infinite set of values, each with an associated probability.

Example (Rolling a fair die): Consider the experiment of rolling a fair six-sided die. The sample space $\Omega = \{1, 2, 3, 4, 5, 6\}$ consists of the six possible outcomes. The random variable $X$ maps each outcome to a numerical value in its support:

$$X(\omega) = \omega \quad \text{for } \omega \in \Omega$$

Since the die is fair, each outcome occurs with equal probability. We can describe the random variable as follows:

$$P(X = k) = \frac{1}{6} \quad \text{for } k \in \{1, 2, 3, 4, 5, 6\}$$

This notation makes explicit both the possible values of the random variable and the probability with which each value occurs.

A discrete random variable is fully described by its probability mass function (PMF), which assigns a probability to each value in its support.

The probabilities assigned to a random variable $X$ satisfy the following basic properties:

  • For every value $x$ in the support, $0 \le P(X = x) \le 1$
  • The sum of the probabilities over all possible values is equal to 1

That is,

$$\sum_{x} P(X = x) = 1$$
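
Both properties can be checked directly for the fair-die PMF from the example above (a small sketch using exact rational arithmetic):

```python
from fractions import Fraction

# PMF of a fair six-sided die: each value in the support has probability 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

# Property 1: every probability lies in [0, 1].
assert all(0 <= p <= 1 for p in pmf.values())
# Property 2: the probabilities sum to exactly 1 over the support.
print(sum(pmf.values()))  # 1
```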

In randomized algorithms, random variables are used to model quantities such as running time, the number of correct outputs, or whether an algorithm succeeds or fails. By defining appropriate random variables, we can analyze the behavior of an algorithm using probability.

This perspective allows us to reason about algorithm performance in terms of likelihood and expectation, rather than exact deterministic outcomes.

The expectation of a discrete random variable $X$, denoted as $E[X]$, is a weighted average of all the possible values that $X$ takes on. The expectation is given by:

$$E[X] = \sum_{x} x \cdot P(X = x)$$
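
For instance, the expectation of the fair-die random variable from the earlier example works out to $7/2$ (a sketch with exact arithmetic):

```python
from fractions import Fraction

# E[X] = sum over x of x * P(X = x), for a fair six-sided die.
expectation = sum(x * Fraction(1, 6) for x in range(1, 7))
print(expectation)  # 7/2, i.e. 3.5
```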

Property 1 (Shift and Scale of Expectation): If $X$ is a random variable with finite expectation, then:

$$E[aX + b] = a \cdot E[X] + b$$

where $a, b \in \mathbb{R}$.

Property 2 (Linearity of Expectation): If $X_1, X_2, \ldots, X_n$ are random variables, each with finite expectation $E[X_1], E[X_2], \ldots, E[X_n]$, then:

$$E[X_1 + X_2 + \cdots + X_n] = E[X_1] + E[X_2] + \cdots + E[X_n]$$
The variance of a discrete random variable $X$, denoted as $\mathrm{Var}(X)$, is the average squared distance by which the values that $X$ takes on deviate from $E[X]$. The variance is given by:

$$\mathrm{Var}(X) = E\left[(X - E[X])^2\right]$$

Equivalently, the variance can be computed as:

$$\mathrm{Var}(X) = E[X^2] - (E[X])^2$$
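
The two formulas can be checked against each other on the fair-die example, where both give $35/12$ (a sketch with exact arithmetic):

```python
from fractions import Fraction

# Variance of a fair die roll, computed both ways.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}
mean = sum(x * p for x, p in pmf.items())                     # E[X] = 7/2
var_def = sum((x - mean) ** 2 * p for x, p in pmf.items())    # E[(X - E[X])^2]
var_alt = sum(x * x * p for x, p in pmf.items()) - mean ** 2  # E[X^2] - E[X]^2
print(var_def, var_alt)  # 35/12 35/12
```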

Property 1 (Shift and Scale of Variance): If $X$ is a random variable with finite variance, then:

$$\mathrm{Var}(aX + b) = a^2 \cdot \mathrm{Var}(X)$$

where $a, b \in \mathbb{R}$.

Property 2 (Linearity of Variance): If $X_1, X_2, \ldots, X_n$ are independent random variables, each with finite variance $\mathrm{Var}(X_1), \mathrm{Var}(X_2), \ldots, \mathrm{Var}(X_n)$, then:

$$\mathrm{Var}(X_1 + X_2 + \cdots + X_n) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2) + \cdots + \mathrm{Var}(X_n)$$

A Bernoulli random variable models the number of successes that occur in a single trial with a fixed probability of success, $p$.

By convention, we say that the Bernoulli random variable can take on either $0$ (indicating a failure) or $1$ (indicating a success). Thus, the support of the Bernoulli distribution is $\{0, 1\}$.

Example (Flipping a fair coin once): If we define a success as landing heads and a failure as landing tails, then a single flip of a fair coin is an example of a Bernoulli trial with a fixed probability of success, $p = \frac{1}{2}$. We denote this random variable as follows:

$$X \sim \mathrm{Bernoulli}\left(\tfrac{1}{2}\right)$$

The probability mass function of the Bernoulli distribution is given by:

$$P(X = 1) = p, \qquad P(X = 0) = 1 - p$$

Equivalently, we can express the Bernoulli distribution in closed form as:

$$P(X = k) = p^k (1 - p)^{1 - k} \quad \text{for } k \in \{0, 1\}$$
If $X \sim \mathrm{Bernoulli}(p)$, then the expectation of $X$ is given by:

$$E[X] = p$$

Proof

Using the closed-form definition of the Bernoulli distribution, we have:

$$E[X] = \sum_{k \in \{0, 1\}} k \cdot p^k (1 - p)^{1 - k} = 0 \cdot (1 - p) + 1 \cdot p = p$$

If $X \sim \mathrm{Bernoulli}(p)$, then the variance of $X$ is given by:

$$\mathrm{Var}(X) = p(1 - p)$$

Proof

Using the closed-form definition of the Bernoulli distribution, we have:

$$\mathrm{Var}(X) = E[X^2] - (E[X])^2 = \left(0^2 \cdot (1 - p) + 1^2 \cdot p\right) - p^2 = p - p^2 = p(1 - p)$$
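
Both results can be verified numerically for a particular $p$ (here $p = 3/10$, chosen arbitrarily for illustration):

```python
from fractions import Fraction

# Bernoulli(p): P(X = 1) = p, P(X = 0) = 1 - p.
p = Fraction(3, 10)
mean = 0 * (1 - p) + 1 * p                   # E[X]
second_moment = 0**2 * (1 - p) + 1**2 * p    # E[X^2]
variance = second_moment - mean**2           # Var(X) = E[X^2] - E[X]^2
print(mean == p, variance == p * (1 - p))    # True True
```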

A Binomial random variable models the number of successes that occur in $n$ independent Bernoulli trials with a fixed probability of success, $p$.

Since it models a count of the number of successes in $n$ trials, the Binomial random variable can take on any whole number between $0$ and $n$. Thus, the support of the Binomial distribution is $\{0, 1, \ldots, n\}$.

Example (Flipping a fair coin 50 times): If we define a success as landing heads and a failure as landing tails, then 50 flips of a fair coin is an example of a Binomial procedure with a fixed probability of success in each trial, $p = \frac{1}{2}$. We denote this random variable as follows:

$$X \sim \mathrm{Binomial}\left(50, \tfrac{1}{2}\right)$$

The probability mass function of the Binomial distribution is given by:

$$P(X = k) = \binom{n}{k} p^k (1 - p)^{n - k} \quad \text{for } k \in \{0, 1, \ldots, n\}$$
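
As a sanity check (a sketch using `math.comb` and exact rationals), the PMF for the coin-flipping example sums to one over its support:

```python
from fractions import Fraction
from math import comb

# Binomial(n, p) PMF: P(X = k) = C(n, k) * p^k * (1 - p)^(n - k).
n, p = 50, Fraction(1, 2)
pmf = {k: comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)}
print(sum(pmf.values()))  # 1
```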

Alternative Definition of the Binomial Distribution

Very often, it is useful to interpret a Binomial random variable as the sum of $n$ independent Bernoulli random variables, each with probability of success $p$. In other words:

If $X \sim \mathrm{Binomial}(n, p)$, then $X = \sum_{i=1}^{n} X_i$, where $X_1, \ldots, X_n$ are independent and each $X_i \sim \mathrm{Bernoulli}(p)$.
Expressing the Binomial random variable in this way allows for certain expressions to be simplified, such as computing the expectation and the variance of the Binomial distribution.

If $X \sim \mathrm{Binomial}(n, p)$, then the expectation of $X$ is given by:

$$E[X] = np$$

Proof

Using the alternative definition of the Binomial distribution and linearity of expectation, we can express:

$$E[X] = E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i] = \sum_{i=1}^{n} p = np$$

If $X \sim \mathrm{Binomial}(n, p)$, then the variance of $X$ is given by:

$$\mathrm{Var}(X) = np(1 - p)$$

Proof

Using the alternative definition of the Binomial distribution, and the fact that the $X_i$ are independent, we can express:

$$\mathrm{Var}(X) = \mathrm{Var}\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \mathrm{Var}(X_i) = \sum_{i=1}^{n} p(1 - p) = np(1 - p)$$
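
Both formulas can be spot-checked by simulation using the sum-of-Bernoullis view from above (a sketch; for the coin-flipping example the empirical mean and variance should land near $np = 25$ and $np(1 - p) = 12.5$):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
n, p, trials = 50, 0.5, 20_000

# Draw Binomial(n, p) samples as sums of n independent Bernoulli(p) trials.
samples = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]

mean = sum(samples) / trials
variance = sum((s - mean) ** 2 for s in samples) / trials
print(mean, variance)  # close to n*p = 25.0 and n*p*(1-p) = 12.5
```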