Summary of Stochastic Processes - Course offered by Dr. Sean Smith

# Introduction

Statistics is founded on probability theory, which provides a means for modeling populations, experiments, and almost any other real-world process. All real-world processes are stochastic; deterministic (sure) processes are only a special case or idealization of stochastic ones.

Statistics is founded on probability theory and probability theory is founded on set theory.

Probability can be classified into two categories: physical and evidential. Physical probability, also referred to as objective or frequentist probability, is usually associated with a random physical system. Evidential probability, also known as Bayesian probability, can be associated with any statement; this leads to what is called subjective plausibility.

There are two classical definitions of probability. The first, given by Laplace in 1812, is

$$P(A) = \frac{N_A}{N} \tag{1}$$

where $A$ is an event, $P(A)$ is the probability of event $A$, $N_{A}$ is the number of outcomes in which event $A$ occurs, and $N$ is the number of all possible outcomes. The second definition stems from frequentism. It is based on the concept that a probability is a measure of how frequently an event occurs over a long run of trials. This is written as

$$P(A) = \lim_{N\to\infty}\frac{N_A}{N} \tag{2}$$

Finally, one can define a logical probability as *evidence $E$ supports hypothesis $H$ to a high degree*. This is also known as epistemic or inductive probability.
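The classical and frequentist definitions above can be compared numerically. The following sketch (Python standard library only; the event "roll an even number" is chosen for illustration) computes Eq. (1) directly and estimates Eq. (2) by simulation:

```python
import random

random.seed(0)

# Classical (Laplace) definition, Eq. (1): favorable outcomes over all outcomes.
outcomes = [1, 2, 3, 4, 5, 6]
favorable = [x for x in outcomes if x % 2 == 0]
p_classical = len(favorable) / len(outcomes)  # 3/6 = 0.5

# Frequentist definition, Eq. (2): relative frequency over a long run of trials.
N = 100_000
N_A = sum(1 for _ in range(N) if random.choice(outcomes) % 2 == 0)
p_frequentist = N_A / N

print(p_classical, p_frequentist)
```

As $N$ grows, the relative frequency converges toward the classical value, which is the content of the frequentist limit in Eq. (2).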

# Definitions

## Sample Spaces

The set $S$ of all possible outcomes of a particular experiment is called the sample space for the experiment. For example, if the experiment consists of throwing a die, the sample space contains 6 outcomes corresponding to the 6 faces of the die, thus

$$S = \{1, 2, 3, 4, 5, 6\} \tag{3}$$

Sample spaces can be countable or uncountable. Countable spaces are those whose elements can be put into a one-to-one correspondence with a subset of the integers. Uncountable spaces cannot be cast in this way. This distinction is important only in that it dictates the way in which probabilities can be assigned.

If one follows the frequentist interpretation of probability, then all real sample spaces are countable because of the finite precision with which we measure experimental results. If one follows the classical or Bayesian interpretations, then sample spaces can be uncountable.

## Events

An event $A$ is any collection of possible outcomes of an experiment. Then, $A$ is a subset of $S$. Therefore, we say that the event $A$ occurs if the outcome of the experiment is in the set $A$.
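Because an event is simply a subset of the sample space, these ideas map directly onto set operations. A minimal sketch in Python (the die example from above):

```python
S = {1, 2, 3, 4, 5, 6}  # sample space for one roll of a die
A = {2, 4, 6}           # event: an even number is rolled

# A qualifies as an event precisely because it is a subset of S.
assert A.issubset(S)

# The event A "occurs" if the outcome of the experiment lies in A.
outcome = 4
print(outcome in A)  # True
```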

## Disjoint Events

Two events $A$ and $B$ are said to be disjoint or mutually exclusive if their intersection is the empty set, $A \cap B = \emptyset$. A sequence of events $A_1, A_2,\cdots$ is said to be pairwise disjoint if $A_i \cap A_j = \emptyset$ for all $i\neq j$.
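The pairwise condition checks every pair of events, not just neighbors. A short sketch (the helper name `pairwise_disjoint` is illustrative, not from the course):

```python
from itertools import combinations

def pairwise_disjoint(events):
    """Check that A_i ∩ A_j = ∅ for all i ≠ j."""
    return all(a.isdisjoint(b) for a, b in combinations(events, 2))

print(pairwise_disjoint([{1, 2}, {3, 4}, {5}]))  # True
print(pairwise_disjoint([{1, 2}, {2, 3}]))       # False: both contain outcome 2
```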

## Partitions

If $A_1, A_2,\cdots$ are pairwise disjoint and $\bigcup_{i=1}^{\infty}A_{i}=S$, then the collection $A_1, A_2,\cdots$ forms a partition of $S$.

A few definitions are in order to properly communicate basic concepts in probability:

- Propensity: A physical tendency of a given physical situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such an outcome.

- Experiment: Any activity that generates observable results.
- Outcome: Result of an experiment (countable or not).
- Trial: A single performance or realization of the experiment.
- Sample space $S$: The set of all possible outcomes.
- Countable: A set whose elements can be put into a one-to-one correspondence with a subset of the integers.
- Event: Any subset of the sample space.
- Disjoint (mutually exclusive): The intersection of two sets is the empty set, i.e. $A\cap B=\emptyset$.
- Partition: A partition $B_{i}$ of a set $S$ is a division of the set into nonempty, pairwise disjoint subsets $B_{i}$. This immediately yields the following properties: $\bigcup_{i=1}^{\infty}B_{i}=S$ and $B_{i}\cap B_{j}=\emptyset$ for all $i\neq j$.
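For finite sets, the partition properties in the list above can be checked mechanically. A sketch (the helper name `is_partition` is an assumption for illustration):

```python
from itertools import combinations

def is_partition(blocks, S):
    """True if blocks are nonempty, pairwise disjoint, and their union is S."""
    nonempty = all(b for b in blocks)
    disjoint = all(a.isdisjoint(b) for a, b in combinations(blocks, 2))
    covers = set().union(*blocks) == S
    return nonempty and disjoint and covers

S = {1, 2, 3, 4, 5, 6}
print(is_partition([{1}, {2, 3}, {4, 5, 6}], S))     # True
print(is_partition([{1, 2}, {2, 3}, {4, 5, 6}], S))  # False: blocks overlap
```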

### Example: Roll of a die

- Experiment: Roll die
- Outcome: 1, 2, 3, 4, 5, 6
- Sample space: $\{1, 2, 3, 4, 5, 6\}$
- Trial: one roll
- Event: $\{1\}$, $\{2,3\}$

## Basics of Probability Theory

The realization of an experiment is an outcome in the sample space. If the experiment is repeated a number of times, different outcomes may occur each time or some outcomes may repeat. The *frequency of occurrence* of an outcome is one way of defining a probability of that particular outcome in that the outcomes that are more probable will occur more frequently. If one can describe the outcomes using probability, then we can analyze experiments statistically and infer valuable information about a certain physical process.

To avoid the pitfalls of probability interpretations, we use the axiomatic definition of probability. We first note that ALL interpretations of probability follow the same mathematical rules and procedures. Therefore, one can use the mathematics to devise the statistics of an experiment. An interpretation of these results may depend on particular situations where one interpretation may be more helpful in understanding the results than another one.

### Axiomatic Foundations

For each event $A$ in the sample space $S$, we want to associate a number between zero and one that will be called the probability of $A$ denoted by $P(A)$. What follows is the axiomatic definition of probability also known as the Kolmogorov axioms.

Given a sample space $S$ and an associated sigma algebra $\mathcal{B}$, a probability function is a real-valued function $P$ defined on $\mathcal{B}$ that satisfies

- $P(A) \geq 0$ for all $A \in \mathcal{B}$.
- $P(S) = 1$.
- If $A_1, A_2,\cdots$ are pairwise disjoint, then $P\left(\bigcup_{i=1}^\infty A_i\right) = \sum_{i=1}^\infty P(A_i)$.

Any function $P$ that satisfies these axioms is called a probability function. The axiomatic definition makes no reference to any particular interpretation of probability. It also does not tell us what particular probability function $P$ to choose. To define a probability function one must either deduce it from experiment, assume it, or base it on available information.
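On a finite sample space, one common choice is to assign a weight to each outcome and define $P(A)$ as the sum of the weights of the outcomes in $A$. The sketch below (Python; names are illustrative) builds such a function for a fair die and checks the three axioms, with the additivity axiom verified in its finite form:

```python
from itertools import combinations

S = frozenset({1, 2, 3, 4, 5, 6})
weights = {s: 1 / 6 for s in S}  # fair die: equal weight per outcome

def P(A):
    """Probability of event A, a subset of S: sum of outcome weights."""
    return sum(weights[s] for s in A)

# Axiom 1: P(A) >= 0 for every event A (here, every subset of S).
events = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]
assert all(P(A) >= 0 for A in events)

# Axiom 2: P(S) = 1.
assert abs(P(S) - 1) < 1e-12

# Axiom 3 (finite case): additivity over disjoint events.
A, B = {1, 2}, {5}
assert abs(P(A | B) - (P(A) + P(B))) < 1e-12

print("the fair-die weights define a valid probability function")
```

Any nonnegative weights summing to one would work equally well here, which reflects the text's point that the axioms alone do not single out a particular $P$.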