
Matrix

Linear Independence
Linearly dependent: 𝑐1, 𝑐2, …, 𝑐𝑛 not all zero
→ at least one vector can be expressed as a linear combination of the others
Linearly independent: 𝑐1 = 𝑐2 = ⋯ = 𝑐𝑛 = 0 is the only solution
→ no vector can be expressed as a linear combination of the others
Both refer to the equation 𝑐1𝒂𝟏 + 𝑐2𝒂𝟐 + ⋯ + 𝑐𝑛𝒂𝒏 = 𝟎, where 𝒂𝟏, 𝒂𝟐, …, 𝒂𝒏 are vectors
Matrix → capital letters; vectors → lower-case bold letters
Transposition
Rank of a Matrix
Rank of a matrix, r(A): the number of linearly independent column vectors in A
Rank determines the existence and multiplicity of solutions to systems of linear
equations (calculator function: rref(matrix))
Minor of order k in matrix A: delete rows and columns to leave a k × k submatrix; its determinant is the minor

Determinants
Example: a 3 × 4 matrix has
4 minors of order 3
18 minors of order 2
12 minors of order 1
Rank = order of largest minor that is non-zero
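A minimal sketch of the rank facts above with numpy (the 3 × 4 example matrix is assumed for illustration):

```python
import numpy as np

# Assumed 3 x 4 example matrix; row 2 is twice row 1, so rank < 3.
A = np.array([[1, 2, 3, 4],
              [2, 4, 6, 8],
              [1, 0, 1, 0]])

# numpy computes r(A) directly; it equals the order of the
# largest non-vanishing minor.
print(np.linalg.matrix_rank(A))   # 2
```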

Augmented matrix
𝑨𝒃 = (𝑨 | 𝒃): the coefficient matrix A with b appended as an extra column

Inverse Matrix
𝑨⁻¹ = adj(𝑨)/|𝑨|; adj(𝑨) = adjoint of 𝑨. Note that 𝐴𝑥 = 𝑏 then gives 𝑥 = 𝑨⁻¹𝒃
There will only be a solution if 𝒓(𝑨) = 𝒓(𝑨𝒃)

k = rank
m = number of equations (simultaneous equations)
n = number of variables

k < m: rank less than the number of equations → there are (m − k) superfluous
equations
k < n: rank less than the number of variables → (n − k) variables can be chosen freely,
i.e. (n − k) degrees of freedom
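A hedged sketch of the 𝑟(𝐴) = 𝑟(𝑨𝒃) existence check with numpy (the system is an assumed example):

```python
import numpy as np

# Assumed example: m = 3 equations, n = 2 variables.
A = np.array([[1.0,  1.0],
              [2.0,  2.0],
              [1.0, -1.0]])
b = np.array([[2.0], [4.0], [0.0]])

Ab = np.hstack([A, b])                 # augmented matrix A_b = [A | b]
k = np.linalg.matrix_rank(A)

print(k == np.linalg.matrix_rank(Ab))  # True -> a solution exists
print(k)                               # k = 2 < m = 3 -> 1 superfluous equation
```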

Cramer’s Rule: 𝑥𝑖 = |𝐴𝑖|/|𝐴|, where 𝐴𝑖 is A with its i-th column replaced by b

Eigenvalues
Scalar 𝜆
Eigenvalue, aka characteristic root, i.e. a value of 𝜆
Eigenvector, aka characteristic vector, i.e. when 𝜆 takes a certain value, what is
the vector x?
𝑨𝒙 = 𝝀𝒙
Unique eigenvectors: obtained by orthonormalizing the eigenvectors, i.e. imposing x’x = 1

Sum of eigenvalues of an n × n matrix = sum of its diagonal elements (aka the trace, tr(A))
Product of eigenvalues of an n × n matrix = determinant of A
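Both facts are easy to verify numerically; a sketch with an assumed 2 × 2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # assumed example matrix

eigvals = np.linalg.eigvals(A)       # eigenvalues: 3 and 1

print(np.isclose(eigvals.sum(), np.trace(A)))        # sum = trace = 4
print(np.isclose(eigvals.prod(), np.linalg.det(A)))  # product = |A| = 3
```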

When (1, 2, 8) are all zeros → homogeneous system, so |A| = 0 is required for a non-trivial solution
Eigenvalue formula: |𝑨 − 𝝀𝑰| = 𝟎
Compute the 𝜆 values

Sub each 𝜆 into (𝐴 − 𝜆𝐼)𝑥 = 0


The solutions are the eigenvectors (order matters: eigenvector 𝑥𝑖 pairs with eigenvalue 𝜆𝑖)

Euclidean norm: ‖𝑥‖ = √(𝑥′𝑥)

Orthonormalized eigenvectors
x’x = 1
Take the transpose product 𝑥1′𝑥1 = 1 and solve for the scale factor t
Plug the t value into the 𝑥1 and 𝑥2 equations to find the orthonormalized
eigenvectors

Orthogonal matrix: satisfies 𝑋′𝑋 = 𝐼

Orthogonal = at right angles
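As a sketch of the two properties just listed: numpy returns eigenvectors already scaled so that x’x = 1, and for a symmetric matrix the matrix of eigenvectors is orthogonal (X′X = I). The example matrix is assumed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, so eigenvectors are orthogonal

_, X = np.linalg.eig(A)             # columns of X are the eigenvectors

print(np.isclose(X[:, 0] @ X[:, 0], 1.0))   # each eigenvector has x'x = 1
print(np.allclose(X.T @ X, np.eye(2)))      # orthogonal matrix: X'X = I
```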

Quadratic Forms
Defined by a square, symmetric matrix (a non-symmetric matrix can be replaced by its symmetric equivalent (𝐴 + 𝐴′)/2)

FINDING DEFINITENESS

When, for all x ≠ 0:

Q(x) > 0 → positive definite
Q(x) ≥ 0 → positive semidefinite
Q(x) < 0 → negative definite
Q(x) ≤ 0 → negative semidefinite

Q(x) is indefinite when Q(x*) < 0 and Q(y*) > 0 for some x*, y*, i.e. it takes both negative and
positive values
OR
Principal minor ∆𝑟 → delete corresponding rows and columns (same indices)
Note: |A| is a principal minor itself (no row or column deleted)

Leading principal minor 𝐷𝑟 → must contain 𝑎11


Note: 𝐷𝑟 is the determinant of the leading principal submatrix of order r

Q positive definite ⇔ 𝐷𝑘 > 0 for all k
Q positive semidefinite ⇔ ∆𝑘 ≥ 0 for all k
Q negative definite ⇔ (−1)ᵏ𝐷𝑘 > 0 for all k
Q negative semidefinite ⇔ (−1)ᵏ∆𝑘 ≥ 0 for all k
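A sketch of the 𝐷𝑘 test with numpy (the symmetric 3 × 3 matrix is an assumed example):

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # assumed symmetric example

# D_k = determinant of the top-left k x k submatrix (leading principal minor)
D = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

print(np.round(D, 6))                # [2. 3. 4.]
print(all(d > 0 for d in D))         # all D_k > 0 -> Q positive definite
```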

Calculating ∆𝑘 using matrix A


Calculus and Matrix Algebra


Symmetric A: ∂(𝑥′𝐴𝑥)/∂𝑥 = 2𝐴𝑥 (in general (𝐴 + 𝐴′)𝑥)

Optimization
Existence of Extrema
An (a, b) set: OPEN → only interior points
A set: CLOSED → interior + boundary points
A point: boundary point (shown in diagram)
Bounded → the whole set is contained within a sufficiently large circle
Closed set not equal to bounded (PROOF)
So, closed + bounded = compact

Single Variable Optimization
Global & local min/max
Closed bound [a, b] → includes end points
f’(c) = 0 → stationary point
Question: find extreme values
#01: FIND STATIONARY POINTS, i.e. set the first-order partials f’(x) = f’(y) = 0 first
#02: SUB INTO FUNCTION EQUATION
a. Constraint equation
b. Limits – note that anything with 𝑥² and 𝑦² is a circle, so the limit is mirrored
Largest f(x, y) → maximum
Smallest f(x, y) → minimum
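A sketch of steps #01 and #02 with sympy (the function is an assumed example):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 2*x                # assumed example function

# #01: stationary points, where both first-order partials vanish
points = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(points)                        # [{x: 1, y: 0}]

# #02: substitute back into f to compare candidate values
print([f.subs(p) for p in points])   # [-1]
```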

Functions with Three or more variables


Hessian matrix
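sympy can build the Hessian (the matrix of second-order partials) directly; a sketch with an assumed three-variable function:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + x*y + y**2 + z**2         # assumed example function

H = sp.hessian(f, (x, y, z))         # matrix of all second-order partials
print(H)                             # Matrix([[2, 1, 0], [1, 2, 0], [0, 0, 2]])
```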

Constrained Optimization
Lagrange Multiplier
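The standard set-up is ℒ(x, y, λ) = f(x, y) − λ(g(x, y) − c), with first-order conditions ℒ’(x) = ℒ’(y) = ℒ’(λ) = 0. A sympy sketch with an assumed objective and constraint:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x * y                            # assumed objective
g = x + y - 10                       # assumed constraint, g = 0

L = f - lam * g                      # the Lagrangian
foc = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(foc, [x, y, lam], dict=True))   # x = y = 5, lambda = 5
```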

Multivariate Optimization
Convex set

IF given x & y variables and asked to show a max/min point, use the formulas below


Concave function

Convex function

Saddle Point
Takes on both ‘+’ and ‘−’ (max and min form) → can be checked using the
equations above.

#01: FIND STATIONARY POINTS, i.e. set the first-order partials f’(x) = f’(y) = 0 first
#02: Determine which case of the equations above it fits
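For two variables the usual second-order check is the sign of f’’(xx) f’’(yy) − (f’’(xy))² at the stationary point; a sketch with an assumed saddle-point example:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - y**2                      # assumed example with a saddle at (0, 0)

fxx = sp.diff(f, x, 2)
fyy = sp.diff(f, y, 2)
fxy = sp.diff(f, x, y)

disc = fxx * fyy - fxy**2            # second-order test discriminant
print(disc)                          # -4 < 0 -> saddle point
```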

Probability Discrete
Basic Notations
Sample space Ω

Distribution
Uncertain outcome → capital letters
Discrete random variable probability distribution → probability mass function
(pmf)
Continuous random variable probability distribution → probability density function (pdf)
All outcomes → universal set

∅ refers to the null (empty) set.
P(A') refers to the probability of the complement of event A.
P(A ∩ B) refers to the probability of the intersection of events A and B.
P(A ∪ B) refers to the probability of the union of events A and B.

Mutually exclusive → no common element
Exhaustive events → union of all events = Ω

Variance
𝑟th moment / central moment
Skewness
Degree of peakedness of distribution, aka kurtosis
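A sketch of these quantities (variance as the 2nd central moment, skewness from the 3rd, kurtosis from the 4th) for an assumed discrete pmf:

```python
# Assumed pmf of a discrete random variable X
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

mean = sum(x * p for x, p in pmf.items())
m2 = sum((x - mean) ** 2 * p for x, p in pmf.items())   # variance
m3 = sum((x - mean) ** 3 * p for x, p in pmf.items())   # 3rd central moment
m4 = sum((x - mean) ** 4 * p for x, p in pmf.items())   # 4th central moment

print(mean, m2)
print(m3 / m2 ** 1.5)   # skewness
print(m4 / m2 ** 2)     # kurtosis (degree of peakedness)
```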


Combinatorial Analysis (Count)

Bernoulli Trials
“success” and “failure” – Bernoulli experiment
Probability of success denoted by p

Order is important
Without replacement: 𝑛!/(𝑛 − 𝑟)!
With replacement: 𝑛^𝑟

Order is not important
Without replacement: 𝐶(𝑛, 𝑟) = 𝑛!/(𝑟!(𝑛 − 𝑟)!)
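The three counting formulas map directly onto Python's math module (math.perm and math.comb, Python 3.8+); n = 5 and r = 3 are assumed example values:

```python
import math

n, r = 5, 3
print(math.perm(n, r))   # ordered, without replacement: n!/(n-r)! = 60
print(n ** r)            # ordered, with replacement: n^r = 125
print(math.comb(n, r))   # unordered, without replacement: n!/(r!(n-r)!) = 10
```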

Conditional Probability
𝑃(𝐴|𝐵) = 𝑃(𝐴 ∩ 𝐵)/𝑃(𝐵), read as the probability of A given B
A and B independent: 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵)

Bayes’ Theorem: 𝑃(𝐴|𝐵) = 𝑃(𝐵|𝐴)𝑃(𝐴)/𝑃(𝐵)
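A numerical sketch of Bayes' theorem (all probabilities are assumed example values):

```python
p_A = 0.01                  # assumed prior P(A)
p_B_given_A = 0.95          # assumed P(B|A)
p_B_given_notA = 0.05       # assumed P(B|A')

# Total probability: P(B) = P(B|A)P(A) + P(B|A')P(A')
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
print(p_B_given_A * p_A / p_B)   # ~0.161
```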

The Binomial Distribution
The Geometric Distribution
The Poisson Distribution

The Negative Binomial Distribution

The Hypergeometric Distribution
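As one sketch for this family of distributions, the binomial pmf P(X = k) = C(n, k) pᵏ(1 − p)ⁿ⁻ᵏ via scipy, with assumed parameters:

```python
from scipy.stats import binom

n, p = 10, 0.5              # assumed number of trials and success probability
print(binom.pmf(3, n, p))   # P(X = 3) = C(10, 3) * 0.5**10 ~ 0.117
```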
