3110 Midterm Cheat Sheet

Change of Variables

Distribution function technique:
$F_Y(y) = P(Y \le y) = P(h(X_1,\dots,X_n) \le y)$, $\quad f_Y(y) = \dfrac{\partial F_Y(y)}{\partial y}$

Transformation technique (one region):
$f_Y(y) = f_X\big(h^{-1}(y)\big)\left|\dfrac{d\,h^{-1}(y)}{dy}\right|$

Transformation technique (two regions):
$f_Y(y) = f_X\big(h_1^{-1}(y)\big)\left|\dfrac{d\,h_1^{-1}(y)}{dy}\right| + f_X\big(h_2^{-1}(y)\big)\left|\dfrac{d\,h_2^{-1}(y)}{dy}\right|$

Multivariate:
$f_{Y_1,\dots,Y_n}(y_1,\dots,y_n) = f_{X_1,\dots,X_n}\big(g_1(y_1,\dots,y_n),\dots,g_n(y_1,\dots,y_n)\big)\,|J|$, where
$|J| = \left|\det\begin{bmatrix} \frac{\partial X_1}{\partial Y_1} & \frac{\partial X_1}{\partial Y_2} \\ \frac{\partial X_2}{\partial Y_1} & \frac{\partial X_2}{\partial Y_2} \end{bmatrix}\right|$ (then integrate out the dummy variables)

Order Statistics
$f_{X_{(1)}}(x) = n\,[1 - F_X(x)]^{n-1} f_X(x)$
$f_{X_{(n)}}(x) = n\,[F_X(x)]^{n-1} f_X(x)$
$f_{X_{(r)}}(x) = \dfrac{n!}{(r-1)!\,(n-r)!}\,[F_X(x)]^{r-1} f_X(x)\,[1 - F_X(x)]^{n-r}$
Sample median (sample size $2n+1$): $f_{\tilde X}(x) = f_{X_{(n+1)}}(x) = \dfrac{(2n+1)!}{n!\,n!}\,[F_X(x)]^{n} f_X(x)\,[1 - F_X(x)]^{n}$

Unbiasedness
$\operatorname{bias}(\hat\theta) = E(\hat\theta - \theta) = E(\hat\theta) - \theta$
$\operatorname{MSE}(\hat\theta) = E[(\hat\theta - \theta)^2] = \operatorname{var}(\hat\theta) + [E(\hat\theta) - \theta]^2$

Efficiency (compare unbiased estimators)
$\operatorname{efficiency}(\hat\theta) = \dfrac{\text{CRLB}}{\operatorname{var}(\hat\theta)}$, equal to $1$ if $\operatorname{var}(\hat\theta) = \text{CRLB}$

UMVUE (Cramér–Rao lower bound)
$\operatorname{var}(\hat\theta) \ge \dfrac{1}{n\,E\!\left[\left(\frac{\partial \ln f(x;\theta)}{\partial \theta}\right)^{2}\right]}$
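The minimum-order-statistic formula can be sanity-checked by simulation. A minimal sketch (parameters chosen for illustration, not from the sheet): for $X \sim$ Exponential(1), $F_X(x) = 1 - e^{-x}$, so $f_{X_{(1)}}(x) = n e^{-nx}$, i.e. the minimum of $n$ iid Exp(1) draws is Exp($n$).

```python
import math
import random

# Sketch: check f_{X_(1)}(x) = n[1 - F_X(x)]^{n-1} f_X(x) for Exp(1).
# The implied CDF of the minimum is F_{X_(1)}(t) = 1 - e^{-nt}.
random.seed(0)
n, trials, t = 5, 50_000, 0.2

mins = [min(random.expovariate(1.0) for _ in range(n)) for _ in range(trials)]
empirical = sum(m <= t for m in mins) / trials   # simulated P(X_(1) <= t)
theoretical = 1 - math.exp(-n * t)               # closed form, here 1 - e^{-1}

print(empirical, theoretical)
```

With 50,000 trials the two values agree to about two decimal places.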
| Distribution | PDF / PMF | E(X) | E(X²) | Var(X) |
|---|---|---|---|---|
| Uniform(a, b) | $\frac{1}{b-a},\ a \le x \le b$ | $\frac{a+b}{2}$ | $\frac{a^2+ab+b^2}{3}$ | $\frac{(b-a)^2}{12}$ |
| Beta(α, β) | $\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} x^{\alpha-1}(1-x)^{\beta-1},\ 0<x<1$ | $\frac{\alpha}{\alpha+\beta}$ | $\frac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)}$ | $\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$ |
| Gamma(k, θ) | $\frac{1}{\Gamma(k)\theta^k} x^{k-1} e^{-x/\theta},\ x>0$ | $k\theta$ | $k(k+1)\theta^2$ | $k\theta^2$ |
| Poisson(λ) | $\frac{\lambda^x e^{-\lambda}}{x!},\ x=0,1,2,\dots$ | $\lambda$ | $\lambda(\lambda+1)$ | $\lambda$ |
| Binomial(n, p) | $\binom{n}{x} p^x (1-p)^{n-x},\ x=0,1,\dots,n$ | $np$ | $np(1-p)+n^2p^2$ | $np(1-p)$ |
| Geometric(p) | $(1-p)^{x-1} p,\ x=1,2,3,\dots$ | $\frac{1}{p}$ | $\frac{2-p}{p^2}$ | $\frac{1-p}{p^2}$ |
| χ²(k) | $\frac{1}{2^{k/2}\Gamma(k/2)} x^{k/2-1} e^{-x/2},\ x>0$ | $k$ | $k(k+2)$ | $2k$ |
| Exponential(λ) | $\lambda e^{-\lambda x},\ x>0$ | $\frac{1}{\lambda}$ | $\frac{2}{\lambda^2}$ | $\frac{1}{\lambda^2}$ |
| Normal(μ, σ²) | $\frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{1}{2\sigma^2}(x-\mu)^2}$ | $\mu$ | $\mu^2+\sigma^2$ | $\sigma^2$ |
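The table's E(X), E(X²), and Var(X) columns satisfy $\operatorname{Var}(X) = E(X^2) - [E(X)]^2$ row by row. A minimal sketch checking two discrete rows by summing the pmf directly (Poisson truncated far into its tail; parameter values are arbitrary):

```python
import math

# Check the Poisson(lambda) row: E(X) = lambda, E(X^2) = lambda(lambda+1).
lam, terms = 3.0, 100   # truncation at x = 100 leaves a negligible tail
pmf = [lam**x * math.exp(-lam) / math.factorial(x) for x in range(terms)]
pe1 = sum(x * p for x, p in enumerate(pmf))
pe2 = sum(x**2 * p for x, p in enumerate(pmf))
assert abs(pe1 - lam) < 1e-9
assert abs(pe2 - lam * (lam + 1)) < 1e-9
assert abs((pe2 - pe1**2) - lam) < 1e-9          # Var = lambda

# Check the Binomial(n, p) row: E(X) = np, E(X^2) = np(1-p) + n^2 p^2.
n, p = 10, 0.3
bpmf = [math.comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
be1 = sum(x * q for x, q in enumerate(bpmf))
be2 = sum(x**2 * q for x, q in enumerate(bpmf))
assert abs(be1 - n * p) < 1e-12
assert abs(be2 - (n * p * (1 - p) + (n * p)**2)) < 1e-12
```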
Consistency
$\lim_{n\to\infty} P(|\hat\theta - \theta| < \epsilon) = \lim_{n\to\infty} P(-\epsilon \le \hat\theta - \theta \le \epsilon) = \lim_{n\to\infty} P(\theta - \epsilon \le \hat\theta \le \theta + \epsilon) = 1$

If $\hat\theta$ is an unbiased estimator of the parameter $\theta$ and $\operatorname{var}(\hat\theta) \to 0$ as $n \to \infty$, then $\hat\theta$ is a consistent estimator of $\theta$.
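Consistency can be seen empirically: for the sample mean of Uniform(0, 1) draws ($\mu = 0.5$), the probability $P(|\bar X - \mu| < \epsilon)$ climbs toward 1 as $n$ grows. A minimal sketch with arbitrary $\epsilon$ and trial counts:

```python
import random

random.seed(1)
mu, eps, trials = 0.5, 0.05, 2000

def coverage(n):
    """Simulated P(|Xbar - mu| < eps) for a sample of size n."""
    hits = 0
    for _ in range(trials):
        xbar = sum(random.random() for _ in range(n)) / n
        hits += abs(xbar - mu) < eps
    return hits / trials

p10, p100, p1000 = coverage(10), coverage(100), coverage(1000)
print(p10, p100, p1000)   # increases toward 1 as n grows
```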

Chebyshev's Theorem
$P(|X - \mu| < k\sigma) \ge 1 - \dfrac{1}{k^2}$, or equivalently $P(|X - \mu| \ge k\sigma) \le \dfrac{1}{k^2}$
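Note that the bound holds for any distribution with finite variance, and is usually loose. A minimal sketch checking it by simulation on Exponential(1), where $\mu = \sigma = 1$ (distribution and $k$ chosen for illustration):

```python
import random

# Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2. For Exp(1), mu = sigma = 1,
# and the exact tail P(|X - 1| >= 2) = P(X >= 3) = e^{-3} ~ 0.05 << 0.25.
random.seed(2)
trials, k = 100_000, 2.0

xs = [random.expovariate(1.0) for _ in range(trials)]
tail = sum(abs(x - 1.0) >= k * 1.0 for x in xs) / trials
print(tail, 1 / k**2)   # empirical tail vs. the Chebyshev bound
```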

Sufficiency
If the conditional distribution of the sample given $\hat\theta$ depends on $\theta$, then $\hat\theta$ is not sufficient.

$f(X_1 = x_1, \dots, X_n = x_n \mid \hat\theta) = \dfrac{f(X_1 = x_1, \dots, X_n = x_n, \hat\theta)}{g(\hat\theta)} = \dfrac{f(X_1 = x_1, \dots, X_n = x_n)}{g(\hat\theta)}$, where $g(\hat\theta)$ is the pdf of $\hat\theta$

$\hat\theta$ is a sufficient estimator iff the joint density can be factorized (factorization theorem):

$f(X_1 = x_1, \dots, X_n = x_n; \theta) = g(\hat\theta, \theta)\, h(x_1, \dots, x_n)$

Method of Moments (k moments for k parameters)
Match population moments to sample moments: $E(X) = \bar X$, $E(X^2) = \overline{X^2}$, $\dots$ and solve for the parameters.

Method of Maximum Likelihood
$L(\theta) = L(\theta; x_1, \dots, x_n) = f(x_1, \dots, x_n; \theta) = \prod_{i=1}^{n} f(x_i; \theta)$
$l(\theta) = \ln L(\theta) = \ln \prod_{i=1}^{n} f(x_i; \theta) = \sum_{i=1}^{n} \ln f(x_i; \theta)$
Solve $\dfrac{dl(\theta)}{d\theta} = 0$ to find the critical point $\hat\theta$
Check $\left.\dfrac{d^2 l(\theta)}{d\theta^2}\right|_{\theta = \hat\theta} < 0$ to confirm a maximum
Case of 2+ parameters: if the Hessian $\begin{bmatrix} \frac{\partial^2 l}{\partial \theta_1^2} & \frac{\partial^2 l}{\partial \theta_1 \partial \theta_2} \\ \frac{\partial^2 l}{\partial \theta_2 \partial \theta_1} & \frac{\partial^2 l}{\partial \theta_2^2} \end{bmatrix}$ is negative definite, then $\hat\theta_1, \hat\theta_2$ are MLEs of $\theta_1, \theta_2$

Bayesian Estimation
(Prior distribution) $g(\theta)$ = prior belief about $\theta$
(Likelihood) $L(\theta) = f(x; \theta)$ = likelihood of the data given $\theta$
(Posterior distribution) $h(\theta \mid x) = \dfrac{f(x, \theta)}{f(x)} = \dfrac{f(x; \theta)\, g(\theta)}{f(x)} = \dfrac{L(\theta)\, g(\theta)}{f(x)}$
$L(\theta)\, g(\theta)$ = unnormalized posterior
$f(x) = \int_{\text{all } \theta} L(\theta)\, g(\theta)\, d\theta$ = marginal likelihood
$\hat\theta_B = E(\theta \mid x) = \int_{\text{all } \theta} \theta\, h(\theta \mid x)\, d\theta$ = Bayesian estimate
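The posterior formulas above can be instantiated with the standard Beta–Bernoulli conjugate pair (this example and its numbers are not from the sheet): prior $g(\theta) = \text{Beta}(a, b)$ and $s$ successes in $n$ Bernoulli trials give posterior $\text{Beta}(a + s,\, b + n - s)$, so $\hat\theta_B = E(\theta \mid x) = \frac{a + s}{a + b + n}$.

```python
# Beta(a, b) prior, s successes in n Bernoulli trials (illustrative values).
a, b = 2.0, 2.0
n, s = 20, 14

# Closed-form Bayesian estimate: posterior mean of Beta(a + s, b + n - s).
theta_B = (a + s) / (a + b + n)   # = 16/24 = 2/3
print(theta_B)

# Cross-check by normalizing L(theta) g(theta) numerically on a grid:
# h(theta|x) = L(theta) g(theta) / f(x), with f(x) the normalizing sum.
grid = [i / 10_000 for i in range(1, 10_000)]
unnorm = [t**(s + a - 1) * (1 - t)**(n - s + b - 1) for t in grid]
fx = sum(unnorm)                  # marginal likelihood (up to the grid step)
post_mean = sum(t * u for t, u in zip(grid, unnorm)) / fx
print(post_mean)                  # matches theta_B to ~3 decimal places
```

The grid step cancels in the ratio, so no explicit integration constant is needed; this is exactly the $L(\theta)g(\theta)/f(x)$ normalization from the formulas above.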