3110 Final Cheat Sheet

Change of Variables

Distribution function technique:
$F_Y(y) = P(Y \le y) = P(h(X_1,\dots,X_n) \le y)$, $\quad f_Y(y) = \frac{\partial F_Y(y)}{\partial y}$

Transformation technique:
$f_Y(y) = f_X(h^{-1}(y))\left|\frac{d\,h^{-1}(y)}{dy}\right|$ if only one region
$f_Y(y) = f_X(h_1^{-1}(y))\left|\frac{d\,h_1^{-1}(y)}{dy}\right| + f_X(h_2^{-1}(y))\left|\frac{d\,h_2^{-1}(y)}{dy}\right|$ if two regions
$f_{Y_1,\dots,Y_n}(y_1,\dots,y_n) = f_{X_1,\dots,X_n}(g_1(y_1,\dots,y_n),\dots,g_n(y_1,\dots,y_n))\,|J|$, where
$|J| = \left|\det\begin{bmatrix}\frac{\partial X_1}{\partial Y_1} & \frac{\partial X_1}{\partial Y_2}\\[2pt] \frac{\partial X_2}{\partial Y_1} & \frac{\partial X_2}{\partial Y_2}\end{bmatrix}\right|$ (then integrate out the dummy variables)

Order Statistics
$f_{X_{(1)}}(x) = n[1-F_X(x)]^{n-1}f_X(x)$
$f_{X_{(n)}}(x) = n[F_X(x)]^{n-1}f_X(x)$
$f_{X_{(r)}}(x) = \frac{n!}{(r-1)!\,(n-r)!}[F_X(x)]^{r-1}f_X(x)[1-F_X(x)]^{n-r}$
Sample median (sample size $2n+1$): $f_{\tilde X}(x) = f_{X_{(n+1)}}(x) = \frac{(2n+1)!}{n!\,n!}[F_X(x)]^{n}f_X(x)[1-F_X(x)]^{n}$

Unbiasedness
$\mathrm{bias}(\hat\theta) = E(\hat\theta-\theta) = E(\hat\theta)-\theta$
$\mathrm{MSE}(\hat\theta) = E[(\hat\theta-\theta)^2] = \mathrm{Var}(\hat\theta) + [E(\hat\theta)-\theta]^2$

Efficiency (compare unbiased estimators)
$\mathrm{efficiency}(\hat\theta) = \frac{\mathrm{CRLB}}{\mathrm{Var}(\hat\theta)} \le 1$, with equality iff $\mathrm{Var}(\hat\theta) = \mathrm{CRLB}$

UMVUE (Cramér–Rao lower bound)
$\mathrm{Var}(\hat\theta) \ge \frac{1}{n\,E\left[\left(\frac{\partial \ln f(x;\theta)}{\partial\theta}\right)^2\right]}$
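The MSE decomposition above can be checked numerically. This is an added Monte Carlo sketch (not from the course notes; the normal population and sample size are invented) using the biased variance estimator $\hat\sigma^2 = \frac{1}{n}\sum(x_i-\bar x)^2$, whose bias is $-\sigma^2/n$:

```python
import numpy as np

# Monte Carlo check of MSE(theta_hat) = Var(theta_hat) + bias(theta_hat)^2
# for the biased variance estimator (1/n) * sum((x - xbar)^2)
# when sampling from N(0, sigma^2 = 4). Illustrative values only.

rng = np.random.default_rng(0)
n, reps, sigma2 = 10, 200_000, 4.0
samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
est = samples.var(axis=1)            # divides by n -> biased for sigma^2

mse  = np.mean((est - sigma2) ** 2)  # E[(theta_hat - theta)^2]
var  = est.var()                     # Var(theta_hat) over replications
bias = est.mean() - sigma2           # theory: -sigma^2/n = -0.4

print(mse, var + bias**2, bias)      # first two agree; bias is negative
```

The identity holds exactly for the empirical moments, so the first two printed numbers match to floating-point precision.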
Distribution | PDF / PMF | $E(X)$ | $E(X^2)$ | $\mathrm{Var}(X)$
Uniform$(a,b)$ | $\frac{1}{b-a},\ a \le x \le b$ | $\frac{a+b}{2}$ | $\frac{a^2+ab+b^2}{3}$ | $\frac{(b-a)^2}{12}$
Beta$(\alpha,\beta)$ | $\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1-x)^{\beta-1},\ 0<x<1$ | $\frac{\alpha}{\alpha+\beta}$ | $\frac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)}$ | $\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$
Gamma$(k,\theta)$ | $\frac{1}{\Gamma(k)\theta^k}x^{k-1}e^{-x/\theta},\ x>0$ | $k\theta$ | $k(k+1)\theta^2$ | $k\theta^2$
Poisson$(\lambda)$ | $\frac{\lambda^x e^{-\lambda}}{x!},\ x=0,1,2,\dots$ | $\lambda$ | $\lambda(\lambda+1)$ | $\lambda$
Binomial$(n,p)$ | $\binom{n}{x}p^x(1-p)^{n-x},\ x=0,1,\dots,n$ | $np$ | $np(1-p)+n^2p^2$ | $np(1-p)$
Geometric$(p)$ | $(1-p)^{x-1}p,\ x=1,2,3,\dots$ | $\frac{1}{p}$ | $\frac{2-p}{p^2}$ | $\frac{1-p}{p^2}$
$\chi^2(k)$ | $\frac{1}{2^{k/2}\Gamma(k/2)}x^{k/2-1}e^{-x/2},\ x>0$ | $k$ | $k(k+2)$ | $2k$
Exponential$(\lambda)$ | $\lambda e^{-\lambda x},\ x>0$ | $\frac{1}{\lambda}$ | $\frac{2}{\lambda^2}$ | $\frac{1}{\lambda^2}$
Normal$(\mu,\sigma^2)$ | $\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ | $\mu$ | $\mu^2+\sigma^2$ | $\sigma^2$
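A quick way to sanity-check any row of the table is simulation. This added sketch (p = 0.3 is an arbitrary choice) checks the Geometric row; note numpy's generator uses the same support $x = 1, 2, 3, \dots$ as the PMF above:

```python
import numpy as np

# Simulation check of the Geometric(p) row:
# E(X) = 1/p, E(X^2) = (2-p)/p^2, Var(X) = (1-p)/p^2.

rng = np.random.default_rng(1)
p = 0.3
x = rng.geometric(p, size=1_000_000)

m1, m2, v = x.mean(), (x**2).mean(), x.var()
print(m1, 1 / p)               # ~ 3.333
print(m2, (2 - p) / p**2)      # ~ 18.889
print(v, (1 - p) / p**2)       # ~ 7.778
```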
Consistency: $\lim_{n\to\infty} P(|\hat\theta-\theta|<\epsilon) = \lim_{n\to\infty} P(-\epsilon \le \hat\theta-\theta \le \epsilon) = \lim_{n\to\infty} P(\theta-\epsilon \le \hat\theta \le \theta+\epsilon) = 1$

If $\hat\theta$ is an unbiased estimator of the parameter $\theta$ and $\mathrm{Var}(\hat\theta) \to 0$ as $n \to \infty$, then $\hat\theta$ is a consistent estimator of $\theta$.

Chebyshev's Theorem: $P(|X-\mu| < k\sigma) \ge 1 - \frac{1}{k^2}$, or equivalently $P(|X-\mu| \ge k\sigma) \le \frac{1}{k^2}$
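Chebyshev's bound holds for any distribution with finite variance; this added sketch (Exponential(1) chosen arbitrarily, for which $\mu = \sigma = 1$) compares the observed tail frequency to the $1/k^2$ bound:

```python
import numpy as np

# Empirical check of Chebyshev's bound P(|X - mu| >= k*sigma) <= 1/k^2
# on Exponential(1), which has mu = sigma = 1.

rng = np.random.default_rng(2)
x = rng.exponential(1.0, size=1_000_000)
mu = sigma = 1.0

tails = {k: np.mean(np.abs(x - mu) >= k * sigma) for k in (2, 3)}
for k, tail in tails.items():
    print(k, tail, 1 / k**2)   # observed tail never exceeds the bound
```

The bound is loose here: the true tails ($e^{-3}$ and $e^{-4}$) are far below $1/4$ and $1/9$.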

Sufficiency
If the conditional distribution of the sample given $\hat\theta$ depends on $\theta$, then $\hat\theta$ is not sufficient.

$f(X_1=x_1,\dots,X_n=x_n \mid \hat\theta) = \frac{f(X_1=x_1,\dots,X_n=x_n,\hat\theta)}{g(\hat\theta)} = \frac{f(X_1=x_1,\dots,X_n=x_n)}{g(\hat\theta)}$, where $g(\hat\theta)$ is the pdf of $\hat\theta$

$\hat\theta$ is a sufficient estimator iff the joint density can be factorized (factorization theorem):

$f(X_1=x_1,\dots,X_n=x_n;\theta) = g(\hat\theta,\theta)\,h(x_1,\dots,x_n)$

Method of Moments (k moments for k parameters)
$E(X) = \bar X$, $\quad E(X^2) = \overline{X^2}$, $\dots$

Method of Maximum Likelihood
$L(\theta) = L(\theta; x_1,\dots,x_n) = f(x_1,\dots,x_n;\theta) = \prod_{i=1}^n f(x_i;\theta)$
$\ell(\theta) = \ln L(\theta) = \ln\prod f(x_i;\theta) = \sum_{i=1}^n \ln f(x_i;\theta)$
Solve $\frac{d\ell(\theta)}{d\theta} = 0$ to find the critical point $\hat\theta$
Check $\left.\frac{d^2\ell(\theta)}{d\theta^2}\right|_{\theta=\hat\theta} < 0$ to confirm a maximum
For 2+ parameters: if the Hessian $\begin{bmatrix}\frac{\partial^2\ell}{\partial\theta_1^2} & \frac{\partial^2\ell}{\partial\theta_1\partial\theta_2}\\[2pt] \frac{\partial^2\ell}{\partial\theta_2\partial\theta_1} & \frac{\partial^2\ell}{\partial\theta_2^2}\end{bmatrix}$ is negative definite at $(\hat\theta_1,\hat\theta_2)$, then $\hat\theta_1,\hat\theta_2$ are the MLEs of $\theta_1,\theta_2$

Bayesian Estimation
(Prior distribution) $g(\theta)$ = prior belief about $\theta$
(Likelihood) $L(\theta) = f(x;\theta)$ = likelihood of the data given $\theta$
(Posterior distribution) $h(\theta\mid x) = \frac{f(x,\theta)}{f(x)} = \frac{f(x;\theta)\,g(\theta)}{f(x)} = \frac{L(\theta)\,g(\theta)}{f(x)}$
$L(\theta)\,g(\theta)$ = unnormalized posterior
$f(x) = \int_{\text{all }\theta} L(\theta)\,g(\theta)\,d\theta$ = marginal likelihood
$\hat\theta_B = E(\theta\mid x) = \int_{\text{all }\theta} \theta\,h(\theta\mid x)\,d\theta$ = Bayesian estimate
Useful sum: $\sum_{n=0}^{x} a r^n = \frac{a(1-r^{x+1})}{1-r}$

Confidence Intervals
$\mu = \bar X \pm z_{1-\alpha/2}\frac{\sigma}{\sqrt n}$ (known $\sigma$)
$\mu = \bar X \pm t_{n-1,\alpha/2}\frac{S}{\sqrt n}$ (unknown $\sigma$)
$\mu = \bar X \pm z_{1-\alpha/2}\frac{S}{\sqrt n}$ (CLT, large $n$)
$\mu_1-\mu_2 = (\bar X_1-\bar X_2) \pm z_{1-\alpha/2}\sqrt{\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}}$
$\mu_1-\mu_2 = (\bar X_1-\bar X_2) \pm t_{\nu,\alpha/2}\sqrt{\frac{S_1^2}{n_1}+\frac{S_2^2}{n_2}}$ ($n_1,n_2 \ge 30$)
$\mu_1-\mu_2 = (\bar X_1-\bar X_2) \pm t_{n_1+n_2-2,\alpha/2}\sqrt{S_p^2\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}$, $\quad S_p^2 = \frac{(n_1-1)S_1^2+(n_2-1)S_2^2}{n_1+n_2-2}$
$\sigma^2 \in \left(\frac{\sum(X_i-\mu)^2}{\chi^2_{1-\alpha/2,n}},\ \frac{\sum(X_i-\mu)^2}{\chi^2_{\alpha/2,n}}\right)$ (known $\mu$)
$\sigma^2 \in \left(\frac{(n-1)S^2}{\chi^2_{1-\alpha/2,n-1}},\ \frac{(n-1)S^2}{\chi^2_{\alpha/2,n-1}}\right)$
$p = \hat p \pm z_{1-\alpha/2}\sqrt{\frac{\hat p(1-\hat p)}{n}}$
$p_1-p_2 = (\hat p_1-\hat p_2) \pm z_{1-\alpha/2}\sqrt{\frac{\hat p_1(1-\hat p_1)}{n_1}+\frac{\hat p_2(1-\hat p_2)}{n_2}}$
$\frac{\sigma_1^2}{\sigma_2^2} \in \left(\frac{S_1^2}{S_2^2}\cdot\frac{1}{F_{1-\alpha/2,n_1-1,n_2-1}},\ \frac{S_1^2}{S_2^2}\cdot\frac{1}{F_{\alpha/2,n_1-1,n_2-1}}\right)$

Hypothesis Testing
Test function: $\phi(x_1,\dots,x_n) = \begin{cases}1 & \text{if } (x_1,\dots,x_n)\in C \text{ (reject } H_0)\\ 0 & \text{otherwise}\end{cases}$
Type I error $= \alpha = P(\text{reject } H_0 \mid H_0 \text{ true})$
Type II error $= \beta = P(\text{fail to reject } H_0 \mid H_0 \text{ false})$
Power $= 1-\beta = P(\text{reject } H_0 \mid H_1 \text{ true})$
$\pi(\theta) = P(\text{reject } H_0 \mid \theta)$ (power function)

Neyman–Pearson Lemma (simple $H_0$ vs simple $H_1$)
$L_0 = \prod_{i=1}^n f(x_i;\theta_0)$ (likelihood under $H_0$), $\quad L_1 = \prod_{i=1}^n f(x_i;\theta_1)$ (likelihood under $H_1$)
Reject $H_0$ when $\frac{L_0}{L_1} \le k$; choose $k$ to satisfy $P(\text{reject } H_0 \mid H_0) = \alpha$
Critical region: the set of values making $\frac{L_0}{L_1} \le k$

Likelihood Ratio Test (general)
$L(\theta) = \prod_{i=1}^n f(x_i;\theta)$ (full likelihood)
$\Lambda = \frac{\max L_0}{\max L} = \frac{L(\tilde\theta)}{L(\hat\theta)} = \frac{\prod_{i=1}^n f(x_i;\theta_0)}{\prod_{i=1}^n f(x_i;\hat\theta)}$
Reject $H_0$ if $\Lambda \le k$ (equivalently, $-2\ln\Lambda \ge c$); choose $k$ (or $c$) so the test has level $\alpha$
Asymptotically $-2\ln\Lambda \sim \chi^2_{df}$ under $H_0$

Z/T Tests for the Mean
$C = \left\{(x_1,\dots,x_n): |\bar x-\mu_0| \ge z_{1-\alpha/2}\frac{\sigma}{\sqrt n}\right\}$ (two-sided; use $t_{n-1}$ when $\sigma$ is unknown)
One-sided: for $\theta<\theta_0$ reject if $z \le -z_{1-\alpha}$; for $\theta>\theta_0$ reject if $z \ge z_{1-\alpha}$

Difference in Means (unknown variances)
$Z_{obs} = \frac{\bar x-\bar y-\delta_0}{\sqrt{\frac{S_1^2}{n_1}+\frac{S_2^2}{n_2}}}$ ($n \ge 30$)
$t = \frac{\bar x-\bar y-\delta_0}{\sqrt{S_p^2\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}} \sim t_{n_1+n_2-2}$ ($n < 30$, pooled $\sigma^2$)

Tests for Variances
Known $\mu$: $\chi^2 = \frac{n\hat\sigma^2}{\sigma_0^2} = \frac{\sum(x_i-\mu)^2}{\sigma_0^2} \sim \chi^2_n$
Unknown $\mu$: $\chi^2 = \frac{(n-1)s^2}{\sigma_0^2} \sim \chi^2_{n-1}$
Reject $H_0$ if $\chi^2_{obs} < \chi^2_{\alpha/2}$ or $\chi^2_{obs} > \chi^2_{1-\alpha/2}$
$F = \frac{S_1^2}{S_2^2} \sim F_{n_1-1,n_2-1}$; reject $H_0$ if $F_{obs} > F_{1-\alpha/2}$ or $F_{obs} < F_{\alpha/2}$

Binomial Proportion Test (exact)
Two-sided: reject if $X \le K_1$ or $X \ge K_2$, where $K_1$ is the largest value with $P(X \le K_1 \mid H_0) \le \alpha/2$ and $K_2$ the smallest with $P(X \ge K_2 \mid H_0) \le \alpha/2$
One-sided: reject if $X \le K(\alpha)$, where $P(X \le K \mid H_0) \le \alpha$

Binomial Proportion (normal approximation)
$Z = \frac{X-n\theta_0}{\sqrt{n\theta_0(1-\theta_0)}} \sim N(0,1)$
Continuity correction: $Z = \frac{\left(x \pm \frac12\right)-n\theta_0}{\sqrt{n\theta_0(1-\theta_0)}}$; use $+\frac12$ if $x < n\theta_0$, $-\frac12$ if $x > n\theta_0$

Chi-squared Tests
Reject $H_0$ if $\chi^2_{obs} \ge \chi^2_{df,1-\alpha}$
$\sum_{i=1}^{k} Z_i^2 \sim \chi^2_k$, $\quad \chi^2 = \sum_i\left(\frac{x_i-n_i\theta_i}{\sqrt{n_i\theta_i(1-\theta_i)}}\right)^2 = \sum_{ij}\frac{(f_{ij}-E_{ij})^2}{E_{ij}} \sim \chi^2_{df}$
$df = (k-1)(c-1)$, or $k(c-1)$ if the $\theta_j$'s are given
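As a worked instance of the known-$\sigma$ $z$ interval and the matching two-sided $z$ test for the mean (the data, $\sigma$, and $\mu_0$ below are invented for illustration; $z_{0.975} \approx 1.96$ is the standard normal table value):

```python
import numpy as np

# Known-sigma z interval: mu = xbar +/- z_{1-a/2} * sigma / sqrt(n),
# and the two-sided z test: reject H0: mu = mu0 when
# |xbar - mu0| >= z_{1-a/2} * sigma / sqrt(n)  (same cutoff as the interval).

x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9])  # invented data
sigma, mu0, z = 0.4, 5.5, 1.959964                        # z_{0.975}

xbar = x.mean()
half = z * sigma / np.sqrt(len(x))       # half-width of the 95% interval
ci = (xbar - half, xbar + half)
reject = abs(xbar - mu0) >= half         # equivalently: mu0 outside ci

print(xbar, ci, reject)
```

Note the duality: the test rejects $H_0: \mu = \mu_0$ at level $\alpha$ exactly when $\mu_0$ falls outside the $1-\alpha$ confidence interval.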

To test for association (independence):

$E_{ij} = n\hat\pi_{ij} = n\hat\pi_{i\cdot}\hat\pi_{\cdot j}$, $\quad df =$ (free parameters under $H_a$) $-$ (free parameters under $H_0$)
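The independence test can be sketched on a small made-up contingency table (all frequencies below are invented):

```python
import numpy as np

# Chi-squared test of independence on a 2x3 table:
# E_ij = n * pihat_i. * pihat_.j, chi2 = sum (f_ij - E_ij)^2 / E_ij,
# df = (rows - 1) * (cols - 1).

f = np.array([[20, 30, 50],
              [30, 20, 50]], dtype=float)        # observed frequencies
n = f.sum()
E = np.outer(f.sum(axis=1), f.sum(axis=0)) / n   # expected under independence

chi2 = ((f - E) ** 2 / E).sum()
df = (f.shape[0] - 1) * (f.shape[1] - 1)
print(chi2, df)   # compare chi2 to the chi^2_{df, 1-alpha} table value
```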

Step 1. Estimate the parameter for the assumed distribution
Step 2. Compute the probability for each observation under the assumed distribution
Step 3. Compute the expected frequencies
Step 4. Test the goodness-of-fit of the assumed distribution to the observed data
$df = k - 1 - (\text{number of parameters estimated})$
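The four steps above can be sketched for a Poisson fit (the count data below are invented; the MLE of $\lambda$ is the sample mean):

```python
import numpy as np
from math import exp, factorial

# Goodness-of-fit of a Poisson model to invented count data, using the
# categories 0, 1, 2, 3, and ">= 4".

counts = np.array([0]*30 + [1]*36 + [2]*20 + [3]*10 + [4]*4)
n = len(counts)

# Step 1: estimate the parameter (MLE of lambda = sample mean).
lam = counts.mean()

# Step 2: probability of each category under the fitted Poisson.
p = [exp(-lam) * lam**k / factorial(k) for k in range(4)]
p.append(1 - sum(p))                 # P(X >= 4) as the remainder

# Step 3: expected frequencies.
E = [n * pk for pk in p]

# Step 4: chi-squared statistic; df = (#categories - 1) - (#params estimated).
obs = [30, 36, 20, 10, 4]
chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, E))
df = (len(obs) - 1) - 1
print(lam, chi2, df)   # compare chi2 to the chi^2_{df, 1-alpha} table value
```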