Generalized Linear Model

Iteratively Reweighted Least Squares Algorithm

Likelihood functions for binomial regression

$$P(Y_i = y_i;\ \pi_i) = \binom{m_i}{y_i}\, \pi_i^{\,y_i}\, (1-\pi_i)^{\,m_i - y_i}$$

The likelihood and the log-likelihood function of π are:

$$L(\pi; y, m) = \prod_{i=1}^{n} \exp\left\{ y_i \log\frac{\pi_i}{1-\pi_i} + m_i \log(1-\pi_i) + \log\binom{m_i}{y_i} \right\}
= \exp\left\{ \sum_{i=1}^{n} \left[ y_i \log\frac{\pi_i}{1-\pi_i} + m_i \log(1-\pi_i) + \log\binom{m_i}{y_i} \right] \right\}$$

$$\ell(\pi; y, m) = \sum_{i=1}^{n} \left[ y_i \log\frac{\pi_i}{1-\pi_i} + m_i \log(1-\pi_i) + \log\binom{m_i}{y_i} \right]
= \sum_{i=1}^{n} \left[ y_i \theta_i - m_i \log\bigl(1+e^{\theta_i}\bigr) + c_i \right],$$

where $\theta_i = \log\frac{\pi_i}{1-\pi_i}$ is the canonical parameter and $c_i = \log\binom{m_i}{y_i}$ does not depend on $\pi_i$.
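As a quick numerical check, the log-likelihood above transcribes directly into code. The following is a minimal sketch assuming NumPy and SciPy; the function and argument names (`binom_loglik`, `pi`, `y`, `m`) are illustrative, not part of the notes.

```python
import numpy as np
from scipy.special import gammaln

def binom_loglik(pi, y, m):
    """Binomial log-likelihood l(pi; y, m) summed over observations."""
    pi, y, m = (np.asarray(a, dtype=float) for a in (pi, y, m))
    # log binomial coefficient log C(m_i, y_i), computed via log-gamma
    log_binom = gammaln(m + 1) - gammaln(y + 1) - gammaln(m - y + 1)
    return np.sum(y * np.log(pi / (1 - pi)) + m * np.log(1 - pi) + log_binom)
```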

Using the logistic link, we can re-parameterize the log-likelihood function of π into the log-likelihood of β:

$$\theta_i = g(\mu_i) = \eta_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} = x_i^T \beta$$

$$\ell(\beta; x, y, m) = \sum_{i=1}^{n} \left[ y_i \eta_i - m_i \log\bigl(1+e^{\eta_i}\bigr) + c_i \right]
= \sum_{i=1}^{n} \left[ y_i x_i^T \beta - m_i \log\bigl(1+e^{x_i^T \beta}\bigr) + c_i \right]$$
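Since the constant $c_i$ does not depend on β, it can be dropped when maximizing. Below is a sketch of this re-parameterized log-likelihood, again assuming NumPy; `loglik_beta`, `X`, and `beta` are illustrative names, and `X` is assumed to carry a leading column of ones for the intercept.

```python
import numpy as np

def loglik_beta(beta, X, y, m):
    """Log-likelihood l(beta; X, y, m) under the logistic link,
    with the constant c_i omitted."""
    eta = X @ beta                         # eta_i = x_i^T beta
    # log(1 + exp(eta)) computed stably with logaddexp
    return np.sum(y * eta - m * np.logaddexp(0.0, eta))
```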

Assuming $p + 1 < n$, we have enough degrees of freedom to estimate β.

...
MLE of β

...

$$\frac{\partial \ell_i}{\partial \theta_i} = \frac{y_i - b'(\theta_i)}{a(\phi)} = \frac{y_i - \mu_i}{a(\phi)}, \qquad
\frac{\partial \theta_i}{\partial \mu_i} = \frac{1}{\partial \mu_i / \partial \theta_i} = \frac{1}{b''(\theta_i)} = \frac{1}{V(\mu_i)}, \qquad
\frac{\partial \eta_i}{\partial \beta_j} = \frac{\partial}{\partial \beta_j}\bigl(\beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}\bigr) = x_{ij}$$

Let $W_i = \dfrac{1}{V(\mu_i)} \left( \dfrac{\partial \mu_i}{\partial \eta_i} \right)^2$. By the chain rule,

$$\frac{\partial \ell_i}{\partial \beta_j}
= \frac{\partial \ell_i}{\partial \theta_i}\, \frac{\partial \theta_i}{\partial \mu_i}\, \frac{\partial \mu_i}{\partial \eta_i}\, \frac{\partial \eta_i}{\partial \beta_j}
= \frac{y_i - \mu_i}{a(\phi)}\, W_i\, \frac{\partial \eta_i}{\partial \mu_i}\, x_{ij}.$$

For the binomial model $a(\phi) = 1$, so the $j$-th component of the score is $S_j(\beta) = \sum_i (y_i - \mu_i)\, W_i\, \frac{\partial \eta_i}{\partial \mu_i}\, x_{ij}$.
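For concreteness, here is the score for the binomial model with the logit link, sketched with NumPy (`score`, `X`, `y`, `m` are illustrative names). Because the logit link is canonical, $\partial \mu_i / \partial \eta_i = V(\mu_i)$ and the score collapses to $X^T (y - \mu)$.

```python
import numpy as np

def score(beta, X, y, m):
    """Score vector dl/dbeta for the binomial GLM with logit link.
    The general term (y_i - mu_i) W_i d(eta_i)/d(mu_i) x_ij reduces
    to (y_i - mu_i) x_ij under the canonical link."""
    eta = X @ beta
    pi = 1.0 / (1.0 + np.exp(-eta))        # inverse logit
    mu = m * pi                            # mu_i = m_i * pi_i
    return X.T @ (y - mu)
```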

Information matrix

$$[I(\beta)]_{rs} = -\left[\frac{\partial S(\beta)}{\partial \beta}\right]_{rs} = -\frac{\partial^2 \ell}{\partial \beta_r\, \partial \beta_s}
= -\frac{\partial}{\partial \beta_s} \sum_{i} (y_i - \mu_i)\, W_i\, \frac{\partial \eta_i}{\partial \mu_i}\, x_{ir}
= -\sum_{i} \left[ \frac{\partial (y_i - \mu_i)}{\partial \beta_s}\, W_i\, \frac{\partial \eta_i}{\partial \mu_i}\, x_{ir}
+ (y_i - \mu_i)\, \frac{\partial}{\partial \beta_s}\!\left( W_i\, \frac{\partial \eta_i}{\partial \mu_i}\, x_{ir} \right) \right]$$

The Fisher information is the expectation of the observed information:

$$\mathcal{I}(\beta) = E\bigl(I(\beta)\bigr) = -E\bigl(H(\beta)\bigr), \qquad
E\bigl([I(\beta)]_{rs}\bigr) = -E\!\left( \sum_{i} \frac{\partial (y_i - \mu_i)}{\partial \beta_s}\, W_i\, \frac{\partial \eta_i}{\partial \mu_i}\, x_{ir} \right)
- E\!\left( \sum_{i} (y_i - \mu_i)\, \frac{\partial}{\partial \beta_s}\!\left( W_i\, \frac{\partial \eta_i}{\partial \mu_i}\, x_{ir} \right) \right)$$

Since $E(y_i - \mu_i) = 0$, the second term vanishes, and $\frac{\partial (y_i - \mu_i)}{\partial \beta_s} = -\frac{\partial \mu_i}{\partial \eta_i} x_{is}$ gives

$$\mathcal{I}(\beta)_{rs} = \sum_{i} W_i\, x_{ir}\, x_{is}, \qquad \text{i.e.}\quad \mathcal{I}(\beta) = X^T W X.$$
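Under the logit link for the binomial model, $W_i = m_i \pi_i (1 - \pi_i)$, so the expected information can be formed directly. A minimal sketch assuming NumPy; `fisher_info`, `X`, and `m` are illustrative names.

```python
import numpy as np

def fisher_info(beta, X, m):
    """Expected (Fisher) information X^T W X for the binomial GLM
    with logit link, where W = diag(m_i * pi_i * (1 - pi_i))."""
    eta = X @ beta
    pi = 1.0 / (1.0 + np.exp(-eta))
    W = m * pi * (1.0 - pi)                # diagonal of the weight matrix
    return X.T @ (X * W[:, None])          # X^T W X without forming diag(W)
```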

Now we have the Fisher scoring update

$$\beta^{(k+1)} = \beta^{(k)} + \mathcal{I}\bigl(\beta^{(k)}\bigr)^{-1} S\bigl(\beta^{(k)}\bigr)$$

Multiplying both sides by $\mathcal{I}\bigl(\beta^{(k)}\bigr)$:

$$\mathcal{I}\bigl(\beta^{(k)}\bigr)\, \beta^{(k+1)} = \mathcal{I}\bigl(\beta^{(k)}\bigr)\, \beta^{(k)} + S\bigl(\beta^{(k)}\bigr)$$

$$\mathcal{I}\bigl(\beta^{(k)}\bigr)\, \beta^{(k+1)} =
\begin{pmatrix} \sum_i W_i^{(k)}\, x_{i0}\, \eta_i^{(k+1)} \\ \vdots \\ \sum_i W_i^{(k)}\, x_{ip}\, \eta_i^{(k+1)} \end{pmatrix}
= \begin{pmatrix} \sum_i W_i^{(k)}\, x_{i0}\, z_i^{(k)} \\ \vdots \\ \sum_i W_i^{(k)}\, x_{ip}\, z_i^{(k)} \end{pmatrix},
\qquad z_i^{(k)} = \eta_i^{(k)} + \bigl(y_i - \mu_i^{(k)}\bigr)\left.\frac{\partial \eta_i}{\partial \mu_i}\right|_{\beta^{(k)}}$$

...

$$\beta^{(k+1)} = \bigl( X^T W^{(k)} X \bigr)^{-1} X^T W^{(k)} Z^{(k)},
\qquad \text{with } W^{(k)} = \begin{pmatrix} W_1^{(k)} & 0 & \cdots & 0 \\ 0 & W_2^{(k)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & W_n^{(k)} \end{pmatrix}
\quad \text{and working response } Z^{(k)} = \bigl( z_1^{(k)}, \dots, z_n^{(k)} \bigr)^T.$$
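Putting the pieces together, each iteration evaluates $\eta$, $\mu$, $W$, and the working response $Z$ at the current $\beta^{(k)}$, then solves the weighted least-squares system above. Below is a minimal sketch assuming NumPy; the function name `irls_logistic` and its arguments are illustrative, not from the notes. With the canonical logit link, Fisher scoring coincides with Newton-Raphson, so this loop is also the Newton iteration.

```python
import numpy as np

def irls_logistic(X, y, m, n_iter=25, tol=1e-8):
    """Iteratively reweighted least squares for the binomial GLM with
    logit link: beta <- (X^T W X)^{-1} X^T W z at each step.
    X: n x (p+1) design matrix with an intercept column,
    y: success counts, m: number of trials per observation."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        pi = 1.0 / (1.0 + np.exp(-eta))
        mu = m * pi
        W = m * pi * (1.0 - pi)            # W_i = m_i pi_i (1 - pi_i)
        # working response: z_i = eta_i + (y_i - mu_i) d(eta_i)/d(mu_i),
        # and d(eta_i)/d(mu_i) = 1 / W_i under the canonical link
        z = eta + (y - mu) / W
        XtW = X.T * W                       # X^T W, using broadcasting
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```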