STATS 763
SECOND SEMESTER, 2023
STATISTICS
Advanced Regression Methodology
1. Consider the following code and output, wherein we model and test the Volume variable from the trees data seen in class.
> data(trees)
> head(trees)
Girth Height Volume
1 8.3 70 10.3
2 8.6 65 10.3
3 8.8 63 10.2
4 10.5 72 16.4
5 10.7 81 18.8
6 10.8 83 19.7
> dim(trees)
[1] 31 3
> ## Model with three free parameters
> mod.free <- glm(Volume~log(Height)+log(Girth)
+ ,family=Gamma(link=log),data=trees)
> coef(mod.free)
(Intercept) log(Height) log(Girth)
-6.691109 1.132878 1.980412
> ## Model with fixed coefficients for log(Height) and log(Girth)
> mod.fixed <- glm(Volume~offset(log(Height))+offset(2*log(Girth))
+ ,family=Gamma(link=log),data=trees)
> coef(mod.fixed)
(Intercept)
-6.166161
> ## Inverse link for log link
> ilink <- function(eta) return(exp(eta))
> ## Derivative of the link log(mu)
> dlink <- function(mu) return(1/mu)
> ## Variance function V(mu)=mu^2
> vfun <- function(mu) return(mu^2)
> ## Model matrix X from model with all three parameters and outcome Y
> X <- model.matrix(mod.free)
> Y <- trees$Volume
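As a sanity check (not shown in the output above), applying the inverse link to the linear predictor reproduces the fitted means from mod.free; this is a quick way to confirm that X, ilink and coef(mod.free) fit together:

## Sanity check: exp(X %*% beta-hat) equals the fitted means of the
## Gamma GLM with log link; all.equal() should return TRUE.
all.equal(as.vector(ilink(X %*% coef(mod.free))),
          unname(fitted(mod.free)))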
• Mystery Function 1
> Mystery.Function1 <- function(beta,X,ilink,dlink,vfun) {
+ fit <- as.vector(ilink(X%*%beta))
+ D <- diag(1/dlink(fit))
+ Vinv <- diag(1/vfun(fit))
+ return(t(X)%*%D%*%Vinv%*%D%*%X)
+ }
> Mystery.Function1(coef(mod.free),X,ilink,dlink,vfun)
            (Intercept) log(Height) log(Girth)
(Intercept)    31.00000    134.1442   79.27734
log(Height)   134.14420    580.6935  343.36999
log(Girth)     79.27734    343.3700  204.37612
• Mystery Function 2
> Mystery.Function2 <- function(beta,X,Y,ilink,dlink,vfun) {
+ fit <- as.vector(ilink(X%*%beta))
+ D <- diag(1/dlink(fit))
+ Vinv <- diag(1/vfun(fit))
+ return(t(X)%*%D%*%Vinv%*%(Y-fit))
+ }
> Mystery.Function2(coef(mod.free),X,Y=trees$Volume
+ ,ilink=ilink,dlink=dlink,vfun=vfun)
                     [,1]
(Intercept) -7.253620e-10
log(Height)  8.338585e-08
log(Girth)   1.682590e-07
• Mystery Function 3
> Mystery.Function3 <- function(beta,df,X,Y,ilink,vfun) {
+ fit <- as.vector(ilink(X%*%beta))
+ return(
+ sum((Y-fit)^2/vfun(fit))/(nrow(X)-df)
+ )}
> Mystery.Function3(coef(mod.free),3,X,Y=trees$Volume,ilink,vfun)
[1] 0.006427286
• Mystery Function 4
> Mystery.Function4 <- function(beta,df,X,Y,ilink,dlink,vfun) {
+ VEC <- Mystery.Function2(beta,X,Y,ilink,dlink,vfun)
+ return(t(VEC)%*%
+ solve(Mystery.Function1(beta,X,ilink,dlink,vfun))%*%
+ VEC/Mystery.Function3(beta,df,X,Y,ilink,vfun)
+ )}
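For reference (this call is not shown above), evaluating Mystery.Function4 at the maximum likelihood estimate gives a value that is numerically zero, because the vector returned by Mystery.Function2 at coef(mod.free) is itself essentially zero, as its output above shows:

## Illustrative call at the MLE; the returned 1x1 matrix is numerically zero.
Mystery.Function4(coef(mod.free), df = 3, X, Y = trees$Volume,
                  ilink = ilink, dlink = dlink, vfun = vfun)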
a) [9 marks] Describe the objects returned by each of Mystery Functions 1, 2 and 3. Be brief and precise.
b) [4 marks] Explain the output of Mystery Function 2 when evaluated at coef(mod.free).
c) [4 marks] Explain how you can use the outputs of Mystery Functions 1 and 3 to estimate the variance matrix of coef(mod.free).
For d), e) and f), assume that the Gamma model for tree volume with covariates (Intercept), log(Height) and log(Girth) is the correct model for the data. Let the null hypothesis be that the coefficients for log(Height) and log(Girth) are 1 and 2 respectively. We want to use a call to Mystery.Function4 to test the null hypothesis.
d) [6 marks] With what numerical values in the argument beta do you need to call Mystery Function 4 to obtain a statistic to test H0: β_girth = 2 and β_height = 1 vs H1: not H0?
e) [3 marks] What is the approximate null distribution of the test statistic obtained from Mystery Function 4 with suitable arguments?
f) [4 marks] If we call Mystery Function 1 with the argument beta=c(1,1,1), we get
> Mystery.Function1(c(1,1,1),X,ilink,dlink,vfun)
            (Intercept) log(Height) log(Girth)
(Intercept)    31.00000    134.1442   79.27734
log(Height)   134.14420    580.6935  343.36999
log(Girth)     79.27734    343.3700  204.37612
i.e. the same value as if we called it with the maximum likelihood estimate of beta. Explain why that is the case.
2. (30 marks) NHANES is a survey that periodically collects health and nutrition data from a clustered probability sample of the United States population. Here we analyse data from four years of the survey. We are interested in the relationship between sodium (‘salt’) and potassium in the diet and systolic blood pressure (in a blood pressure reading of, say, 120/70, the 120 is the systolic and the 70 is the diastolic pressure). We expect, based on previous research, that there will be a positive effect of higher sodium intake on blood pressure and a negative effect of higher potassium intake. The population standard deviations of the variables are approximately 20 mmHg for systolic blood pressure, 2 grams/day for sodium intake, and 1.5 grams/day for potassium intake.
The following coefficients come from linear regression models. Model A has dietary sodium intake (grams per day) and potassium intake (grams per day) as predictors in an ordinary linear model. Models B through E use the weights, clusters, and strata from the NHANES design and are fitted with the svyglm() function. Model B has the same predictors as model A. Model C adds age and gender, both known from other data to be related to blood pressure. Model D adds body mass index (as a measurement of weight) and race/ethnicity; both are highly statistically significant. Model E adds diastolic blood pressure, which is also highly statistically significant.
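For orientation, a design-based fit of this kind might be specified as in the sketch below; the data frame and variable names (nhanes, psu, stratum, wt4yr, sbp, sodium, potassium) are illustrative assumptions, not the actual NHANES variable names.

## Hypothetical sketch of fitting model B with the survey package;
## all object and variable names here are assumed for illustration.
library(survey)
des <- svydesign(ids = ~psu, strata = ~stratum, weights = ~wt4yr,
                 nest = TRUE, data = nhanes)
modB <- svyglm(sbp ~ sodium + potassium, design = des)
summary(modB)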
Coefficient estimates for sodium and potassium (standard errors in parentheses):

             A        B        C        D        E
sodium     -1.16    -0.69     0.59     0.42     0.37
           (0.11)   (0.17)   (0.16)   (0.16)   (0.15)
potassium   1.13     0.78    -1.09    -0.85    -1.00
           (0.16)   (0.27)   (0.18)   (0.17)   (0.14)
(a) [4 marks] Why are the point estimates for models A and B different?
(b) [4 marks] Why are the standard error estimates for models A and B different?
(c) [6 marks] Which model gives the best estimate of the effect of sodium and potassium intake on systolic blood pressure? Explain.
(d) [6 marks] What assumptions do you need for these estimates to estimate the effect of sodium and potassium intake?
(e) [5 marks] Is there strong evidence for association of sodium and potassium intake with blood pressure in the expected direction? Explain.
(f) [5 marks] Are the estimated effects of sodium and potassium on blood pressure large or small?
3. (30 marks) A bank is interested in predicting churn, i.e., whether a customer will leave the bank within the next year (churn=1) or not (churn=0). You have data on a random sample of 10,000 bank accounts, with variables on the customer (income, bank balance, age, gender, city of residence in the United States) and on the relationship with the bank (time they have had an account at this bank, do they have a credit card with the bank, is their salary paid directly into the account, do they have a home loan with the bank).
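For concreteness, a penalised logistic regression for churn might be set up as in the sketch below; the data frame accounts and its column names are illustrative assumptions, not the actual variable names in the data.

## Hypothetical sketch: lasso-penalised logistic regression for churn
## with glmnet; the data frame and variable names are assumed.
library(glmnet)
x <- model.matrix(churn ~ income + balance + age + gender + city +
                    tenure + credit.card + salary.credit + home.loan,
                  data = accounts)[, -1]   # drop the intercept column
fit <- glmnet(x, accounts$churn, family = "binomial", alpha = 1)  # alpha = 1: lasso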
(a) [6 marks] Explain briefly what regularisation is in the context of modelling this sort of data.
(b) [8 marks] Compare subset selection with AIC, lasso-type penalisation, and ridge penalisation in terms of how they would treat the city of residence variable.
(c) [5 marks] How would you use a second data set of 5000 records to choose the tuning parameter for the amount of regularisation for one of these regression methods?
(d) [5 marks] The bank wants to use this model to decide which of its new customers are at risk of leaving in the next year. Why might the model perform badly for this purpose?
(e) [6 marks] Suppose the coefficient for “has a credit card with the bank” is negative. What does this imply about a strategy of issuing credit cards to customers to reduce churn?