Summary of method
(? indicates default parameter values that might need changing)
The statistical analysis of the fMRI data was based on a linear model
with correlated errors. The design matrix of the linear model was first
convolved with a hemodynamic response function modeled as a difference of
two gamma functions (Glover, 1999) timed to coincide with the acquisition of each slice.
Drift was removed by adding polynomial covariates in the frame times,
up to degree 3(?), to the design matrix. The correlation
structure was modeled as an autoregressive process of degree 1 (Bullmore et
al., 1996). At each voxel, the autocorrelation parameter was estimated from
the least squares residuals using the Yule-Walker equations, after a
bias correction for correlations induced by the linear model.
The autocorrelation parameter was first regularized by spatial smoothing
with a 15(?)mm FWHM Gaussian filter, then used to `whiten' the data
and the design matrix. The linear model was then re-estimated using
least squares on the whitened data to produce estimates of effects and
their standard errors.
In a second step, runs, sessions and subjects were combined using a
mixed effects linear model for the
effects (as data) with fixed effects standard deviations taken
from the previous analysis. This was fitted using ReML implemented by the EM algorithm.
A random effects analysis was performed by first estimating the
ratio of the random effects variance to the fixed effects variance, then
regularizing this ratio by spatial smoothing with a 15(?)mm FWHM Gaussian
filter. The variance of the effect was then estimated by
the smoothed ratio multiplied by the fixed effects variance to achieve
higher degrees of freedom.
The resulting T statistic images were thresholded using the minimum
given by a Bonferroni correction and random field theory (Worsley et al., 1996).
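To make the first step concrete, here is a minimal single-voxel sketch of the autocorrelation estimation and whitening, not fmristat code: Y (frames x 1) and X (frames x k) are hypothetical data and design matrix, and the bias correction and spatial regularization of the autocorrelation are omitted.
b0=X\Y;                                      % initial least squares fit
r=Y-X*b0;                                    % least squares residuals
rho=sum(r(2:end).*r(1:end-1))/sum(r.^2);     % lag-1 Yule-Walker estimate
n=length(Y);
Yw=[Y(1)*sqrt(1-rho^2); Y(2:n)-rho*Y(1:n-1)];        % whitened data
Xw=[X(1,:)*sqrt(1-rho^2); X(2:n,:)-rho*X(1:n-1,:)];  % whitened design
b=Xw\Yw;                                     % effects
sd=sqrt(diag(inv(Xw'*Xw))*sum((Yw-Xw*b).^2)/(n-size(X,2)));  % their standard errors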
References
Bullmore, E.T., Brammer, M.J., Williams, S.C.R., Rabe-Hesketh, S.,
Janot, N., David, A.S., Mellers, J.D.C., Howard, R.
and Sham, P. (1996). Statistical methods of estimation and inference for
functional MR image analysis. Magnetic Resonance in Medicine, 35:261-277.
Glover, G.H. (1999). Deconvolution of impulse response in
event-related BOLD fMRI. NeuroImage, 9:416-429.
Worsley, K.J., Marrett, S., Neelin, P., Vandal, A.C., Friston, K.J.
and Evans, A.C. (1996). A unified statistical approach for determining
significant signals in images of cerebral activation. Human Brain
Mapping, 4:58-73.
The latest release of fmristat
John Aston has now written a GUI: fmristat_gui.m.
The latest release has been written up in two papers submitted to NeuroImage:
A General Statistical Analysis for fMRI Data and
Estimating the delay of the hemodynamic response in fMRI data.
All programs now read and write ANALYZE format (if the extension
is .img) as well as MINC format files (if the extension is .mnc).
Fmrilm has been extended to estimate delays as well as effects with the
addition of one more parameter, and
fmridesign has been extended to
provide the necessary
extra derivatives of the hrf. See the end of this document for instructions.
(Note that the EXCLUDE parameter is no longer necessary for
fmridesign and it must be omitted.)
The program can now estimate delays in seconds, shown below
colouring the outer surface of the activated regions in top and
cut-away front views.
If you don't ask for delays,
fmrilm should be completely backwards compatible with previous versions. Fmrilm has been merged with fmrilm_arp so that fmrilm can now fit autoregressive models of arbitrary order.
Multistat has been updated to fit a random effects model
by REML using the EM algorithm. This gives a better analysis than the previous
version for combining subjects (the previous version was better at combining runs
on the same subject, but this is usually not as important as combining subjects).
Stat_threshold has been extended to find
thresholds for F statistics as well as T statistics, and to find P-values
for both if peak values are given instead. It also calculates P-values
for the spatial extent of clusters of voxels above a threshold.
Locmax is a new function that finds local maxima of a minc
file above a threshold.
Extract is a new function that extracts values from
a minc file at specified voxels (in `register' format).
Example contains all the matlab commands for the
example used here.
Data contains all the data for the
example used here.
Plotting the hemodynamic response function (hrf) using fmridesign
The hrf is modeled as the difference of two gamma density functions (Glover, G.H. (1999). "Deconvolution of impulse response in event-related
BOLD fMRI." NeuroImage, 9:416-429).
The parameters of the hrf are specified by a row vector whose elements are:
1. PEAK1: time to the peak of the first gamma density;
2. FWHM1: approximate FWHM of the first gamma density;
3. PEAK2: time to the peak of the second gamma density;
4. FWHM2: approximate FWHM of the second gamma density;
5. DIP: coefficient of the second gamma density.
The final hrf is: gamma1/max(gamma1)-DIP*gamma2/max(gamma2) scaled so that its total integral is 1.
If PEAK1=0 then there is no smoothing of that event type with the hrf.
If PEAK1>0 but FWHM1=0 then the design is lagged by PEAK1.
The default, chosen by Glover (1999) for an auditory stimulus, is:
hrf_parameters=[5.4 5.2 10.8 7.35 0.35]
To look at the hemodynamic response function, try:
time=(0:240)/10;
hrf0=fmridesign(time,0,[1 0],[],hrf_parameters);
plot(time,hrf0,'LineWidth',2)
xlabel('time (seconds)')
ylabel('hrf')
title('Glover hrf model')

Making the design matrices using fmridesign
Specifying all the frame times and slice times is now obligatory.
For 120 scans, separated by 3 seconds, and 13 interleaved slices every 0.12 seconds use:
frametimes=(0:119)*3;
slicetimes=[0.14 0.98 0.26 1.10 0.38 1.22 0.50 1.34 0.62 1.46 0.74 1.58 0.86];
The events are specified by a matrix of EVENTS, or directly by a stimulus design matrix S,
or both. EVENTS is a matrix whose rows are events and whose columns are:
1. EVENTID - an integer from 1:(number of events) to identify event type;
2. EVENTIMES - start of event, synchronized with frame and slice times;
3. DURATION (optional - default is 0) - duration of event;
4. HEIGHT (optional - default is 1) - height of response for event.
For each event type, the response is a box function starting at the event times, with the
specified durations and heights, convolved with the hemodynamic response function (see above).
If the duration is zero, the response is the hemodynamic response function whose integral
is the specified height - useful for ‘instantaneous’ stimuli such as visual stimuli.
The response is then subsampled at the appropriate frame and slice times to create a
design matrix for each slice, whose columns correspond to the event id number.
EVENTS=[] will ignore event times and just use the stimulus design matrix S (see next).
This is a sample run on Brian Ha's data, which was a block design of 3 scans rest,
3 scans hot stimulus, 3 scans rest, 3 scans warm stimulus, repeated 10 times (120 scans total).
The hot event is identified by 1, and the warm event by 2. The events start at times
9,27,45,63... and each has a duration of 9s, with equal height:
eventid=kron(ones(10,1),[1; 2]);
eventimes=(0:19)'*18+9;
duration=ones(20,1)*9;
height=ones(20,1);
events=[eventid eventimes duration height]
Events can also be supplied by a stimulus design matrix S (not recommended),
whose rows are the frames,
and columns are the event types. Events are created for each column, beginning at the
frame time for each row of S, with a duration equal to the time to the next frame, and
a height equal to the value of S for that row and column. Note that a constant term is
not usually required, since it is removed by the polynomial trend terms provided N_POLY>=0.
For the same experiment:
S=kron(ones(10,1),kron([0 0; 1 0; 0 0; 0 1],ones(3,1)));
Either of these gives the same cache of design matrices, X_CACHE, a 4-D array
(Dim 1: frames; Dim 2: response variables;
Dim 3: derivative + 1 ; Dim 4: slices):
X_cache=fmridesign(frametimes,slicetimes,events,[],hrf_parameters);
X_cache=fmridesign(frametimes,slicetimes, [] , S,hrf_parameters);
or just
X_cache=fmridesign(frametimes,slicetimes,events);
if you are using the default hrf parameters.
To look at the two columns of the design matrix for the 5th slice:
plot(squeeze(X_cache(:,:,1,5)),'LineWidth',2)
legend('Hot','Warm')
xlabel('frame number')
ylabel('response')
title('Hot and Warm responses')

Analysing one run with fmrilm
First set up the contrasts. CONTRAST is a matrix whose rows are contrasts for the
statistic images, with row length equal to the number of regressor variables in X_CACHE.
We wish to look for regions responding to the
hot stimulus, the warm stimulus, and the difference hot-warm:
contrast=[1 0;
0 1;
1 -1];
EXCLUDE is a list of frames that should be excluded from the analysis.
This must be used with Siemens EPI scans to remove the first few frames, which do
not represent steady-state images. Default is [1], but [1 2] is better:
exclude=[1 2];
WHICH_STATS is a logical matrix indicating output statistics by 1.
Rows correspond to rows of CONTRAST, columns correspond to:
1: T statistic image, OUTPUT_FILE_BASE_mag_t.mnc or .img (for contrasts
in the delays, mag is replaced by del in the extension).
The degrees of freedom is DF. Note that tstat=effect/sdeffect.
2: effect image, OUTPUT_FILE_BASE_mag_ef.mnc or .img.
3: standard deviation of the effect, OUTPUT_FILE_BASE_mag_sd.mnc or .img.
4: F-statistic image, OUTPUT_FILE_BASE_mag_F.mnc or .img, for testing
all rows of CONTRAST that deal with magnitudes (see CONTRAST_IS_DELAY
below). The degrees of freedom are [DF, P].
5: the temporal autocorrelation(s), OUTPUT_FILE_BASE_rho.mnc or .img.
6: the residuals from the model, OUTPUT_FILE_BASE_resid.mnc or .img,
only for the non-excluded frames (warning: uses lots of memory).
7: the whitened residuals from the model, OUTPUT_FILE_BASE_wresid.mnc or
.img, only for the non-excluded frames (warning: uses lots of memory).
8: the AR parameter(s) a_1 ... a_p, in OUTPUT_FILE_BASE_A.mnc or .img.
If WHICH_STATS is a row vector of length 8, then it is used for all
contrasts; if the number of columns is less than 8 it is padded
with zeros. Default is 1.
To get t statistics, effects and their sd's, and the F statistic for testing for
any effect of either hot or warm vs baseline, and the autocorrelation:
which_stats=[1 1 1 1 1];
Input minc file for one session, and output base for each row of CONTRAST
(hot for hot, wrm for warm, hmw for hot-warm):
input_file='e:/keith/data/brian_ha_19971124_1_100326_mri_MC.mnc';
output_file_base=['e:/keith/results/ha_100326_hot';
'e:/keith/results/ha_100326_wrm';
'e:/keith/results/ha_100326_hmw'];
The big run:
[df1 p]=fmrilm(input_file, output_file_base, X_cache, contrast, exclude, which_stats);
and the output gives the degrees of freedom of the T statistic images (df1=112)
and the F statistic image (df1=112, p=2). Note that p=2, not 3, because
the third contrast is a linear combination of the first two. The files generated are:
e:/keith/results/ha_100326_hot_rho.mnc
e:/keith/results/ha_100326_hmw_mag_t.mnc
e:/keith/results/ha_100326_hmw_mag_ef.mnc
e:/keith/results/ha_100326_hmw_mag_sd.mnc
e:/keith/results/ha_100326_hot_mag_F.mnc
e:/keith/results/ha_100326_hot_mag_t.mnc
e:/keith/results/ha_100326_hot_mag_ef.mnc
e:/keith/results/ha_100326_hot_mag_sd.mnc
e:/keith/results/ha_100326_wrm_mag_t.mnc
e:/keith/results/ha_100326_wrm_mag_ef.mnc
e:/keith/results/ha_100326_wrm_mag_sd.mnc
To look at slice 4 (the slice used in Worsley et al., 2001) of the t-statistic for hot-warm try
m=fmris_read_image('e:/keith/results/ha_100326_hmw_mag_t.mnc',4,1);
imagesc(m.data'); colorbar; axis xy;

To see the brain, use the first frame of the fMRI data > 500 as a mask:
a=fmris_read_minc(input_file,4,1);
imagesc(m.data'.*(a.data'>500)); colorbar; axis xy;

To look at slices 4 to 6 simultaneously, try
m=fmris_read_image('e:/keith/results/ha_100326_hmw_mag_t.mnc',4:6,1);
mm=reshape(permute(m.data,[2 1 3 4]),m.dim(1),prod(size(m.data))/m.dim(1));
imagesc(mm.*repmat(a.data'>500,1,m.dim(3))); colorbar; axis xy;

F-tests should only be used when we are interested in any linear combination
of the contrasts. For example, an F-test would be appropriate for
detecting regions with high polynomial drift, since we would be interested in
either a linear, quadratic or cubic trend, or any linear combination
of these. The drift terms appear after the regressor terms, in the order
constant, linear, quadratic, cubic etc. So to test for a drift, use the contrast
contrast=[0 0 0 1 0 0;
0 0 0 0 1 0;
0 0 0 0 0 1]
which_stats=[0 0 0 1];
output_file_base='e:/keith/results/ha_100326_drift';
fwhm_rho='e:/keith/results/ha_100326_hot_rho.mnc';
fmrilm(input_file,output_file_base,X_cache,contrast,exclude,which_stats,fwhm_rho);
Note that by replacing the fwhm_rho parameter (15mm by default) with the file name e:/keith/results/ha_100326_hot_rho.mnc we avoid re-calculating the autocorrelations.
Another good use of the F-test is for detecting effects when the
hemodynamic response is modeled by a set of basis functions (see below).
A linear effect of temperature
Instead of using just two values (hot=49°C and warm=35°C), suppose the temperature of the stimulus varied 'randomly' over the 20 blocks, taking 5 equally spaced values between 35 and 49:
temperature=[45.5 35.0 49.0 38.5 42.0 49.0 35.0 42.0 38.5 45.5 ...
38.5 49.0 35.0 45.5 42.0 45.5 38.5 42.0 35.0 49.0]';
To model a linear effect of temperature (see Figure (a) below), use one event type with height=1 and another with a height=temperature:
events=[zeros(20,1)+1 eventimes duration ones(20,1);
zeros(20,1)+2 eventimes duration temperature]
The following contrasts will estimate first the slope of the temperature effect (per °C), then the hot, warm and hot-warm effects exactly as before:
contrast=[0 1;
1 49;
1 35;
0 14];

To model a quadratic effect of temperature (Figure b), add a third event type with a height equal to the temperature squared:
events=[zeros(20,1)+1 eventimes duration ones(20,1);
zeros(20,1)+2 eventimes duration temperature;
zeros(20,1)+3 eventimes duration temperature.^2]
contrast=[0 0 1];
The contrast will test for a quadratic effect. Taking this further, we may wish to test for a cubic (Figure c), quartic (Figure d) or even a 'non-linear' effect of temperature, where the temperature effects are arbitrary. Note that the quartic model is identical to the arbitrary model because a quartic can be fitted exactly to any arbitrary 5 points. To model the arbitrary effect, simply assign a different event type from 1 to 5 for each of the 5 different temperature values:
events=[floor((temperature-35)/3.5)+1 eventimes duration ones(20,1)]
To look for any effect of the stimulus compared to baseline, use:
contrast=[1 0 0 0 0;
0 1 0 0 0;
0 0 1 0 0;
0 0 0 1 0;
0 0 0 0 1];
or equivalently:
contrast=eye(5);
which_stats=[0 0 0 1];
The resulting F statistic image will have 5 and 109 d.f. To see if the changes in temperature have any effect, subtract the average from each row of the contrasts:
contrast=[.8 -.2 -.2 -.2 -.2;
-.2 .8 -.2 -.2 -.2;
-.2 -.2 .8 -.2 -.2;
-.2 -.2 -.2 .8 -.2;
-.2 -.2 -.2 -.2 .8];
or equivalently:
contrast=eye(5)-ones(5)/5;
which_stats=[0 0 0 1];
The resulting F statistic image will have 4 and 109 d.f. To see if the changes in temperature have a linear effect, subtract the average from the 5 temperature values:
contrast=[-7.0 -3.5 0 3.5 7.0];
which_stats=[1 1 1];
This will produce a T statistic with 109 d.f. How is this different from the previous linear effect? The effect is the same, but the standard deviation may be slightly different. The reason is that in this model, we are basing the standard deviation on errors about each temperature value; in the first analysis, it is based on errors about a linear model. As a result, the degrees of freedom are slightly different: 112 before (118 included frames less 2 event regressors and 4 drift terms), but 109 here (118 frames less 5 event regressors and 4 drift terms).
Combining runs/sessions/subjects with multistat
Repeat previous analysis for hot-warm for the three other sessions (note that the same
X_CACHE is used for each):
contrast=[1 -1];
input_file='e:/keith/data/brian_ha_19971124_1_093923_mri_MC.mnc';
output_file_base='e:/keith/results/ha_093923_hmw';
[df2 p]=fmrilm(input_file, output_file_base, X_cache, contrast, exclude, which_stats);
input_file='e:/keith/data/brian_ha_19971124_1_101410_mri_MC.mnc';
output_file_base='e:/keith/results/ha_101410_hmw';
[df3 p]=fmrilm(input_file, output_file_base, X_cache, contrast, exclude, which_stats);
input_file='e:/keith/data/brian_ha_19971124_1_102703_mri_MC.mnc';
output_file_base='e:/keith/results/ha_102703_hmw';
[df4 p]=fmrilm(input_file, output_file_base, X_cache, contrast, exclude, which_stats);
Now we can average the different sessions as follows. NOTE THAT ALL FILES MUST
HAVE EXACTLY THE SAME SHAPE (SLICES, COLUMNS, ROWS). If not, reshape them!
This is done by again setting up a design matrix and a contrast:
X=[1 1 1 1]';
contrast=[1];
which_stats=[1 1 1];
input_files_effect=['e:/keith/results/ha_093923_hmw_mag_ef.mnc';
'e:/keith/results/ha_100326_hmw_mag_ef.mnc';
'e:/keith/results/ha_101410_hmw_mag_ef.mnc';
'e:/keith/results/ha_102703_hmw_mag_ef.mnc'];
input_files_sdeffect=['e:/keith/results/ha_093923_hmw_mag_sd.mnc';
'e:/keith/results/ha_100326_hmw_mag_sd.mnc';
'e:/keith/results/ha_101410_hmw_mag_sd.mnc';
'e:/keith/results/ha_102703_hmw_mag_sd.mnc'];
output_file_base='e:/keith/results/ha_multi_hmw'
For the final run, we need to specify the row vector of degrees of freedom
of the input files, printed out by fmrilm.m:
input_files_df=[df1 df2 df3 df4]
and their fwhm, in mm:
input_files_fwhm=6
These two parameters are only used to calculate the final degrees of
freedom, and don't affect the images. Finally, for a fixed
effects analysis (not recommended!) we set the last parameter
to Inf (see later). The final run is:
df=multistat(input_files_effect,input_files_sdeffect,input_files_df, ...
input_files_fwhm,X,contrast,output_file_base,which_stats,Inf)
Note that the program prints and returns the final
degrees of freedom of the tstat image, which is df=448, the sum of the four
input dfs (4 x 112); see the discussion on fixed and random effects.
For a more elaborate analysis, e.g. comparing the first two sessions with
the next two, you can do it by:
X=[1 1 0 0; 0 0 1 1]';
contrast=[1 -1];
and the rest as above.
Fixed and random effects
In the above run, we did a fixed effect analysis, that is, the
analysis is only valid for the particular group of 4 sessions on Brian
Ha. It is not valid for an arbitrary session on Brian Ha, nor for
the population of subjects in general. It may turn out that the
underlying effects themselves vary from session to session, and from
subject to subject. To take this extra source of variability into
account, we should do a random effects analysis.
The ratio of the random effects standard error, divided by the
fixed effects standard error, is output as ha_multi_hmw_sdratio.mnc. If there
is no random effect, then sdratio should be roughly 1. If there is a
random effect, sdratio will be greater than 1. To allow for the random
effects, we multiply the sdeffect by sdratio, and divide the tstat by
sdratio. The only problem is that sdratio is itself extremely noisy
due to its low degrees of freedom, which results in
a loss of sensitivity. One way around this is to assume that the
underlying sdratio is fairly smooth, so it can be better estimated by
smoothing sdratio. This will increase its degrees of freedom, thus
increasing sensitivity. Note however that too much smoothing could
introduce bias.
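The idea can be written in a few lines. This is only a sketch with hypothetical variable names (gauss_smooth stands in for the Gaussian filter, and the smoothing fwhm is the fwhm_varatio parameter described next); it is not multistat's actual code:
sdratio_sm=gauss_smooth(sd_random./sd_fixed,fwhm_varatio);  % hypothetical smoother
sd_adj=sdratio_sm.*sd_fixed;    % regularized random effects sd
t=effect./sd_adj;               % T statistic, now with higher degrees of freedom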
The amount of smoothing is controlled by the final parameter,
fwhm_varatio. It is the fwhm in mm of the Gaussian filter used to smooth the
ratio of the random effects variance divided by the fixed effects variance.
0 will do no smoothing, and give a purely random effects analysis.
Inf will do complete smoothing to a global ratio of one,
giving a purely fixed effects analysis (which we did above).
The higher the fwhm_varatio, the higher the ultimate degrees of
freedom of the tstat image (printed out before a pause),
and the more sensitive the test. However too much smoothing will
bias the results. The program prints and returns the df before doing any analysis;
if it is too low, press control-c to cancel,
and try increasing fwhm_varatio. The default is 15, and this is the
recommended value. To invoke this, just run
which_stats=[1 1 1 1];
df=multistat(input_files_effect,input_files_sdeffect,input_files_df, ...
input_files_fwhm,X,contrast,output_file_base,which_stats,15)
Note that the degrees of freedom of tstat is now reduced to df=112.
The sd ratio of random to fixed effects looks like this:
m=fmris_read_image('e:/keith/results/ha_multi_hmw_sdratio.mnc',4,1);
imagesc(m.data'); colorbar; axis xy;

Note that it is roughly 1 outside the brain and at most places inside, indicating little random effect over the runs, although the random effects sd is about 60% higher than the fixed effects sd (sdratio = 1.6) in posterior regions. The sdratio is much higher (~3) over subjects.
Suppose we search e:/keith/results/ha_multi_hmw_t.mnc for local maxima
inside a search region of 1000 cc. The voxel volume is 2.34375*2.34375*7 = 38.4521 mm^3.
There is 6mm smoothing.
The degrees of freedom, returned by multistat, is 112.
The significance is the standard 0.05. The threshold to use is
stat_threshold(1000000,38.4521,6,112,0.05)
which gives 4.86. The resulting thresholded tstat is shown in the logo at
the top of this page, rendered using IRIS Explorer.
For the critical size of clusters above a threshold of say 3 use
stat_threshold(1000000,38.4521,6,112,0.05,3)
which gives peak_threshold=4.86 as above and extent_threshold=407 mm^3.
This means that any cluster
of neighbouring voxels above a threshold of 3
with a volume larger than 407 mm^3 is
significant at P<0.05. Extent_threshold_1 = 109 mm^3
is the extent of a single cluster chosen in advance
of looking at the data, e.g. the nearest cluster to a pre-chosen voxel
or ROI - see Friston (1997).
Fdr_threshold calculates the threshold for a t or F image for
controlling the false discovery rate (FDR). The FDR is the expected
proportion of false positives among the voxels above the threshold.
This threshold is higher than that for controlling the expected proportion
of false positives in the whole search volume, and usually lower than the
Bonferroni threshold (printed out as BON_THRESH) which controls the
probability that *any* voxels are above the threshold. The threshold depends
on the data as well as the search region,
but does not depend on the smoothness of the data. We must specify a search volume,
here chosen as the first frame of the raw fMRI data thresholded at a value of 500:
input_file='e:/keith/results/ha_multi_hmw_t.mnc';
mask_file='e:/keith/data/brian_ha_19971124_1_100326_mri_MC.mnc';
fdr_threshold(input_file,[], mask_file,500,112,0.05)
The result is a threshold of 2.97, considerably lower than the random field /
Bonferroni threshold of 4.86 above. But remember that the interpretation is
different: 95% of the voxels above 2.97 are true signal (on average), whereas
19 times out of 20, there will be *no* false positives in the voxels above 4.86.
References:
Worsley, K.J., Marrett, S., Neelin, P., Vandal, A.C.,
Friston, K.J., and Evans, A.C. (1996).
A Unified Statistical Approach for Determining Significant Signals in
Images of Cerebral Activation.
Human Brain Mapping, 4:58-73.
Locating peaks with locmax
To locate peaks, use
lm=locmax('e:/keith/results/ha_multi_hmw_t.mnc',4.86)
which gives a list of local maxima above 4.86, sorted in descending order.
The first column of LM is the values of the local maxima, and the next three
are the x,y,z voxel coordinates in `register' format, i.e. starting at zero.
To find P-values for these local maxima, replace the 0.05 by the list of
peak values from the first column of LM:
pval=stat_threshold(1000000,38.4521,6,112,lm(:,1))
Extracting values from a minc file using extract
You may want to look at the effects and their standard deviations at
some of the local maxima just found, for example the peak with
value 6.1789 in slice 5 with voxel coordinates [62 65 4]:
voxel=[62 65 4]
effect=extract(voxel,'e:/keith/results/ha_multi_hmw_ef.mnc')
sdeffect=extract(voxel,'e:/keith/results/ha_multi_hmw_sd.mnc')
which gives 5.4676 and 0.8849 respectively. Note that 5.4676/0.8849 = 6.1789,
the value of the T statistic. A matrix of voxels can be given, and the corresponding
vector of values will be extracted.
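For example, a sketch using the same effect file, with a second voxel added purely for illustration:
voxels=[62 65 4; 63 66 4];
effects=extract(voxels,'e:/keith/results/ha_multi_hmw_ef.mnc')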
To look at the time course of the data for the first run,
and compare it to the fitted values, try this:
ref_data=squeeze(extract(voxel,'e:/keith/data/brian_ha_19971124_1_100326_mri_MC.mnc'));
ef_hot=extract(voxel,'e:/keith/results/ha_100326_hot_mag_ef.mnc')
ef_wrm=extract(voxel,'e:/keith/results/ha_100326_wrm_mag_ef.mnc')
fitted=mean(ref_data)+ef_hot*X_cache(:,1,1,5)+ef_wrm*X_cache(:,2,1,5);
plot(frametimes,[ref_data fitted],'LineWidth',2); ylim([920 960]);
legend('Reference data','Fitted values');
xlabel('time (seconds) from start of epoch');
ylabel('fMRI response');

Estimating the time course of the response
We may be interested in estimating the time course of the response, rather than just the data as above. The easiest way
to do this might be to average the responses beginning at the start of each `epoch'
of the response, e.g. averaging the 12 scans starting at scans 1, 13, 25, ... 109.
This does not remove drift, nor does it give you a standard error. A better, though
more complex way, is to replace the block design of 2 blocks (9s hot, 9s warm) with
say 12 blocks of 3s each covering the entire epoch, then estimate their effects with NO convolution by the hrf:
eventid=kron(ones(10,1),(1:12)');
eventimes=frametimes';
duration=ones(120,1)*3;
height=ones(120,1);
events=[eventid eventimes duration height]
X_hrf=fmridesign(frametimes,slicetimes,events,[],zeros(1,5));
plot(squeeze(X_hrf(:,1:4,1,5)),'LineWidth',2)
legend('Time01','Time02','Time03','Time04')
xlabel('frame number')
ylabel('response')
title('Slice 5, first 4 times only')

The advantage of this is that it is much more flexible; it can be applied to
event related designs, with randomly timed events, even with events almost
overlapping, and the duration of the `events' need not be equal to the TR.
Moreover, the results from separate runs/sessions/subjects can be
combined using multistat.
Now the whole time course is covered by events, and the baseline has disappeared,
so we must contrast each event with the average of all the other events, as follows:
contrast=[eye(12)-ones(12)/12];
exclude=[1 2];
which_stats=[0 1 1 1];
input_file='e:/keith/data/brian_ha_19971124_1_100326_mri_MC.mnc';
output_file_base=['e:/keith/results/ha_100326_time01';
'e:/keith/results/ha_100326_time02';
'e:/keith/results/ha_100326_time03';
'e:/keith/results/ha_100326_time04';
'e:/keith/results/ha_100326_time05';
'e:/keith/results/ha_100326_time06';
'e:/keith/results/ha_100326_time07';
'e:/keith/results/ha_100326_time08';
'e:/keith/results/ha_100326_time09';
'e:/keith/results/ha_100326_time10';
'e:/keith/results/ha_100326_time11';
'e:/keith/results/ha_100326_time12'];
fmrilm(input_file,output_file_base,X_cache,contrast,exclude,which_stats,fwhm_rho);
We need 12 output files, but only the effect and its standard error. The F-statistic
has been requested so we can find voxels with significant responses. Its
critical threshold is:
stat_threshold(1000000,38.4521,6,[11 103])
which gives 5.18. The local maxima above this are:
lm=locmax([output_file_base(1,:) '_mag_F.mnc'],5.18)
The extracted effects and their standard errors at a voxel near the previous
one, [63 66 4], are:
voxel=[63 66 4]
values=extract(voxel,output_file_base,'_mag_ef.mnc')
sd=extract(voxel,output_file_base,'_mag_sd.mnc')
Plotting these against time gives the estimated response, together with 1 sd error bars. Note that the times are offset by the slice times, so they are in seconds starting from the beginning of slice acquisition. On top of this we can add the modeled response,
obtained by extracting the effects of the hot and warm stimuli, then multiplying these effects by the hrf convolved with the 9s blocks for the hot and warm stimuli (also shown):
b_hot=extract(voxel,'e:/keith/results/ha_100326_hot_mag_ef.mnc')
b_wrm=extract(voxel,'e:/keith/results/ha_100326_wrm_mag_ef.mnc')
time=(1:360)/10;
X_hrf=fmridesign(time,0,[1 9 9 1],[],hrf_parameters);
hrf=squeeze(X_hrf(:,1,1));
plot((0:12)*3+slicetimes(voxel(3)),values([1:12 1]),'k', ...
[0:11; 0:11]*3+slicetimes(voxel(3)), [values+sd; values-sd],'g', ...
time,[zeros(1,90) ones(1,90) zeros(1,180)],'r', ...
time,[zeros(1,270) ones(1,90)],'b', ...
time,hrf*b_hot+hrf([181:360 1:180])*b_wrm,'g');
legend('Estimated response','Modeled response');
text(10,0.5,'Hot')
text(28,0.5,'Warm')
xlabel('time (seconds) from start of epoch');
ylabel('fMRI response');

The modeled response and the fitted response are quite close.
Note that the graph `wraps around' in time.
Estimating the delay
Rather than the whole response at one voxel, we may prefer
just one parameter, the delay, estimated at all voxels.
The second to last parameter of fmrilm is the REFERENCE_DELAY. There is
one value for each event type, or if a single value is supplied,
it is used for all event types.
If the first element of REFERENCE_DELAY is >0 then delays are estimable
as well as effects. We start with a reference hrf h(t) close
to the true hrf, e.g. the Glover (1999) hrf used by FMRIDESIGN.
The hrf for each variable is then modeled as h( t / c ) / c where
c = DELAY / REFERENCE_DELAY, i.e. the time is re-scaled by c.
The program then estimates contrasts in the c's (specified by
CONTRAST; contrasts for the polynomial drift terms are ignored),
their standard deviations, and T statistics for testing c=1
(no change). Output is DELAY = c * REFERENCE_DELAY in
OUTPUT_FILE_BASE_del_ef.mnc, sd is in OUTPUT_FILE_BASE_del_sd.mnc,
and t statistic in OUTPUT_FILE_BASE_del_t.mnc. Note that
REFERENCE_DELAY is only used to multiply c and its sd for
output. If REFERENCE_DELAY is set equal to the first
element of HRF_PARAMETERS (5.4 seconds by default) then DELAY is
the estimated time to the first peak of the hrf (in seconds).
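To see what the re-scaling does, here is a short sketch using fmridesign's Glover hrf; the value of c is chosen purely for illustration, and this is not fmrilm's internal code:
time=(0:240)/10;
h=squeeze(fmridesign(time,0,[1 0],[],[5.4 5.2 10.8 7.35 0.35]));
c=1.2;                               % e.g. a delay 20% longer than the reference
h_delayed=interp1(time,h,time/c)/c;  % h(t/c)/c: same shape, peak at c*5.4 seconds
plot(time,h,time,h_delayed,'LineWidth',2)
legend('reference hrf, c=1','delayed hrf, c=1.2')
xlabel('time (seconds)')
ylabel('hrf')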
To do the estimation, the stimuli must be convolved with first
and second derivatives of h with respect to log(c) at c=1
which are already in the 2nd and 3rd component of the 3rd dimension
of X_CACHE. Note that F statistics are not yet available with this option.
CONTRAST_IS_DELAY is a logical vector indicating if the row of CONTRAST
is a contrast in the delays (1) or magnitudes (0). Note that you cannot
estimate delays of the polynomial terms or confounds, and that if
you estimate delays (1) then each non-zero element of CONTRAST must
be estimable i.e. reference delay >0 for that response. If the length
of CONTRAST_IS_DELAY is less than the number of contrasts, it is padded
with zeros. Note that the t test for contrasts with a single number tests
that the delay is equal to the reference delay (not that it equals zero).
Default is 0, i.e. all contrasts refer to magnitudes.
Here is an example to estimate the delays of the hot, the warm
and the difference hot-warm stimuli for the above data. In this case
every contrast is a delay, so CONTRAST_IS_DELAY = [1 1 1]:
contrast=[1 0;
0 1;
1 -1]
which_stats=[1 1 1]
input_file='e:/keith/data/brian_ha_19971124_1_100326_mri_MC.mnc';
output_file_base=['e:/keith/results/ha_100326_hot';
'e:/keith/results/ha_100326_wrm';
'e:/keith/results/ha_100326_hmw']
n_poly=3;
fwhm_rho='e:/keith/results/ha_100326_hot_rho.mnc';
reference_delay=5.4;
contrast_is_delay=[1 1 1];
fmrilm(input_file, output_file_base, X_cache, contrast, exclude, ...
which_stats, fwhm_rho, n_poly, reference_delay, contrast_is_delay);
The result is the following files:
e:/keith/results/ha_100326_hmw_del_t.mnc
e:/keith/results/ha_100326_hmw_del_ef.mnc
e:/keith/results/ha_100326_hmw_del_sd.mnc
e:/keith/results/ha_100326_hot_del_t.mnc
e:/keith/results/ha_100326_hot_del_ef.mnc
e:/keith/results/ha_100326_hot_del_sd.mnc
e:/keith/results/ha_100326_wrm_del_t.mnc
e:/keith/results/ha_100326_wrm_del_ef.mnc
e:/keith/results/ha_100326_wrm_del_sd.mnc
The best way of visualizing this is to use register like this:
register e:/keith/results/ha_100326_hot_mag_t.mnc e:/keith/results/ha_100326_hot_del_ef.mnc
After synchronising the two images,
you can then search the tstat image in the first window
for regions of high signal e.g. >4, then
read off the estimated delay at that location from the second window. A fancier
rendering is shown at the top of this page.
To visualize the results in MATLAB, try this:
t=fmris_read_minc('e:/keith/results/ha_100326_hot_mag_t.mnc',5,1);
d=fmris_read_minc('e:/keith/results/ha_100326_hot_del_ef.mnc',5,1);
s=fmris_read_minc('e:/keith/results/ha_100326_hot_del_sd.mnc',5,1);
a=fmris_read_minc(input_file,5,1);
imagesc([(d.data.*(t.data>4)+(a.data<500))' (s.data.*(t.data>4)+(a.data<500))']); colorbar; axis xy;

The first image is the delay, in seconds, where the t-statistic for the
magnitude is greater than 4. The brain is outlined by thresholding the
first fMRI scan at 500. The second image gives the sd of the delay.
The delay is roughly 5.5 +/- 0.6 seconds, close to the nominal delay
of 5.4 seconds for the Glover HRF.
Note that the tstat image is not quite the same as what you would get from fmrilm
without asking for delays, i.e. with REFERENCE_DELAY=0.
The reason is that in order to calculate delays,
fmrilm fits a model with convolutions of the stimulus with
the derivative of the hrf added to the linear model.
You could now put delay effects and sdeffects from separate runs/sessions/subjects
through multistat as before.
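For instance, after running fmrilm with delay estimation on each of the four runs (only the ha_100326 delay files exist so far in this example; the other three file names below are what those runs would produce), the second level mirrors the earlier multistat call:
X=[1 1 1 1]';
contrast=[1];
which_stats=[1 1 1];
input_files_effect=['e:/keith/results/ha_093923_hmw_del_ef.mnc';
'e:/keith/results/ha_100326_hmw_del_ef.mnc';
'e:/keith/results/ha_101410_hmw_del_ef.mnc';
'e:/keith/results/ha_102703_hmw_del_ef.mnc'];
input_files_sdeffect=['e:/keith/results/ha_093923_hmw_del_sd.mnc';
'e:/keith/results/ha_100326_hmw_del_sd.mnc';
'e:/keith/results/ha_101410_hmw_del_sd.mnc';
'e:/keith/results/ha_102703_hmw_del_sd.mnc'];
output_file_base='e:/keith/results/ha_multi_hmw_del';
df=multistat(input_files_effect,input_files_sdeffect,input_files_df, ...
input_files_fwhm,X,contrast,output_file_base,which_stats,15)
Here input_files_df should hold the degrees of freedom returned by the delay runs of fmrilm, and input_files_fwhm is 6 as before.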
Effective connectivity of all voxels with a reference voxel
We might be interested in all voxels that are correlated with
the time course at a pre-chosen reference voxel, removing
the effect of the signal, to find
those voxels that are effectively connected. To do this,
we add the data from the pre-chosen voxel (ref_data) as a confound,
then estimate its effect, sd, and t statistic as before:
input_file='e:/keith/data/brian_ha_19971124_1_100326_mri_MC.mnc';
output_file_base='e:/keith/results/ha_100326_connect';
contrast=[0 0 0 0 0 0 1];
which_stats=[1 1 1];
reference_delay=0;
contrast_is_delay=0;
confounds=ref_data;
fmrilm(input_file, output_file_base, X_cache, contrast, exclude, ...
which_stats, fwhm_rho, n_poly, reference_delay, contrast_is_delay, confounds);
m=fmris_read_image('e:/keith/results/ha_100326_connect_mag_t.mnc',4,1);
imagesc((m.data.*(a.data>500))'); colorbar; axis xy;

Note that voxels in the neighbourhood of the pre-chosen voxel
are obviously highly correlated due to smoothness of the image.
To avoid very large t statistics, the t statistic image has
a ceiling of 100 (the F statistic image has a ceiling of 1000).
Local maxima of this image could be identified, their values extracted
and added as columns to ref_data to find further voxels that are correlated
with the first reference voxel removing the effect of the second reference voxel.
This could be repeated to map out the connectivity.
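As a sketch of one such iteration (the threshold, second voxel and output base are purely illustrative):
lm=locmax('e:/keith/results/ha_100326_connect_mag_t.mnc',5);  % illustrative threshold
voxel2=lm(2,2:4);                   % pick a local maximum away from the first reference voxel
ref_data2=squeeze(extract(voxel2,input_file));                % its time course
confounds=[ref_data ref_data2];     % both reference voxels as confounds
contrast=[0 0 0 0 0 0 1 0];         % test the first reference voxel's column
output_file_base='e:/keith/results/ha_100326_connect2';
fmrilm(input_file, output_file_base, X_cache, contrast, exclude, ...
which_stats, fwhm_rho, n_poly, reference_delay, contrast_is_delay, confounds);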
The same idea could be used with multistat
to look for connectivity
across runs or subjects; simply add an extra column to the design matrix X that
contains the subject or run values at the reference voxel, and set the contrast
to choose that column by putting a value of 1 for that column and zero for the others.
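One way to build that column is to read the per-run effects at the reference voxel from the _mag_ef files (a sketch; input_files_effect is the list of the four _mag_ef files from the multistat section, and voxel is the reference voxel):
ref_effect=zeros(4,1);
for i=1:4
ref_effect(i)=extract(voxel,input_files_effect(i,:));  % effect at the reference voxel, run i
end
X=[ones(4,1) ref_effect];    % second column holds the reference voxel values
contrast=[0 1];              % pick out the reference column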
Higher order autoregressive models
The last parameter of FMRILM is NUMLAGS, the number of lags or order of the autoregressive model
for the temporal correlation structure. Order 1 (the default) seems to be
adequate for 1.5T data, but higher order models may be needed for 3T data.
To assess this, the first NUMLAGS autoregression parameters can be outputted
as 'frames' by
setting the 8th element of WHICH_STATS to 1. These parameters should be zero
for orders higher than the true order of the data. Note: if NUMLAGS>1 it takes
a lot longer to fit. Here is an example of an AR(2) model:
which_stats=[1 1 1 1 1 0 0 1];
fmrilm(input_file, output_file_base, X_cache, contrast, exclude, ...
which_stats, 15, 3, 0, 0, [], 2);
m=fmris_read_minc('e:/keith/results/ha_100326_hot_A.mnc',5,1:2);
mm=reshape(permute(m.data,[2 1 3 4]),m.dim(1),prod(size(m.data))/m.dim(1));
imagesc(mm); colorbar; axis xy;
The images of the two autoregressive coefficients (below) show that the
second one, a_2, is near zero, so the first order autoregressive model
is adequate, which is what we were using before (NUMLAGS=1, the default):
