From: Michael E. Mann
Date: Tue, 02 Sep 2003 14:30:48 -0400
To: Tim Osborn
CC: Scott Rutherford, Michael E. Mann
Subject: Re: reconstruction uncertainties
Hi Tim
Thanks for sending this. Unfortunately, I don't really have the time to look into any of
this in detail, but let me offer the following additional explanation, which will hopefully
clarify the nature of any differences between our results. I fear that I may not have been
clear enough in my previous explanation.
The reason that our uncertainty estimates reduce little with increasing timescale for the
earlier networks is that the effective degrees of freedom are diminished sharply by the
redness of the calibration residuals for networks prior to AD 1600. But unlike
you, we do not model the residuals as an AR process--this may be the source of some of the
differences.
Back to AD 1600 (and later networks), the calibration residuals pass for "white noise",
and the estimates follow simply from the residual uncalibrated variance; the reduction
of variance upon averaging follows standard sqrt(N) statistics.
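In code, that white-noise case looks roughly like this (a minimal Python sketch with
placeholder numbers, not our actual calculation):

import numpy as np

# Minimal sketch: for white-noise calibration residuals, the standard error
# of an N-year mean falls off as 1/sqrt(N). The residual series here is
# placeholder random data, not the actual calibration residuals.
rng = np.random.default_rng(0)
resid = rng.normal(0.0, 0.2, size=80)    # stand-in for an ~80-year calibration period

sigma_annual = resid.std(ddof=1)         # annual uncertainty (1 sigma)
for N in (1, 10, 30, 40):
    sigma_N = sigma_annual / np.sqrt(N)  # white-noise reduction upon averaging
    print(f"{N:3d}-year mean: +/- {sigma_N:.3f}")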
Prior to that, the networks failed the test. So we decomposed the calibration residuals
into a "low-frequency" band (all timescales longer than 40 years, which are not
distinguishable from secular timescales, since I had a roughly 80-year series and was
evaluating the spectrum using a multiple-taper estimate with a spectral bandwidth of +/-2
Rayleigh frequencies) and a complementary higher-frequency band. We then estimated the
enhancement of unresolved variance in the
low-frequency band relative to the nominal white noise level. The enhancement was about a
factor of 5-6 or so for the earlier networks, as I recall. To get the component of
uncertainty for the low-frequency band alone (timescales longer than 40 years), I simply
took that enhancement factor x the nominal unresolved calibration variance x the bandwidth
of the "low-frequency" band (0.025 cycle/year). This yields a reduction in variance that is
far less than the nominal "sqrt N" reduction applied to the individual annual
uncertainties. Of course, one could calculate the equivalent N' (effective temporal
degrees of freedom) that this implies in a model of the residuals as AR(1) red noise, but
we didn't take this approach. We modeled it as a simple step-increase spectrum (w/ the
boundary at f=0.025 cycle/yr). Modeling the residuals as red noise would, I'd guess,
generally yield much the same result, though it might dampen the estimated enhancement of
unresolved variance at the longest timescales. In any case, it should yield similar
(though it would be very surprising if identical!) results, consistent w/ your
observations.
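For concreteness, here is a rough Python sketch of the step-spectrum calculation
(placeholder data throughout; the crude multitaper estimator below just stands in for
the actual multiple-taper estimate):

import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum(x, nw=2.0, k=3):
    # Crude multitaper estimate: average of k tapered periodograms.
    n = len(x)
    tapers = dpss(n, nw, k)                      # Slepian tapers
    spec = np.mean([np.abs(np.fft.rfft(t * x))**2 for t in tapers], axis=0)
    freqs = np.fft.rfftfreq(n, d=1.0)            # cycles/year for annual data
    return freqs, spec

rng = np.random.default_rng(1)
resid = rng.normal(size=80)                      # stand-in residual series

freqs, spec = multitaper_spectrum(resid)
f_cut = 0.025                                    # step boundary, cycles/yr
low = (freqs > 0) & (freqs <= f_cut)
enhancement = spec[low].mean() / spec[freqs > f_cut].mean()

# Low-frequency component of unresolved variance: the enhancement factor
# times the nominal white-noise spectral level times the bandwidth of the
# band (the white level spreads the total variance over 0-0.5 cycle/yr).
nominal_var = resid.var(ddof=1)
white_level = nominal_var / 0.5
lowfreq_var = enhancement * white_level * f_cut
print(f"enhancement ~ {enhancement:.1f}, low-frequency variance ~ {lowfreq_var:.3f}")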
My guess for the difference in the AD 1600 network is that, based on the spectrum test, we
did not reject the white noise null hypothesis for the residuals. So there was no variance
enhancement factor for that, or any subsequent, network. It would appear that your method
argues for significant serial correlation in that case. Not sure why we come to different
conclusions in this case (perhaps using different criteria for testing for the significance
of redness in the spectrum/serial correlation), but that's probably the reason...
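For what it's worth, one simple whiteness criterion looks like this in Python (a sketch
with placeholder residuals; your test may well differ, which could explain the
discrepancy):

import numpy as np

def lag1_whiteness_test(resid):
    # One simple criterion (there are others, which may be where we differ):
    # reject white noise if the lag-1 autocorrelation exceeds ~2/sqrt(N).
    x = resid - resid.mean()
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # lag-1 autocorrelation
    n = len(x)
    threshold = 2.0 / np.sqrt(n)               # ~95% level under the white null
    return r1, threshold, abs(r1) > threshold

resid = np.random.default_rng(2).normal(size=80)  # stand-in residuals
r1, thr, reject = lag1_whiteness_test(resid)
n_eff = len(resid) * (1 - r1) / (1 + r1)          # equivalent N' if modeled as AR(1)
print(f"r1 = {r1:+.2f} (threshold {thr:.2f}); reject white noise: {reject}; N' ~ {n_eff:.0f}")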
I hope that clarifies this. Please keep me in the loop on this. I've copied to Scott, who
may have some additional insights here, since we've been dealing w/ these issues now in the
RegEM estimates (Scott: did we ever reject the white noise null hypothesis in the residuals
for any of our proxy-based NH reconstructions in the paper submitted to J. Climate? I don't
recall).
Thanks,
mike
At 04:33 PM 8/29/2003 +0100, you wrote:

Hi Mike,
after a few bits of holiday here and there, I've now had time to complete my (initial)
approach to estimating reconstruction errors on your NH temperature reconstruction.
This is all based on the calibration residuals that you kindly sent me a few weeks ago.
My rationale for doing this was that I wanted uncertainty/error estimates that were
dependent on the time scale being considered (e.g. a decadal mean, an annual mean, a
30-year mean, etc.). I didn't think you had published timescale-dependent errors, hence
my attempt.
A second reason is that I wanted to be able to model (i.e., stochastically generate)
time series of the errors, with appropriate timescale characteristics. Again, I didn't
think that I could get this from your published results.
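For illustration, a minimal Python sketch of the kind of stochastic generation I have in
mind, assuming an AR(1) error model (the residual series and numbers below are
placeholders):

import numpy as np

def simulate_ar1_errors(resid, length, n_series, seed=None):
    # Fit AR(1) to the calibration residuals, then generate synthetic error
    # series with matching variance and lag-1 autocorrelation.
    rng = np.random.default_rng(seed)
    x = resid - resid.mean()
    phi = np.dot(x[:-1], x[1:]) / np.dot(x, x)       # AR(1) coefficient
    sigma = x.std(ddof=1)
    sigma_e = sigma * np.sqrt(1.0 - phi**2)          # innovation std. dev.
    out = np.empty((n_series, length))
    out[:, 0] = rng.normal(0.0, sigma, n_series)     # start from stationary dist.
    for t in range(1, length):
        out[:, t] = phi * out[:, t - 1] + rng.normal(0.0, sigma_e, n_series)
    return out

# e.g. 1000 synthetic 600-year error histories from an 80-year residual series
resid = np.random.default_rng(3).normal(size=80)     # placeholder residuals
errors = simulate_ar1_errors(resid, length=600, n_series=1000, seed=4)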
The attached document summarises the progress I've made. There are a few questions I
have, and I'm concerned that the reduction in uncertainty with increasing time scale is
too great. Perhaps one should be ultra conservative and have no reduction with time
scale? Yet surely there ought to be some cancelling of partly uncorrelated errors? The
document is not meant to form part of any paper on this (I hope to use the errors in a
paper, but the point of that paper is trend detection, not estimating errors); it just
seemed appropriate to write it up like this to inform you of what I've done so far.
Any comments or criticisms will be very useful.
Cheers
Tim
Dr Timothy J Osborn
Climatic Research Unit
School of Environmental Sciences, University of East Anglia
Norwich NR4 7TJ, UK
e-mail: t.osborn@uea.ac.uk
phone: +44 1603 592089
fax: +44 1603 507784
web: http://www.cru.uea.ac.uk/~timo/
sunclock: http://www.cru.uea.ac.uk/~timo/sunclock.htm

______________________________________________________________
Professor Michael E. Mann
Department of Environmental Sciences, Clark Hall
University of Virginia
Charlottesville, VA 22903
_______________________________________________________________________
e-mail: mann@virginia.edu Phone: (434) 924-7770 FAX: (434) 982-2137
http://www.evsc.virginia.edu/faculty/people/mann.shtml
