In the previous post, we showed that for a ^{21}Ne sample the apparent age increases with erosion rate:

Let’s repeat this for a ^{36}Cl sample. From the ACE homepage, go to the ^{36}Cl data section and download the MK03-5-Mo sample listed in the ^{36}Cl template file:

This sample has no erosion rate, so import it into Excel/OpenOffice:

Add some identical rows, and then create a new column titled ‘erosion rate’. In this column, start with 1E-5 cm / yr and increase in steps for each row:

Now save this file, import into ACE and compute ages. The result? In the Sample Browser, select ‘erosion rate’ to sort by, and then ‘age’. Now click on **Plot Sort Attributes**:

For our ^{36}Cl sample, computed ages *decrease* with assumed erosion rate. Why is this? As with our ^{21}Ne example, insight is given by looking at the Production Rate Profile:

Because of the moderation of high-energy neutrons just below the surface of the earth and diffusion of low-energy neutrons back into the atmosphere, low-energy neutrons are more concentrated at depth than at the surface (e.g. Phillips et al., 2001). This means that unlike other nuclides, a ^{36}Cl sample just below the surface has a higher cosmogenic nuclide production rate than the same ^{36}Cl sample at the surface. The production rate profile suggests that at a depth of about 20 cm, the total production rate is about the same as the surface production rate. Let’s increase the erosion rates in the csv file and see what happens:

So an erosion rate of 0.002 cm / yr gives a similar computed age as for zero erosion.
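The shape of this behaviour can be mimicked with a toy production profile (all numbers below are illustrative only, not ACE’s calibrated values): spallation decays exponentially with mass depth, while the low-energy neutron capture term is depleted right at the surface because thermal and epithermal neutrons diffuse back into the atmosphere, so total production peaks a short distance below the surface:

```python
import math

def total_production(z_cm, density=2.65):
    """Toy 36Cl production rate at depth z_cm (cm). Illustrative numbers only."""
    z = z_cm * density  # mass depth, g/cm^2
    spallation = 40.0 * math.exp(-z / 160.0)
    # neutron capture: suppressed at the surface, building up just below it
    capture = 20.0 * (1.0 - 0.5 * math.exp(-z / 25.0)) * math.exp(-z / 160.0)
    return spallation + capture
```

In this toy model production just below the surface exceeds the surface value before decaying away at depth; the depth at which the total falls back to the surface value depends on the real neutron transport parameters, so don’t read the ~20 cm figure out of these made-up coefficients.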

]]>In the previous post, we showed how to determine the sensitivity of sample ages to erosion. We used a stable nuclide (^{21}Ne) with a large cosmogenic inventory. The sensitivity is shown below:

The sensitivity of sample ages (y axis, years) to erosion rate (x axis, centimeters per year) appears to be linear. This can be verified by exporting the samples to a csv file using the **Export Samples** button and then reading the file into Excel/OpenOffice/Matlab:

Consistent with a linear relationship, the correlation between erosion rate and computed age has R^{2}=0.9923. Why?
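Outside a spreadsheet, the same check takes a few lines of Python (the values below are hypothetical stand-ins for the exported csv columns, not actual ACE output):

```python
import numpy as np

# hypothetical exported values: erosion rate (cm/yr) and computed age (yr)
erosion = np.linspace(0.0, 1.0e-4, 11)
age = np.array([374560., 383000., 391800., 400900., 410400., 420200.,
                430500., 441100., 452200., 463700., 475700.])

slope, intercept = np.polyfit(erosion, age, 1)    # least squares line
r_squared = np.corrcoef(erosion, age)[0, 1] ** 2  # coefficient of determination
```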

Perhaps it should first be noted that with the large inventory, the sample is quite old, and temporal variability in things like geomagnetic intensity/polarity tends to integrate towards a constant value. However further insight can be gained by examining the sample’s production rate profile, which can be invoked by using the Production Rate Utility:

The production rate utility is useful as it highlights the sensitivity of the sample to erosion. For the case of our ^{21}Ne sample, production is due exclusively to spallation, and at least for the upper part of the surface of the Earth it can be approximated as a straight line. Could this explain the apparent linearity between erosion rate and age for this sample? The sample has a cosmogenic inventory of 21.7 x 10^{6} a / g, and ACE calculates an age for no erosion of 374560 yr. For a stable nuclide, where inventory = mean production rate x age, ACE calculates an average production rate of about 57.9 a / g / yr, which is the near surface value for the production rate profile shown above. However if the sample was eroding, the production rate would be less and the sample age would be older, as indicated by the correlation.

If the sample were eroding, how much would the production rate change? Let’s use the ACE computed age of 464360 yr corresponding to the erosion rate of 0.0001 cm / yr. In this case, over this time 46.4 cm of erosion would have occurred. By zooming into the production rate plot we find that the production rate at this depth is 38 a / g / yr:

In terms of the sample history, this would suggest that at 464360 years before the present the sample was at a depth of 46.4 cm below the surface, and experienced a production rate at this time of ~38 a / g / yr. So the surface (present day) production rate is 58 a / g / yr, the production rate at 46.4 cm (initial exposure time) is 38 a / g / yr, and assuming a linear production rate profile the mean production rate between present day and 464360 yr would therefore be (58 + 38)/2 = 48 a / g / yr. In this case the sample age would be 21.7 x 10^{6} / 48 = 452 000 yr, quite close to the ACE calculated value of 464360 yr. Why the difference? The figure below shows the actual change in spallation production rate with time for this sample:

The effects of erosion and temporal variability are evident in this plot as the slowly increasing production rate with time as the sample approaches the surface, and also the fluctuations on shorter timescales resulting (mostly) from changes in geomagnetic intensity. The mean production rate (black line) here is 46.7 a / g / yr, corresponding to a sample age of 21.7 x 10^{6} / 46.7 = 464 700 yr, as predicted by ACE.
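The arithmetic above is easy to check:

```python
inventory = 21.7e6                      # cosmogenic inventory, atoms / g

# linear-profile estimate: mean of surface (58) and initial-depth (38) rates
p_linear = (58.0 + 38.0) / 2.0          # 48 atoms / g / yr
age_linear = inventory / p_linear       # ~452,000 yr

# mean rate from the full time-stepped history (black line in the plot)
p_history = 46.7                        # atoms / g / yr
age_history = inventory / p_history     # ~464,700 yr
```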

So the production rate profile can be useful when examining the sample sensitivity to erosion. The above example is a little simplistic, as ^{21}Ne is a stable nuclide without production from muons/low-energy neutrons and the sample is quite old, so temporal variability plays a lesser role. Also, the sample’s cosmogenic inventory is reported as thickness corrected. The next post will repeat this exercise for a finite width ^{36}Cl sample, with very different results.

One of the largest uncertainties in cosmogenic nuclide dating is the erosion rate: because production rates vary with depth in the Earth, even modest erosion can significantly change a sample’s production history. As ACE uses time stepping, erosion is modelled slightly differently to the time-averaged erosion production rate calculation (e.g. Lal, 1991). Essentially ACE works backwards in time from the cosmogenic inventory until the inventory is zero. Therefore, at each timestep ACE ‘adds’ a layer that was eroded during that timestep and determines the change in production rate resulting from the fact that the sample was then at depth.
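This backwards stepping can be sketched in a few lines (a deliberately crude version: a single exponential spallation profile, a fixed 1000 yr step, and none of the geomagnetic or sea level corrections ACE applies; the function name and parameter values are ours):

```python
import math

def date_sample(inventory, erosion_rate, dt=1000.0, p_surface=58.0,
                attenuation=160.0, density=2.65):
    """Step backwards in time, stripping inventory until none remains.
    Each step further back, the sample sits one eroded layer deeper,
    so its production rate at that time was lower."""
    depth_cm = 0.0   # depth below the (past) surface
    age = 0.0
    while inventory > 0.0:
        p = p_surface * math.exp(-depth_cm * density / attenuation)
        inventory -= p * dt            # remove the inventory produced in this step
        depth_cm += erosion_rate * dt  # one eroded layer deeper, further back in time
        age += dt
    return age
```

With zero erosion this recovers roughly inventory divided by the surface rate; with erosion the early production rates are lower, so the computed age is older, as the ^{21}Ne example that follows demonstrates.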

The following shows how to determine the sensitivity of a sample to erosion rates. We begin with the default samples that ship with ACE:

Plotting ‘sample id’ against ‘cosmogenic inventory’, the sample with largest cosmogenic inventory is the ^{21}Ne sample 05BT20:

For purposes here we delete the other samples and save this one to a csv file using the **Export Samples** button at the bottom of the sample browser:

Where we called the csv file ’05BT20.csv’ to remind us of the exported sample. Now open this file in a spreadsheet editor such as Excel or OpenOffice:

The field ‘erosion rate’ is already listed as a column in this file, as we exported the sample with the ‘All’ view attribute. However for other or new files erosion rate can be added as a new column anywhere in the spreadsheet. To examine the sensitivity of this sample to erosion, we assume an erosion rate of somewhere between 0 and 1 mm/ka. From the ACE variables page, we see that the units for erosion rate are cm/yr, so we want a maximum of 1E-4 cm/yr (1 mm/ka). In the spreadsheet program, duplicate the sample row ten times and increase the erosion rate by 1E-5 cm/yr for each row. This will allow us to see trends in sample age with erosion rate:
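If you prefer scripting to a spreadsheet, the duplication can be done in a few lines of Python (the function below is ours, operating on a row read with `csv.DictReader`; the ‘erosion rate’ column name follows the export above):

```python
def erosion_series(sample_row, n=11, step=1.0e-5):
    """Return n copies of an exported sample row with erosion rates
    0, step, 2*step, ... (cm/yr)."""
    rows = []
    for i in range(n):
        row = dict(sample_row)          # copy so each row is independent
        row["erosion rate"] = i * step
        rows.append(row)
    return rows
```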

Now delete any unused columns, save the file, and import it back to ACE using the **Import Samples** button at the bottom of the sample browser. You don’t have to assign different names to each sample with a different erosion rate, as ACE will see that the sample names repeat and allocate each a unique value when it imports the csv file:

Now we can run each sample with a different erosion rate and calculate the computed ages using the experiment ’21Ne Demonstration Experiment’ that ships with ACE. After the experiment is performed we plot ‘age’ vs ‘erosion rate’ using the **Plot** tool in the sample browser:

So why export this sample to a spreadsheet program just to change the erosion rate? ACE is designed to demonstrate the *sensitivity* of a computed sample age to measurement and theoretical uncertainties such as erosion. If ACE allowed users to type in a different number for the erosion rate and recompute, very little would be learned.

A reviewer of our Quaternary Geochronology manuscript noted that when using Dunai (2001) scaling in ACE, HLSL production rates are about 10% lower than for other scalings. This is verified by running two ACE experiments with different scaling and comparing calibrated output:

For the Demonstration experiment (using Desilets and Zreda 2003 scaling) the HLSL spallation production rate is ~5 a / g / yr and for the experiment using Dunai (2001) scaling it is ~4.5 a / g / yr. Details of scalings are listed here; however, this result is different from other published results (e.g. Table 6 of Balco et al., 2008). Upon further investigation we found the difference was in the calculation of sea level pressure. The following comes from DunaiScalingFunctions.py in the Components directory:

slpressure = s["sea level pressure"] * (10. / g_0)

Meaning, ‘for sea level pressure take the NCEP reanalysis and convert to g cm^{-2}’. Although this seems a reasonable estimate of sea level pressure, in 2001 Dunai assumed a globally uniform value of 1030 g cm^{-2}. Our NCEP-derived sea level pressure is shown here. By changing the above line to a constant 1030 g cm^{-2}, we get a HLSL spallation production rate of ~5 a / g / yr, in line with other scalings.
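The two choices can be compared side by side (a sketch; the function name and flag are ours, and we assume the reanalysis field is in hPa, as implied by the 10/g_0 conversion):

```python
def sea_level_pressure(s, g_0, dunai_constant=False):
    """Sea level pressure in g/cm^2: either the globally uniform value
    assumed by Dunai (2001), or the NCEP reanalysis value converted
    from hPa as in DunaiScalingFunctions.py."""
    if dunai_constant:
        return 1030.0
    return s["sea level pressure"] * (10.0 / g_0)
```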

So which is correct? Using a sea level pressure of 1030 g cm^{-2} is probably more in keeping with the spirit of the Dunai (2001) paper, but using reanalysis data gives a better calibration, reflected by a lower chi-square (see calibration results in figure). What is the appropriate choice?

The problem is that ACE is an open source development environment, which means that if you don’t like the way we have implemented something you can easily go to the source directory and change it. For the foreseeable future we expect to be doing most of the development (some exciting things in the pipeline!) but that shouldn’t stop anyone else from changing the code.

In this way, any change to the code should be considered a different version, and more importantly, if we change the code one way and someone else changes the code another, we have a version fork. At the moment we are taking a ‘wait and see’ approach to how to resolve this, but are open to suggestions. We imagine that at some time towards the end of 2009 we will release an ‘official’ version of ACE, which will reflect exactly what is written in the article. From that point forwards, we will probably note updates on the website as they occur. Is that reasonable?

]]>
| Nuclide | Spallation | Muons | Low-Energy Neutron Capture |
|---------|------------|-------|----------------------------|
| ^{3}He  | √ |   |   |
| ^{10}Be | √ | √ |   |
| ^{14}C  | √ | √ |   |
| ^{21}Ne | √ |   |   |
| ^{26}Al | √ | √ |   |
| ^{36}Cl | √ | √ | √ |

The only default-supplied nuclide for which low-energy neutron capture is considered is ^{36}Cl. The procedure by which ACE calibrates and dates using low-energy neutron capture is documented in a previous article.

Muon production rates are not calibrated in ACE as, compared to spallation, they can generally be considered second order. During ACE’s experimental design phase muon production rates are requested as inputs to calibration:

In the above example, muon production rates are given as 2% of total production for both fast and slow muons, a common choice for ^{10}Be and ^{26}Al. For ^{36}Cl, muon production rates are input as muon flux rates (muons cm^{-2} yr^{-1}), with 175 and 700000 the default values for slow and fast, respectively.

In a calibration, ACE takes these muon inputs, determines the component of the nuclide inventory which can be attributed to muon production and subtracts it from the total. In this way, HLSL spallation and low-energy neutron capture production rates will be directly proportional to the inventory and HLSL rates can be estimated via linear regression. However, muon production rates scale differently to spallation production rates, and this must be accounted for during dating and calibration.
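The subtraction step amounts to something like the following (a simplification of the idea only; in ACE the muon components are scaled through time rather than removed as flat percentages):

```python
def non_muon_inventory(total_inventory, fast_pct=2.0, slow_pct=2.0):
    """Remove the assumed fast and slow muon percentages from a sample's
    inventory, leaving the part attributed to spallation (and, for 36Cl,
    neutron capture) for the linear regression."""
    muon_fraction = (fast_pct + slow_pct) / 100.0
    return total_inventory * (1.0 - muon_fraction)
```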

So how does ACE treat muons? Essentially just as described above. During calibration, muon contributions are removed according to the percentage and scalings chosen, and during dating post-calibrated percentages and scalings are used. Something to be aware of is that during the calibration, ACE doesn’t know what the total production rates are, so cannot tell if the percentages of total are accurate. In the above experiment 2% contributions from fast and slow muons were input. However, post calibration the muon contributions will look slightly different:

While the pre-calibrated muon production rates were entered as 2%, the post-calibrated production rates are 1.91%. While this is unfortunate, the total rates are known reasonably well in advance so differences pre- and post-calibration should be small. If you really need exact numbers for post-calibrated muon contributions, try making a new experiment and changing the inputs slightly. For example if 2% is required post-calibration, try 2.1% pre-calibration and iterate from there.

Generally, muon contributions are small and pre- and post-calibration differences should be almost inconsequential. To see some examples, take a look at the muon production rate profiles with depth on the Production Rate Profile utility.

]]>- s["id"] is the sample name,
- s["latitude"] is the sample latitude,
- s["erosion rate"] is the erosion rate for the sample.

For variables which are imported to and exported from ACE these attributes and units are listed here (in) and here (out). However, in ACE intermediate and computed variables are also attached to each sample in the code. For example:

- s["rigidity"] for the cutoff rigidity of a sample at a particular timestep
- s["age"] for the computed sample age

When developing ACE, any variable name can be used and attached to a sample for development purposes. However to be shown at the end of an ACE calculation as a new column in the Sample Browser, the ‘Output Att’ checkbox must be ticked in the Attribute Editor. This is demonstrated below.

Here we show how to introduce new variables into ACE. The variable will be the difference between calculated and independent ages for a calibration sample. This variable will then show how accurately any experiment reproduces the independently determined age of a calibration sample. This example is a little silly, as an easy solution to compare ages would be to select the ‘Ages’ option in the **View** menu of the Sample Browser, and then export it to a csv file:

Then the difference between ages could be calculated in a spreadsheet program. However, this would only work for existing sample attributes, not intermediate or user created ones.

First, create the new variable in the Attribute Editor. Let’s call it ‘age difference’. As it will be a number, not a string, we allocate it the type ‘Float’:

And we select **Output Att** to make this attribute available for viewing at the end of the calculation. Now repeat this process for the uncertainty in the age difference, which can be called ‘age difference uncertainty’.

Now to the code. As sample ages are among the final things computed by the dating workflows, the calculation of age difference should be near the end of the procedure. Here is the 36Cl Time Stepping Dating workflow, listed in the ACE user repository ‘workflows’ directory:

dating ["36Cl"]
InitInventories => ChemicalCompositions
ChemicalCompositions => StepForInventories
StepForInventories.loop => Factor<geomagneticIntensity>
Factor<geomagneticIntensity> => Factor<seaLevel>
Factor<seaLevel> => InstElevation
InstElevation => AtmosphericPressure
AtmosphericPressure => Factor<geomagneticLatitude>
Factor<geomagneticLatitude> => Factor<geographicScaling>
Factor<geographicScaling> => StepDiffusionEquation
StepDiffusionEquation => SCPDating
SCPDating => NonCosmogenicProduction
NonCosmogenicProduction => InventoryChangeCalculation
InventoryChangeCalculation => StepForInventories
StepForInventories => OutputFor36ClDating

All of the modules are in the ACE Components directory (src/ACE/Components/) and the last procedure is called OutputFor36ClDating. This file can be opened with any text editor:

def calculateInvcUncertainty(self, s):
    # calculate =SQRT(measured inventory uncertainty^2+(0.2*Inv_r)^2)
    prod1 = 0.2 * s["nucleogenic inventory"]
    term1 = s["measured inventory uncertainty"]**2.0
    term2 = prod1**2.0
    term3 = s["Inv_c_uncertainty"]**2.0
    # Do not include production rate uncertainty
    #sum1 = term1 + term2 + term3
    sum1 = term1 + term2
    s["cosmogenic inventory uncertainty"] = math.sqrt(sum1)

This routine computes the cosmogenic inventory uncertainty, which from the above is calculated as the square root of the sum of squared measured and nucleogenic inventory uncertainties. And actually, it doesn’t even do that, as this routine is not called within OutputFor36ClDating.py. To use this routine, we need to add the following below the ‘super’ line:

super(OutputFor36ClDating, self).__call__(samples)

for s in samples:
    self.calculateAgeDifference(s)

This is a good place to calculate the difference between computed and independent ages for calibration samples. In the code we add the following routine, named calculateAgeDifference to match the call statement above:

def calculateAgeDifference(self, s):
    # calculate the difference in computed and independent ages of samples
    if s["independent age"] > 0:
        s["age difference"] = s["age"] - s["independent age"]
        # Now calculate uncertainty in age difference
        term1 = s["age uncertainty"]**2.0
        term2 = s["independent age uncertainty"]**2.0
        sum1 = term1 + term2
        s["age difference uncertainty"] = math.sqrt(sum1)

The *if* statement checks that the sample has an independent age, so that the code only operates for samples from the calibration databases. Note the Python conventions here: a colon at the end of an *if* statement, and an indented block for the code to execute when the *if* statement is true.

Now date some calibration samples using the workflow. Then to make viewing results easier (and to check the numbers), add the new attributes to the Ages option of the **View** menu using the View Editor:

Now differences can be viewed alongside existing ages:

The age differences can then be plotted to see how well computed sample ages match independent ages. For example, the following plot was made by sorting first by age difference and then by id in the Sample Browser, and clicking ‘plot’:

So ignoring the uncertainties, the default 36Cl experiment underestimates the age of sample 9353 the most (top), by under 2000 years, and overestimates the age of sample MC-5 the most (bottom), by over 4000 years. Could this be a result of secular variability? If so there should be a trend with sample age, which we could see by plotting age difference against independent age:

The amended file OutputFor36ClDating.py can be downloaded here. If you use it, don’t forget to create the new variables ‘age difference’ and ‘age difference uncertainty’ in the Attribute Editor.

]]>- Spallation of ^{40}Ca
- Spallation of ^{39}K
- Neutron capture of ^{35}Cl

As a result, to calibrate for ^{36}Cl a multiple regression must be performed to determine the relative importance of these processes. A further complication with ^{36}Cl is that state-of-the-art models of neutron capture of ^{35}Cl rely on a formalism of *epithermal* (energies between 0.1 MeV and 0.5 eV) and *thermal* (around 0.025 eV) neutron fluxes, where epithermal neutrons are generated by the moderation of fast neutrons, and thermal neutrons are generated by the moderation of epithermal neutrons (e.g. Phillips et al., 2001):

The total production of ^{36}Cl from neutron capture is the sum of these equations. To calibrate via a least squares regression the above equations must be expressed in terms of Pf_0, the production rate of epithermal neutrons from fast neutrons at the air/ground interface, which is the source of epithermal neutrons and therefore also the original source of thermal neutrons. Here we explain how we find Pf_0 using a multiple regression in ACE.

Calculations using the diffusion equation solution for neutron capture (detailed below) show that the total production rate of ^{36}Cl from neutron capture is a linear function of Pf_0, plus a term which is directly proportional to the muon production rate P_mu_n_0:

Pn = c1 * Pf_0 + c2 * P_mu_n_0    (1)

The dependence of neutron capture on muons results from the fact that fast muons can generate photodisintegration reactions which emit neutrons, which can then be captured by ^{35}Cl (Gosse and Phillips, 2001, Section 3.3.4). We verify this using Maxima, a free algebraic manipulation software program very much like Mathematica. Maxima can be downloaded here:

http://maxima.sourceforge.net/

(Macintosh users can install maxima automatically using fink.) Once installed, the linear dependence of neutron capture production rates on Pf_0 can be verified using the batch file Diffusion Equation Linearity File. Maxima programs are very simple, so the learning curve is shallow. The first lines of the code add some libraries for later use; line 3 is the following:

Pn = S_th*ST*Q_th*f_th/LAMBDA_th_ss
       *(phistar_th_ss + (1+R_mu_p)*FscrDELTAphistar_eth
         + (1+R_mu_p*R_th)*FscrDELTAphistar_th_ss + R_mu_p*phistar_th_ss)
   + S_th*ST*Q_eth*(1-P_eth_ss)*f_eth/LAMBDA_eth_ss
       *(phistar_eth_ss + (1+R_mu*R_eth)*FDELTAphistar_eth
         + R_mu*phistar_eth_ss);

Which is the sum of equations 3.54 and 3.55 from Gosse and Phillips (2001) given above, with some additional terms for thermal neutron scaling (S_th), sample shielding (ST), and sample depth rescaling (Q_th and Q_eth). All terms match Gosse and Phillips notation as closely as possible; e.g. their thermal depth-rescaling factor is written above as Q_th.

The maxima batch file then expands the original expression for production rates from neutron capture into an expression which explicitly contains Pf_0 and P_mu_n_0. For example line 4 of the maxima file takes the original expression for production from neutron capture (using the % operator at the beginning of the line) and substitutes Q_th for more explicit terms (Equation 3.80, Gosse and Phillips, 2001):

Q_th = (phistar_th_ss*LAMBDA_f*(1-exp(-Zs/LAMBDA_f))
        + (1+R_mu_p)*FscrDELTAphistar_eth*L_eth_ss*(1-exp(-Zs/L_eth_ss))
        + (1+R_mu_p*R_th)*FscrDELTAphistar_th_ss*L_th_ss*(1-exp(-Zs/L_th_ss))
        + R_mu_p*phistar_th_ss*LAMBDA_mu*(1-exp(-Zs/LAMBDA_mu)))
       /(Zs*(phistar_th_ss + (1+R_mu_p)*FscrDELTAphistar_eth
             + (1+R_mu_p*R_th)*FscrDELTAphistar_th_ss + R_mu_p*phistar_th_ss));

This process of substitution continues to line 22, which contains the expression for R_mu (Equation 3.52, Gosse and Phillips, 2001):

R_mu = S_mu * P_mu_n_0 / (S_th * Pf_0 * R_eth);

This expression contains Pf_0 in the denominator, so that the dependence of neutron capture production rates is an inverse function of this term as well as direct functions of `phistar_eth_a` and `phistar_eth_ss` (Equation 3.26, Gosse and Phillips, 2001).

At this stage the maxima file has expressed the neutron capture production rate explicitly in terms of Pf_0. To see this expression click here. Line 25 of this file expands the entire expression into products of Pf_0:

isolate(expand(%),Pf_0);

Which makes for a very long expression. However, as a test we can insert numerical values calculated by other programs to see how Pn varies with Pf_0. As with expressions, we can substitute numerical values in Maxima. Line 29 of the Maxima script has the following:

%, f_th = 0.007997574;

which literally means “find all occurrences of f_th in the previous expression and insert the value 0.007997574”. For actual numbers we use output for the ^{36}Cl calibration sample named “9353” from the Phillips et al (1997) calibration database (detailed here), but in fact the numbers are irrelevant. As long as they are non-zero, they only determine the coefficients of Pf_0 and P_mu_n_0, terms that we are not actually interested in at this stage.

The maxima file then substitutes numerical values for all terms in the neutron capture production rate equation except for Pf_0 and P_mu_n_0. At the end of the maxima procedure, the output is:

Pn = .01046760791954185 Pf_0 + .002679086726981858 P_mu_n_0

Note the correspondence of this expression to Equation (1) above, now with actual values for the coefficients of Pf_0 and P_mu_n_0. This means that despite the complexity of the diffusion equation solution for neutron capture, the production rate of ^{36}Cl is a linear function of both Pf_0 and P_mu_n_0 (a result which could also be verified using the maxima coeff command). This allows us to use a multiple **linear** regression when determining the relative contribution of the three production mechanisms for ^{36}Cl. Furthermore, we have the coefficients for the linear regression in the maxima file; we just need to combine them into coefficients of Pf_0 and P_mu_n_0.
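The regression itself is then standard least squares. A synthetic sketch (all coefficients invented; in a real calibration the columns would be built from the per-sample coefficients derived in the maxima files):

```python
import numpy as np

# synthetic example: each sample's mean production rate is a linear
# combination of three unknown HLSL rates (Ca spallation, K spallation,
# Pf_0); the coefficient matrix here is random, for illustration only
rng = np.random.default_rng(0)
true_rates = np.array([60.0, 150.0, 600.0])
A = rng.uniform(0.1, 1.0, size=(12, 3))   # per-sample coefficients
y = A @ true_rates                        # 'observed' production rates

fitted, *_ = np.linalg.lstsq(A, y, rcond=None)
```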

To find the coefficients needed for the multiple linear regression, we use another, very similar Maxima file that instead of substituting numerical values expands the equations and collects like coefficients of Pf_0 and P_mu_n_0. The file Diffusion Equation Refactor Terms does just this, although to make things simple you will need to restart maxima to use it properly. After making all of the substitutions, it finds the coefficients of Pf_0 using the following maxima command (line 28):

coeff(%,Pf_0,1); /*Linear Term*/

This means “in the previous expression, find the linear coefficient of Pf_0”. For the sample 9353 above, this would be .01046760791954185. However we wish to express it in terms of variables so that the coefficient can be determined for all samples. The output from this maxima expression without numerical substitution is:

(%o93) 0 = %t91 + %t90 + %t89 + %t88 + %t87 + %t86 + %t85 + %t84 + %t83 + %t82 + %t81 + %t80 + %t79 + %t78 + %t77 + %t76 + %t75 + %t74 + %t73 + %t72 + %t71 + %t70 + %t69 + %t68 + %t67 + %t66 + %t65 + %t64 + %t63 + %t62 + %t61 + %t60 + %t59 + %t58 + %t57 + %t56 + %t55 + %t54 + %t53 + %t52 + %t51 + %t50 + %t49 + %t48 + %t47 + %t46 + %t45 + %t44 + %t43 + %t42 + %t41 + %t40 + %t39 + %t38 + %t37 + %t36 + %t35 + %t34 + %t33 + %t32 + %t31 + %t30 + %t29 + %t28 + %t27 + %t26 + %t25 + %t24 + %t23 + %t22

Meaning that the coefficient of Pf_0 is the sum of terms 22 through 91 in the maxima output. This is a lot of expressions, but represents the neutron capture production rate in its most expanded and least compact form. Many of the terms are similar to each other, and by trial and error they can be added together to make fairly compact expressions. For example line 30 of the maxima file groups together the first 10 expressions (more than 10 makes for a very long output):

f90(factor(%t22 + %t23 + %t24 + %t25 + %t26 + %t27 + %t28 + %t29 + %t30 + %t31));

The terms are added together, factored as much as possible, and output in fortran 90 form so they can easily be used in ACE. The output is:

-f_eth*LAMBDA_f*(P_eth_ss-1)*(Da*LAMBDA_f**2*L_eth_a*L_eth_ss**2* &
  SIGMA_eth_ss+Da*LAMBDA_f**3*L_eth_ss**2* &
  SIGMA_eth_ss-D_eth_ss*LAMBDA_f**2*L_eth_a*L_eth_ss**2*R_eth* &
  SIGMA_eth_a-Da*LAMBDA_f**3*L_eth_ss**2*R_eth*SIGMA_eth_a+Da* &
  LAMBDA_f**4*L_eth_ss*R_eth*SIGMA_eth_a+D_eth_ss*LAMBDA_f**4* &
  L_eth_a*R_eth*SIGMA_eth_a+D_eth_ss*Da*L_eth_a*L_eth_ss**2* &
  R_eth+Da**2*LAMBDA_f*L_eth_ss**2*R_eth-Da**2*LAMBDA_f**2* &
  L_eth_ss*R_eth-D_eth_ss*Da*LAMBDA_f**2*L_eth_a* &
  R_eth-D_eth_ss*Da*L_eth_a*L_eth_ss**2-D_eth_ss*Da*LAMBDA_f* &
  L_eth_ss**2)*S_th*ST/ &
  (LAMBDA_eth_ss*(Da*L_eth_ss+D_eth_ss*L_eth_a)*(LAMBDA_f**2* &
  SIGMA_eth_a-Da)*(LAMBDA_f**2*SIGMA_eth_ss-D_eth_ss)*Zs)

Unfortunately maxima has no Python output mode to use directly in ACE, but the two forms are very similar. In fact the only changes are “\” instead of “&” for newline characters and “self.L_eth_ss” instead of “L_eth_ss” for compatibility with the rest of ACE. In the maxima file the term above is labelled %t95 (term 95). The equivalent code is then copied and pasted into the ACE Component SumCalibrationComponents.py as self.term95:

self.term95 = \
    -self.f_eth*self.LAMBDA_f*(self.P_eth_ss-1)*(self.Da*self.LAMBDA_f**2*self.L_eth_a*self.L_eth_ss**2* \
    self.SIGMA_eth_ss+self.Da*self.LAMBDA_f**3*self.L_eth_ss**2* \
    self.SIGMA_eth_ss-self.D_eth_ss*self.LAMBDA_f**2*self.L_eth_a*self.L_eth_ss**2*self.R_eth* \
    self.SIGMA_eth_a-self.Da*self.LAMBDA_f**3*self.L_eth_ss**2*self.R_eth*self.SIGMA_eth_a+self.Da* \
    self.LAMBDA_f**4*self.L_eth_ss*self.R_eth*self.SIGMA_eth_a+self.D_eth_ss*self.LAMBDA_f**4* \
    self.L_eth_a*self.R_eth*self.SIGMA_eth_a+self.D_eth_ss*self.Da*self.L_eth_a*self.L_eth_ss**2* \
    self.R_eth+self.Da**2*self.LAMBDA_f*self.L_eth_ss**2*self.R_eth-self.Da**2*self.LAMBDA_f**2* \
    self.L_eth_ss*self.R_eth-self.D_eth_ss*self.Da*self.LAMBDA_f**2*self.L_eth_a* \
    self.R_eth-self.D_eth_ss*self.Da*self.L_eth_a*self.L_eth_ss**2-self.D_eth_ss*self.Da*self.LAMBDA_f* \
    self.L_eth_ss**2)/ \
    (self.LAMBDA_eth_ss*(self.Da*self.L_eth_ss+self.D_eth_ss*self.L_eth_a)*(self.LAMBDA_f**2* \
    self.SIGMA_eth_a-self.Da)*(self.LAMBDA_f**2*self.SIGMA_eth_ss-self.D_eth_ss))

For calibration purposes these terms are copied into ACE and added up to form the coefficient of Pf_0. When coefficients for all samples have been determined the multiple linear regression can be performed. The result is the calibrated production rates for ^{36}Cl, which can be obtained for different scalings and calibration datasets using the Experiment Browser:

]]>

For ages less than 41 ka, the Channell et al PISO-1500 record is about half that of SINT-2000, which suggests that the differences in these two records could be significant for cosmogenic nuclide dating. Unfortunately the PISO-1500 record only begins at 5 ka BP, and to use it we require a continuous record from the present day. Extrapolation is always problematic, however Channell et al cite a ‘recent’ intensity of 7.5 x 10^{22} Am^{2}, so for this record we use this value as a present day intensity and then linearly interpolate over the 0-5 ka period of missing data. This approach gives the intensity values a similar shape to the recent intensity changes suggested by SINT-2000 (see the above figure).
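The gap filling is a one-line interpolation. A sketch with made-up record values (only the 7.5 x 10^{22} Am^{2} present day value comes from Channell et al):

```python
import numpy as np

present = 7.5e22                                  # Am^2, 'recent' intensity
t_record = np.array([5.0, 6.0, 7.0])              # ka BP (hypothetical points)
vadm_record = np.array([4.0e22, 3.9e22, 4.2e22])  # Am^2 (hypothetical values)

# linearly interpolate across the missing 0-5 ka interval
t_fill = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
vadm_fill = np.interp(t_fill, [0.0, 5.0], [present, vadm_record[0]])

t = np.concatenate([t_fill, t_record])
vadm = np.concatenate([vadm_fill, vadm_record])
```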

To examine the effect of this new dataset, we imported it into ACE and compared results for ^{36}Cl against the original Guyodo and Valet 1999 dataset (described in Data Collections). After following the standard module creation procedures, we make two experiments, both with Desilets and Zreda (2003) scaling. This scaling accounts for changes in geomagnetic intensity:

The difference between the two experiments shown above is the choice of geomagnetic intensity record. After calibrating these experiments, we find that for the Guyodo and Valet record the reduced chi-square is 1.32, suggesting a reasonable match of calibrated with independent inventories:

The same calibration of ^{36}Cl HLSL production rates conducted using the Channell et al 2009 record has a reduced chi-square of 1.71:

so it could be said that, all other things being equal, the Guyodo and Valet paleointensity record produces sample ages of the ^{36}Cl calibration data set closer to the independent ages than the Channell et al paleointensity dataset. More important is the fact that calibrated HLSL production rates are systematically lower for the Channell et al reconstruction compared to the Guyodo and Valet reconstruction:

For calcium spallation and low-energy neutron capture, production rates are 5% lower; for potassium spallation, the production rate is 11% lower. The Channell et al paleointensity record suggests a weaker geomagnetic field over most of the age span of the ^{36}Cl calibration database (0-49 ka BP), which according to ACE means lower calibrated production rates. Why is this? The following plot uses the two experiments above and calculates ages of the calibration database used here to identify where major differences arise:

This plot has been created by dating the calibration database using the two experiments made above, sorting the output first by **id** and then by **age**, and then clicking **plot selected attributes** in the Sample Browser:

The different experiments then plot on top of each other, and only the differences show up. In this case it appears that the major differences are for the Meteor Crater samples MC-1, MC-3, MC-4 and MC-5. As these are the oldest samples in the calibration database (49 ka), it is reasonable to expect that they are most sensitive to changes in the paleointensity record.

These results suggest that to further explore whether the Channell et al (2009) paleointensity record should be used instead of the older Guyodo and Valet (1999) record, the sensitivity of results for other nuclides, scalings and calibration databases should be examined first. As with all other datasets used in ACE, details regarding the dataset publication, source and modification are available in the Data Collections section together with a csv file to download.

]]>

The first version of ACE had a method to calculate a correction factor for snow, but only for the time invariant case:

This is a simple calculation based on Equation 3.75 of Gosse and Phillips 2001 for changes in neutron flux resulting from mass shielding:

This takes the form S_snow = Φ(Z_s)/Φ(0) = e^{-Z_s/Λ}, where Φ(0) is the flux in the absence of snow cover, Φ(Z_s) is the flux with a snow cover, Z_s is the mass thickness of snow in g cm^{-2} and Λ is the attenuation coefficient for neutrons (default value of 160 g cm^{-2} in the above panel). The panel above multiplies the snow thickness by the density to get Z_s, divides by the attenuation coefficient and raises e to the negative of this number.
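In code the time-invariant factor is one line (a sketch of the calculation the panel performs, not ACE’s source):

```python
import math

def snow_correction(snow_depth_cm, snow_density, attenuation=160.0):
    """Time-invariant snow shielding factor (Gosse and Phillips 2001,
    Eq. 3.75): exponential attenuation by the snow mass depth."""
    mass_depth = snow_depth_cm * snow_density   # g/cm^2
    return math.exp(-mass_depth / attenuation)
```

For example, 50 cm of snow at 0.3 g cm^{-3} gives exp(-15/160) ≈ 0.91, a roughly 9% reduction in production.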

A problem with this is that snow cover is rarely constant in time, and because of the exponential dependence on depth the snow shielding correction factor is not linear in the time history of snow cover. As a result a more realistic snow cover correction term includes the temporal variability of snow cover (Gosse and Phillips 2001, Eq. 3.76):

S_snow = (1/12) Σ_{i=1}^{12} e^{-Z_{s,i}/Λ}, where S_snow is the snow correction factor and Z_{s,i} is the snow cover during month i. The snow cover has units of g cm^{-2} and is the product of snow depth and snow density. ACE now includes a monthly snow cover utility which is useful for users with monthly data comprising snow depth and snow density:

All variables and units are the same as for the time invariant mass shielding correction factor, but are monthly values instead of annual.
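A sketch of the monthly calculation (our function, not ACE’s utility code):

```python
import math

def monthly_snow_correction(monthly_mass_depths, attenuation=160.0):
    """Snow correction averaged over monthly snow covers in g/cm^2
    (Gosse and Phillips 2001, Eq. 3.76)."""
    factors = [math.exp(-z / attenuation) for z in monthly_mass_depths]
    return sum(factors) / len(factors)
```

Because of the exponential, this differs from using the annual mean snow cover: six snow-free months plus six months at 32 g cm^{-2} give a correction of about 0.909, whereas the annual mean cover of 16 g cm^{-2} would give exp(-0.1) ≈ 0.905. This is the nonlinearity noted above.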

]]>