MONTE CARLO SIMULATIONS OF THE THREE-DIMENSIONAL NEUTRON FLUX PROFILE INSIDE THE R.I.N.S.C. REACTOR

This study sought the neutron flux profile inside the Rhode Island Nuclear Science Center (RINSC) reactor. A computer model of the three-dimensional reactor core was built, and simulations were run using the Monte Carlo method and data specific to this reactor. Heat map plots of the neutron flux show that the results for total fluence and thermal fluence are consistent with what the applicable physics would predict for the spatial profile of flux (or fluence) in either energy group. The results for thermal fluence were further parsed and rendered into 2D plots, enabling comparison to the plots from the flux mapping measurements that followed the RINSC reactor's 1992 conversion to low-enriched uranium (LEU) fuel. The model developed in this work will form a foundation for future studies of the effects of fuel aging and burnup on flux characteristics within the experimental neutron irradiation facilities of the RINSC reactor.


Neutron activation depends not only on reactor power but also on the geometry of the reactor. Suppose a plan is in place to irradiate a specimen at a certain height in a certain radiation basket, at a certain power level, based on a measurement recently made at that particular position in the core. Absent a reliable spatial flux profile, there is reason to wonder whether the same desired activation could be achieved at a different position using less power. There may even be reason to question how well the shape of the irradiation target suits the desired outcome. Thus, the profile of the neutron flux in RINSC's reactor promised to be especially useful information for the many applications of neutron activation.
Once able to account for the influence of geometry on neutron activation, designers of experiments or applications could optimize their use of the neutron radiation, which could guide where they place their items in the reactor (Figure 1). Knowledge of the neutron flux profile would also help the RINSC staff. Since the reactor is the largest single source of ionizing radiation in the entire facility, knowing how the radiation should be distributed would serve their safety and maintenance tasks. It may additionally benefit them while operating the reactor, as it may unexpectedly become relevant to an application of the reactor for neutron activation.
Deterministic treatments of neutron transport are well-suited to simple geometries: infinite fissile slabs, infinite fissile cylinders, and the like 2-4 . Introducing reality means introducing boundary conditions and solving not one differential equation but a system thereof 2-4 . Some solutions are more approximate than others and require additional effort to ensure validity 2,4 . The more complex (and realistic) the geometry, the more complex the neutron transport. With fuel pins, pellets, and wafers plausibly numbering in the hundreds and spanning only centimeters (or less) in some dimensions 2,5 , the concerns of a purely calculus-based approach are substantive, inevitable, and inevitably complex.
Concerns like these motivated the invention of the Monte Carlo method, originally inspired by the need to simulate the behavior of fissionable materials 6,7 .
The Manhattan Project required a reliable probabilistic simulation of fission within the nuclear bomb to ensure a successful test of the prototype 6 . While the interactions of subatomic particles and nuclei can be described by an integro-differential equation, this governing equation does not lend itself to a purely deterministic approach: often an analytical solution does not exist or does not fit the given conditions 6,8 . For this reason, the Monte Carlo method introduces an element of probability. When the governing equation (in this case and the original, neutron transport) cannot indicate what should happen to one simulated particle of many, the computer makes a random choice guided by actual probabilistic data until it has enough inputs to produce the remainder of its outputs from the governing equation 6,8 . In modeling neutron transport, Monte Carlo software randomly chooses whether each simulated neutron is absorbed or scattered by the next nucleus with which it collides (guided by measured probabilities); in the event of absorption it then randomly chooses when that nucleus decays; and after all random selections have been made, it calculates momentum and new positions for all of the particles 6-8 . Thus, the Monte Carlo method blends probabilistic and deterministic methods by beginning with one and finishing with the other.
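The alternation described above, random sampling followed by deterministic bookkeeping, can be sketched in miniature. The toy model below is not MCNP, and its absorption probability and mean free path are invented for illustration only; it exists solely to show the blend of probabilistic choice and deterministic update.

```python
import random

# Assumed, illustrative physics constants (not data from any reactor):
P_ABSORB = 0.3          # probability of absorption at each collision
MEAN_FREE_PATH = 1.0    # mean flight distance between collisions (arbitrary units)

def track_neutron(rng, max_collisions=1000):
    """Follow one neutron in 1-D until absorption; return total path length."""
    x = 0.0
    for _ in range(max_collisions):
        # deterministic bookkeeping: advance by a sampled flight distance
        x += rng.expovariate(1.0 / MEAN_FREE_PATH)
        # probabilistic choice: absorbed here, or scattered onward?
        if rng.random() < P_ABSORB:
            return x
    return x

# aggregate many histories, as a Monte Carlo code would for a tally
rng = random.Random(42)
mean_track = sum(track_neutron(rng) for _ in range(10_000)) / 10_000
# analytically, the mean total path is MEAN_FREE_PATH / P_ABSORB (about 3.3 here)
```

Averaging over many such histories is what turns individually random walks into a stable aggregate estimate, which is the essence of a tally.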
Since 1986, Argonne National Laboratory (ANL) has helped model the RINSC reactor using MCNP 9,10 . MCNP, short for "Monte Carlo N-Particle", is a mature tool developed and maintained by Los Alamos National Laboratory, home to most of the earliest pioneers both in radiation transport and in the Monte Carlo method itself 6,7,11 . One of those many MCNP simulations was a criticality projection in anticipation of the conversion to low-enrichment uranium (LEU) fuel 10 .
Shortly after the conversion of RINSC's fuel elements from high-enrichment uranium (HEU) to LEU, a team of URI physicists and the Rhode Island Atomic Energy Commission mapped the flux profile with physical measurements at multiple power levels 12 . The flux mapping surveyed the reactor core and many of its immediate surroundings.
Later, with help from Brookhaven National Laboratory, RINSC staff simulated both the neutron flux and gamma radiation profile in the thermal column, which is used to irradiate samples outside the core 9 .
Early this century, a URI senior mapped the neutron flux in the dry irradiation room 13 .
This work, developing an MCNP model of the RINSC reactor to model the neutron flux inside the core, had not been previously attempted.

METHODOLOGY
The methodology of this study is summarized as follows:
• Model the reactor in MCNP.
• In the model, employ a "mesh tally" to get a flux score, broken into subregions by a mesh.
While it was necessary that the model use the 2015 configuration, it was not truly "the 2015 model" until it asked the right question of MCNP. The 1998 model was made to find the steady-state effective multiplication factor of the reactor 5 ; the goal of this project was to predict the spatial profile of the neutron flux for the reactor.

Changes to Implementation of Geometry
On the recommendation of Jennifer Alwin of Los Alamos, one departure from the 1998 version was that the plane and infinite-cylinder surfaces used to define rectangular prisms and "finite" right cylinders were replaced with the equivalent macrobody surfaces: the rectangular parallelepiped (rpp) and the right circular cylinder (rcc). This made the input deck simpler for a human being to parse. Macrobodies provide the abstraction of a single finite, enclosed surface, which often renders the geometrical description of a cell much simpler than the equivalent using non-macrobody surfaces 15 . For example, the cell card for the very first wafer of fuel meat defined in the input deck was originally written in terms of six separate plane surfaces 5 ; substituting a macrobody surface (labeled "6") reduces the cell description to a single bounded surface. The difference is even more substantial in the surface block, where the six original plane cards collapse into one rpp card.
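A minimal sketch of the pattern follows. The cell, material, and dimension values here are invented placeholders, not the actual RINSC deck entries:

```
c original style: cell 10 bounded by six separate plane surfaces
10  1  -1.30   1 -2  3 -4  5 -6
1  px  -0.3175
2  px   0.3175
3  py  -3.0
4  py   3.0
5  pz   0.0
6  pz   0.05
c
c macrobody style: one rpp surface (labeled 6) encloses the same prism,
c so the cell card needs only a single bounded-surface reference
10  1  -1.30  -6
6  rpp  -0.3175 0.3175  -3.0 3.0  0.0 0.05
```

The rcc macrobody plays the same role for "finite" right cylinders, taking a base point, an axis vector, and a radius on one card.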

Units
What the makers of MCNP refer to as a "flux tally" is more accurately a fluence tally; it is based on the distance traveled by each particle where the formula for flux would use particle speed. Users are instructed to interpret the units of their results one way should their problem be steady-state, and another way otherwise 15 . Whichever kind of tally the user employs (current, "flux", heating, etc.), the steady-state modification is "per unit source time" 15 . However, there was no time rate of particles nor a fixed total of source particles in the context of this study, nor should there have been. In general, the state of a reactor (keff) stipulates the ratio of one generation of neutrons to the preceding generation, not the size of either generation. This study and its results are such that one can "just add time" in future studies. Thus the "flux tally", despite its name, is fluence per starting particle.
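"Adding time" later would amount to multiplying the fluence-per-starting-particle tally by a source rate. A hedged sketch of that conversion, using assumed nuclear constants (roughly 200 MeV recoverable per fission and a mean neutron yield near 2.44 for thermal U-235) rather than values taken from this work:

```python
# Assumed, illustrative constants (not RINSC-specific data):
J_PER_FISSION = 3.204e-11   # ~200 MeV per fission, in joules
NU_BAR = 2.44               # mean neutrons born per fission (thermal U-235)

def source_rate(power_w, k_eff=1.0):
    """Approximate neutrons born per second at a given thermal power."""
    fissions_per_s = power_w / J_PER_FISSION
    return fissions_per_s * NU_BAR / k_eff

def flux_from_tally(tally_per_source, power_w, k_eff=1.0):
    """Scale a fluence-per-source-particle tally to a flux (per cm^2 per s)."""
    return tally_per_source * source_rate(power_w, k_eff)
```

The spatial shape of the tally is untouched by this scaling, which is why the profile question could be answered without it.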
Fortunately, the central question was not "what is the flux" but "what is the spatial profile of the flux", and it will be demonstrated that, whatever a flux tally is or is not, it has the same spatial shape as the flux, and thus gives the shape.

Mesh Tally
Toward answering the new question, it proved necessary to utilize an MCNP feature known as a "mesh tally" -specifically superimposed mesh tally B, "fmesh".
Whereas the "normal" (non-mesh) tallies largely rely upon describing the region of interest in terms of the same cells or surfaces used in defining the geometry of the problem, mesh tallies require the user to define a mesh 15 . For mesh tallies in general, the user may choose a Cartesian, cylindrical, or spherical coordinate system, with the option to apply a coordinate transformation if that is more convenient in defining the mesh 15 . The definition of the mesh is independent of cell and surface labels 15 . Some users may find this independence more convenient than the protocol for "normal" tallies, especially if they think of the regions of interest for their tallies independently of the cell and surface labels.
The "fmesh" tool of MCNP offers as one of its features several choices for the format of the output of each mesh tally 15 . Each format has different advantages which lend themselves best to different use cases. For this study, the "CF" format was used.
The advantage of "CF" over any of the "matrix/array" formats is that it is easy to parse for any researcher who has to "roll his own" post-processing scripts. The advantage over the default columnar format ("COL") concerns two additional columns in "CF", which are the only difference between the two formats 15 . The first additional column holds the volume of each bin 15 ; the second holds, for each bin, the product of the volume and the tally result 15 .
A key step in generating tally results of this kind is to divide the pending result in each bin by the bin volume 15,17 . In some situations, e.g. comparing the results of this study to others' work, it later becomes necessary to undo that division by bin volume. At best that means careful employment of the "FM" tally multiplier card, if the need is known in advance. Otherwise it may require time and effort in post-processing to undo the normalization per bin volume, including the tedious effort of verifying and validating the post-processing. Worse yet, it may require re-running the problem with a different mesh definition. Thus the "CF" option saves human time and effort in both pre-processing and post-processing.
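For illustration, here is a sketch of the kind of "roll your own" post-processing the CF format enables. The eight-column layout assumed here (energy, x, y, z, result, relative error, volume, result × volume) should be verified against the header of the actual mesh tally output file before use:

```python
# Sketch: read a CF-format fmesh tally and recover the volume-integrated
# result without re-running MCNP. Assumes eight whitespace-separated
# numeric columns per data line; check against the real output file.

def read_cf_rows(path):
    """Return the numeric data rows of a CF-format mesh tally file."""
    rows = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 8:
                continue  # skip headers, blank lines, and summary text
            try:
                rows.append([float(p) for p in parts])
            except ValueError:
                continue  # non-numeric line that happened to have 8 fields
    return rows

def total_volume_integrated(rows):
    """Sum column 7 (result * bin volume), undoing the per-volume
    normalization with no FM card and no re-run."""
    return sum(r[7] for r in rows)
```

Because the last column already carries result × volume, undoing the normalization reduces to a plain sum.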
The mesh defined for the "fmesh" tally was not binned per time, as the focus of this study was the spatial profile. As for energy, the tally was binned for thermal, epithermal, fast, and total. The energy thresholds for these groups were borrowed from MCNP, where they are hard-coded into the KCODE output: thermal, 0.625 eV and below; epithermal, 0.625 eV to 100 keV; and fast, 100 keV and above.
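An FMESH card of this shape might look as follows. The mesh extents and bin counts below are placeholders, not the values used in this study; MCNP expects energies in MeV, so the group boundaries above become 6.25e-7 and 0.1:

```
fmesh14:n  geom=xyz  origin=-30 -30 -30
           imesh=30  iints=60
           jmesh=30  jints=60
           kmesh=30  kints=30
           emesh=6.25e-7 0.1 20
           out=cf
```

The emesh entries define the upper bounds of the thermal, epithermal, and fast groups, and out=cf selects the columnar-with-volumes output format discussed above.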
For binning with respect to space, the core grid was the main concern. Identical rectangular-prism voxels, square in "x" and "y", were superimposed onto it. The mesh for this study was extended around the rectangular prism of the core grid by a margin of one grid unit outward from the core grid in each direction.
The subdivisions for the margin were copied from those for the core grid along "z".
The FMESH implementation of spatial binning is normally a matter of specifying into how many bins an interval will be divided 15,17 ; in this study, formulas were instead employed to specify voxel side length in centimeters and then convert it to the appropriate binning with respect to "x" and "y".
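The conversion is simple arithmetic; a sketch follows, with hypothetical extents rather than the actual RINSC core-grid coordinates:

```python
import math

def bins_for_side_length(lo_cm, hi_cm, side_cm):
    """Smallest number of equal bins across [lo_cm, hi_cm] such that
    each bin is no wider than side_cm (the FMESH iints/jints value)."""
    return max(1, math.ceil((hi_cm - lo_cm) / side_cm))

# e.g. a 60 cm span at a 1 cm target side length needs 60 bins,
# while a 0.127 cm target side length needs 473
```

Rounding up keeps the realized voxels at or below the requested side length rather than above it.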

Correction of model parameters
It is not obvious how thick the voxels of the mesh should be, nor how many total iteration cycles will be needed. Nor is it apparent how many of those iteration cycles should be discarded. Inevitably one must pick a number; one does not know which numbers to pick until one tries.
At first the mesh parameters were to be determined one direction at a time, but the decision to match the "x" and "y" dimensions simplified the process into determining binning for "non-z" and then for "z". For each of these two direction sets, intuitively non-taxing values were arbitrarily chosen for the parameters not of immediate concern, and then a parameter study was performed on the binning for each direction set. The "non-z" parameter study also tested the impact of the binning options for the margin of the mesh. As source particle histories per cycle had not yet been decided, the "z" parameter study also varied the number of source particle histories per cycle, in case the parameter study data might lend themselves to extrapolation. Other parameter values set in these parameter studies would be unacceptable for estimating keff, but at this stage that was not the goal. Convergence did not matter in these mesh-refining parameter studies, as long as neutrons visited a broad enough swath of the problem geometry to judge the mesh. Each case of each parameter study was concluded with plots of the mesh tally, and each parameter study was concluded with plots of the time cost of each test run (e.g., Figure 3 through Figure 6). These enabled judgements of plot quality supplemented by time cost. The results of each parameter study were then combined into the final binning for the tally mesh. As part of the process of refining the mesh parameters, plot commands were produced and also refined for the sake of assessing the mesh, and these contributed greatly to the final set of commands used for the production run.

Figure 3: Scatter plot of a parameter study which varied two independent parameters: the binning of the tally mesh per the "z" direction ("Z_RES") and the starting histories per cycle ("n/cyc"). The binning at all of the margins in all directions was set to match Z_RES.

The data demonstrate that the finer the mesh, the slower the simulation will run (after controlling for starting histories per cycle). Also, the more starting histories per cycle, the slower the simulation will run (after controlling for mesh resolution).

Figure 4: Scatter plot of a parameter study which varied two independent parameters: the side length of each core voxel per "x" and "y", and the resolution of the mesh at its margins. Both contribute independently to the total number of voxels formed by the mesh, which is the independent variable depicted in this plot. (Resolution of the mesh per "z" within the core grid was set to 1 in this parameter study.) The data show that the finer the mesh is made, the slower the simulations run.

Figure 5: Scatter plot of the same parameter study depicted in Figure 4, which varied the side length of each core voxel per "x" and "y" and the resolution of the mesh at its margins. (Resolution of the mesh per "z" within the core grid was set to 1.) The data demonstrate that even after accounting for core voxel side length, the time cost incurred by the margins of the mesh can be substantial. For example, the finest margin resolution in this parameter study (70) counteracted the coarsest core voxel side length (1 cm) so greatly that its time cost is almost equal to that of the finest core voxel side length (0.127 cm) with a margin resolution of 10. (The latter was faster than the former by about 4 seconds.)

Figure 6: Scatter plot of the same parameter study depicted in Figure 4 and Figure 5, which varied the side length of each core voxel per "x" and "y" and the resolution of the mesh at its margins. (Resolution of the mesh per "z" within the core grid was set to 1.) The data demonstrate that after accounting for margin resolution, finer and finer core voxels slow the simulations.

With the geometry finished and the tally mesh defined, all that remained toward solving the problem was the KCODE card. The first parameter to be refined from its previous placeholder was the number of cycles to be discarded (hereafter "ndiscard", a label borrowed from Los Alamos for convenience 18 ). In the course of pre-production efforts, ndiscard had been set to two hundred. If only the first fifty cycles needed to be discarded in a production run, discarding the next 150 cycles would constitute an appreciable, preventable waste. Thus ndiscard was the first parameter to be refined, despite the risk of needing to re-refine it. Ultimately, one refinement proved adequate, and the final value for ndiscard was ten.
The next parameter to be refined was the number of source particle histories per iteration ("n_per_cyc" hereafter). For pre-production, Los Alamos recommends as few as one thousand or as many as five thousand for this parameter; for production runs they warn that more realistic values run from ten thousand to one hundred thousand, or even higher 18 . Here the risk of needing to re-refine the parameter was more apparent than before, so the parameter study for n_per_cyc was done with extrapolation foremost in mind. The dependent variables of interest were running time, running time per total cycles, and running time per source particle. Plots of these quantities versus n_per_cyc showed appreciable noise, frustrating extrapolation. In hindsight, the extrapolation effort may not have been worthwhile for this study. Proper use of MCNP (and other Monte Carlo applications) requires that particles reach all the "nooks and crannies" often enough not to miss relevant events which would transpire in those hard-to-reach regions 8,15 . Thus, a test run which misses a hard-to-reach region only has so much in common with a production run in which particles reach everywhere, should the hard-to-reach region prove important. An effort such as this may still prove worthwhile to other users of MCNP working with simpler geometries.
Ultimately, the value of ten thousand was settled upon for n_per_cyc. The total number of source particle histories is the product of n_per_cyc and the total number of active cycles 1 , so more cycles can compensate for a low value of n_per_cyc.
No parameter study was needed for the initial guess of the criticality, as the iterative KCODE process of simulating neutron transport is self-correcting 18 . This was left at the very common value of 1. Similarly, no parameter study was needed for the total number of iterative cycles; the production run is "the parameter study".
With production values for the relevant input deck parameters settled, an initial simulation was run, followed by continuations with more iterative cycles until the results reached acceptable precision. Fortunately, the results lend themselves to visualization in "heat map" form. The three-dimensional data were sliced into two-dimensional plots, with several points throughout the geometry serving as origins. Three plots were made at most of these points, one in each of the cardinal orientations "xy", "yz", and "xz". If all of the heat maps used to assess the results were featured in this work, they would fill a total of 124 pages. Instead, only a select few appear in this chapter, and several more are included in an appendix.
MCNP predicts that the beryllium and graphite reflectors are substantial in concentrating the total fluence. Figure 7 shows a border running approximately through the center of the beryllium reflectors. In fact, these appear to be the greatest influence on the shape at all points exterior to them from the reference point of D5. In this way they appear to weigh heavily in explaining the fluence gradient in row 9, and in each radiation basket of row 9. If instead the fuel assemblies were most important, by a wide margin, one would have expected total fluence along row 9 to be most intense in columns C and E with a slight dip in column D, and a more drastic decrease in intensity in columns B and F. Were the reflectors and fuel assemblies of approximately equal importance in determining the shape of the total fluence, one might have expected to see a compromise between the two. Instead, the reflectors appear to dominate. They dominate both outside and inside; the neutron fluence is very concentrated in D5 despite D5 lacking fuel. While source distribution matters, scattering helped the reflectors to "even out" the neutron transport. In other orientations the same phenomenon holds for total fluence, apparently because of the top and bottom reflectors. The control rods look like they are "squashing" the flux, when in fact they are "slurping it up" for largely the same effect on the shape.

It took more than 10.1 million starting particle histories to attain reliable results for total fluence in all of the regions of interest, as illustrated in Figure 11 through Figure 13.

Figure 11: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for all energies (relative error for total fluence per source particle history) for the RINSC reactor.
Figure 13: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for all energies (relative error for total fluence per source particle history) for the RINSC reactor.
For thermal fluence, the beryllium and graphite reflectors influence the shape, as was the case in the plots for total fluence. However, the thermal fluence plots are consistent with absorption and moderation dominating the neutron transport above all else. The heat maps are "cold" with respect to thermal fluence in the fuel assemblies that consume the thermal neutrons for fission. Per "x" and "y", the sleeves for the control rods are "hottest". This is consistent with the influence of the beryllium reflectors, and with the water in the control rod sleeves slowing faster neutrons incoming from elsewhere to thermal energies. With respect to "x" and "y", the resulting profile is more spread out than for total fluence. While D5 is bombarded with more thermal neutrons than row 9, the difference is less dramatic. Comparing the radiation baskets of row 9 to each other, the intra-row profile of the thermal fluence bears a strong resemblance to that of the total fluence. With respect to "z", the "slurping" effect of the shim safety blades is more dramatic for thermal fluence than for total fluence. The shape for total fluence resembles a nectarine with two sharp-cornered pieces cut from the top; thermal fluence looks less like a nectarine and more like an apple. It so happens that fluences of particular energy designations would require even more starting particle histories to attain the same precision as the total fluence.
The green in some of the error plots requires arguing for or against a judgement call, whereas the corresponding plots for total fluence are an objective matter of seeing only orange (5% or less) and cyan (5% to 10%). The results for thermal fluence are arguably "good enough". The region covered reliably for thermal fluence was almost the same as that for total fluence, falling short of it by a margin on the order of 1 cm (e.g. Figure 18). For many an application of the reactor, the imprecise (green) regions in Figure 18, Figure 19, and Figure 20 are irrelevant. It is implausible (if not impossible) to secure an object in those upper corners of B-9 and F-9, and unlikely to be desirable in the first place.

Results for epithermal fluence were less precise than for the aforementioned groups; the region of the center and fuel assemblies was reliably modeled, but precision was lacking for row 9 of the grid. These problems are conveyed in Figure 21 through Figure 23. Figure 22 displays especially well the imprecision of the results for B-9 and F-9. The same figure shows that for each column in the remainder of row 9, an appreciable fraction is not precisely covered (though "appreciable" is subject to the context of applications).

While adequately precise for the center and fuel assemblies, the results for fast fluence were even more lacking in precision than those for epithermal. The reasons for concern can be seen in Figure 24 through Figure 26. The same logic applies for the fast fluence results as for the epithermal; a comparison of the corresponding plots (e.g. Figure 19 against Figure 25) illustrates that the key difference is the extent of the precision problem (Table 1). For fast fluence, useful precision does not extend as far from the center of the fuel assemblies as for epithermal. Ignoring the blue in Figure 26 and instead addressing the yellow, much of the relative error for the fast flux tally reaches 25% in B-9.
The standard deviation of a tally is treated as inversely proportional to the square root of the total number of particle histories, and the precision goal for a tally of this kind is 10% or less 17 . Getting satisfactorily precise results from MCNP in this case would likely require another 5.87 million starting particle histories in addition to the 10.1 million already simulated. If that is not "good enough" (by the justification already given in this study for the thermal fluence, or perhaps by some other rationale), then one is not free to ignore the blue, in which the relative error reaches as high as 50%; more reliable fast fluence would require another 12.5 million starting particle histories beyond the 10.1 million already simulated.
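The 1/√N rule quoted above implies a simple planning formula for additional histories. A generic sketch follows; the inputs in the example are arbitrary, and the rule holds only to the extent a tally's error is statistics-dominated:

```python
def additional_histories(n_now, r_now, r_target):
    """Extra histories needed to move a tally's relative error from
    r_now down to r_target, assuming error scales as 1/sqrt(N)."""
    n_needed = n_now * (r_now / r_target) ** 2
    return max(0.0, n_needed - n_now)

# e.g. halving a relative error requires quadrupling N, so a run of
# one million histories at 20% error needs three million more to reach 10%
```

The quadratic growth in the ratio is why the fast-fluence estimate above is so much more expensive to repair than the thermal one.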
The convergence plots (Figure 27 and Figure 28) illustrate that, whatever the concerns for the fluence of the various groups, both keff and the Shannon entropy of the simulated neutron source converged long before the total fluence.

The decline in thermal fluence would continue through column A, albeit more gradually, as the thermal neutron diffusion length for graphite is greater than that for beryllium in common operating conditions 2 . After the graphite of column A, the remaining thermal neutrons reach water, and neutron diffusion theory would predict a rapid exponential decay of the thermal neutron flux, as the diffusion length for light water is much less than that for graphite or beryllium 2 . Thus, it is apparent that the results of this study are consistent with predictions from differential-equation-based theory.

Thermal fluence through D5 as a function of height (Figure 31) was obtained by filtering out data external to D5 with respect to "x" and "y" and merging all remaining subsets of data with "z" in common into one datum per "z" bin. There is a resemblance along the height of the central irradiation basket. However, the two data sets differ at the top and bottom reflectors; the measurements were flat where the simulation instead curves up and then down. The simulation is consistent with diffusion theory 2 ; the measurements (at the reflector heights) were not. The concurrence of diffusion theory with the simulation suggests that the flatness at the reflectors could be the symptom of a practical problem in taking the measurements. One possible explanation for the difference is that foils placed at the height of the reflectors had to be placed beside them, as opposed to through them or in their place. Foils along the height of the radiation basket went inside the radiation basket, since it is hollow; this is impossible for the reflectors. The distance from the center line of the core grid to foils fixed beside the top and bottom reflectors may explain the discrepancy.
Coordinates per "x" and "y" for the D5 measurements were not available at the time of this writing. In light of this, comparison to the measurements lends credence to the simulation.

Figure 31: Measurements from the 1992 flux mapping of the RINSC core 12 (blue) co-plotted with the results of this study (red). Each data set has been normalized to its respective maximum, allowing a comparison of spatial profiles. Both sets of data pertain to section D5 of the core grid.
For D9, all but one measurement was taken along the height of the radiation basket, which prevents conjecture on the reflector concerns previously described for D5. The distribution of the simulation results appears wider than that for the measurements; nevertheless a resemblance is apparent in Figure 33.

Figure 33: Measurements from the 1992 flux mapping of the RINSC core 12 (blue) co-plotted with the results of this study (red). Each data set has been normalized to its respective maximum, allowing a comparison of spatial profiles. Both sets of data pertain to section D9 of the core grid.
The fact that the peaks with respect to height do not match exactly for either D5 or D9 is not surprising, as the constant control rod heights in this study likely do not match the time average of the control rod heights during the flux mapping measurements.
The results of this study were also filtered and merged for row 5 (Figure 35, Figure 36). These results resemble the calculations made by the mapping team (Figure 35), except at the core center, where they are instead closer to the mapping team's actual measurements (Figure 36). The thermal data from this study resemble those from the flux mapping conducted after the fuel element conversion, per the information available from the latter and simple variations on the mesh defined for the former.

Future Study
One idea for future study, outside the scope of this one, would be to employ the MCNP model to predict power generation. By necessity, the N in "Monte Carlo N-Particle" is orders of magnitude smaller than in reality; in typical usage, millions of simulated particles represent moles of real particles 18 . What happens to each simulated particle will be a faithful representation (within reason) of what could happen to any single real particle 7 . Unfortunately, aggregated simulation results will inevitably understate real aggregate phenomena, owing to the dearth of particles contributing in the simulation. Fortunately, the former can compensate for the latter: simulated aggregate results will be proportional to the real aggregate phenomenon (Eq. 1). For any desired quantity Q, the contribution per particle should be the same in both the simulation and reality. For this reason, tallies are normalized per starting particle history by default 19 . (In terms of Eq. 1, MCNP gives Q_sim / N_sim, as opposed to Q_sim.)

Q_sim / N_sim = Q_real / N_real        (Eq. 1)
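The per-history normalization in Eq. 1 can be sketched numerically. The function name and all numbers below are hypothetical, chosen only to illustrate the scaling; MCNP itself reports the per-source-particle quantity.

```python
# Sketch of the Eq. 1 proportionality (all numbers hypothetical).
# MCNP reports Q_sim / N_sim (a per-source-particle tally); scaling
# by the real particle count N_real recovers the real quantity Q_real.

def scale_tally(per_history_tally, n_real):
    """Scale a per-starting-particle tally to a real aggregate quantity."""
    return per_history_tally * n_real

# Suppose a tally of 2.5e-3 (per source particle) and 1e18 real neutrons:
q_real = scale_tally(2.5e-3, 1e18)
```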
As this pertains to power generated by fission events in a reactor, the tally that one may colloquially refer to as a "power tally" does not predict the power which would have been generated in the reactor; it reports the volumetric density of the simulated power generation, normalized per simulated starting neutron. After the user (or MCNP, on the user's behalf) converts the volumetric power density to power, the results form the left-hand side of Eq. 2. If one wants to predict the real power, one must multiply the "power tally" by the real number of neutrons; if one wants the real number of neutrons, one must divide the real power by the "power tally". MCNP supplies the left-hand side; the user must get the right-hand side some other way.
P_sim / N_sim = P_real / N_real        (Eq. 2)
Using data from RINSC's control rod calibrations, one can ascertain the power in a real-world situation. One can simulate a change in power to correspond with this scenario, and then compare the power tally to the real power. The immediate result is that one has Nreal for that scenario; the longer-term benefit is that one has the "gain" from the MCNP model to the reactor that it represents. Armed with the model-to-subject gain, one can put the model through arbitrary maneuvers, and then predict the corresponding power in the reactor.
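The calibration-then-prediction procedure just described can be sketched as follows. The function names and the numerical values (a 2 MW calibration point and per-history tallies) are hypothetical, stated only to show the arithmetic of Eq. 2.

```python
# Per Eq. 2, the "power tally" is P_sim / N_sim = P_real / N_real.
# Comparing the tally to a measured power yields N_real, which then
# serves as the model-to-subject "gain" for predicting other scenarios.
# All names and values here are hypothetical.

def n_real_from_power(measured_power_w, power_tally_w_per_history):
    """Infer the real number of source neutrons from a measured power."""
    return measured_power_w / power_tally_w_per_history

def predict_power(power_tally_w_per_history, gain):
    """Predict real power for a new maneuver, given the calibrated gain."""
    return power_tally_w_per_history * gain

gain = n_real_from_power(2.0e6, 1.0e-11)   # e.g., a 2 MW calibration point
p_new = predict_power(1.5e-11, gain)       # tally from a new maneuver
```

Whether the gain stays constant across maneuvers is exactly the open question posed for future study.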
Simulating a change from one state (keff) to another will require chaining together simulations in which the control rod heights are set to different positions, each run utilizing the "source tape" ("srctp") file from the preceding run. If desired, the output can be "stitched" together to show power vs. time as the control rods are moved. Accomplishing this would require considerable computer scripting work; computer hardware concerns would also need to be examined for feasibility.
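A minimal sketch of the chaining idea follows. The execution-line keywords shown reflect this author's recollection of MCNP conventions and should be checked against the manual; the deck names, file prefixes, and rod heights are hypothetical.

```python
# Sketch of chaining KCODE runs so that each run starts from the
# preceding run's source tape ("srctp") file. Keywords are assumed
# from memory of MCNP's execution line and should be verified;
# all file names and rod heights are hypothetical.

def chain_commands(rod_heights):
    """Build one MCNP command per control rod configuration."""
    cmds = []
    for i, h in enumerate(rod_heights, start=1):
        cmd = f"mcnp6 inp=rods_{h}cm.i name=run{i}."
        if i > 1:
            cmd += f" srctp=run{i-1}.s"   # seed from the preceding run
        cmds.append(cmd)
    return cmds

for c in chain_commands([50.0, 55.0, 60.0]):
    print(c)
```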
With those concerns of "how" resolved, the "what" of model-to-subject gain would require a researcher to test either that the gain is constant or that it fits a pattern such that it could be used as proposed. Should such a practice be established to be reliable, it would be a predictive tool for the RINSC staff that would free them to consider different parameters for proposed procedures by "tinkering" with them in simulations where before the cost of "tinkering" was too much to permit much of it (if any).
Another potential direction for future study would be to simulate burnup of RINSC's fuel. As the fuel assemblies change, it is reasonable to expect a change in the spatial profile of the neutron flux. As to how it would change, one might employ MCNP toward making a prediction.
One of the surprises that emerged during this study was that the model's response to the control rods does not match that of the actual reactor. At control rod heights for which the real reactor was critical when Core 6 began, the model is supercritical, with keff in the interval (1.04793, 1.04930) at 99% confidence. This difference is a compelling topic for future study.
Essentially, this study was a simulation of a flux mapping. In light of that, it is encouraging how closely it matched measurements. While MCNP modeling would never be a substitute for flux mapping, it shows tremendous promise as a supplement.
Between mappings, RINSC will be empowered to make detailed predictions without burning uranium to make those predictions. More appreciable to the fiscally-minded, simulations will have lower labor costs than those of operating the reactor. Thus, while enjoying multiple kinds of economy, RINSC will gain a potent tool to compound their abilities in their mission to advance nuclear science and technology.

APPENDICES
The Challenges in Converting and Modifying an Older Input Deck for a Newer Version of the Program
Largely as a consequence of changes between versions 4C and 6.1.0 of MCNP, the model required modification merely to cease crashing. The necessary model changes ran a spectrum from things which prevented the model from running, to things which led to incorrect output, to things which led to inconveniently presented output.
As mentioned earlier, in its specification of the materials for the simulation the original model used the cross-section data libraries available at the time corresponding to ENDF/B-VI data. It did not merely use these libraries by default; the original code explicitly specified their use 5 . That cross section data has not only been succeeded by more recent data but has been removed from the default library, so adaptation began with removing the library IDs, which referred to a cross section library no longer included in the basic MCNP package. This removal was merely to eliminate a fatal error 2 causing crashes, rather than to improve output per se, so selecting a more appropriate cross section library was deferred for future studies. In this study library IDs were omitted altogether, causing MCNP to use its default cross section data for each nuclide.
Thus fatal errors concerning missing cross section data were replaced by fatal errors stating that the source was rejected. Investigation of this problem uncovered a typo inherited from the version of the ANL model which had been shared with this author.
Specifically, that typo was that the surface reference defining cell 12 said "-20 21" (which translates out-of-context as "interior to surface 20 but exterior to surface 21") where somebody must have meant "20 -21" ("exterior to surface 20 but interior to surface 21"). Thus a change was made from "-20 21" to "20 -21" 3 , and as a result the fuel assemblies were rendered in the geometry setup. It so happened that there would still be problems with the source definition, none of which would be diagnosable without first having solved the problem just described.
Beyond this point, a blow-by-blow account of the error messages and solutions in chronological order would be tedious and confusing. The account thus far should suffice to demonstrate that updating a program or model can unexpectedly deviate from the iterative debugging cycle that one might imagine beforehand. The refinement process meandered as new concerns were discovered while searches were ongoing for previous causes of concern. One obstacle of note: some cells in universe 4 overlapped with cell 708; inserting "#708" into each cell definition in universe 4 (except that of cell 708) solved this problem.
The changes explained thus far - other than removing the cross section library IDs - suggest that version 4C employed "Do What I Mean" features, since it was apparently less stringent than later versions in the input which it would accept. Interestingly, some of these features may have lived on in succeeding versions. At one point, while experimenting with the input deck to address lost-neutron problems, KSRC was used to re-define the neutron source with one point in each wafer where a fuel assembly would sit if there were a fuel assembly at D5, and this author then forgot to change it back to how it had been defined previously (KSRC had previously been set so that the two centermost fuel wafers of every fuel assembly each had a KSRC point at their center). There is no fuel meat where these twenty-two points are, but KCODE refines the initially given source into a physically plausible source during its initial, discarded iteration cycles. Apparently the original KSRC problem was not merely a bad initial source, but a bad initial source that was also "un-fixable"; despite the words of the User Manual, the "fissile material" requirement can be violated by accident and sometimes the simulation will proceed.
The most appreciable way in which version 4C was apparently different from version 6.1.0 was in geometry specification. Despite the changes already explained, the simulations continued to lose ten particles before the third KCODE cycle could be completed. On re-examining the model geometry and the output, this author noticed that the lowest-level universes had space for which no cell was defined; all the space within each of the lowest-level universes which went on to fill a higher-level "window" cell had been assigned a cell in its original universe, but not all of the space inside these lowest-level universes had been assigned a cell 5 . The "incomplete" universes thus found were then "completed" by defining additional cells of water occupying the remaining space in each. In other words, apparently in version 6.1.0 every universe must ascribe every point in its space to a cell - regardless of whether any higher-level cell will ever use that space! This was a surprising problem because the "undefined" space had no physical meaning. In this case the software's requirements for the input exceeded those for a physically valid model; the crux of this lost-neutron problem was MCNP's need for full context in each level of a model before navigating the various levels.
Issues like these teach a simulator to appreciate the tools employed in simulations, and the sort of incidental differences that can emerge between one version of software and the next.
This project began on a laptop with 2 GB RAM and one dual-core processor.
The earliest measure taken to address speed was to add 4 GB RAM to the laptop. Soon thereafter the need for even greater speed became apparent, and the project was migrated to a desktop computer at RINSC employing a quad-core processor. Early versions of the model took days to run on the original platform, and the same models took six hours on the new platform. Unfortunately, remote access to the RINSC desktop became a project of its own. This author has no experience maintaining or configuring a server, which left the project dependent at the time on ready-made solutions. Of the ready-made remote access solutions, options were further limited by the project budget to those that were free-of-charge. Fortunately, an account was established on the seven-node Beowulf cluster (one head and six computing nodes) housed at Tyler Hall on the main campus of URI. Migrating to the Tyler Hall cluster also reduced remote access from a project of its own to the most minor of problems, rapidly solved with a free-of-charge SSH client.
Addressing speed concerns was not limited to hardware; usage of the cluster was optimized with respect to time. Migration to a computer cluster also presented the opportunity to build MCNP for Open MPI, so it was decided to test notions of how to optimize the speed - notions relating both to Open MPI and to MCNP. Unfortunately the MPI build of MCNP ultimately proved slower at running the model than the sequential build. Fortunately, the sequential build includes OMP threading, which lent itself greatly to speeding calculations inside or outside of the PBS/Torque environment available on the cluster. In the course of these speed tests it became clear that the optimal way to run MCNP with respect to speed of calculation was: to submit each job to only one of the computing nodes, never running one job across multiple nodes; to assign each job as many cores as the assigned node had, while telling MCNP to run that many tasks; and to queue each job on any node to wait for any preceding jobs assigned to the same node to complete. In other words, the optimal method was to run one job at a time per computing node, with each job using all of its node's cores.
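The one-job-per-node pattern can be sketched as a generator of PBS/Torque submission scripts. The directives follow standard Torque resource syntax, but the node name, core count, deck name, and the `tasks` keyword usage here are illustrative assumptions to be checked against the cluster and the MCNP manual.

```python
# Sketch of the job-submission pattern found optimal in this study:
# one job per computing node, using all of that node's cores, with
# MCNP told to run a matching number of tasks. Node names, core
# counts, and file names are hypothetical.

def pbs_script(node, cores, deck):
    """Compose a PBS/Torque script pinning one MCNP job to one node."""
    return "\n".join([
        "#!/bin/sh",
        f"#PBS -l nodes={node}:ppn={cores}",  # claim the whole node
        "cd $PBS_O_WORKDIR",
        f"mcnp6 inp={deck} tasks {cores}",    # OMP threads = cores
    ])

print(pbs_script("node03", 4, "core12.i"))
```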
Where have all the neutrons gone?
A recurring problem while refining the model was that MCNP would "lose" neutrons during a simulation. This problem began to present itself in the early stages of adaptation from version 4C to 6.1.0 (due to "incomplete" low-level universes), and persisted after updating the core configuration. The default behavior is for MCNP to continue a simulation until a total of ten transported particles (neutrons, in this case) are lost 15 . By way of the Lost Particle Control card (LOST), the default lost particle threshold can be overridden, but LANL discourages doing so: "Losing more than 10 particles is rarely justifiable" 15 . Regardless of how many particles are lost in a run, LANL advises MCNP users to understand why exactly a particle was lost.
Before pursuing other measures, a geometry check was performed in the manner recommended by LANL in section 2.12 of the manual, "Geometry Errors" 15 .
To summarize that geometry check: one bombards one's model from all directions with an inward, spherical source and anyplace in the model that a particle is lost is in need of attention 15 . The stronger the source, the less likely for a false negative result to occur 15 . Ultimately this "inward sphere" check was employed multiple times, often with useful results.
In multiple sections of the MCNP user manual, LANL advises the reader that a good diagnostic tool is to train the geometry plotter on the last known position of a lost particle 15 . (An occasionally useful variant of this technique is to point the plotter at the last few locations, not merely the very last.) Understanding the loss of neutrons proved challenging nonetheless because details of the event log encumber its interoperability with the plotter. A side effect of the "multi-universe" architecture of the model was that the event log for a lost neutron would reference a non-unique label for the last relevant cell and surface. The final position of the particle was always given in the event log with respect to the local coordinate system of the deepest applicable universe. Meanwhile the geometry plotter takes coordinates as inputs strictly with respect to the global coordinate system. In theory, one can make all of the necessary conversions to the global coordinate system, though unfortunately the increment used throughout the event log was coarser than that used to define the geometry of the model. At times that proved relevant, but ultimately plotting the last-known whereabouts was tremendously helpful.
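The local-to-global conversion described above can be sketched as composing the fill translations of each enclosing level. This is a simplification: rotations are omitted, and the translation vectors shown are hypothetical rather than taken from the model.

```python
# Sketch of converting a lost particle's last position, reported in a
# nested universe's local coordinates, back to global coordinates by
# composing the fill translation of each enclosing level (innermost
# first). Rotations are ignored; all vectors here are hypothetical.

def to_global(local_point, translations):
    """Apply each level's fill translation, innermost level first."""
    x, y, z = local_point
    for tx, ty, tz in translations:
        x, y, z = x + tx, y + ty, z + tz
    return (x, y, z)

# e.g., a point in a fuel-wafer universe nested two levels deep:
p = to_global((0.1, -0.2, 5.0), [(7.7, 0.0, 0.0), (-30.0, 15.0, 0.0)])
```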
While the ideal scenario is to plot the last known whereabouts of a lost neutron and see red dotted lines 15 , plots that show no apparent problem can also be informative. Instead, patterns provide hints, as was the case in this study. The last cause of lost-neutron problems was rooted out only because this author noticed from the (flawless-looking) plots that the losses were consistently happening near the shim safety blades, and then remembered that each shim safety blade was rendered in its own universe (at the time), which in turn filled a "real-world" window cell. The shim safety blades were promoted to universe 0 ("the real world"), and the model entirely ceased to lose neutrons.

Side Effects of Complexity
The anecdote at the end of the previous appendix may bother an experienced MCNP user, who may wonder why such a basic usage of universes and filling would cause the program to lose track of neutrons. It would seem that such a defect would defeat the purpose of that feature. There is a missing piece to that puzzle: while simulating transport, MCNP distinguishes between surfaces to a resolution of 1.0e-3 cm and no finer 20 . The detail in the model was so fine near the shim safety blades that it prevented MCNP from recognizing when it was supposed to switch from "the real world" to each blade's universe.
Another interesting challenge during the refinement of the model was that MCNP needed help filling out the grid; specifically, some of the grid elements at the extremes of lattices failed to appear as specified in the input deck (e.g. radiation basket, corner post) and instead were water, top to bottom. Ironically, the workaround to realize the intended geometry was more complexity, not less. Several new universes were introduced, in which each previously-intended universe was repeated endlessly per "x" and "y" in a new lattice, and the fill entry for that previously-intended universe in each original lattice was replaced with a reference to the new purpose-made lattice for that universe. The lattice definition for columns F and G, for example, was reworked in this way (Figure 40).
One theory as to why this was necessary is that perhaps the problem setup component(s) of MCNP can only hold a bank of so many object references, and perhaps the input deck with the "water pillar" problem required too many object references in one rendering. Further, the lattice workaround herein described may have circumvented that limit through an object reference inside one bank making reference to another bank. A container ship cannot turn as tightly as a speedboat, but if it "zigs" one way before it "zags" the other, it can navigate more difficult circumstances than were obvious; this "water pillar" problem and its solution appear to be analogous to that.
Figure 40: "before" (left) and "after" (right) for an instance of the "water pillar" problem. The input decks resulting in these plots were made from the full model and then modified to isolate columns F and G of the core grid.
Both of these problems - and their solutions - demonstrate that it is paramount to appreciate the complexity of advanced tools.

Templates (and Kinds) of Input Decks
An MCNP input deck is a computer model - depending on what one is doing, and how. In the context of this project, no single one of the input decks used may be regarded on its own as "the model".
From one perspective, the binary restart file is the actual model in any single run of MCNP, and the input deck is its "source code", with MCNP serving as the compiler. However, that simple assertion is based on several assumptions: (1) there is only one input deck, necessarily an "initiate" deck; (2) the particle source is adequately defined inside the "initiate" deck; and (3) the user needs only final results from MCNP, not intermediate results of any kind. After removing these assumptions, the most general case is that the binary restart file, together with as many source tape files 4 as necessary, constitutes the actual model; the source code then comprises the "initiate" deck, as many "continue" decks as necessary, any source subroutine files ("source" and possibly "srcdx"), and any accessory files incidentally needed at the outset as input by programs or scripts 5 .
Whittling the general case back down to this study, the "initiate" deck adequately introduced the computer to the problem to be solved (geometry, source definition, etc.), but it took several "continue" runs, each with its own "continue" deck, to finish the job. The "initiate" deck could have done the job entirely on its own, if only this author had happened to guess correctly how many iterations it would take for the results to converge satisfactorily. Any of the intermediate "continue" runs could have been omitted, if only the second guess for how many iterations had been correct. For planning purposes, an MCNP user will never correctly guess how many iterations are needed - not on the first guess, not on the one-hundred-thirty-first guess. The thing to do is start with an "initiate" run to rule out any flaws in the model, and follow it with reasonably-spaced "continue" runs until the user finds that convergence and other requirements have been met. Also, many a user uses intermediate results to extrapolate when the latest run might finish, or to discern a pattern with regard to iterations and their temporal cost. A revised "initiate" deck would sufficiently characterize the model, but in deference to the actual procedure the model will be shared in two parts: the "initiate" deck which introduces the computer to the problem, and the final version of the "continue" deck.
Also shared are template input decks of both kinds ("initiate" and "continue").
The "initiate" template is meant to be fed to mcnp_pstudy, which in turn produces a proper MCNP input deck. What appears at first to be a one-case parameter study can be reimagined as an input deck, only better: input parameters can be clearly labeled by the user, and the user can adjust incidental details of the input deck (e.g. the value of a parameter).
This second benefit deserves emphasis: small changes in the user's mental model generally are not small changes to the (traditional) input deck. An example is how one adjusts the control rod heights. The naïve approach would be to adjust the surfaces directly - but not before checking which surfaces to adjust, checking that the object one wants to move shares no surfaces with something one does not want to move, and checking which other surfaces must be adjusted to prevent a geometry error. Suppose that adjustment is unsatisfactory, and the user wants to move any of the control rods again. The same tedious work as before must be repeated entirely, lest the user introduce a geometry error. The advantage of an mcnp_pstudy template in this situation is that the user can render a unique surface adjustable once, finding and solving whatever conflicts emerge, and thereafter adjust one parameter value without re-investing time and energy in avoiding conflicts.
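The templating idea can be sketched in miniature. This mimics the benefit described above without claiming mcnp_pstudy's actual syntax; the placeholder convention, surface numbers, and heights below are invented for illustration.

```python
# Sketch of the templating idea: mark each adjustable surface once,
# then regenerate a conflict-free deck from parameter values. The
# placeholder convention here is invented and is NOT mcnp_pstudy
# syntax; surface numbers and heights are hypothetical.

TEMPLATE = """\
c blade 1 height = {h1} cm ; blade 2 height = {h2} cm
11 pz {h1}
12 pz {h2}
"""

def render_deck(h1, h2):
    """Substitute control rod heights into the deck template."""
    return TEMPLATE.format(h1=h1, h2=h2)

print(render_deck(48.5, 50.0))
```

Once the template is conflict-free, moving a rod is a one-parameter change rather than a re-audit of the surface cards.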
Another approach would be to assign a unique transformation to each cell one wishes to move, as has been done in other studies modeling other reactors 21 . This also requires assigning the objects of interest their own universes, and "window" cells to be filled with those universes 21,22 . Experience in this study suggests that such a strategy carries the risk that physically valid input may not successfully run (see "Where have all the neutrons gone?"). Particle loss concerns aside, this approach would also benefit from the usage of mcnp_pstudy templating; one can update comments at the top of the file and then adjust the positions on the next line, without diving into the data block to adjust one data card with a somewhat abstract name. (This also applies to the previous approach.) Whether moving control rods or adjusting material compositions, readers who may wish to extend this study may save themselves much frustration by regarding a re-usable, adjustable template for an input deck as "the model".
The "continue" template is not meant for mcnp_pstudy; instead, the user is advised to copy the template and then edit the copy directly.
ii. Choose between the "c" and "cn" options for the next continue run:
1. Have there been at least two "continue" runs already using "c"?
a. If not, "c" is probably best, because having multiple dumps allows the user to re-start a run from the (n-1)th dump if something is wrong with the nth run.
b. If so, did the last run use "c" or "cn"?
2. Is the user worried that the binary restart file ("runtpe") is nearing a problematic size?
a. Yes: "cn"
b. No: "c"
iii. Invoke another "continue" run.
Consequently, the actual procedure used (for the simulations generating Figure 7 through Figure 26) was:
ii. Choose between keep_going.sh ("c") and go-limited.sh ("cn") for the next continue run:
1. Have there been at least two "continue" runs already using keep_going?
a. If not, keep_going is probably best, because having multiple dumps allows the user to restart a run from the (n-1)th dump if something is wrong with the nth run.
b. If so, did the last run use keep_going or go-limited?
2. Is the user worried that the binary restart file ("runtpe") is nearing a problematic size?
a. Yes: go-limited
b. No: keep_going
iv. Repeat steps 3 and 4 (i.e., plotting relative error and convergence and reacting accordingly).
b. Yes (satisfactory): the user is done running simulations.
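The decision tree above reduces to a small function. This is a simplification: it collapses the "which did the last run use" branch, which the procedure leaves unresolved, and the function name and inputs are hypothetical.

```python
# Simplified sketch of choosing between a full continue run ("c",
# keep_going.sh) and a limited one ("cn", go-limited.sh). The
# "which option did the last run use" branch is collapsed; names
# and inputs are hypothetical.

def next_continue_option(runs_with_c, runtpe_too_big):
    """Return "c" or "cn" for the next continue run."""
    if runs_with_c < 2:
        # favor "c" early: multiple dumps allow restarting from the
        # (n-1)th dump if something is wrong with the nth run
        return "c"
    return "cn" if runtpe_too_big else "c"

# Early in a campaign, accumulate restart dumps with "c"; once the
# runtpe grows problematic, switch to "cn".
choice = next_continue_option(3, True)
```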
The scripts mentioned above are shared below.
More plots for relative error for thermal fluence.
Figure 48: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for thermal energies (relative error for thermal fluence per source particle history) for the RINSC reactor.
Figure 49: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for thermal energies (relative error for thermal fluence per source particle history) for the RINSC reactor.
Figure 50: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for thermal energies (relative error for thermal fluence per source particle history) for the RINSC reactor.
Figure 51: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for thermal energies (relative error for thermal fluence per source particle history) for the RINSC reactor. This plot shows column D.
More plots for relative error for epithermal fluence.
Figure 52: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for epithermal energies (relative error for epithermal fluence per source particle history) for the RINSC reactor.
Figure 54: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for epithermal energies (relative error for epithermal fluence per source particle history) for the RINSC reactor. This plot shows column B.
More plots for relative error for fast fluence.
Figure 56: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for fast energies (relative error for fast fluence per source particle history) for the RINSC reactor.
Figure 57: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for fast energies (relative error for fast fluence per source particle history) for the RINSC reactor. This plot shows column F.
Figure 58: Relative error as a decimal fraction for the MCNP neutron flux type "B" mesh tally for fast energies (relative error for fast fluence per source particle history) for the RINSC reactor. This plot shows row 9.