21cmFAST¶
A semi-numerical cosmological simulation code for the radio 21cm signal.
This is the official repository for 21cmFAST. As of v3.0.0, it is conveniently wrapped in Python to enable more dynamic code.
New Features in 3.0.0+¶
Robust on-disk caching/writing both for efficiency and simplified reading of previously processed data (using HDF5).
Convenient data objects which simplify access to and processing of the various density and ionization fields.
De-coupled functions mean that arbitrary functionality can be injected into the process.
Improved exception handling and debugging
Comprehensive documentation
Comprehensive test suite.
Strict semantic versioning.
Documentation¶
Full documentation (with examples, installation instructions and a full API reference) can be found at https://21cmfast.readthedocs.org.
Acknowledging¶
If you find 21cmFAST useful in your research please cite at least one of the following (whichever is most suitable to you):
Andrei Mesinger and Steven Furlanetto, “Efficient Simulations of Early Structure Formation and Reionization”, The Astrophysical Journal, Volume 669, Issue 2, pp. 663-675 (2007), https://ui.adsabs.harvard.edu/link_gateway/2007ApJ...669..663M/doi:10.1086/521806
Andrei Mesinger, Steven Furlanetto and Renyue Cen, “21CMFAST: a fast, seminumerical simulation of the high-redshift 21-cm signal”, Monthly Notices of the Royal Astronomical Society, Volume 411, Issue 2, pp. 955-972 (2011), https://ui.adsabs.harvard.edu/link_gateway/2011MNRAS.411..955M/doi:10.1111/j.1365-2966.2010.17731.x
Contents¶
Installation¶
Dependencies¶
We try to have as many of the dependencies automatically installed as possible. However, since 21cmFAST relies on some C libraries, this is not always possible.
The C libraries required are:
gsl
fftw (compiled with floating-point enabled, and --enable-shared)
openmp
A C compiler that supports the -fopenmp flag.
As it turns out, though these are fairly common libraries, getting them installed in a way that 21cmFAST understands on various operating systems can be slightly non-trivial.
Linux¶
Most linux distros come with packages for the requirements, and also gcc by default, which supports -fopenmp. As long as these packages install into the standard location, a standard installation of 21cmFAST will work (see below). If they are installed to a place not on the LD_LIBRARY/INCLUDE paths, then you must use the compilation options (see below) to specify where they are.
Note
There is also the option of installing gsl, fftw and gcc using conda. This is discussed below in the context of MacOSX, where it is often the easiest way to get the dependencies, but it is equally applicable to linux.
MacOSX¶
On MacOSX, obtaining gsl and fftw is typically more difficult, and in addition, the newer native clang does not offer -fopenmp support.
For conda users (which we recommend), the easiest way to get gsl and fftw is to run conda install -c conda-forge gsl fftw in your environment.
Note
If you use conda to install gsl and fftw, then you will need to point at their location when installing 21cmFAST (see compiler options below for details). In this case, the installation command should simply be prepended with:
LIB=/path/to/conda/env/lib INC=/path/to/conda/env/include
To get gcc, either use homebrew, or again, conda: conda install -c anaconda gcc. If you get the conda version, you still need to install the headers:
xcode-select --install
On older MacOSX versions, you need to do:
open /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_<input version>.pkg
For newer versions, you may need to prepend the following command to your pip install command when installing 21cmFAST (see later instructions):
CFLAGS="-isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX<input version>.sdk"
See faqs/installation_faq for more detailed questions on installation. If you are on MacOSX and are having trouble with installation (or would like to share a successful installation strategy!) please see the open issue.
With the dependencies installed, follow the instructions below, depending on whether you are a user or a developer.
For Users¶
Note
conda users may want to pre-install the following packages before running the below installation commands:
conda install numpy scipy click pyyaml cffi astropy h5py
Then, at the command line:
pip install git+git://github.com/21cmFAST/21cmFAST.git
If developing, from the top-level directory do:
pip install -e .
Note the compile options discussed below!
For Developers¶
If you are developing 21cmFAST, we highly recommend using conda to manage an isolated environment. In that case, setting up a full environment (with all testing and documentation dependencies) should be as easy as (from the top-level dir):
conda env create -f environment_dev.yml
Otherwise, if you are using pip:
pip install -e .
pip install -r requirements_dev.txt
And if you would like to also compile documentation:
pip install -r docs/requirements.txt
Compile Options¶
Various options exist to manage compilation via environment variables. Basically, any variable with “INC” in its name will add to the directories searched for includes, while any variable with “LIB” in its name will add to the directories searched for libraries. To change the C compiler, use CC. Finally, if you want to compile the C-library in dev mode (so you can do stuff like valgrind and gdb with it), install with DEBUG=True. So for example:
CC=/usr/bin/gcc DEBUG=True GSL_LIB=/opt/local/lib FFTW_INC=/usr/local/include pip install -e .
In addition, the BOXDIR variable specifies the default directory in which any data produced by 21cmFAST will be cached. This value can be updated at any time by changing it in the $CFGDIR/config.yml file, and can be overridden on a per-call basis.
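For example, to send all cached data to a scratch disk by default (a sketch; substitute your own path), the install command could be prefixed as:
BOXDIR=/path/to/scratch/21cmFAST-cache pip install -e .
and individual calls can still override this location via the direc argument shown in the tutorials below.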
While the -e option will keep your library up-to-date with any (Python) changes, this will not work when changing the C extension. If the C code changes, you need to manually run rm -rf build/* and then re-install as above.
Logging in C-Code¶
By default, the C-code will only print to stderr when it encounters warnings or critical errors. However, there exist several levels of logging output that can be switched on, but only at compilation time. To enable these, use the following:
LOG_LEVEL=<log_level> pip install -e .
The <log_level> can be any non-negative integer, or one of the following (case-insensitive) identifiers:
NONE, ERROR, WARNING, INFO, DEBUG, SUPER_DEBUG, ULTRA_DEBUG
If an integer is passed, it corresponds to the above levels in order (starting from zero). Be careful if the level is set to 0 (or NONE), as useful error and warning messages will not be printed. By default, the log level is 2 (or WARNING), unless the DEBUG=1 environment variable is set, in which case the default is 4 (or DEBUG). Using very high levels (eg. ULTRA_DEBUG) can print out a lot of information and make the run time much longer, but may be useful in some specific cases.
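For example, to enable the standard debugging output from the C code, one could install with:
LOG_LEVEL=DEBUG pip install -e .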
Design Philosophy and Features for v3+¶
Here we describe in broad terms the design philosophy of the new 21cmFAST, and some of its new features. This is useful to get an initial bearing of how to go about using 21cmFAST, though most likely the tutorials will be better for that. It is also useful for those who have used the “old” 21cmFAST (versions 2.1 and below) and want to know why they should use this new version (and how to convert). In doing so, we’ll go over some of the key features of 21cmFAST v3+.
To get a more in-depth view of all the options and features available, look at the very thorough API Reference.
Design Philosophy¶
The goal of v3 of 21cmFAST is to provide the same computational efficiency and scientific features of the previous generations, but packaged in a form that adopts the best modern programming standards, including:
simple installation
comprehensive documentation
comprehensive test suite
more modular code
standardised code formatting
truly open-source and collaborative design (via Github)
Partly to enable these standards, and partly due to the many extra benefits it brings, v3 also has the major distinction of being wrapped entirely in Python. The extra benefits brought by this include:
a native python library interface (eg. get your output box directly as a numpy array)
better file-writing, into the HDF5 format, which saves metadata along with the box data
a caching system so that the same data never has to be calculated twice
reproducibility: know which exact version of 21cmFAST, with what parameters, produced a given dataset
significantly improved warnings and error checking/propagation
simplicity for adding new additional effects, and inserting them in the calculation pipeline
We hope that additional features and benefits will be found by the growing community of 21cmFAST developers and users.
How it Works¶
v3 is not a complete rewrite of 21cmFAST. Most of the C-code of previous versions is kept, though it has been modularised and modified in many places. The fundamental routines are the same (barring bugfixes!).
The major programs of the original version (init, perturb, ionize etc.) have been converted into modular functions in C. Furthermore, most of the global parameters (and, more often than not, global #define options) have been modularised and converted into a series of input “parameter” structs. These get passed into the functions. Furthermore, each C function, instead of writing a bunch of files, returns an output struct containing all the stuff it computed.
Each of these functions and structs is wrapped in Python using the cffi package. CFFI compiles the C code once upon installation. Because parameters are now passed around to the different functions, rather than being global defines, we no longer need to re-compile every time an option is changed. Python itself can handle changing the parameters, and can use the outputs in whatever way the user desires.
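As a rough sketch of what this looks like from the Python side (the full workflow is covered in the tutorials below), an input struct is built as an ordinary Python object and handed to a compute function, which returns an output struct carrying both the data and the parameters it was made with:
import py21cmfast as p21c

user_params = p21c.UserParams(HII_DIM=50, BOX_LEN=100)    # an input "parameter" struct
init = p21c.initial_conditions(user_params=user_params)   # returns an output struct
print(init.user_params)                                   # the inputs travel with the output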
To maintain continuity with previous versions, a CLI interface is provided (see below) that acts in a similar fashion to previous versions.
When 21cmFAST is installed, it automatically creates a configuration directory in the user’s home: ~/.21cmfast. This houses a number of important configuration options, usually default values of parameters. At this stage, the location of this directory is not itself configurable. The config directory contains example configuration files for the CLI interface (see below), which can also be copied anywhere on disk and modified. Importantly, the config.yml file in this directory specifies some of the more important options for how 21cmFAST behaves by default.
One such option is boxdir, which specifies the directory in which 21cmFAST will cache results (see below for details). Finally, the config directory houses several data tables which are used to accelerate several calculations. In principle these files are over-writeable, but they should only be touched if you know exactly what you are doing.
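As an illustration only (check the file created in your own home directory for the exact keys, which may differ between versions), the config.yml might contain something like:
boxdir: /path/to/your/cache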
Finally, 21cmFAST contains a more robust cataloguing/caching method. Instead of saving data with a selection of the dependent parameters written into the filename – a method which is prone to error if a parameter which is not part of that selection is modified – 21cmFAST writes all data into a configurable central directory with a hash filename unique to all parameters upon which the data depends. Each kind of dataset has attached methods which efficiently search this central directory for matching data to be read when necessary.
Several arguments are available for all library functions which produce such datasets that control this output. In this way, the data that is being retrieved is always reliably produced with the desired parameters, and users need not concern themselves with how and where the data is saved – it can be retrieved merely by creating an empty object with the desired parameters and calling .read(), or even better, by calling the function to produce the given dataset, which will by default just read it in if available.
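As a minimal sketch of this behaviour (the same pattern is demonstrated at length in the caching section of the tutorial below):
import py21cmfast as p21c

# The first call computes the box and writes it to the cache...
ics = p21c.initial_conditions(user_params={"HII_DIM": 50, "BOX_LEN": 100}, random_seed=1)
# ...while an identical second call simply reads the cached box back in.
ics_again = p21c.initial_conditions(user_params={"HII_DIM": 50, "BOX_LEN": 100}, random_seed=1)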
CLI¶
The CLI interface always starts with the command 21cmfast, and has a number of subcommands. To list the available subcommands, use:
$ 21cmfast --help
To get help on any subcommand, simply use:
$ 21cmfast <subcommand> --help
Any subcommand which runs some aspect of 21cmFAST will have a --config option, which specifies a configuration file (by default this is ~/.21cmfast/runconfig_example.yml). This config file specifies the parameters of the run. Furthermore, any particular parameter that can be specified in the config file can alternatively be specified on the command line by appending the command with the parameter name, eg.:
$ 21cmfast init --config=my_config.yml --HII_DIM=40 hlittle=0.7 --DIM 100 SIGMA_8 0.9
The above command shows the numerous ways in which these parameters can be specified (with or without leading dashes, and with or without “=”).
The CLI interface, while simple to use, does have the limitation that the more complex arguments accepted by the library functions cannot be passed. For example, one cannot pass a previously calculated initial conditions box to the perturb command. However, if such a box has been calculated with the default option to write it to disk, then it will automatically be found and used in such a situation, i.e. the following will not re-calculate the init box:
$ 21cmfast init
$ 21cmfast perturb redshift=8.0
This means that almost all of the functionality provided in the library is accessible via the CLI.
Tutorials and FAQs¶
The following introductory tutorials will help you get started with 21cmFAST:
Running and Plotting Coeval Cubes¶
The aim of this tutorial is to introduce you to how 21cmFAST does the most basic operations: producing single coeval cubes, and visually verifying them. It is a great place to get started with 21cmFAST.
[1]:
%matplotlib inline
import matplotlib.pyplot as plt
# We change the default level of the logger so that
# we can see what's happening with caching.
import logging, sys, os
logger = logging.getLogger('21cmFAST')
logger.setLevel(logging.INFO)
import py21cmfast as p21c
# For plotting the cubes, we use the plotting submodule:
from py21cmfast import plotting
# For interacting with the cache
from py21cmfast import cache_tools
[2]:
print(f"Using 21cmFAST version {p21c.__version__}")
Using 21cmFAST version 3.0.0.dev2
Clear the cache so that we get the same results for the notebook every time (don’t worry about this for now). Also, set the default output directory to _cache/:
[3]:
p21c.config['direc'] = '_cache'
cache_tools.clear_cache(direc="_cache")
2020-02-29 15:10:11,068 | INFO | Removing PerturbedField_d3184aab64bde877b82cf02bb2993cd0_r12345.h5
2020-02-29 15:10:11,072 | INFO | Removing InitialConditions_6f0eb48c62c36acef23416d5d0fbcf3b_r12345.h5
2020-02-29 15:10:11,102 | INFO | Removing BrightnessTemp_e1bc72a4dbe403d8285dc7d5d9296033_r12345.h5
2020-02-29 15:10:11,106 | INFO | Removing PerturbedField_a26a39863127d77625aa3cec5ea97941_r12345.h5
2020-02-29 15:10:11,111 | INFO | Removing IonizedBox_775ac06d5351d74136a42b5faacdc2d5_r12345.h5
2020-02-29 15:10:11,117 | INFO | Removing BrightnessTemp_9ad800878a70431b47d3ba2b5e6d5fd9_r12345.h5
2020-02-29 15:10:11,297 | INFO | Removing Coeval_z8.0_a3c7dea665420ae9c872ba2fab1b3d7d_r12345.h5
2020-02-29 15:10:11,346 | INFO | Removing IonizedBox_cf2130a24099a2b4b0428560e1653efc_r12345.h5
2020-02-29 15:10:11,352 | INFO | Removing IonizedBox_455465ec84d9bd239f55ab7325d3e9ea_r12345.h5
2020-02-29 15:10:11,357 | INFO | Removing PerturbedField_be9da3b74bf987914d7fa6bae65e5e39_r12345.h5
2020-02-29 15:10:11,362 | INFO | Removing BrightnessTemp_c46f2617c10930bff143cd51e7099f25_r12345.h5
2020-02-29 15:10:11,365 | INFO | Removed 11 files from cache.
Basic Usage¶
The simplest (and typically most efficient) way to produce a coeval cube is simply to use the run_coeval method. This consistently performs all steps of the calculation, re-using any data that it can without re-computation or increased memory overhead.
[4]:
coeval8, coeval9, coeval10 = p21c.run_coeval(
    redshift = [8.0, 9.0, 10.0],
    user_params = {"HII_DIM": 100, "BOX_LEN": 100},
    cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
    astro_params = p21c.AstroParams({"HII_EFF_FACTOR":20.0}),
    random_seed=12345
)
There are a number of possible inputs for run_coeval, which you can check out either in the API reference or by calling help(p21c.run_coeval). Notably, the redshift must be given: it can be a single number, or a list of numbers, defining the redshift at which the output coeval cubes will be defined.
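For example, passing a single redshift returns a single Coeval object rather than a list (a minimal sketch):
coeval = p21c.run_coeval(
    redshift=8.0,
    user_params={"HII_DIM": 50, "BOX_LEN": 100},
    random_seed=12345,
)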
Other params we’ve given here are user_params, cosmo_params and astro_params. These are all used for defining input parameters into the backend C code (there’s also another possible input of this kind: flag_options). These can be given either as a dictionary (as user_params has been), or directly as a relevant object (like cosmo_params and astro_params). If creating the object directly, the parameters can be passed individually or via a single dictionary. So there’s a lot of flexibility there! Nevertheless we encourage you to use the basic dictionary. The other ways of passing the information are there so we can use pre-defined objects later on. For more information about these “input structs”, see the API docs.
We’ve also given a direc option: this is the directory in which to search for cached data (and also where cached data should be written). Throughout this notebook we’re going to set this directly to the _cache folder, which allows us to manage it directly. By default, the cache location is set in the global configuration in ~/.21cmfast/config.yml. You’ll learn more about caching further on in this tutorial.
Finally, we’ve given a random seed. This sets all the random phases for the simulation, and ensures that we can exactly reproduce the same results on every run.
The output of run_coeval is a list of Coeval instances, one for each input redshift (it’s just a single object if a single redshift was passed, not a list). They store everything related to that simulation, so that it can be completely compared to other simulations.
For example, the input parameters:
[5]:
print("Random Seed: ", coeval8.random_seed)
print("Redshift: ", coeval8.redshift)
print(coeval8.user_params)
Random Seed: 12345
Redshift: 8.0
UserParams(BOX_LEN:100, DIM:300, HII_DIM:100, HMF:1, POWER_SPECTRUM:0, USE_FFTW_WISDOM:False, USE_RELATIVE_VELOCITIES:False)
This is where the utility of being able to pass a class instance for the parameters arises: we could run another iteration of coeval cubes, with the same user parameters, simply by doing p21c.run_coeval(user_params=coeval8.user_params, ...).
Also in the Coeval instance are the various outputs from the different steps of the computation. You’ll see more about what these steps are further on in the tutorial. But for now, we show that various boxes are available:
[6]:
print(coeval8.hires_density.shape)
print(coeval8.brightness_temp.shape)
(300, 300, 300)
(100, 100, 100)
Along with these, full instances of the output from each step are available as attributes that end with “struct”. These instances themselves contain the numpy arrays of the data cubes, and some other attributes that make them easier to work with:
[7]:
coeval8.brightness_temp_struct.global_Tb
[7]:
17.622644
By default, each of the components of the cube are cached to disk (in our _cache/ folder) as we run it. However, the Coeval cube itself is not written to disk by default. Writing it to disk incurs some redundancy, since that data probably already exists in the cache directory in separate files.
Let’s save it to disk. The save method by default writes in the current directory (not the cache!):
[8]:
filename = coeval8.save(direc='_cache')
The filename of the saved file is returned:
[9]:
print(os.path.basename(filename))
Coeval_z8.0_a3c7dea665420ae9c872ba2fab1b3d7d_r12345.h5
Such files can be read in:
[10]:
new_coeval8 = p21c.Coeval.read(filename, direc='.')
Some convenient plotting functions exist in the plotting module. These can work directly on Coeval objects, or any of the output structs (as we’ll see further on in the tutorial). By default the coeval_sliceplot function will plot the brightness_temp, using the standard traditional colormap:
[11]:
fig, ax = plt.subplots(1,3, figsize=(14,4))
for i, (coeval, redshift) in enumerate(zip([coeval8, coeval9, coeval10], [8,9,10])):
    plotting.coeval_sliceplot(coeval, ax=ax[i], fig=fig);
    plt.title("z = %s"%redshift)
plt.tight_layout()

Any 3D field can be plotted, by setting the kind argument. For example, we could alternatively have plotted the dark matter density cubes perturbed to each redshift:
[12]:
fig, ax = plt.subplots(1,3, figsize=(14,4))
for i, (coeval, redshift) in enumerate(zip([coeval8, coeval9, coeval10], [8,9,10])):
    plotting.coeval_sliceplot(coeval, kind='density', ax=ax[i], fig=fig);
    plt.title("z = %s"%redshift)
plt.tight_layout()

To see more options for the plotting routines, see the API Documentation.
Coeval instances are not cached themselves – they are containers for data that is itself cached (i.e. each of the _struct attributes of Coeval). See the API docs for more detailed information on these.
You can see the filename of each of these structs (or the filename it would have if it were cached – you can opt to not write out any given dataset):
[13]:
coeval8.init_struct.filename
[13]:
'InitialConditions_6f0eb48c62c36acef23416d5d0fbcf3b_r12345.h5'
You can also write the struct anywhere you’d like on the filesystem. This will not be able to be automatically used as a cache, but it could be useful for sharing files with colleagues.
[14]:
coeval8.init_struct.save(fname='my_init_struct.h5')
This brief example covers most of the basic usage of 21cmFAST (at least with Coeval objects – there are also Lightcone objects, for which there is a separate tutorial).
For the rest of the tutorial, we’ll cover a more advanced usage, in which each step of the calculation is done independently.
Advanced Step-by-Step Usage¶
Most users most of the time will want to use the high-level run_coeval function from the previous section. However, there are several independent steps when computing the brightness temperature field, and these can be performed one-by-one, adding any other effects between them if desired. This means that the new 21cmFAST is much more flexible. In this section, we’ll go through in more detail how to use the lower-level methods.
Each step in the chain will receive a number of input-parameter classes which define how the calculation should run. These are the user_params, cosmo_params, astro_params and flag_options that we saw in the previous section.
Conversely, each step is performed by running a function which will return a single object. Every major function returns an object of the same fundamental class (an OutputStruct) which has various methods for reading/writing the data, and ensuring that it’s in the right state to receive/pass to and from C. These are the objects stored as init_box_struct etc. in the Coeval class.
As we move through each step, we’ll outline some extra details, hints and tips about using these inputs and outputs.
Initial Conditions¶
The first step is to get the initial conditions, which define the cosmological density field before any redshift evolution is applied.
[15]:
initial_conditions = p21c.initial_conditions(
    user_params = {"HII_DIM": 100, "BOX_LEN": 100},
    cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
    random_seed=54321
)
We’ve already come across all these parameters as inputs to the run_coeval function. Indeed, most of the steps have very similar interfaces, and are able to take a random seed and parameters for where to look for the cache. We use a different seed than in the previous section so that all our boxes are “fresh” (we’ll show how the caching works in a later section).
These initial conditions have 100 cells per side, and a box length of 100 Mpc. Note again that they can either be passed as a dictionary containing the input parameters, or an actual instance of the class. While the former is the suggested way, one benefit of the latter is that it can be queried for the relevant parameters (by using help or a post-fixed ?), or even queried for defaults:
[16]:
p21c.CosmoParams._defaults_
[16]:
{'SIGMA_8': 0.8102,
'hlittle': 0.6766,
'OMm': 0.30964144154550644,
'OMb': 0.04897468161869667,
'POWER_INDEX': 0.9665}
(these defaults correspond to the Planck15 cosmology contained in Astropy).
So what is in the initial_conditions object? It is what we call an OutputStruct, and we have seen it before, as the init_box_struct attribute of Coeval. It contains a number of arrays specifying the density and velocity fields of our initial conditions, as well as the defining parameters. For example, we can easily show the cosmology parameters that are used (note the non-default \(\sigma_8\) that we passed):
[17]:
initial_conditions.cosmo_params
[17]:
CosmoParams(OMb:0.04897468161869667, OMm:0.30964144154550644, POWER_INDEX:0.9665, SIGMA_8:0.8, hlittle:0.6766)
A handy tip is that the CosmoParams class also has a reference to a corresponding Astropy cosmology, which can be used more broadly:
[18]:
initial_conditions.cosmo_params.cosmo
[18]:
FlatLambdaCDM(name="Planck15", H0=67.7 km / (Mpc s), Om0=0.31, Tcmb0=2.725 K, Neff=3.05, m_nu=[0. 0. 0.06] eV, Ob0=0.049)
Merely printing the initial conditions object gives a useful representation of its dependent parameters:
[19]:
print(initial_conditions)
InitialConditions(UserParams(BOX_LEN:100, DIM:300, HII_DIM:100, HMF:1, POWER_SPECTRUM:0, USE_FFTW_WISDOM:False, USE_RELATIVE_VELOCITIES:False);
CosmoParams(OMb:0.04897468161869667, OMm:0.30964144154550644, POWER_INDEX:0.9665, SIGMA_8:0.8, hlittle:0.6766);
random_seed:54321)
(side-note: the string representation of the object is used to uniquely define it in order to save it to the cache… which we’ll explore soon!).
To see which arrays are defined in the object, access the fieldnames (this is true for all OutputStruct objects):
[20]:
initial_conditions.fieldnames
[20]:
['lowres_density',
'lowres_vx',
'lowres_vy',
'lowres_vz',
'lowres_vx_2LPT',
'lowres_vy_2LPT',
'lowres_vz_2LPT',
'hires_density',
'lowres_vcb',
'hires_vcb']
The coeval_sliceplot function also works on OutputStruct objects (as well as the Coeval object, as we’ve already seen). It takes the object, and a specific field name. By default, the field it plots is the first field in fieldnames (for any OutputStruct).
[21]:
plotting.coeval_sliceplot(initial_conditions, "hires_density");

Perturbed Field¶
After obtaining the initial conditions, we need to perturb the field to a given redshift (i.e. the redshift we care about). This step clearly requires the results of the previous step, which we can easily just pass in. Let’s do that:
[22]:
perturbed_field = p21c.perturb_field(
    redshift = 8.0,
    init_boxes = initial_conditions
)
Note that we didn’t need to pass in any input parameters, because they are all contained in the initial_conditions object itself. The random seed is also taken from this object.
Again, the output is an OutputStruct, so we can view its fields:
[23]:
perturbed_field.fieldnames
[23]:
['density', 'velocity']
This time, it has only density and velocity (the velocity direction is chosen without loss of generality). Let’s view the perturbed density field:
[24]:
plotting.coeval_sliceplot(perturbed_field, "density");

It is clear here that the density used is the low-res density, but the overall structure of the field looks very similar.
Ionization Field¶
Next, we need to ionize the box. This is where things get a little more tricky. In the simplest case (which, let’s be clear, is what we’re going to do here) the ionization occurs at the saturated limit, which means we can safely ignore the contribution of the spin temperature. This means we can directly calculate the ionization on the density/velocity fields that we already have. A few more parameters are needed here, and so two more “input parameter dictionaries” are available, astro_params and flag_options. Again, a reminder that their parameters can be viewed by using eg. help(p21c.AstroParams), or by looking at the API docs.
For now, let’s leave everything as default. In that case, we can just do:
[25]:
ionized_field = p21c.ionize_box(
    perturbed_field = perturbed_field
)
2020-02-29 15:10:43,902 | INFO | Existing init_boxes found and read in (seed=54321).
That was easy! All the information required by ionize_box was given directly by the perturbed_field object. If we had also passed a redshift explicitly, this redshift would be checked against that from the perturbed_field and an error raised if they were incompatible.
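For example, the following is equivalent to the call above, since the explicit redshift matches the one stored in perturbed_field (a sketch; a mismatched value would raise an error):
ionized_field = p21c.ionize_box(
    redshift=8.0,
    perturbed_field=perturbed_field
)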
Let’s see the fieldnames:
[26]:
ionized_field.fieldnames
[26]:
['first_box', 'xH_box', 'Gamma12_box', 'z_re_box', 'dNrec_box']
Here the first_box field is actually just a flag to tell the C code whether this box has been evolved or not. Here, it hasn’t been; it’s the “first box” of an evolutionary chain. Let’s plot the neutral fraction:
[27]:
plotting.coeval_sliceplot(ionized_field, "xH_box");

Brightness Temperature¶
Now we can use what we have to get the brightness temperature:
[28]:
brightness_temp = p21c.brightness_temperature(ionized_box=ionized_field, perturbed_field=perturbed_field)
This has only a single field, brightness_temp:
[29]:
plotting.coeval_sliceplot(brightness_temp);

The Problem¶
And there you have it – you’ve computed each of the four steps (there’s actually another, spin_temperature, that you require if you don’t assume the saturated limit) individually.
However, some problems quickly arise. What if you want the perturb_field, but don’t care about the initial conditions? We know how to get the full Coeval object in one go, but it would seem that the sub-boxes have to each be computed as the input to the next.
A perhaps more interesting problem is that some quantities require evolution: i.e. a whole bunch of simulations at a string of redshifts must be performed in order to obtain the current redshift. This is true when not in the saturated limit, for example. That means you’d have to manually compute each redshift in turn, and pass it to the computation at the next redshift. While this is definitely possible, it becomes difficult to set up manually when all you care about is the box at the final redshift.
py21cmfast solves this by making each of the functions recursive: if perturb_field is not passed the init_boxes that it needs, it will go and compute them, based on the parameters that you’ve passed it. If the previous spin_temp box required for the current redshift is not passed, it will be computed (and if it doesn’t have a previous spin_temp, that will be computed too, and so on).
That’s all good, but what if you now want to compute another perturb_field, with the same fundamental parameters (but at a different redshift)? Since you didn’t ever see the init_boxes, they’ll have to be computed all over again. That’s where the automatic caching comes in, which is where we turn now…
Using the Automatic Cache¶
To solve all this, 21cmFAST uses an on-disk caching mechanism, where all boxes are saved in HDF5 format in a default location. The cache allows previously-calculated boxes to be read in automatically if they match the parameters that are input. The functions used at every step (in the previous section) will try to use a cached box instead of calculating a new one, unless explicitly asked not to.
Thus, we could do this:
[30]:
perturbed_field = p21c.perturb_field(
    redshift = 8.0,
    user_params = {"HII_DIM": 100, "BOX_LEN": 100},
    cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");
2020-02-29 15:10:45,367 | INFO | Existing z=8.0 perturb_field boxes found and read in (seed=12345).

Note that here we pass exactly the same parameters as were used in the previous section. It gives a message that the full box was found in the cache and immediately returns. However, if we change the redshift:
[31]:
perturbed_field = p21c.perturb_field(
    redshift = 7.0,
    user_params = {"HII_DIM": 100, "BOX_LEN": 100},
    cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");
2020-02-29 15:10:45,748 | INFO | Existing init_boxes found and read in (seed=12345).

Now it finds the initial conditions, but it must compute the perturbed field at the new redshift. If we had changed the initial parameters as well, it would have to calculate everything:
[32]:
perturbed_field = p21c.perturb_field(
    redshift = 8.0,
    user_params = {"HII_DIM": 50, "BOX_LEN": 100},
    cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");

This shows that we don’t need to perform the previous step in order to do any of the later steps; anything missing will be calculated automatically.
Now, let’s get an ionized box, but this time we won’t assume the saturated limit, so we need to use the spin temperature. We can do this directly in the ionize_box function, but let’s do it explicitly. We will use the auto-generation of the initial conditions and perturbed field. However, the spin temperature is an evolved field, i.e. to compute the field at \(z\), we need to know the field at \(z+\Delta z\). This continues up to some redshift, labelled z_heat_max, above which the spin temperature can be defined directly from the perturbed field.
Thus, one option is to pass to the function a previous spin temperature box, to evolve to this redshift. However, we don’t have a previous spin temperature box yet. Of course, the function itself will go and calculate that box if it’s not given (or read it from cache if it’s been calculated before!). When it tries to do that, it will go to the one before, and so on, until it reaches z_heat_max, at which point it will calculate it directly.
To facilitate this recursive progression up the redshift ladder, there is a parameter, zprime_step_factor, which is a multiplicative factor that determines the previous redshift at each step.
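For example, with zprime_step_factor=1.05 the redshift one step above \(z=8.0\) is \((1+8.0)\times 1.05 - 1 = 8.45\), then \(8.9225\), and so on (assuming this simple multiplicative stepping of \(1+z\)); this is exactly the ladder of redshifts you will see being read from the cache in the log output below.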
We can also pass the dependent boxes explicitly, which provides the parameters necessary.
WARNING: THIS IS THE MOST TIME-CONSUMING STEP OF THE CALCULATION!
[34]:
spin_temp = p21c.spin_temperature(
    perturbed_field = perturbed_field,
    zprime_step_factor=1.05,
)
2020-02-29 15:11:38,347 | INFO | Existing init_boxes found and read in (seed=521414794440).
[35]:
plotting.coeval_sliceplot(spin_temp, "Ts_box");

Let’s note here that each of the functions accepts a few of the same arguments that modify how the boxes are cached. There is a write argument which, if set to False, will disable writing that box to cache (and it is passed through the recursive hierarchy). There is also regenerate which, if True, forces this box and all its predecessors to be re-calculated even if they exist in the cache. Then there is direc, which we have seen before.
Finally, note that by default, random_seed is set to None. If this is the case, then any cached dataset matching all other parameters will be read in, and the random_seed will be set based on the file read in. If it is set to an integer number, then the cached dataset must also match the seed. If it is None, and no matching dataset is found, a random seed will be autogenerated.
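For example (a sketch), requiring a specific seed means that only cached boxes produced with that seed will be re-used:
spin_temp = p21c.spin_temperature(
    perturbed_field=perturbed_field,
    random_seed=12345,  # any cached box that is read in must have been made with this seed
)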
Now if we calculate the ionized box, ensuring that it uses the spin temperature, then it will also need to be evolved. However, due to the fact that we cached each of the spin temperature steps, these should be read in accordingly:
[36]:
ionized_box = p21c.ionize_box(
    spin_temp = spin_temp,
    zprime_step_factor=1.05,
)
2020-02-29 15:12:55,794 | INFO | Existing init_boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,814 | INFO | Existing z=34.2811622461279 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,827 | INFO | Existing z=34.2811622461279 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,865 | INFO | Existing z=32.60110690107419 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,880 | INFO | Existing z=32.60110690107419 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,906 | INFO | Existing z=31.00105419149923 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,919 | INFO | Existing z=31.00105419149923 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,948 | INFO | Existing z=29.4771944680945 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,963 | INFO | Existing z=29.4771944680945 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,991 | INFO | Existing z=28.02589949342333 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,005 | INFO | Existing z=28.02589949342333 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,033 | INFO | Existing z=26.643713803260315 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,051 | INFO | Existing z=26.643713803260315 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,079 | INFO | Existing z=25.32734647929554 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,094 | INFO | Existing z=25.32734647929554 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,127 | INFO | Existing z=24.073663313614798 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,141 | INFO | Existing z=24.073663313614798 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,168 | INFO | Existing z=22.879679346299806 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,182 | INFO | Existing z=22.879679346299806 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,205 | INFO | Existing z=21.742551758380767 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,219 | INFO | Existing z=21.742551758380767 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,403 | INFO | Existing z=20.659573103219778 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,418 | INFO | Existing z=20.659573103219778 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,620 | INFO | Existing z=19.62816486020931 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,635 | INFO | Existing z=19.62816486020931 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,784 | INFO | Existing z=18.645871295437438 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,793 | INFO | Existing z=18.645871295437438 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,931 | INFO | Existing z=17.71035361470232 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,941 | INFO | Existing z=17.71035361470232 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,085 | INFO | Existing z=16.81938439495459 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,095 | INFO | Existing z=16.81938439495459 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,243 | INFO | Existing z=15.970842280909132 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,254 | INFO | Existing z=15.970842280909132 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,399 | INFO | Existing z=15.162706934199171 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,408 | INFO | Existing z=15.162706934199171 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,544 | INFO | Existing z=14.393054223046828 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,554 | INFO | Existing z=14.393054223046828 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,691 | INFO | Existing z=13.66005164099698 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,700 | INFO | Existing z=13.66005164099698 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,832 | INFO | Existing z=12.961953943806646 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,840 | INFO | Existing z=12.961953943806646 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,970 | INFO | Existing z=12.297098994101567 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,978 | INFO | Existing z=12.297098994101567 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,106 | INFO | Existing z=11.663903803906255 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,114 | INFO | Existing z=11.663903803906255 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,244 | INFO | Existing z=11.060860765625003 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,254 | INFO | Existing z=11.060860765625003 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,394 | INFO | Existing z=10.486534062500002 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,402 | INFO | Existing z=10.486534062500002 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,529 | INFO | Existing z=9.939556250000003 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,538 | INFO | Existing z=9.939556250000003 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,674 | INFO | Existing z=9.418625000000002 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,682 | INFO | Existing z=9.418625000000002 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,810 | INFO | Existing z=8.922500000000001 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,819 | INFO | Existing z=8.922500000000001 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,947 | INFO | Existing z=8.450000000000001 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,956 | INFO | Existing z=8.450000000000001 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:59,086 | INFO | Existing z=8.0 perturb_field boxes found and read in (seed=521414794440).
[37]:
plotting.coeval_sliceplot(ionized_box, "xH_box");

Great! So again, we can just get the brightness temp:
[38]:
brightness_temp = p21c.brightness_temperature(
    ionized_box = ionized_box,
    perturbed_field = perturbed_field,
    spin_temp = spin_temp
)
Now let’s plot our brightness temperature, which has been evolved from high redshift with spin temperature fluctuations:
[39]:
plotting.coeval_sliceplot(brightness_temp);

We can also check what the result would have been if we had limited the maximum redshift of heating. Note that this recalculates all previous spin temperature and ionized boxes, because they depend on both z_heat_max and zprime_step_factor.
[40]:
ionized_box = p21c.ionize_box(
    spin_temp = spin_temp,
    zprime_step_factor=1.05,
    z_heat_max = 20.0
)
brightness_temp = p21c.brightness_temperature(
    ionized_box = ionized_box,
    perturbed_field = perturbed_field,
    spin_temp = spin_temp
)
plotting.coeval_sliceplot(brightness_temp);
2020-02-29 15:13:08,824 | INFO | Existing init_boxes found and read in (seed=521414794440).
2020-02-29 15:13:08,840 | INFO | Existing z=19.62816486020931 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:11,438 | INFO | Existing z=18.645871295437438 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:11,447 | INFO | Existing z=19.62816486020931 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:14,041 | INFO | Existing z=17.71035361470232 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:14,050 | INFO | Existing z=18.645871295437438 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:16,667 | INFO | Existing z=16.81938439495459 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:16,675 | INFO | Existing z=17.71035361470232 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:19,213 | INFO | Existing z=15.970842280909132 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:19,222 | INFO | Existing z=16.81938439495459 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:21,756 | INFO | Existing z=15.162706934199171 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:21,764 | INFO | Existing z=15.970842280909132 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:24,409 | INFO | Existing z=14.393054223046828 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:24,417 | INFO | Existing z=15.162706934199171 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:26,938 | INFO | Existing z=13.66005164099698 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:26,947 | INFO | Existing z=14.393054223046828 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:29,504 | INFO | Existing z=12.961953943806646 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:29,517 | INFO | Existing z=13.66005164099698 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:32,163 | INFO | Existing z=12.297098994101567 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:32,171 | INFO | Existing z=12.961953943806646 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:34,704 | INFO | Existing z=11.663903803906255 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:34,712 | INFO | Existing z=12.297098994101567 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:37,257 | INFO | Existing z=11.060860765625003 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:37,266 | INFO | Existing z=11.663903803906255 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:39,809 | INFO | Existing z=10.486534062500002 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:39,817 | INFO | Existing z=11.060860765625003 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:42,378 | INFO | Existing z=9.939556250000003 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:42,387 | INFO | Existing z=10.486534062500002 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:44,941 | INFO | Existing z=9.418625000000002 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:44,950 | INFO | Existing z=9.939556250000003 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:47,518 | INFO | Existing z=8.922500000000001 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:47,528 | INFO | Existing z=9.418625000000002 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:50,077 | INFO | Existing z=8.450000000000001 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:50,086 | INFO | Existing z=8.922500000000001 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:52,626 | INFO | Existing z=8.0 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:52,762 | INFO | Existing brightness_temp box found and read in (seed=521414794440).

As we can see, it’s very similar!
Running and Plotting LightCones¶
This tutorial follows on from the coeval cube tutorial, and provides an introduction to creating lightcones with 21cmFAST. If you are new to 21cmFAST you should go through the coeval cube tutorial first.
There are two ways of creating lightcones in 21cmFAST: manual and automatic. The manual way involves evolving a coeval simulation through redshift and saving slices of it into a lightcone array. The advantage of this method is that one can precisely choose the redshift nodes to simulate and decide on interpolation methods. However, in this tutorial, we will focus on the single function that is included to do this for you: run_lightcone.
The function takes a few different arguments, most of which will be familiar to you if you’ve gone through the coeval tutorial. All simulation parameters can be passed (i.e. user_params, cosmo_params, flag_options and astro_params). As an alternative to the first two, an InitialConditions and/or PerturbedField box can be passed.
Furthermore, the evolution can be managed with the zprime_step_factor and z_heat_max arguments.
Finally, the minimum redshift of the lightcone is set by the redshift argument, and the maximum redshift of the lightcone is defined by the max_redshift argument (note that this is not the maximum redshift evaluated, which is controlled by z_heat_max, merely the maximum saved into the returned lightcone).
You can specify which 3D quantities are interpolated as lightcones, and which should be saved as global parameters.
Let’s see what it does. We won’t use the spin temperature, just to get a simple toy model:
[1]:
import py21cmfast as p21c
from py21cmfast import plotting
import os
print(f"21cmFAST version is {p21c.__version__}")
21cmFAST version is 3.0.0.dev2
[2]:
lightcone = p21c.run_lightcone(
    redshift = 7.0,
    max_redshift = 12.0,
    user_params = {"HII_DIM":150, "BOX_LEN": 600},
    lightcone_quantities=("brightness_temp", 'density'),
    global_quantities=("brightness_temp", 'density', 'xH_box'),
    direc='_cache'
)
[3]:
plotting.lightcone_sliceplot(lightcone);

[4]:
plotting.lightcone_sliceplot(lightcone, "density")
[4]:
(<Figure size 432x288 with 2 Axes>,
<matplotlib.axes._subplots.AxesSubplot at 0x7f5e93aaa4d0>)

Simple!
You can also save lightcones:
[5]:
filename = lightcone.save(direc='_cache')
[6]:
print(os.path.basename(filename))
LightCone_z7.0_da45d92043dfdc0c14f34f3ded434358_r287478667967.h5
[1]:
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import py21cmfast as p21c
#import logging
#logger = logging.getLogger("21cmFAST")
#logger.setLevel(logging.INFO)
random_seed = 1993
EoR_colour = matplotlib.colors.LinearSegmentedColormap.from_list('mycmap',\
[(0, 'white'),(0.33, 'yellow'),(0.5, 'orange'),(0.68, 'red'),\
(0.83333, 'black'),(0.9, 'blue'),(1, 'cyan')])
plt.register_cmap(cmap=EoR_colour)
This result was obtained using 21cmFAST at commit 2bb4807c7ef1a41649188a3efc462084f2e3b2e0
Fiducial and lightcones¶
Let’s fix the initial condition for this tutorial.
[2]:
output_dir = '/home/yqin/aida/hybrid/mini-halos/'
HII_DIM = 128
BOX_LEN = 250
# USE_FFTW_WISDOM makes the FFTs faster
user_params = {"HII_DIM":HII_DIM, "BOX_LEN": BOX_LEN, "USE_FFTW_WISDOM": True}
initial_conditions = p21c.initial_conditions(user_params=user_params,random_seed=random_seed, direc=output_dir)
Let’s run a ‘fiducial’ model and see its lightcones.
Note that the reference model has
pow(10, "F_STAR7_MINI") = pow(10, "F_STAR10") / pow(1000,ALPHA_STAR) * 10 # 10 times enhancement
pow(10, "F_ESC7_MINI" ) = pow(10, "F_ESC10" ) / pow(1000,ALPHA_ESC ) / 10 # 0.1 times enhancement to balance the 10 times enhanced Ngamma
pow(10, "L_X_MINI" ) = pow(10, "L_X")
1 - "F_H2_SHIELD" = 1
[3]:
# the lightcones we want to plot later together with their color maps and min/max
lightcone_quantities = ('brightness_temp','Ts_box','xH_box',"dNrec_box",'z_re_box','Gamma12_box','J_21_LW_box',"density")
cmaps = [EoR_colour,'Reds','magma','magma','magma','cubehelix','cubehelix','viridis']
vmins = [-150, 1e1, 0, 0, 5, 0, 0, -1]
vmaxs = [ 30, 1e3, 1, 2, 9, 1,10, 1]
# set necessary flags for using minihalos and astro parameter
astro_params_fid = {'ALPHA_ESC': 0.0, 'F_ESC10': -1.222, 'F_ESC7_MINI' : -2.222,
'ALPHA_STAR': 0.5,'F_STAR10': -1.25, 'F_STAR7_MINI': -1.75,
'L_X': 40.5, 'L_X_MINI': 40.5, 'NU_X_THRESH': 500.0, 'F_H2_SHIELD': 0.0}
flag_options_fid = {"INHOMO_RECO":True, 'USE_MASS_DEPENDENT_ZETA':True, 'USE_TS_FLUCT':True, 'USE_MINI_HALOS':True}
lightcone_fid = p21c.run_lightcone(
redshift = 5.5,
init_box = initial_conditions,
flag_options = flag_options_fid,
astro_params = astro_params_fid,
lightcone_quantities=lightcone_quantities,
global_quantities=lightcone_quantities,
random_seed = random_seed,
direc = output_dir
)
fig, axs = plt.subplots(len(lightcone_quantities), 1,
                        figsize=(getattr(lightcone_fid, lightcone_quantities[0]).shape[2]*0.01,
                                 getattr(lightcone_fid, lightcone_quantities[0]).shape[1]*0.01*len(lightcone_quantities)))
for ii, lightcone_quantity in enumerate(lightcone_quantities):
    axs[ii].imshow(getattr(lightcone_fid, lightcone_quantity)[1],
                   vmin=vmins[ii], vmax=vmaxs[ii], cmap=cmaps[ii])
    axs[ii].text(1, 0.05, lightcone_quantity, horizontalalignment='right', verticalalignment='bottom',
                 transform=axs[ii].transAxes, color='red', backgroundcolor='white', fontsize=15)
    axs[ii].xaxis.set_tick_params(labelsize=0)
    axs[ii].yaxis.set_tick_params(labelsize=0)
plt.tight_layout()
fig.subplots_adjust(hspace=0.01)

Varying parameters¶
Let’s vary the parameters that describe mini-halos and see the impact on the global signal.
We keep other parameters fixed and vary one of the following by a factor of 0.1, 0.5, 2 and 10:
pow(10, "F_STAR7_MINI")
pow(10, "F_ESC7_MINI")
pow(10, "L_X_MINI")
1 - "F_H2_SHIELD"
We also have a NOmini model where mini-halos are not included
[8]:
# define the colors, linestyles, etc. used in the plots below
linestyles = ['-', '-',':','-.','-.',':']
colors = ['gray','black','#e41a1c','#377eb8','#e41a1c','#377eb8']
lws = [1,3,2,2,2,2]
textss = ['varying '+r'$f_{*,7}^{\rm mol}$',\
'varying '+r'$f_{\rm esc}^{\rm mol}$',\
'varying '+r'$L_{\rm x}^{\rm mol}$',\
'varying '+r'$1-f_{\rm H_2}^{\rm shield}$']
factorss = [[0, 1, 0.1, 0.5, 2, 10],] * len(textss)
labelss = [['NOmini', 'reference', 'x0.1', 'x0.5', 'x2', 'x10'],] * len(textss)
Note that I’ve run these simulations in parallel before this tutorial. With this setup, each took ~6h to finish. Here, “running” them means reading the cached outputs.
global properties¶
[86]:
global_quantities = ('brightness_temp','Ts_box','xH_box',"dNrec_box",'z_re_box','Gamma12_box','J_21_LW_box',"density")
#choose some to plot...
plot_quantities = ('brightness_temp','Ts_box','xH_box',"dNrec_box",'Gamma12_box','J_21_LW_box')
ymins = [-120, 1e1, 0, 0, 0, 0]
ymaxs = [ 30, 1e3, 1, 1, 1,10]
fig, axss = plt.subplots(len(plot_quantities), len(labelss),
                         sharex=True, figsize=(4*len(labelss), 2*len(global_quantities)))
for pp, texts in enumerate(textss):
    labels = labelss[pp]
    factors = factorss[pp]
    axs = axss[:, pp]
    for kk, label in enumerate(labels):
        flag_options = flag_options_fid.copy()
        astro_params = astro_params_fid.copy()
        factor = factors[kk]
        if label == 'NOmini':
            flag_options.update({'USE_MINI_HALOS': False})
        else:
            flag_options.update({'USE_MINI_HALOS': True})
            if pp == 0:
                astro_params.update({'F_STAR7_MINI': astro_params_fid['F_STAR7_MINI']+np.log10(factor)})
            elif pp == 1:
                astro_params.update({'F_ESC7_MINI': astro_params_fid['F_ESC7_MINI']+np.log10(factor)})
            elif pp == 2:
                astro_params.update({'L_X_MINI': astro_params_fid['L_X_MINI']+np.log10(factor)})
            else:
                if factor > 1: continue # can't do negative F_H2_SHIELD
                astro_params.update({'F_H2_SHIELD': 1. - (1. - astro_params_fid['F_H2_SHIELD']) * factor})
        if label == 'reference':
            lightcone = lightcone_fid
        else:
            lightcone = p21c.run_lightcone(
                redshift = 5.5,
                init_box = initial_conditions,
                flag_options = flag_options,
                astro_params = astro_params,
                global_quantities=global_quantities,
                random_seed = random_seed,
                direc = output_dir
            )
        freqs = 1420.4 / (np.array(lightcone.node_redshifts) + 1.)
        for jj, global_quantity in enumerate(plot_quantities):
            axs[jj].plot(freqs, getattr(lightcone, 'global_%s'%global_quantity.replace('_box','')),
                         color=colors[kk], linestyle=linestyles[kk], label=labels[kk], lw=lws[kk])
    axs[0].text(0.01, 0.99, texts, horizontalalignment='left', verticalalignment='top',
                transform=axs[0].transAxes, fontsize=15)
    for jj, global_quantity in enumerate(plot_quantities):
        axs[jj].set_ylim(ymins[jj], ymaxs[jj])
    axs[-1].set_xlabel('Frequency/MHz', fontsize=15)
    axs[-1].xaxis.set_tick_params(labelsize=15)
    axs[0].set_xlim(1420.4 / (35 + 1.), 1420.4 / (5.5 + 1.))
    zlabels = np.array([ 6, 7, 8, 10, 13, 18, 25, 35])
    ax2 = axs[0].twiny()
    ax2.set_xlim(axs[0].get_xlim())
    ax2.set_xticks(1420.4 / (zlabels + 1.))
    ax2.set_xticklabels(zlabels.astype(np.str))
    ax2.set_xlabel("redshift", fontsize=15)
    ax2.xaxis.set_tick_params(labelsize=15)
    ax2.grid(False)
    if pp == 0:
        axs[0].legend(loc='lower right', ncol=2, fontsize=13, fancybox=True, frameon=True)
        for jj, global_quantity in enumerate(plot_quantities):
            axs[jj].set_ylabel('global_%s'%global_quantity.replace('_box',''), fontsize=15)
            axs[jj].yaxis.set_tick_params(labelsize=15)
    else:
        for jj, global_quantity in enumerate(plot_quantities):
            axs[jj].set_ylabel('global_%s'%global_quantity.replace('_box',''), fontsize=0)
            axs[jj].yaxis.set_tick_params(labelsize=0)
plt.tight_layout()
fig.subplots_adjust(hspace=0.0, wspace=0.0)
[86]:
(Figure: global evolution of each plotted quantity for the four parameter variations.)
21-cm power spectra¶
[87]:
# define functions to calculate PS, following py21cmmc
from powerbox.tools import get_power
def compute_power(
box,
length,
n_psbins,
log_bins=True,
ignore_kperp_zero=True,
ignore_kpar_zero=False,
ignore_k_zero=False,
):
# Determine the weighting function required from ignoring k's.
k_weights = np.ones(box.shape, dtype=int)
n0 = k_weights.shape[0]
n1 = k_weights.shape[-1]
if ignore_kperp_zero:
k_weights[n0 // 2, n0 // 2, :] = 0
if ignore_kpar_zero:
k_weights[:, :, n1 // 2] = 0
if ignore_k_zero:
k_weights[n0 // 2, n0 // 2, n1 // 2] = 0
res = get_power(
box,
boxlength=length,
bins=n_psbins,
bin_ave=False,
get_variance=False,
log_bins=log_bins,
k_weights=k_weights,
)
res = list(res)
k = res[1]
if log_bins:
k = np.exp((np.log(k[1:]) + np.log(k[:-1])) / 2)
else:
k = (k[1:] + k[:-1]) / 2
res[1] = k
return res
def powerspectra(brightness_temp, n_psbins=50, nchunks=10, min_k=0.1, max_k=1.0, logk=True):
data = []
chunk_indices = list(range(0,brightness_temp.n_slices,round(brightness_temp.n_slices / nchunks),))
if len(chunk_indices) > nchunks:
chunk_indices = chunk_indices[:-1]
chunk_indices.append(brightness_temp.n_slices)
for i in range(nchunks):
start = chunk_indices[i]
end = chunk_indices[i + 1]
chunklen = (end - start) * brightness_temp.cell_size
power, k = compute_power(
brightness_temp.brightness_temp[:, :, start:end],
(BOX_LEN, BOX_LEN, chunklen),
n_psbins,
log_bins=logk,
)
data.append({"k": k, "delta": power * k ** 3 / (2 * np.pi ** 2)})
return data
[96]:
# compute the power spectra in chunks, but only plot chunks 1-4: the 0th (lowest-redshift) chunk has no power for the minihalo models, where xH=0
nchunks = 4
fig, axss = plt.subplots(nchunks, len(labelss), sharex=True,sharey=True,figsize=(4*len(labelss),3*(nchunks)),subplot_kw={"xscale":'log', "yscale":'log'})
for pp, texts in enumerate(textss):
labels = labelss[pp]
factors = factorss[pp]
axs = axss[:,pp]
for kk, label in enumerate(labels):
flag_options = flag_options_fid.copy()
astro_params = astro_params_fid.copy()
factor = factors[kk]
if label == 'NOmini':
flag_options.update({'USE_MINI_HALOS': False})
else:
flag_options.update({'USE_MINI_HALOS': True})
if pp == 0:
astro_params.update({'F_STAR7_MINI': astro_params_fid['F_STAR7_MINI']+np.log10(factor)})
elif pp == 1:
astro_params.update({'F_ESC7_MINI': astro_params_fid['F_ESC7_MINI']+np.log10(factor)})
elif pp == 2:
astro_params.update({'L_X_MINI': astro_params_fid['L_X_MINI']+np.log10(factor)})
else:
if factor > 1: continue # can't do negative F_H2_SHIELD
astro_params.update({'F_H2_SHIELD': 1. - (1. - astro_params_fid['F_H2_SHIELD']) * factor})
if label == 'reference':
lightcone = lightcone_fid
else:
lightcone = p21c.run_lightcone(
redshift = 5.5,
init_box = initial_conditions,
flag_options = flag_options,
astro_params = astro_params,
global_quantities=global_quantities,
random_seed = random_seed,
direc = output_dir
)
PS = powerspectra(lightcone)
for ii in range(nchunks):
axs[ii].plot(PS[ii+1]['k'], PS[ii+1]['delta'], color=colors[kk],linestyle=linestyles[kk], label = labels[kk],lw=lws[kk])
if pp == len(textss)-1 and kk == 0:
axs[ii].text(0.99, 0.01, 'Chunk-%02d'%(ii+1),horizontalalignment='right',verticalalignment='bottom',
transform=axs[ii].transAxes,fontsize = 15)
axs[0].text(0.01, 0.99, texts,horizontalalignment='left',verticalalignment='top',
transform=axs[0].transAxes,fontsize = 15)
axs[-1].set_xlabel("$k$ [Mpc$^{-3}$]",fontsize=15)
axs[-1].xaxis.set_tick_params(labelsize=15)
if pp == 0:
for ii in range(nchunks):
axs[ii].set_ylim(2e-1, 2e2)
axs[ii].set_ylabel("$k^3 P$", fontsize=15)
axs[ii].yaxis.set_tick_params(labelsize=15)
else:
for ii in range(nchunks):
axs[ii].set_ylim(2e-1, 2e2)
axs[ii].set_ylabel("$k^3 P$", fontsize=0)
axs[ii].yaxis.set_tick_params(labelsize=0)
axss[0,0].legend(loc='lower left', ncol=2,fontsize=13,fancybox=True,frameon=True)
plt.tight_layout()
fig.subplots_adjust(hspace = 0.0,wspace=0.0)
[96]:
(Figure: 21-cm power spectra in chunks 1-4 for the four parameter variations.)
Now you know how minihalos can shape the 21-cm signal!
If you’ve covered the tutorials and still have questions about “how to do stuff” in 21cmFAST, consult the FAQs:
Installation FAQ¶
Errors with “recompile with -fPIC” for FFTW¶
Make sure you have installed FFTW with --enable-shared. On Ubuntu, you will also need to have libfftw3-dev installed.
Miscellaneous FAQs¶
My run seg-faulted, what should I do?¶
Since 21cmFAST is written in C, there is the off-chance that something catastrophic will happen, causing a segfault. Typically, if this happens, Python will not print a traceback where the error occurred, and finding the source of such errors can be difficult. However, one has the option of using the standard-library faulthandler module: specifying -X faulthandler when invoking Python will cause a minimal traceback to be printed to stderr if a segfault occurs.
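Equivalently, you can enable the same behaviour from within your script using the standard library directly (this is plain Python, nothing 21cmFAST-specific):
>>> import faulthandler
>>> faulthandler.enable()  # dump a minimal traceback to stderr on segfault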
Configuring 21cmFAST¶
21cmFAST has a configuration file located at ~/.21cmfast/config.yml. This file specifies some options to use in a rather global sense, especially to do with I/O. You can directly edit this file to change how 21cmFAST behaves for you across sessions.
For any particular function call, any of the options may be overwritten by supplying
arguments to the function itself.
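For example (just a sketch – regenerate and direc are the same cache-related options discussed below and used in the tutorials above):
>>> import py21cmfast as p21
>>> p21.run_lightcone(redshift=8.0, regenerate=True, direc='my_own_cache')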
To set the configuration for a particular session, you can also set the global config
instance, for example:
>>> import py21cmfast as p21
>>> p21.config['regenerate'] = True
>>> p21.run_lightcone(...)
All functions that use the regenerate keyword will now use the value you’ve set in the config. Sometimes, you may want to be a little more careful – perhaps you want to change the configuration for a set of calls, but have it change back to the defaults after that. We provide a context manager to do this:
>>> with p21.config.use(regenerate=True):
...     p21.run_lightcone()
...     print(p21.config['regenerate'])  # prints "True"
>>> print(p21.config['regenerate'])  # prints "False"
To make the current configuration permanent, simply use the write method:
>>> p21.config['direc'] = 'my_own_cache'
>>> p21.config.write()
Global Parameters¶
There are a bunch of “global” parameters that are used throughout the C code. These are parameters deemed constant enough not to be exposed through the regularly-used input structs, but which may nevertheless need modification from time to time. These are accessed through the global_params object:
>>> from py21cmfast import global_params
Help on the attributes can be obtained via help(global_params) or in the docs. Setting the attributes (which affects them everywhere throughout the code) is as simple as, e.g.:
>>> global_params.Z_HEAT_MAX = 30.0
If you wish to use a certain parameter for a fixed portion of your code (eg. for a single run), it is encouraged to use the context manager, eg.:
>>> with global_params.use(Z_HEAT_MAX=10):
...     run_lightcone(...)
API Reference¶
py21cmfast¶
Input parameter classes.
Output class objects.
The main wrapper for the underlying 21cmFAST C-code.
Simple plotting functions for 21cmFAST objects.
A set of tools for reading/writing/querying the in-built cache.
Contributing¶
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
Bug reports/Feature Requests/Feedback/Questions¶
It is incredibly helpful to us when users report bugs, unexpected behaviour, or request features. You can do the following:
When doing any of these, please try to be as succinct, but detailed, as possible, and use a “Minimum Working Example” whenever applicable.
Documentation improvements¶
21cmFAST could always use more documentation, whether as part of the official 21cmFAST docs, in docstrings, or even on the web in blog posts, articles, and such. If you do the latter, take the time to let us know about it!
High-Level Steps for Development¶
This is an abbreviated guide to getting started with development of 21cmFAST, focusing on the discrete high-level steps to take. See our notes for developers for more details about how to get around the 21cmFAST codebase and other technical details.
There are two avenues for you to develop 21cmFAST. If you plan on making significant changes, and working with 21cmFAST for a long period of time, please consider becoming a member of the 21cmFAST GitHub organisation (by emailing any of the owners or admins). You may develop as a member or as a non-member.
The difference between members and non-members only applies to the first step of the development process.
Note that it is highly recommended to work in an isolated python environment with all requirements installed from requirements_dev.txt. This will also ensure that pre-commit hooks run that enforce the black coding style. If you do not install these requirements, you must manually run black before committing your changes, otherwise your changes will likely fail continuous integration.
As a member:
Clone the repo:
git clone git@github.com:21cmFAST/21cmFAST.git
As a non-member:
First fork 21cmFAST (look for the “Fork” button), then clone the fork locally:
git clone git@github.com:your_name_here/21cmFAST.git
The following steps are the same for both members and non-members:
Install a fresh, isolated environment. This can be either a basic virtualenv or a conda env (suggested). So either:
virtualenv ~/21cmfast
source ~/21cmfast/bin/activate
or:
conda create -n 21cmfast python=3
conda activate 21cmfast
Install the development requirements for the project. If using the basic virtualenv:
pip install -r requirements_dev.txt
or if using conda (suggested):
conda env update -f environment.yml
Install pre-commit hooks:
pre-commit install
Create a branch for local development:
git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally. Note: as a member, you _must_ do step 5. If you make changes on master, you will _not_ be able to push them.
When you’re done making changes, run all the checks, the doc builder and the spell checker with one tox command:
tox
Commit your changes and push your branch to GitHub:
git add .
git commit -m "Your detailed description of your changes."
git push origin name-of-your-bugfix-or-feature
Note that if the commit step fails due to a pre-commit hook, most likely the act of running the hook itself has already fixed the error. Try doing the add and commit again (up, up, enter). If it’s still complaining, manually fix the errors and do the same again.
Submit a pull request through the GitHub website.
Pull Request Guidelines¶
If you need some code review or feedback while you’re developing the code just make the pull request. You can mark the PR as a draft until you are happy for it to be merged.
Tips¶
To run a subset of tests:
tox -e envname -- py.test -k test_myfeature
To run all the test environments in parallel (you need to pip install detox):
detox
Developer Documentation¶
If you are new to developing 21cmFAST, please read the Contributing section first, which outlines the general concepts for contributing to development, and provides a step-by-step walkthrough for getting set up.
This page lists some more detailed notes which may be helpful through the
development process.
Compiling for debugging¶
When developing, it is usually a good idea to compile the underlying C code in DEBUG mode. This enables extra print statements in the C code, and also allows running the C code under valgrind or gdb. To do this:
$ DEBUG=True pip install -e .
See Installation for more installation options.
Developing the C Code¶
In this section we outline how one might go about modifying/extending the C code and ensuring that the extension is compatible with the wrapper provided here. It is critical that you run all tests after modifying _anything_ (and see the section below about running with valgrind). When changing C code, before testing, ensure that the new C code is compiled into your environment by running:
$ rm -rf build
$ pip install .
Note that using a developer install (-e) is not recommended as it stores compiled objects in the working directory which don’t get updated as you change code, and can cause problems later.
There are two main purposes you may want to write some C code:
An external plugin/extension which uses the output data from 21cmFAST.
Modifying the internal C code of 21cmFAST.
21cmFAST currently provides no support for external plugins/extensions. It is entirely possible to write your own C code to do whatever you want with the output data, but we don’t provide any wrapping structure for you to do this; you will need to write your own. Internally, 21cmFAST uses the cffi library to aid the wrapping of the C code into Python. You don’t need to do the same, though we recommend it.
If your desired “extension” is something that needs to operate in-between steps of 21cmFAST, we also provide no support for this, but it is possible, so long as the next step in the chain maintains its API. You would be required to re-write the low-level wrapping function _preceding_ your inserted step as well. For instance, if you had written a self-contained piece of code that modified the initial conditions box, adding some physical effect which is not already covered, then you would need to write a low-level wrapper _and_ re-write the initial_conditions function to modify the box before returning it. We provide no easy “plugin” system for doing this currently. If your external code is meant to be inserted _within_ a basic step of 21cmFAST, this is currently not possible; you will instead have to modify the source code itself.
Modifying the C code of 21cmFAST should be relatively simple. If your changes are entirely internal to a given function, then nothing extra needs to be done. A little more work has to be done if the modifications add/remove input parameters or change the output structure. If any of the input structures is modified (i.e. an extra parameter is added to it), then the corresponding class in py21cmfast.wrapper must be modified, usually simply by adding the new parameter to the _defaults_ dict with a default value. For instance, if a new variable some_param was added to the user_params struct in the ComputeInitialConditions C function, then the UserParams class in the wrapper would be modified, adding some_param=<default_value> to its _defaults_ dict. If the default value of the parameter is dependent on another parameter, its default value in this dict can be set to None, and you can give it a dynamic definition as a Python @property. For example, the DIM parameter of UserParams is defined as:
@property
def DIM(self):
    return self._DIM or 4 * self.HII_DIM
Note the underscore in _DIM here: by default, if a dynamic property is defined for a given parameter, the default value is saved with a prefixed underscore. Here we return either the explicitly set DIM, or four times HII_DIM. In addition, if the new parameter is not settable – if it is completely determined by other parameters – then don’t put it in _defaults_ at all, and just give it a dynamic definition.
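To make this concrete, here is a purely illustrative sketch of both cases (the base class name StructWithDefaults stands in for the wrapper’s parameter base class, and some_param / TOT_NUM_CELLS are made-up names, not real 21cmFAST parameters):
class UserParams(StructWithDefaults):
    _defaults_ = dict(
        HII_DIM=200,
        DIM=None,         # settable, but defaults dynamically to 4 * HII_DIM
        some_param=1.0,   # a new, directly settable parameter
    )

    @property
    def DIM(self):
        # return the explicitly set value if given, otherwise 4 * HII_DIM
        return self._DIM or 4 * self.HII_DIM

    @property
    def TOT_NUM_CELLS(self):
        # fully determined by other parameters, so it never appears in _defaults_
        return self.HII_DIM ** 3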
If you modify an output struct, which usually houses a number of array quantities (often float pointers, but not necessarily), then you’ll again need to modify the corresponding class in the wrapper. In particular, you’ll need to add an entry for that particular array in the _init_arrays method for the class. The entry consists of initialising that array (usually to zeros, but not necessarily) and setting its proper dtype. All arrays should be single-pointers, even for multi-dimensional data. The latter can be handled by initialising the array as a 1D numpy array, but then setting its shape attribute (after creation) to the appropriate n-dimensional shape (see the _init_arrays method of the InitialConditions class for examples of this).
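For instance, a minimal sketch of such an entry (the array name new_field is hypothetical; the pattern simply follows the description above):
def _init_arrays(self):
    # allocate the new quantity as a flat (single-pointer) float32 array ...
    self.new_field = np.zeros(self.user_params.HII_DIM ** 3, dtype=np.float32)
    # ... then set its shape attribute to the logical 3D shape after creation
    self.new_field.shape = (
        self.user_params.HII_DIM,
        self.user_params.HII_DIM,
        self.user_params.HII_DIM,
    )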
Modifying the global_params struct should be relatively straightforward, and no changes in the Python are necessary. However, you may want to consider adding the new parameter to the relevant _filter_params lists for the output-struct wrapping classes in the wrapper. These lists control which global parameters affect which output structs, and merely provide for more accurate caching mechanisms.
C Function Standards¶
The C-level functions are split into two groups – low-level “private” functions, and higher-level “public” or “API” functions. All API-level functions are callable from python (but may also be called from other C functions). All API-level functions are currently prototyped in 21cmFAST.h.
To enable consistency of error-checking in Python (and a reasonable standard for any kind of code), we enforce that any API-level function must return an integer status. Any “return” objects must be modified in-place (i.e. passed as pointers). This enables Python to control the memory access of these variables, and also to receive proper error statuses (see below for how we do exception handling). We also adhere to the convention that “output” variables should be passed to the function as its last argument(s). In the case that _only_ the last argument is meant to be “output”, there exists a simple wrapper _call_c_simple in wrapper.py that will neatly handle the calling of the function in an intuitive pythonic way.
Running with Valgrind¶
If any changes to the C code are made, it is ideal to run tests under valgrind, and check for memory leaks. To do this, install valgrind (we have tested v3.14+), which is probably available via your package manager. We provide a suppression file for valgrind in the devel/ directory of the main repository.
It is ideal if you install a development-version of python especially for running these tests. To do this, download the version of python you want and then configure/install with:
$ ./configure --prefix=<your-home>/<directory> --without-pymalloc --with-pydebug --with-valgrind
$ make; make install
Construct a virtualenv on top of this installation, create your environment, and install all requirements.
If you do not wish to run with a modified version of python, you may continue with your
usual version, but may get some extra cruft in the output. If running with Python
version > 3.6, consider running with environment variable PYTHONMALLOC=malloc
(see https://stackoverflow.com/questions/20112989/how-to-use-valgrind-with-python ).
The general pattern for using valgrind with python is:
$ valgrind --tool=memcheck --track-origins=yes --leak-check=full --suppressions=devel/valgrind-suppress-all-but-c.supp <python script>
One useful command is to run valgrind over the test suite (from the top-level repo directory):
$ valgrind --tool=memcheck --track-origins=yes --leak-check=full --suppressions=devel/valgrind-suppress-all-but-c.supp pytest
While we will attempt to keep the suppression file updated to the best of our knowledge so that only relevant leaks and errors are reported, you will likely have to do a bit of digging to find the relevant parts.
Valgrind will likely run very slowly, and sometimes you will know already which exact tests are those which may have problems, or are relevant to your particular changes. To run these:
$ valgrind --tool=memcheck --track-origins=yes --leak-check=full --suppressions=devel/valgrind-suppress-all-but-c.supp pytest -v tests/<test_file>::<test_func>
Producing Integration Test Data¶
There are a bunch of so-called “integration tests”, which rely on previously-produced data. To produce this data, run python tests/produce_integration_test_data.py.
Furthermore, this data should only be produced with good reason – the idea is to keep it static while the code changes, so there is something steady to compare to. If a particular PR fixes a bug which affects a certain test’s data, then that data should be re-run, in the context of the PR, so the change can be explained.
Logging in C¶
The C code has a header file logging.h. The C code should never contain bare print statements – everything should be formally logged, so that the different levels can be printed to screen correctly. The levels are defined in logging.h, and include levels such as INFO, WARNING and DEBUG. Each level has a corresponding macro that starts with LOG_. Thus, to log run-time information to stdout, you would use LOG_INFO("message");. Note that the message does not require a final newline character.
Exception handling in C¶
There are various places that things can go wrong in the C code, and they need to be handled gracefully so that Python knows what to do with it (rather than just quitting!). We use the simple cexcept.h header file from http://www.nicemice.net/cexcept/ to enable a simple form of exception handling. That file itself should not be edited. There is another header – exceptions.h – that defines how we use exceptions throughout 21cmFAST. Any time an error arises that can be understood, the developer should add a Throw <ErrorKind>; line. The ErrorKind can be any of the kinds defined in exceptions.h (eg. GSLError or ValueError). These are just integers.
Any C function that has a header in 21cmFAST.h – i.e. any function that is callable directly from Python – must be globally wrapped in a Try {} Catch(error_code) {} block. See GenerateICs.c for an example. Most of the code should be in the Try block. Anything that does a Throw at any level of the call stack within that Try will trigger a jump to the Catch. The error_code is the integer that was thrown. Typically, one will perhaps want to do some cleanup here, and then finally return the error code.
Python knows about the exit codes it can expect to receive, and will raise Python exceptions accordingly. From the Python side, two main kinds of exceptions can be raised, depending on the error code returned from C. The lesser exception is called a ParameterError, and is supposed to indicate an error that happened merely because the parameters input to the calculation were too extreme to handle. In the case of something like an automatic Monte Carlo algorithm iterating over random parameters, one would usually want to just keep going at this point, because perhaps it simply wandered too far in parameter space.
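For example, a Monte Carlo driver might simply skip such samples (a sketch only; it assumes ParameterError can be imported from the top-level py21cmfast package, and sampled_astro_params is your own list of parameter draws):
>>> import py21cmfast as p21c
>>> from py21cmfast import ParameterError
>>> for astro_params in sampled_astro_params:
...     try:
...         lc = p21c.run_lightcone(redshift=5.5, astro_params=astro_params)
...     except ParameterError:
...         continue  # parameters too extreme for this sample; draw another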
The other kind of error is a FatalCError, and this is where things went truly wrong, and probably will for any combination of parameters. If you add a kind of exception in the C code (to exceptions.h), then be sure to add a handler for it in the _process_exitcode function in wrapper.py.
Authors¶
Brad Greig - github.com/BradGreig
Andrei Mesinger - github.com/andreimesinger
Steven Murray - github.com/steven-murray
Contributors¶
Changelog¶
v3.0.0.dev¶
Complete overhaul of 21cmFAST, including a robust python-wrapper and interface, caching mechanisms, and public repository with continuous integration. Changes and equations for minihalo features in this version are found in https://arxiv.org/abs/2003.04442
Added¶
Updated the radiation source model: (i) all radiation fields, including X-rays, UV ionizing, Lyman-Werner and Lyman-alpha, are computed from two separate populations, namely atomic-cooling galaxies (ACGs) and minihalo-hosted molecular-cooling galaxies (MCGs); (ii) the turn-over masses of ACGs and MCGs are estimated with cooling efficiency and with feedback from reionization and Lyman-Werner suppression (Qin et al. 2020). This can be switched on using the new flag_options option USE_MINI_HALOS.
Updated the kinetic temperature of the IGM: fully ionized cells follow equation 6 of McQuinn (2015), and partially ionized cells take the volume-weighted temperature of the ionized (volume: 1-xHI; temperature: T_RE) and neutral (volume: xHI; temperature: temperature of HI) components. This is stored in IonizedBox as temp_kinetic_all_gas. Note that Tk in TsBox remains the kinetic temperature of HI.
Tests: many unit tests, and also some regression tests.
CLI: run 21cmFAST boxes from the command line, query the cache database, and produce plots for standard comparison runs.
Documentation: Jupyter notebook demos and tutorials, FAQs, installation instructions.
Plotting routines: a number of general plotting routines designed to plot coeval and lightcone slices.
New power spectrum option (POWER_SPECTRUM=5) that uses a CLASS-based transfer function. WARNING: if POWER_SPECTRUM==5 the cosmo parameters cannot be altered; they are set to the Planck 2018 best-fit values for now (until CLASS is added): omegab=0.02237, omegac=0.120, hubble=0.6736 (the rest are irrelevant for the transfer functions, but in case: A_s=2.100e-9, n_s=0.9649, z_reio=11.357).
New user_params option USE_RELATIVE_VELOCITIES, which produces initial relative velocity cubes (option implemented, but not the actual computation yet).
Configuration management.
global params now has a context manager for changing parameters temporarily.
Vastly improved error handling: exceptions can be caught in C code and propagated to Python to inform the user of what’s going wrong.
Ability to write high-level data (Coeval and Lightcone objects) directly to file in a simple portable format.
Changed¶
POWER_SPECTRUM option moved from global_params to user_params.
Default cosmology updated to Planck18.
v2.0.0¶
All changes and equations for this version are found in https://arxiv.org/abs/1809.08995.
Changed¶
Updated the ionizing source model: (i) the star formation rates and ionizing escape fraction are scaled with the masses of dark matter halos and (ii) the abundance of active star forming galaxies is exponentially suppressed below the turn-over halo mass, M_{turn}, according to a duty cycle of exp(−M_{turn}/M_{h}), where M_{h} is a halo mass.
Removed the mean free path parameter, R_{mfp}. Instead, inhomogeneous sub-grid recombinations in the intergalactic medium are computed directly, following the approach of Sobacchi & Mesinger (2014).
v1.2.0¶
Added¶
Support for a halo mass dependent ionizing efficiency: zeta = zeta_0 (M/Mmin)^alpha, where zeta_0 corresponds to HII_EFF_FACTOR, Mmin –> ION_M_MIN, alpha –> EFF_FACTOR_PL_INDEX in ANAL_PARAMS.H
v1.12.0¶
Added¶
Code ‘redshift_interpolate_boxes.c’ to interpolate between comoving cubes, creating comoving light cone boxes.
Enabled OpenMP threading for SMP machines. You can specify the number of threads (for best performance, do not exceed the number of processors) in INIT_PARAMS.H. You do not need to have an SMP machine to run the code. NOTE: you should re-install FFTW to use OpenMP (see the INSTALL file).
Included a threaded driver file ‘drive_zscroll_reion_param.c’ set-up to perform astrophysical parameter studies of reionization
Included explicit support for WDM cosmologies; see COSMOLOGY.H. The prescription is similar to that discussed in Barkana+2001; Mesinger+2005: (i) modifying the transfer function (according to the Bode+2001 formula), and (ii) including the effective pressure term of WDM using a Jeans-mass analogy. (ii) is approximated with a sharp cutoff in the EPS barrier, using 60*M_J found in Barkana+2001 (the 60 is an adjustment factor found by fitting to the WDM collapsed fraction).
A Gaussian filtering step of the PT fields in perturb_field.c, in addition to the implicit boxcar smoothing. This avoids having “empty” density cells, i.e. delta=-1, with some small loss in resolution. Although delta=-1 is fine for most uses, some Lya forest statistics do not like it.
Added treatment of the residual electron fraction from X-ray heating when computing the ionization field. Relatedly, modified Ts.c to output all intermediate evolution boxes, Tk and x_e.
Added a missing factor of Omega_b in Ts.c corresponding to eq. 18 in MFC11. Users who used a previous version should note that their results effectively correspond to a higher effective X-ray efficiency, scaled by 1/Omega_baryon.
Normalization optimization in Ts.c, increasing performance on large-resolution boxes.
Fixed¶
GSL interpolation error in kappa_elec_pH for GSL versions > 1.15
Typo in macro definition, which impacted the Lya background calculation in v1.11 (not applicable to earlier releases)
Outdated filename syntax when calling gen_size_distr in drive_xHIscroll.
Redshift scrolling so that drive_logZscroll_Ts.c and Ts.c are in sync.
Changed¶
Output format to avoid FFT padding for all boxes
Filename conventions to be more explicit.
Small changes to organization and structure
v1.1.0¶
Added¶
Wrapper functions mod_fwrite() and mod_fread() in Cosmo_c_progs/misc.c, which should fix problems with the library fwrite() and fread() for large files (>4GB) on certain operating systems.
Included print_power_spectrum_ICs.c program which reads in high resolution initial conditions and prints out an ASCII file with the associated power spectrum.
Parameter in Ts.c for the maximum allowed kinetic temperature, which increases stability of the code when the redshift step size and the X-ray efficiencies are large.
Fixed¶
Oversight adding support for a Gaussian filter for the lower resolution field.