Usage¶
Light Curves¶
The core of this package is the lightcurve.LC object, an extension of the astropy.table.Table object that contains tabular data, in this case broadband photometric observations. You can read light curve data from a file using the standard astropy.table.Table.read() method. I'm going to read an example light curve of SN 2016bkv to be used from here on:
from lightcurve_fitting.lightcurve import LC
filename = 'example/SN2016bkv.txt'
lc = LC.read(filename)
print(lc)
The following column names are used by the package, although the light curve can have extra columns [alternative column names are given in square brackets]:
MJD (required): modified Julian date of the observation [mjd, JD, jd (JD/jd are converted)]
mag (required): magnitude of the observation [Magnitude, Mag, ab_mag, PSFmag, MAG, omag, magnitude, apparent_mag]
dmag (required): uncertainty on the magnitude [Magnitude_Error, magerr, MagErr, mag_err, e_mag, Error, err, PSFerr, MAGERR, e_omag, e_magnitude, apparent_mag_err, Mag_Err, emag, error]
filter (required): name of the filter [filter, filt, Filter, band, FLT, Band] (see Filters below)
nondet: True if the magnitude is an upper limit, False otherwise [Is_Limit, UL, l_omag, upper_limit, upperlimit]
flux: the spectral flux density (\(F_ν\), arbitrary units) of the observation [FLUXCAL]
dflux: uncertainty on the flux [FLUXCALERR]
phase: time since a reference date (e.g., peak or explosion) in rest-frame days [Phase, PHASE]
absmag: absolute magnitude of the observation
lum: the spectral luminosity density (\(L_ν\), in watts/hertz) of the observation
dlum: the uncertainty on the spectral luminosity density
telescope: the name of the telescope/instrument where this observation was carried out [Telescope, Tel, tel+inst]
source: the data source, either a telescope/instrument name or a literature reference [Source]
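Because LC extends astropy.table.Table, you can also build a light curve directly from arrays instead of a file. A minimal sketch with made-up values:
from lightcurve_fitting.lightcurve import LC

# two hypothetical observations, just for illustration
lc_manual = LC([[57468.2, 57470.1], [17.3, 16.9], [0.05, 0.04], ['g', 'r']],
               names=['MJD', 'mag', 'dmag', 'filter'])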
The LC.meta attribute contains information needed to calculate absolute magnitudes, luminosities, and rest-frame phases:
dm: the distance modulus
ebv: the selective extinction due to dust in the Milky Way
host_ebv: the selective extinction due to dust in the host galaxy (applied at the target redshift)
redshift: the redshift (also used to calculate distance if the distance modulus is not given)
lc.meta['dm'] = 30.79
lc.meta['ebv'] = 0.016
lc.meta['host_ebv'] = 0.
lc.meta['redshift'] = 0.002
The LC object has several methods for converting between the columns above, as well as a method for plotting the light curve in a single command:
lc.calcAbsMag()
lc.calcPhase()
lc.plot()
Filters¶
The filters submodule defines a Filter object that stores information about the broadband filters: transmission function, photometric system, and styles for plotting. You mostly won't have to touch this module unless you are adding or modifying filters.
The 'filter' column in a LC object contains Filter objects, rather than strings. However, you can use filter names directly in most places, including the LC.where() method, and they will be parsed into Filter objects. For example, lc.where(filter='r') will return photometry points in bands labeled both 'r' and 'rp' in your input file.
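The same keyword syntax works with other columns too. A minimal sketch, assuming you just want the r-band points of the example light curve:
# select only the r-band photometry (the name 'r' is parsed into a Filter object)
lc_r = lc.where(filter='r')
print(lc_r)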
If you ever need direct access to the Filter objects by name, you can use the filtdict lookup dictionary.
from lightcurve_fitting.filters import filtdict
g = filtdict['g']
print(g)
Bolometric Light Curves¶
You can make a bolometric light curve and color curves from the photometry table with the bolometric module.
from lightcurve_fitting.bolometric import calculate_bolometric, plot_bolometric_results, plot_color_curves
t = calculate_bolometric(lc, outpath='./SN2016bkv_bolometric', colors=['B-V', 'g-r', 'r-i'])
print(t)
plot_bolometric_results(t)
plot_color_curves(t)
The light curve is divided into SEDs at individual epochs, defined by the res and also_group_by arguments to calculate_bolometric() and/or specified manually using the 'epoch' column. Then each epoch is processed in five different ways:
Fitting the Planck function using scipy.optimize.curve_fit(). This is very fast but may not give reliable uncertainties. The columns temp, radius, dtemp, and dradius come from this fit. The Stefan-Boltzmann law then gives the total bolometric luminosity, L_bol and dL_bol. The old convention of lum and dlum is still usable but should be considered obsolete.
Integrating the Planck function between \(U\) and \(I\) band (observed) gives L, the pseudobolometric luminosity. The old convention of L_opt is still usable but should be considered obsolete.
Fitting the Planck function using an MCMC routine. This is slower, depending on how many walkers (nwalkers) and steps (burnin_steps and steps) you use, but gives more robust uncertainties. The columns temp_mcmc, radius_mcmc, dtemp_mcmc0, dtemp_mcmc1, dradius_mcmc0, and dradius_mcmc1 come from this fit. My convention for non-Gaussian uncertainties is that 0 is the lower uncertainty and 1 is the upper uncertainty. The Stefan-Boltzmann law then gives the total bolometric luminosity, L_bol_mcmc, dL_bol_mcmc0, and dL_bol_mcmc1.
Integrating the Planck function (from the MCMC fit) between \(U\) and \(I\) band (observed) gives L_mcmc, dL_mcmc0, and dL_mcmc1, the pseudobolometric luminosity.
Directly integrating the observed SED, assuming 0 flux outside of \(U\) to \(I\). Use this if you do not want to assume the SED is a blackbody. This yields the column L_int.
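For example, you can compare the quick curve_fit results to the MCMC results using the column names above (a minimal sketch):
# compare the blackbody temperatures from the two fitting methods
for row in t:
    print(f"{row['temp']:.1f} kK (curve_fit) vs. "
          f"{row['temp_mcmc']:.1f} (+{row['dtemp_mcmc1']:.1f}/-{row['dtemp_mcmc0']:.1f}) kK (MCMC)")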
The MCMC routine saves a corner plot for each fit in the folder you specify (outpath). I highly recommend looking through these to make sure the fits converged. If they didn't, try adjusting the number of burn-in steps (burnin_steps).
To save the table, give save_table_as='filename.table' as an argument to calculate_bolometric(). To save the plot, give save_plot_as='filename.pdf' as an argument to plot_bolometric_results().
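Putting those two options together, a sketch of a call that saves both outputs (the filenames here are just examples):
t = calculate_bolometric(lc, outpath='./SN2016bkv_bolometric', colors=['B-V', 'g-r', 'r-i'],
                         save_table_as='SN2016bkv_bolometric.table')
plot_bolometric_results(t, save_plot_as='SN2016bkv_bolometric.pdf')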
Beware of the units I’m using:
Temperatures are in kilokelvins (kK).
Radii are in thousands of solar radii (\(1000R_\odot\)).
Luminosities are in watts (W). \(1\,\mathrm{W} = 10^7\,\mathrm{erg}\,\mathrm{s}^{-1}\)
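If you prefer other conventions, the conversions are simple scalings. A minimal sketch using the column names above:
# convert the package's units to more common ones
temp_K = t['temp'] * 1000.         # kilokelvins -> kelvins
radius_Rsun = t['radius'] * 1000.  # thousands of solar radii -> solar radii
L_bol_cgs = t['L_bol'] * 1e7       # W -> erg/s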
Optionally, you can calculate colors at each epoch by giving the argument colors to calculate_bolometric(). These get saved in the same output table in four columns per color, e.g., for \(B-V\):
the color itself, B-V,
the uncertainty on the color, d(B-V),
whether the color is a lower limit, lolims(B-V) (i.e., \(B\) was an upper limit), and
whether the color is an upper limit, uplims(B-V) (i.e., \(V\) was an upper limit).
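As an illustration, here is a sketch of plotting the \(B-V\) color curve from the output table. I'm assuming the table's time column is named MJD; check your table for the actual name:
import matplotlib.pyplot as plt
good = ~t['lolims(B-V)'] & ~t['uplims(B-V)']  # keep only well-measured colors
plt.errorbar(t['MJD'][good], t['B-V'][good], t['d(B-V)'][good], fmt='o')
plt.xlabel('MJD')
plt.ylabel('$B-V$ (mag)')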
Intrinsic Scatter¶
You can include an intrinsic scatter term (\(\sigma\)) in your MCMC fits by setting use_sigma=True. \(\sigma\) is added in quadrature to the photometric uncertainty on each point (\(\sigma_i\)). If you choose sigma_type='relative', \(\sigma\) will be in units of the individual photometric uncertainties, i.e.,
\[\sigma_{\mathrm{eff},i} = \sqrt{\sigma_i^2 + (\sigma \sigma_i)^2}.\]
If you choose sigma_type='absolute', \(\sigma\) will be in units of the median photometric uncertainty (\(\bar\sigma\)), i.e.,
\[\sigma_{\mathrm{eff},i} = \sqrt{\sigma_i^2 + (\sigma \bar\sigma)^2}.\]
For bolometric light curve fitting, you can also set a maximum for this intrinsic scatter using the sigma_max
keyword (default: 10). (For model fitting, you can set a maximum using the priors
keyword.)
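A sketch of how these keywords plug into the bolometric calculation (I'm assuming calculate_bolometric() accepts sigma_type alongside use_sigma and sigma_max, as described above):
t = calculate_bolometric(lc, outpath='./SN2016bkv_bolometric',
                         use_sigma=True, sigma_type='relative', sigma_max=10.)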
Model Fitting¶
The models and fitting submodules allow you to fit analytical models to the observed data. Right now, there are three classes of models: BaseCompanionShocking, which is the SiFTO Type Ia supernova template [C08] plus a shock component from [K10]; BaseShockCooling, which is the [SW17] model for shock cooling in a core-collapse supernova; and ShockCooling4, which is the updated shock cooling model of [MSW23].
The variations on these classes are as follows:
CompanionShocking uses factors on the r and i SiFTO models and a factor on the U shock component. This was used in my paper on SN 2017cbv [H17].
CompanionShocking2 uses time offsets for the U and i SiFTO models. This was used in my paper on SN 2021aefx [H22a].
CompanionShocking3 is the same as CompanionShocking2 but includes viewing angle dependence. This was used in my paper on SN 2023bee [H23a].
ShockCooling is formulated in terms of physical parameters \(v_s, M_\mathrm{env}, f_ρ M, R\).
ShockCooling2 is formulated in terms of scaling parameters \(T_1, L_1, t_\mathrm{tr}\). This was used in my paper on SN 2016bkv [H18].
ShockCooling3 is the same as ShockCooling but with \(d_L\) and \(E(B-V)\) as free parameters. (Therefore it fits the flux instead of the luminosity.) This was used in my paper on SN 2021yja [H22b].
ShockCooling4 is the updated shock cooling model of [MSW23]. This was used in my paper on SN 2023ixf [H23b].
Note on the shock cooling models: There are degeneracies between many of the physical parameters that make them difficult to fit independently. This led us to develop the ShockCooling2 model just to see if the model could fit the data at all. Since it did not fit well, we concluded that the physical parameters we could have obtained by fitting the ShockCooling model were irrelevant. However, in order to measure, for example, the progenitor radius, one must use the ShockCooling model.
from lightcurve_fitting.models import ShockCooling2, UniformPrior
from lightcurve_fitting.fitting import lightcurve_mcmc, lightcurve_corner
# Fit only the early light curve
lc_early = lc.where(MJD_min=57468., MJD_max=57485.)
# Define the priors and initial guesses
priors = [
UniformPrior(0., 100.),
UniformPrior(0., 100.),
UniformPrior(0., 100.),
UniformPrior(57468., 57468.7),
]
p_lo = [20., 2., 20., 57468.5]
p_up = [50., 5., 50., 57468.7]
# Initialize the model
model = ShockCooling2(lc_early)
# Run the fit
sampler = lightcurve_mcmc(lc_early, model, priors=priors, p_lo=p_lo, p_up=p_up,
nwalkers=10, nsteps=100, nsteps_burnin=100, show=True)
# Plot the results
fig, ax_corner, ax_model = lightcurve_corner(lc_early, model, sampler.flatchain)
Another note on the shock cooling models: The shock cooling models are only valid for temperatures above 0.7 eV = 8120 K [SW17], so you should check that you have not included observations where the model goes below that. If you have, you should rerun the fit without those points. If you used the [RW11] option, the model fails even earlier, but you will have to check that manually.
p_mean = sampler.flatchain.mean(axis=0)
t_max = model.t_max(p_mean)
print(t_max)
if lc_early['MJD'].max() > t_max:
    print('Warning: your model is not valid for all your observations')
Note that you can add an Intrinsic Scatter to your model fits as well.
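For the shock cooling fit above, that would look something like the sketch below. With use_sigma=True, \(\sigma\) becomes an extra parameter; I'm assuming it is appended last and therefore needs its own prior and initial-guess bounds:
sampler = lightcurve_mcmc(lc_early, model, priors=priors + [UniformPrior(0., 10.)],
                          p_lo=p_lo + [0.], p_up=p_up + [1.], use_sigma=True,
                          nwalkers=10, nsteps=100, nsteps_burnin=100)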
Defining New Models (Advanced)¶
If you want to define a new model, all you need to do is subclass the Model class. Implement the model in the Model.evaluate() method, which takes an array of times and an array of filters as the first two arguments, followed by the physical parameters of the model. If there are keyword arguments (parameters that are not fit for) that need to be specified, you may have to override the Model.__init__() method. You must also provide input_names and units as class variables.
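As a rough illustration only (this is not a real model; the class name, parameters, and units are made up, and I'm assuming the base class handles everything besides evaluate()):
from astropy import units as u
import numpy as np
from lightcurve_fitting.models import Model

class LinearRise(Model):
    """Hypothetical toy model: the luminosity rises linearly after explosion time t_0."""
    input_names = ['a', 't_0']   # fit parameters, in the order evaluate() receives them
    units = [u.W / u.d, u.d]     # one unit per parameter

    def evaluate(self, t, f, a, t_0):
        # t: array of times; f: array of filters (ignored by this toy model)
        return a * np.maximum(np.asarray(t) - t_0, 0.)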
Calibrating Spectra to Photometry¶
The speccal module (somewhat experimental right now) can be used to calibrate spectra to observed photometry.
from lightcurve_fitting.speccal import calibrate_spectra
spectra_filenames = ['blah.fits', 'blah.txt', 'blah.dat']
calibrate_spectra(spectra_filenames, lc, show=True)
Each spectrum is multiplied by the filter transmission function and integrated to produce a synthetic flux measurement.
Each magnitude in the light curve is also converted to flux.
The ratios of these two flux measurements (for each filter) are fit with a polynomial (order 0 by default).
Multiplying by this best-fit polynomial calibrates the spectrum to the photometry.
Each calibrated spectrum is saved to a text file with the prefix photcal_. I recommend using show=True to visualize the process.