7. Flux sampling¶
7.1. Basic usage¶
The easiest way to get started with flux sampling is using the sample function in the flux_analysis submodule. sample takes at least two arguments: a cobra model and the number of samples you want to generate.
In [1]:
from cobra.test import create_test_model
from cobra.flux_analysis import sample
model = create_test_model("textbook")
s = sample(model, 100)
s.head()
Out[1]:
ACALD | ACALDt | ACKr | ACONTa | ACONTb | ACt2r | ADK1 | AKGDH | AKGt2r | ALCD2x | ... | RPI | SUCCt2_2 | SUCCt3 | SUCDi | SUCOAS | TALA | THD2 | TKT1 | TKT2 | TPI | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | -0.577302 | -0.149662 | -0.338001 | 10.090870 | 10.090870 | -0.338001 | 0.997694 | 4.717467 | -0.070599 | -0.427639 | ... | -2.255649 | 6.152278 | 6.692068 | 821.012351 | -4.717467 | 2.230392 | 133.608893 | 2.230392 | 2.220236 | 5.263234 |
1 | -0.639279 | -0.505704 | -0.031929 | 10.631865 | 10.631865 | -0.031929 | 4.207702 | 6.763224 | -0.024720 | -0.133575 | ... | -0.769792 | 14.894313 | 14.929989 | 521.410118 | -6.763224 | 0.755207 | 66.656770 | 0.755207 | 0.749341 | 7.135499 |
2 | -1.983410 | -0.434676 | -0.408318 | 11.046294 | 11.046294 | -0.408318 | 5.510960 | 7.240802 | -0.501086 | -1.548735 | ... | -0.088852 | 15.211972 | 15.807554 | 756.884622 | -7.240802 | 0.065205 | 42.830676 | 0.065205 | 0.055695 | 8.109647 |
3 | -1.893551 | -0.618887 | -0.612598 | 8.879426 | 8.879426 | -0.612598 | 6.194372 | 5.382222 | -0.563573 | -1.274664 | ... | -1.728387 | 15.960829 | 17.404437 | 556.750972 | -5.382222 | 1.714682 | 37.386748 | 1.714682 | 1.709171 | 7.003325 |
4 | -1.759520 | -0.321021 | -0.262520 | 10.801480 | 10.801480 | -0.262520 | 4.815146 | 9.236588 | -0.359817 | -1.438499 | ... | -2.840577 | 12.379023 | 13.141259 | 440.132011 | -9.236588 | 2.822071 | 0.292885 | 2.822071 | 2.814629 | 6.205245 |
5 rows × 95 columns
By default, sample uses the optgp method, as it is well suited for larger models and can run in parallel. By default the sampler uses a single process; this can be changed with the processes argument.
In [2]:
print("One process:")
%time s = sample(model, 1000)
print("Two processes:")
%time s = sample(model, 1000, processes=2)
One process:
CPU times: user 7.91 s, sys: 4.09 s, total: 12 s
Wall time: 6.52 s
Two processes:
CPU times: user 288 ms, sys: 495 ms, total: 783 ms
Wall time: 3.52 s
Alternatively, you can use Artificial Centering Hit-and-Run (ACHR) for sampling by setting the method to achr. achr does not support parallel execution, but it has good convergence and is almost Markovian.
In [3]:
s = sample(model, 100, method="achr")
In general, setting up the sampler is expensive, since the initial search directions are generated by solving many linear programming problems. Thus, we recommend generating as many samples as possible in one go. However, this might require finer control over the sampling procedure, as described in the following section.
7.2. Advanced usage¶
7.2.1. Sampler objects¶
The sampling process can be controlled on a lower level by using the sampler classes directly.
In [4]:
from cobra.flux_analysis.sampling import OptGPSampler, ACHRSampler
Both sampler classes have standardized interfaces and take some additional arguments, for instance the thinning factor. “Thinning” means only recording samples every n iterations. A higher thinning factor means less correlated samples but also longer computation times. By default the samplers use a thinning factor of 100, which creates roughly uncorrelated samples. If you want fewer samples but better mixing, feel free to increase this parameter. If you want to study convergence for your own model, you might want to set it to 1 to obtain all iterates.
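To build intuition for what thinning does, here is a small illustrative sketch using plain NumPy (not cobrapy code): we simulate a strongly autocorrelated chain and keep only every 10th iterate, mimicking a thinning factor of 10, which noticeably reduces the correlation between successive recorded samples.

```python
import numpy as np

# Simulate a strongly autocorrelated AR(1) chain as a stand-in for
# the correlated iterates of a hit-and-run sampler.
rng = np.random.default_rng(42)
n = 10000
chain = np.empty(n)
chain[0] = 0.0
for i in range(1, n):
    chain[i] = 0.95 * chain[i - 1] + rng.normal()

def lag1_autocorr(x):
    """Correlation between successive elements of x."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

thinned = chain[::10]  # record only every 10th iterate

print(lag1_autocorr(chain))    # close to 0.95, strongly correlated
print(lag1_autocorr(thinned))  # noticeably smaller
```

The same trade-off appears in the samplers: larger thinning factors discard more iterates per recorded sample, buying independence at the cost of computation time.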
In [5]:
achr = ACHRSampler(model, thinning=10)
OptGPSampler has an additional processes argument specifying how many processes are used to create parallel sampling chains. This should be on the order of your CPU cores for maximum efficiency. As noted before, class initialization can take up to a few minutes due to the generation of initial search directions. Sampling, on the other hand, is quick.
In [6]:
optgp = OptGPSampler(model, processes=4)
7.2.2. Sampling and validation¶
Both samplers have a sample function that generates samples from the initialized object and acts like the sample function described above, except that it accepts only a single argument: the number of samples. For OptGPSampler, the number of samples should be a multiple of the number of processes; otherwise, it will be increased to the nearest multiple automatically.
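The rounding behavior described above can be sketched as follows (an illustrative helper, not part of the cobrapy API):

```python
import math

def rounded_sample_count(n, processes):
    """Round n up to the nearest multiple of processes."""
    return math.ceil(n / processes) * processes

print(rounded_sample_count(100, 4))  # 100, already a multiple of 4
print(rounded_sample_count(30, 4))   # increased to 32
```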
In [7]:
s1 = achr.sample(100)
s2 = optgp.sample(100)
You can call sample repeatedly; both samplers are optimized to generate large amounts of samples without falling into “numerical traps”. All sampler objects have a validate function that checks whether a set of points is feasible and reports feasibility violations in the form of a short code. The short code is a combination of any of the following letters:
- “v” - valid point
- “l” - lower bound violation
- “u” - upper bound violation
- “e” - equality violation (meaning the point is not a steady state)
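To illustrate how such a code combines the letters, here is a small sketch (not the actual cobrapy implementation) that assembles a code string from individual violation flags:

```python
def feasibility_code(lower_violation, upper_violation, equality_violation):
    """Combine violation flags into a short code like 'v', 'l', or 'le'."""
    code = ""
    if lower_violation:
        code += "l"
    if upper_violation:
        code += "u"
    if equality_violation:
        code += "e"
    return code or "v"  # no violations means the point is valid

print(feasibility_code(False, False, False))  # 'v'
print(feasibility_code(True, False, True))    # 'le'
```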
For instance, for a random flux distribution (which should not be feasible):
In [8]:
import numpy as np
bad = np.random.uniform(-1000, 1000, size=len(model.reactions))
achr.validate(np.atleast_2d(bad))
Out[8]:
array(['le'], dtype='<U3')
And for our generated samples:
In [9]:
achr.validate(s1)
Out[9]:
array(['v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v'], dtype='<U3')
Even though most models are numerically stable enough that the sampler should only generate valid samples, we still urge you to check this. validate is fast and works quickly even for large models and many samples. If you find invalid samples, you do not necessarily have to rerun the entire sampling; you can simply exclude them from the sample DataFrame.
In [10]:
s1_valid = s1[achr.validate(s1) == "v"]
len(s1_valid)
Out[10]:
100
7.2.3. Batch sampling¶
Sampler objects are made for generating billions of samples; however, using the sample function might quickly fill up your RAM when working with genome-scale models. Here, the batch method of the sampler objects comes in handy. batch takes two arguments: the number of samples in each batch and the number of batches. This will make sense with a small example.
Let’s assume we want to quantify what proportion of our samples will grow. For that we might want to generate 10 batches of 100 samples each and measure what percentage of the samples in each batch show a growth rate larger than 0.1. Finally, we want to calculate the mean and standard deviation of those percentages.
In [11]:
counts = [np.mean(s.Biomass_Ecoli_core > 0.1) for s in optgp.batch(100, 10)]
print("Usually {:.2f}% +- {:.2f}% grow...".format(
np.mean(counts) * 100.0, np.std(counts) * 100.0))
Usually 10.90% +- 3.83% grow...
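The batch statistic itself is ordinary NumPy; the same computation can be reproduced on synthetic data, with random numbers standing in for sampled growth rates (nothing here is cobrapy-specific):

```python
import numpy as np

# Synthetic stand-in for 10 batches of 100 growth rates each.
rng = np.random.default_rng(0)
batches = rng.uniform(0.0, 0.3, size=(10, 100))

# Fraction of "growing" samples (rate > 0.1) per batch, then mean +- std.
counts = [np.mean(batch > 0.1) for batch in batches]
print("Usually {:.2f}% +- {:.2f}% grow...".format(
    np.mean(counts) * 100.0, np.std(counts) * 100.0))
```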
7.3. Adding constraints¶
Flux sampling will respect additional constraints defined in the model. For instance, we can add a constraint enforcing growth in a similar manner as in the section before.
In [12]:
co = model.problem.Constraint(model.reactions.Biomass_Ecoli_core.flux_expression, lb=0.1)
model.add_cons_vars([co])
Note that this is only for demonstration purposes; usually you would set the lower bound of the reaction directly instead of creating a new constraint.
In [13]:
s = sample(model, 10)
print(s.Biomass_Ecoli_core)
0 0.118106
1 0.120205
2 0.206187
3 0.198633
4 0.206575
5 0.119032
6 0.119231
7 0.127219
8 0.120086
9 0.182586
Name: Biomass_Ecoli_core, dtype: float64
As we can see, our new constraint was respected.