7. Flux sampling

7.1. Basic usage

The easiest way to get started with flux sampling is the sample function in the cobra.sampling submodule. sample takes at least two arguments: a cobra model and the number of samples you want to generate.

[1]:
from cobra.test import create_test_model
from cobra.sampling import sample

model = create_test_model("textbook")
s = sample(model, 100)
s.head()
[1]:
ACALD ACALDt ACKr ACONTa ACONTb ACt2r ADK1 AKGDH AKGt2r ALCD2x ... RPI SUCCt2_2 SUCCt3 SUCDi SUCOAS TALA THD2 TKT1 TKT2 TPI
0 -2.060626 -0.766231 -1.746726 6.136642 6.136642 -1.746726 13.915541 2.174506 -0.242290 -1.294395 ... -6.117270 33.457990 34.319917 704.483302 -2.174506 6.109618 0.230408 6.109618 6.106540 3.122076
1 -1.518217 -1.265778 -0.253608 9.081331 9.081331 -0.253608 7.194475 5.979050 -0.225992 -0.252439 ... -5.072733 39.902893 40.343192 718.488475 -5.979050 4.991843 0.137019 4.991843 4.959315 4.172389
2 -3.790368 -1.292543 -0.457502 9.340755 9.340755 -0.457502 23.435794 1.652395 -0.333891 -2.497825 ... -0.674220 0.153276 1.506968 844.889698 -1.652395 0.673601 9.198001 0.673601 0.673352 7.770955
3 -5.173189 -4.511308 -2.333962 7.364836 7.364836 -2.333962 11.725401 2.504044 -0.051420 -0.661881 ... -0.681200 7.506732 9.110446 885.755585 -2.504044 0.656561 7.514520 0.656561 0.646653 8.450394
4 -6.787036 -5.645414 -1.521566 6.373250 6.373250 -1.521566 4.823373 3.452123 -0.126943 -1.141621 ... -0.510598 9.307459 10.941500 749.854462 -3.452123 0.474878 6.235982 0.474878 0.460514 8.908012

5 rows × 95 columns

By default sample uses the optgp method, based on the approach of Megchelenbrink et al. (2014), since it is well suited to larger models and can run in parallel. The sampler defaults to a single process; this can be changed with the processes argument.

[2]:
print("One process:")
%time s = sample(model, 1000)
print("Two processes:")
%time s = sample(model, 1000, processes=2)
One process:
CPU times: user 19.7 s, sys: 918 ms, total: 20.6 s
Wall time: 16.1 s
Two processes:
CPU times: user 1.31 s, sys: 154 ms, total: 1.46 s
Wall time: 8.76 s

Alternatively, you can use Artificial Centering Hit-and-Run for sampling by setting the method to achr. achr does not support parallel execution but has good convergence and is almost Markovian (the sampler's history consists only of the current point).

[3]:
s = sample(model, 100, method="achr")

In general, setting up the sampler is expensive, since initial search directions are generated by solving many linear programming problems. Thus, we recommend generating as many samples as possible in one go, for instance as sketched below. However, this might require finer control over the sampling procedure, as described in the following section.
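As an illustrative sketch (not part of the tutorial cells above), one large call amortizes the setup cost, and the resulting DataFrame can be split afterwards:

[ ]:
# One call amortizes the expensive setup over all samples.
s_all = sample(model, 200)

# Split the resulting DataFrame if separate sample sets are needed.
first_half = s_all.iloc[:100]
second_half = s_all.iloc[100:]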

7.2. Advanced usage

7.2.1. Sampler objects

The sampling process can be controlled on a lower level by using the sampler classes directly.

[4]:
from cobra.sampling import OptGPSampler, ACHRSampler

Both sampler classes have standardized interfaces and take some additional arguments, for instance the thinning factor. “Thinning” means only recording samples every n iterations. A higher thinning factor means less correlated samples but also longer computation times. By default the samplers use a thinning factor of 100, which creates roughly uncorrelated samples. If you want fewer samples but better mixing, feel free to increase this parameter. If you want to study convergence for your own model, you might want to set it to 1 to obtain all iterates.

[5]:
achr = ACHRSampler(model, thinning=10)
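For such a convergence study, a minimal sketch might look like the following (PFK is just an arbitrary reaction from the textbook model, and the lag values are illustrative):

[ ]:
# Record every iterate (thinning=1) to inspect how fast the chain mixes.
raw = ACHRSampler(model, thinning=1)
iterates = raw.sample(1000)

# High autocorrelation at small lags means consecutive iterates are
# strongly correlated and a larger thinning factor is needed.
print(iterates["PFK"].autocorr(lag=1))
print(iterates["PFK"].autocorr(lag=100))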

OptGPSampler has an additional processes argument specifying how many processes are used to create parallel sampling chains. This should be on the order of your CPU cores for maximum efficiency. As noted before, class initialization can take up to a few minutes due to the generation of initial search directions. Sampling, on the other hand, is quick.

[6]:
optgp = OptGPSampler(model, processes=4)

7.2.2. Sampling and validation

Both samplers have a sample function that generates samples from the initialized object and acts like the sample function described above, except that it accepts only a single argument: the number of samples. For OptGPSampler the number of samples should be a multiple of the number of processes; otherwise it will be increased to the nearest multiple automatically, as sketched below.

[7]:
s1 = achr.sample(100)

s2 = optgp.sample(100)
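As a quick sketch of the rounding rule (assuming the four-process sampler created above; the expected value follows from rounding 10 up to the next multiple of 4):

[ ]:
# Requesting a sample count that is not a multiple of the process count
# yields the next multiple instead.
s3 = optgp.sample(10)
print(len(s3))  # expected: 12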

You can call sample repeatedly, and both samplers are optimized to generate large amounts of samples without falling into “numerical traps”. All sampler objects have a validate function that checks whether a set of points is feasible and reports feasibility violations in the form of a short code. Here the short code is a combination of any of the following letters:

  • “v” - valid point

  • “l” - lower bound violation

  • “u” - upper bound violation

  • “e” - equality violation (meaning the point is not a steady state)

For instance, for a random flux distribution (which should not be feasible):

[8]:
import numpy as np

bad = np.random.uniform(-1000, 1000, size=len(model.reactions))
achr.validate(np.atleast_2d(bad))
[8]:
array(['le'], dtype='<U3')

And for our generated samples:

[9]:
achr.validate(s1)
[9]:
array(['v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
       'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
       'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
       'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
       'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
       'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
       'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v',
       'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v', 'v'], dtype='<U3')

Even though most models are numerically stable enough that the sampler should only generate valid samples, we still urge you to check this. validate is fast and works even for large models with many samples. If you find invalid samples, you do not necessarily have to rerun the entire sampling; you can simply exclude them from the sample DataFrame.

[10]:
s1_valid = s1[achr.validate(s1) == "v"]
len(s1_valid)
[10]:
100

7.2.3. Batch sampling

Sampler objects are made for generating billions of samples; however, using the sample function might quickly fill up your RAM when working with genome-scale models. Here, the batch method of the sampler objects comes in handy. batch takes two arguments: the number of samples in each batch and the number of batches. A small example will make this clear.

Let’s assume we want to quantify what proportion of our samples will grow. For that we might want to generate 10 batches of 100 samples each and measure what percentage of the samples in each batch shows a growth rate larger than 0.1. Finally, we want to calculate the mean and standard deviation of those individual percentages.

[11]:
counts = [np.mean(s.Biomass_Ecoli_core > 0.1) for s in optgp.batch(100, 10)]
print("Usually {:.2f}% +- {:.2f}% grow...".format(
    np.mean(counts) * 100.0, np.std(counts) * 100.0))
Usually 14.50% +- 2.16% grow...
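Because each batch is an ordinary DataFrame, you can also process or store the batches one at a time instead of holding everything in memory. A sketch (the file names are made up):

[ ]:
# Stream batches to disk so memory usage stays bounded by one batch.
for i, batch in enumerate(optgp.batch(1000, 5)):
    batch.to_csv("samples_{}.csv".format(i))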

7.3. Adding constraints

Flux sampling will respect additional constraints defined in the model. For instance, we can add a constraint enforcing growth in a similar manner as in the section before.

[12]:
co = model.problem.Constraint(model.reactions.Biomass_Ecoli_core.flux_expression, lb=0.1)
model.add_cons_vars([co])

Note that this is only for demonstration purposes. Usually you would set the lower bound of the reaction directly instead of creating a new constraint.
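For reference, the direct equivalent would be:

[ ]:
# Setting the bound on the reaction itself has the same effect.
model.reactions.Biomass_Ecoli_core.lower_bound = 0.1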

[13]:
s = sample(model, 10)
print(s.Biomass_Ecoli_core)
0    0.124471
1    0.151331
2    0.108145
3    0.144076
4    0.110480
5    0.109024
6    0.111399
7    0.139682
8    0.103511
9    0.116880
Name: Biomass_Ecoli_core, dtype: float64

As we can see, our new constraint was respected.