Metrics
Perceval provides mathematical metrics to compare BSDistribution objects.

>>> import perceval as pcvl
>>> dist_a = pcvl.BSDistribution({pcvl.BasicState([1, 0]): 0.4, pcvl.BasicState([0, 1]): 0.6})
>>> dist_b = pcvl.BSDistribution({pcvl.BasicState([1, 0]): 0.3, pcvl.BasicState([0, 1]): 0.7})
>>> print(pcvl.tvd_dist(dist_a, dist_b))
0.1
>>> print(pcvl.kl_divergence(dist_a, dist_b))
0.022582421084357485
- perceval.utils.dist_metrics.kl_divergence(ideal_dist, est_dist)
Computes the Kullback-Leibler (KL) divergence of a model (simulated or observed) BSDistribution with respect to an ideal BSDistribution. The computation ignores states that are absent from the estimated distribution or that have null probabilities.
- Parameters:
  - ideal_dist (BSDistribution) – Ideal BSDistribution (known from theory or an ideal computation)
  - est_dist (BSDistribution) – Estimated BSDistribution (simulated or observed from experiment)
- Return type:
float
- Returns:
KL divergence of the estimated distribution relative to the ideal.
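The doctest value above can be reproduced by applying the KL formula directly. The sketch below re-implements it on plain dicts standing in for BSDistributions (an assumption for illustration; the real function operates on Perceval BSDistribution objects), skipping states that are missing or have zero probability in the estimated distribution, as described above.

```python
import math

def kl_divergence_sketch(ideal, est):
    """Illustrative KL divergence on plain {state: probability} dicts.

    Terms whose estimated probability is absent or zero are skipped,
    matching the behaviour documented for pcvl.kl_divergence.
    """
    return sum(
        p * math.log(p / est[s])      # natural log, p * ln(p / q)
        for s, p in ideal.items()
        if est.get(s, 0) > 0          # ignore missing / null-probability states
    )

ideal = {"|1,0>": 0.4, "|0,1>": 0.6}
est = {"|1,0>": 0.3, "|0,1>": 0.7}
print(kl_divergence_sketch(ideal, est))  # ≈ 0.0225824..., matching the doctest
```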
- perceval.utils.dist_metrics.tvd_dist(dist_lh, dist_rh)
Computes the Total Variation Distance (TVD) between two input BSDistributions.
- Parameters:
  - dist_lh (BSDistribution) – First BSDistribution
  - dist_rh (BSDistribution) – Second BSDistribution
- Return type:
float
- Returns:
total variation distance between the two BSDistributions (value between 0 and 1)
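The TVD is half the L1 distance between the two probability vectors, which is why it always falls between 0 and 1. A minimal sketch on plain dicts (an assumption for illustration; the real function takes Perceval BSDistributions):

```python
def tvd_sketch(dist_lh, dist_rh):
    """Illustrative total variation distance on {state: probability} dicts."""
    states = set(dist_lh) | set(dist_rh)
    # Half the sum of absolute probability differences over all states.
    return 0.5 * sum(abs(dist_lh.get(s, 0) - dist_rh.get(s, 0)) for s in states)

a = {"|1,0>": 0.4, "|0,1>": 0.6}
b = {"|1,0>": 0.3, "|0,1>": 0.7}
print(tvd_sketch(a, b))  # ≈ 0.1, matching the doctest
```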