## Jensen-Shannon Distance Example

The Jensen-Shannon distance measures the difference between two probability distributions. For example, suppose P = [0.36, 0.48, 0.16] and Q = [0.30, 0.50, 0.20]. The Jensen-Shannon distance between the two probability distributions is 0.0508. If two distributions are the same, the Jensen-Shannon distance between them is 0. Jensen-Shannon distance is based on the Kullback-Leibler divergence. In words, to compute the Jensen-Shannon distance between P and Q, you first compute M as the average of P and Q, and then Jensen-Shannon is the square root of the average of KL(P,M) and KL(Q,M). In symbols:

JS(P,Q) = sqrt( [KL(P,M) + KL(Q,M)] / 2 )
where M = (P + Q) / 2

So the key to computing JS is understanding how to compute KL.

KL(P,Q) = Sum_i( p[i] * ln(p[i] / q[i]) )
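As a quick sanity check of the KL formula, here is a minimal NumPy sketch that computes KL(P,M) for the P and M values used in the worked example below (the function name `kl_divergence` is just my label for the sketch):

```python
import numpy as np

def kl_divergence(p, q):
    # KL(P,Q) = sum over i of p[i] * ln(p[i] / q[i])
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    return float(np.sum(p * np.log(p / q)))

p = [0.36, 0.48, 0.16]
m = [0.33, 0.49, 0.18]  # average of P and Q from the example
print(kl_divergence(p, m))  # roughly 0.0026
```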

Here’s a worked example of Jensen-Shannon distance:

```
P = [0.36, 0.48, 0.16]
Q = [0.30, 0.50, 0.20]

M = 1/2 * (P + Q)
= [0.33, 0.49, 0.18]

KL(P,M) = 0.36 * ln(0.36 / 0.33) +
0.48 * ln(0.48 / 0.49) +
0.16 * ln(0.16 / 0.18)
= 0.002582

KL(Q,M) = 0.30 * ln(0.30 / 0.33) +
0.50 * ln(0.50 / 0.49) +
0.20 * ln(0.20 / 0.18)
= 0.002580

JS(P,Q) = sqrt[ (KL(P,M) + KL(Q,M)) / 2 ]
= sqrt[ (0.002582 + 0.002580) / 2 ]
= 0.050803
```

The Jensen-Shannon distance is symmetric, meaning JS(P,Q) = JS(Q,P). This is in contrast to Kullback-Leibler divergence which is not symmetric, meaning KL(P,Q) != KL(Q,P) in general.
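A short sketch illustrating this point, using a made-up pair of strongly asymmetric distributions (not the P and Q from the worked example) so the KL asymmetry is easy to see:

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence: sum of p[i] * ln(p[i] / q[i])
    return float(np.sum(p * np.log(p / q)))

def js(p, q):
    # Jensen-Shannon distance via the average distribution M
    m = 0.5 * (p + q)
    return float(np.sqrt((kl(p, m) + kl(q, m)) / 2))

# made-up example distributions
p = np.array([0.10, 0.40, 0.50])
q = np.array([0.80, 0.15, 0.05])

print("KL(P,Q) = %0.4f" % kl(p, q))  # differs from KL(Q,P)
print("KL(Q,P) = %0.4f" % kl(q, p))
print("JS(P,Q) = %0.4f" % js(p, q))  # equals JS(Q,P)
print("JS(Q,P) = %0.4f" % js(q, p))
```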

Jensen-Shannon is not used very often. I’m not sure why this is so.

The Python scipy code library has an implementation of Jensen-Shannon distance but JS is easy to compute from scratch using a program-defined function if you want to avoid an external dependency.

Jensen-Shannon and Kullback-Leibler are both examples of mathematical f-divergence functions that calculate the difference between two probability distributions. Other f-divergences include Hellinger distance, Pearson Chi-Square divergence, and alpha divergence.
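For comparison, here is a minimal sketch of one of those other f-divergences, the Hellinger distance, which is 1/sqrt(2) times the Euclidean distance between the element-wise square roots of the two distributions. Like Jensen-Shannon, it is symmetric:

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance: (1/sqrt(2)) * Euclidean distance between
    # the element-wise square roots of the two distributions
    return float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q))**2)) / np.sqrt(2.0))

p = np.array([0.36, 0.48, 0.16])
q = np.array([0.30, 0.50, 0.20])
print("Hellinger(P,Q) = %0.6f" % hellinger(p, q))
```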

Jensen-Shannon distance combines two simple ideas: the average of two probability distributions and Kullback-Leibler divergence. The result is useful. I don’t know much about jewelry, but I’ve always thought that semi-precious stones like onyx and malachite are far more interesting than fancy gems like diamonds and rubies. And the combination of simple stones can be very appealing (to my eye anyway).

Left: A necklace made from lapis lazuli and turquoise. Center: A necklace made from coral and onyx. Right: A necklace made from malachite and amethyst.

```
# jensen_shannon_distance.py
# example of Jensen-Shannon distance

import numpy as np
from scipy.spatial import distance

def KL(p, q):
  # Kullback-Leibler "from q to p"
  # p and q are np array prob distributions
  n = len(p)
  total = 0.0
  for i in range(n):
    total += p[i] * np.log(p[i] / q[i])
  return total

def JS(p, q):
  m = 0.5 * (p + q)  # avg of P and Q
  left = KL(p, m)
  right = KL(q, m)
  return np.sqrt((left + right) / 2)

def main():
  print("\nBegin Jensen-Shannon distance demo ")
  np.set_printoptions(precision=4, suppress=True, sign=" ")

  p = np.array([0.36, 0.48, 0.16], dtype=np.float32)
  q = np.array([0.30, 0.50, 0.20], dtype=np.float32)

  print("\nThe P distribution is: ")
  print(p)
  print("The Q distribution is: ")
  print(q)

  js_pq = JS(p, q)
  js_qp = JS(q, p)

  print("\nJS(P,Q) dist = %0.6f " % js_pq)
  print("JS(Q,P) dist = %0.6f " % js_qp)

  js_sp = distance.jensenshannon(p, q)
  print("\nJS(P,Q) dist using scipy = %0.6f " % js_sp)

  print("\nEnd demo ")

if __name__ == "__main__":
  main()
```