The Learning Parity With Noise (LPN) Problem

The learning parity with noise (LPN) problem is simple but fascinating (to me anyway). To understand LPN you have to first understand what parity is, and then what the regular learning parity (without noise) problem is.

The parity of a string of bits is 0 if the number of 1-bits is even, and the parity is 1 if the number of 1-bits is odd. For example, if x = (0, 1, 0) the parity is 1. If x = (0, 1, 1, 0, 1, 0, 1) the parity is 0.

The parity of a string of bits can be computed as the bit dot product of the string with itself. If x = (0, 1, 0), then x . x = (0 * 0) xor (1 * 1) xor (0 * 0) = 0 xor 1 xor 0 = 1.
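
Here is a minimal Python sketch of that computation; the parity() helper name is just for illustration:

def parity(x):
  # xor accumulates the mod-2 sum of the bits
  p = 0
  for xi in x:
    p ^= xi
  return p

print(parity([0, 1, 0]))              # 1
print(parity([0, 1, 1, 0, 1, 0, 1]))  # 0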

The regular learning parity (without noise) is best explained by an example. Suppose you are working with d = 4 bits. All possible bit patterns and their parity are:

 x        f(x)
--------------
0 0 0 0    0
0 0 0 1    1
0 0 1 0    1
0 0 1 1    0

0 1 0 0    1
0 1 0 1    0
0 1 1 0    0
0 1 1 1    1

1 0 0 0    1
1 0 0 1    0
1 0 1 0    0
1 0 1 1    1

1 1 0 0    0
1 1 0 1    1
1 1 1 0    1
1 1 1 1    0
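
The table above can be generated with a short Python sketch (it re-uses a parity() helper like the one earlier; the code is just for illustration):

import itertools

def parity(x):
  p = 0
  for xi in x:
    p ^= xi
  return p

# enumerate all 2^4 = 16 bit patterns and print each with its parity
for x in itertools.product([0, 1], repeat=4):
  print(x, parity(x))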

The goal of the (regular) learning parity problem is, given this table, to find a function f(x) that generates the results. This example is just the full parity function. A more general version is when you don’t have all possible bit patterns. For example, suppose the available input patterns are just the first four and last two patterns above:

 x        f(x)
--------------
0 0 0 0    0
0 0 0 1    1
0 0 1 0    1
0 0 1 1    0

1 1 1 0    1
1 1 1 1    0

This is still just the parity function, but it would be harder to learn because there are fewer examples to learn from.

The regular learning parity problem is mildly interesting from a machine learning perspective because if the number of bits, d, is large (roughly greater than 30 bits, depending on the neural network architecture), a neural network cannot learn the function in practice. However, the regular parity problem can be easily learned using classical Gaussian elimination over GF(2), that is, ordinary row reduction with mod-2 arithmetic.
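
Here is a minimal sketch of the Gaussian elimination approach, under the assumption that the target function has the linear form f(x) = x . s mod 2 for some unknown secret bit vector s (the ordinary parity function is the special case where s is all ones). The gf2_solve() name and the four hand-picked noise-free examples are just for illustration:

def gf2_solve(xs, ys):
  # Gaussian elimination over GF(2): solve x . s = y (mod 2) for s,
  # given enough linearly independent examples.
  d = len(xs[0])
  rows = [list(x) + [y] for x, y in zip(xs, ys)]  # augmented matrix
  pivot_row = 0
  pivot_cols = []
  for col in range(d):
    for r in range(pivot_row, len(rows)):  # find a row with a 1 in this column
      if rows[r][col] == 1:
        rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
        break
    else:
      continue  # no pivot in this column
    for r in range(len(rows)):  # clear this column from every other row
      if r != pivot_row and rows[r][col] == 1:
        rows[r] = [a ^ b for a, b in zip(rows[r], rows[pivot_row])]
    pivot_cols.append(col)
    pivot_row += 1
  s = [0] * d
  for i, col in enumerate(pivot_cols):
    s[col] = rows[i][d]
  return s

# four noise-free examples with d = 4; the recovered secret is all ones,
# which is exactly the parity function
xs = [(0,0,0,1), (0,0,1,0), (0,1,0,0), (1,0,0,0)]
ys = [1, 1, 1, 1]
print(gf2_solve(xs, ys))  # [1, 1, 1, 1]

With no noise, any d linearly independent examples pin down s exactly, which is why the noise-free problem is easy even when d is large.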

OK, now the learning parity with noise problem. One example of the LPN problem with p = 0.25 looks like this:

 x        f(x)
--------------
0 0 0 0    0
0 0 0 1    1
0 0 1 0    0  *
0 0 1 1    1  *

0 1 0 0    1
0 1 0 1    0
0 1 1 0    0
0 1 1 1    1

1 0 0 0    0  *
1 0 0 1    0
1 0 1 0    0
1 0 1 1    1

1 1 0 0    0
1 1 0 1    0  *
1 1 1 0    1
1 1 1 1    0

The bit patterns marked with a * character have had their normal parity values flipped. Four of the 16 patterns (4/16 = 0.25) were randomly affected. In other words, the table has had noise added to it with p = 0.25.
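
Here is a minimal Python sketch that generates a noisy table like this one. In the sketch each label is flipped independently with probability p = 0.25 (the usual LPN formulation), rather than flipping exactly 4 of the 16 labels as in the hand-made table above:

import itertools, random

def parity(x):
  p = 0
  for xi in x:
    p ^= xi
  return p

random.seed(0)
p_noise = 0.25
for x in itertools.product([0, 1], repeat=4):
  y = parity(x)
  if random.random() < p_noise:
    print(x, y ^ 1, "*")  # flipped (noisy) label
  else:
    print(x, y)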

So, what’s the point of LPN? It is conjectured (but not proven) that the LPN problem is computationally hard. This means, loosely, that you can’t recover the f(x) function much faster than by brute-force search over the possible solutions. Again, so what? The implication is that LPN might be useful in cryptography algorithms because LPN probably cannot be solved efficiently even with quantum computing. Put another way, LPN might be useful as part of a post-quantum (PQ) cryptography system.


Left: Well-known and prolific artist Karol Bok has a very distinctive style, usually characterized by a radial design around a female model’s head. Right: As a tribute to Bok, artist Tatiana Larina made a painting in Bok’s style (based on a photograph by Tony Zhou) — sort of learning Bok style with noise.


1 Response to The Learning Parity With Noise (LPN) Problem

  1. Shaowen Geng says:

    Hi, James. I’m really interested in solving LPN via deep learning. Why is the upper bound roughly 30 bits? I want to know how to solve instances with large dimension (using deep learning). Thank you!
