June 18, 2021

SpywareNews.com


Quantum Reduction of Finding Short Code Vectors to the Decoding Problem. (arXiv:2106.02747v1 [cs.CR])

We give a quantum reduction from finding short codewords in a random linear
code to decoding for the Hamming metric. This is the first time such a
reduction (classical or quantum) has been obtained. Our reduction adapts to
linear codes Stehlé-Steinfeld-Tanaka-Xagawa's re-interpretation of Regev's
quantum reduction from finding short lattice vectors to solving the Closest
Vector Problem. The Hamming metric is a much coarser metric than the Euclidean
metric, and this adaptation required several new ingredients to make it work.
For instance, to obtain a meaningful reduction it is necessary in the Hamming
metric to choose a very large decoding radius, which in many cases lies beyond
the radius where decoding is unique. Another crucial step in the analysis of
the reduction is the choice of the errors that are fed to the decoding
algorithm. For lattices, errors are usually sampled according to a Gaussian
distribution. However, it turns out that the Bernoulli distribution (the code
analogue of the Gaussian) is too spread out and cannot be used for the
reduction with codes. Instead, we choose the uniform distribution over errors
of a fixed weight, bring in tools from orthogonal polynomials to carry out the
analysis, and add an amplitude amplification step to obtain the aforementioned
result.
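To make the contrast between the two error models concrete, here is a minimal Python sketch (not taken from the paper; the length n = 1000, weight t = 100, and sample count are hypothetical illustration choices). It samples errors with i.i.d. Bernoulli coordinates and errors drawn uniformly from the set of vectors of fixed Hamming weight, showing that the Bernoulli weight spreads around its mean while the fixed-weight distribution concentrates on exactly t.

import random

def bernoulli_error(n, tau, rng=random):
    # Sample an error in F_2^n with i.i.d. Bernoulli(tau) coordinates
    # (the code analogue of the Gaussian mentioned in the abstract).
    return [1 if rng.random() < tau else 0 for _ in range(n)]

def fixed_weight_error(n, t, rng=random):
    # Sample an error uniformly among all vectors of Hamming weight exactly t
    # (the distribution used for the reduction instead).
    e = [0] * n
    for i in rng.sample(range(n), t):
        e[i] = 1
    return e

if __name__ == "__main__":
    # Hypothetical toy parameters, not from the paper.
    n, t = 1000, 100
    tau = t / n  # Bernoulli parameter matching the expected weight t

    bern = [sum(bernoulli_error(n, tau)) for _ in range(2000)]
    mean = sum(bern) / len(bern)
    var = sum((w - mean) ** 2 for w in bern) / len(bern)

    # The Bernoulli weight fluctuates around t with variance about n*tau*(1-tau),
    # while the fixed-weight sampler always returns weight exactly t.
    print("Bernoulli error weight: mean %.1f, variance %.1f" % (mean, var))
    print("Fixed-weight error weight:", sum(fixed_weight_error(n, t)))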
