I. Introduction
The Gaussian measure is employed in lattice-based cryptography through the Shortest Vector Problem (SVP), which states that finding a vector in a given secret basis whose norm matches that of the shortest nonzero vector in the lattice is computationally difficult to approximate. This is because a basis is characterized by the norm of its longest vector rather than the norm of its shortest. The Learning with Errors (LWE) problem approaches this by requiring the best approximation of error vectors that, reduced modulo the lattice, are uniformly distributed, given only a normal distribution of narrow width. The uniform distribution is taken over the Euclidean space R^n, which is unbounded. A lattice is a discrete additive subgroup of this n-dimensional Euclidean space, isomorphic as a group to Z^n. The basis vectors lie in the lattice, while the coefficients that combine them range over the Euclidean space. The aim of a polynomial-time adversary is to seek the best approximation at different values of the standard deviation of the distribution. If the vectors are sampled accurately, a computationally bounded adversary cannot distinguish a simulated distribution of the ciphertext from a uniform distribution; consequently, distinguishing the ciphertext of a fixed message from uniform in a chosen-ciphertext attack game is intractable.

The Gaussian sampling algorithm first applies an efficient reduction algorithm to the lattice basis, after which a Monte Carlo based sampling method generates the secret vectors. The complexity of the lattice reduction algorithm depends on the magnitude of the uniform error vectors. The lattice points have large Euclidean norms so that approximating a uniform distribution, bounded exponentially in the security parameter, is difficult for a computationally bounded adversary. This effect is strengthened by raising the standard deviation of the distribution, which transfers probability density onto the discretized vectors of interest and in the process increases their Euclidean norms [4]. Two vectors sampled from the uniform distribution are separated by a distance that makes them orthogonal to each other even though they are adjacent in space, and the angle separating them is sampled from an interval that satisfies the condition for accurate approximation. This means the vectors must be orthogonal in space for the normal distribution, reduced modulo the lattice, to converge exponentially to the uniform distribution.

To employ a 'cutting' mechanism that removes portions of the tail of the normal distribution, the Euclidean norm of the sampled vectors should exceed the standard deviation of the distribution. If the distribution were uniform, the probability mass would be constant, distributing the norm evenly among the vectors. The tail of the distribution is the region where the discrete probability mass is negligible while the continuous distribution extends without bound, so sampling there is intractable. A smoothing parameter is defined to cut off this region and make the security analysis possible. This implies that as the security parameter increases in magnitude, the number of queries available to an adversary drops exponentially. In the ring variant of Learning with Errors (Ring-LWE), the coefficients of the error vectors are related to the canonical embedding by an orthogonal map up to a scaling factor.
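For reference, the standard definitions underlying this discussion (the notation is ours, since the text above does not fix one) are the Gaussian function, the discrete Gaussian over a lattice, and the smoothing parameter:

\rho_{\sigma}(\mathbf{x}) = e^{-\pi \|\mathbf{x}\|^{2}/\sigma^{2}}, \qquad
D_{\Lambda,\sigma}(\mathbf{x}) = \frac{\rho_{\sigma}(\mathbf{x})}{\rho_{\sigma}(\Lambda)} \quad \text{for } \mathbf{x} \in \Lambda, \qquad
\rho_{\sigma}(\Lambda) = \sum_{\mathbf{y} \in \Lambda} \rho_{\sigma}(\mathbf{y}),

\eta_{\epsilon}(\Lambda) = \min\{\, s > 0 \;:\; \rho_{1/s}(\Lambda^{*} \setminus \{\mathbf{0}\}) \leq \epsilon \,\},

where \Lambda^{*} is the dual lattice. For \sigma \geq \eta_{\epsilon}(\Lambda), a Gaussian of width \sigma reduced modulo \Lambda is within statistical distance \epsilon/2 of the uniform distribution on \mathbb{R}^{n}/\Lambda, which is the convergence to uniformity invoked above.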
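As a minimal sketch of the tail-cutting idea over the one-dimensional lattice Z (illustrative only: the function name and the tail_cut parameter are ours, and a production sampler would additionally need constant-time techniques):

import math
import random

def sample_discrete_gaussian(sigma: float, tail_cut: float = 6.0) -> int:
    """Rejection-sample x from D_{Z,sigma}, cutting the tail at |x| <= tail_cut * sigma.

    The mass discarded by the cut decays like exp(-pi * tail_cut**2), so a cut
    at about six widths loses only a negligible fraction of the distribution.
    """
    bound = int(math.ceil(tail_cut * sigma))
    while True:
        x = random.randint(-bound, bound)  # uniform proposal on the cut support
        # accept with probability rho_sigma(x) = exp(-pi * x^2 / sigma^2)
        if random.random() <= math.exp(-math.pi * x * x / (sigma * sigma)):
            return x

The rejection loop takes roughly 2 * tail_cut iterations on average, but its running time varies with the random choices, which is the constant-time concern raised below [21].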
The significance of the precision of the security parameter has been studied in the literature [16]. Lattice reduction brings the basis toward its Gram-Schmidt orthogonalization using the LLL algorithm [11], whose objective is to construct a basis of short Euclidean norm with nearly orthogonal vectors. LLL reduction has drawbacks that have been noted in the literature: it employs integers of enormous precision and floating-point numbers that consume storage [17]; it suffers from high complexity and lacks a constant-time implementation [21]; and it makes sample generation difficult because the canonical embedding consists of non-integer vectors [2]. In this paper, we propose a solution to the complexity problem by applying dimensionality reduction: an embedding is defined that transforms the basis from a high-dimensional point to a low-dimensional one and introduces rows of independent vectors, as an alternative to the orthogonal vectors used in the literature. This offers a tighter security proof and a tighter approximation.
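To make the Gram-Schmidt target of the reduction concrete, a short numpy sketch (function name ours) of the orthogonalization that LLL size-reduces a basis against:

import numpy as np

def gram_schmidt(B: np.ndarray) -> np.ndarray:
    """Return the Gram-Schmidt orthogonalization B* of the rows of B.

    LLL size-reduces each basis vector against the earlier B* vectors and
    measures the quality of the reduced basis by the norms of the rows of B*.
    """
    B = B.astype(float)
    B_star = np.zeros_like(B)
    for i in range(B.shape[0]):
        v = B[i].copy()
        for j in range(i):
            mu = (B[i] @ B_star[j]) / (B_star[j] @ B_star[j])  # projection coefficient
            v -= mu * B_star[j]                                # remove component along B*[j]
        B_star[i] = v
    return B_star

This classical loop also illustrates the precision issue cited above [17]: computed exactly, the mu coefficients are rationals whose numerators and denominators grow rapidly, while floating-point variants trade that storage cost for rounding error.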
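Our embedding is developed in later sections; purely as an illustration of the general shape of such a map (the random projection here is a hypothetical stand-in, not our construction), a matrix with independent Gaussian rows sends a basis from high to low dimension while approximately preserving Euclidean norms:

import numpy as np

def project_basis(B: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Map the rows of an n-column basis matrix B into R^k, with k < n.

    The rows of R are independent (not orthogonal) Gaussian vectors; the
    1/sqrt(k) scaling approximately preserves norms in the
    Johnson-Lindenstrauss sense.
    """
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((k, B.shape[1])) / np.sqrt(k)  # k independent rows
    return B @ R.T                                         # each basis vector lands in R^k

# Example: a 128-dimensional basis mapped down to 32 dimensions.
B = np.random.default_rng(1).integers(-5, 6, size=(128, 128)).astype(float)
print(project_basis(B, k=32).shape)  # (128, 32)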