On the Exact Variance of Tsallis Entanglement Entropy in a Random Pure State

The Tsallis entropy is a useful one-parameter generalization of the standard von Neumann entropy in quantum information theory. In this work, we study the variance of the Tsallis entropy of bipartite quantum systems in a random pure state. The main result is an exact variance formula for the Tsallis entropy involving finite sums of terminating hypergeometric functions. In the special cases of the quadratic entropy and of small subsystem dimensions, the main result is further simplified to explicit variance expressions. As a byproduct, we obtain an independent proof of the recently proven variance formula for the von Neumann entropy, based on the derived moment relation to the Tsallis entropy.


Introduction
Classical information theory is the theory behind the modern development of computing, communication, data compression, and other fields. Like its classical counterpart, quantum information theory aims at understanding the theoretical underpinnings of quantum science that will enable future quantum technologies. One of the most fundamental features of quantum science is the phenomenon of quantum entanglement. Quantum states that are highly entangled contain more information about the different parts of the composite system.
As a step toward understanding quantum entanglement, we choose to study the entanglement properties of quantum bipartite systems. The quantum bipartite model, proposed in the seminal work of Page [1], is a standard model for describing the interaction of a physical object with its environment for various quantum systems. In particular, we wish to understand the degree of entanglement as measured by the entanglement entropies of such systems. The statistical behavior of entanglement entropies can be understood from their moments. In principle, the knowledge of all integer moments uniquely determines the distribution of the considered entropy, since the entropy is supported on a finite interval (cf. (5) below). This is also known as Hausdorff's moment problem [2,3]. In practice, a finite number of moments can be utilized to construct approximations to the distribution of the entropy, where the higher moments describe the tail of the distribution and provide crucial information such as whether the mean entropy is a typical value [4]. Of particular importance is the second moment (variance), which governs the fluctuation of the entropy around its mean value. With the first two moments, one can already construct an upper bound on the probability of finding a state with entropy lower than the mean entropy by using concentration of measure techniques [4].
The existing knowledge in the literature is mostly focused on the von Neumann entropy [1,4-10], whose first three exact moments are known. In this work, we consider the Tsallis entropy [11], which is a one-parameter generalization of the von Neumann entropy. The Tsallis entropy enjoys certain advantages in describing quantum entanglement. For example, it overcomes the inability of the von Neumann entropy to model systems with long-range interactions [12]. The Tsallis entropy also has the unique nonadditivity (also known as nonextensivity) property, whose physical relevance to quantum systems has been increasingly identified [13]. In the literature, the mean value of the Tsallis entropy was derived by Malacarne, Mendes, and Lenzi [12]. The focus of this work is to study its variance.
The paper is organized as follows. In Section 2, we introduce the quantum bipartite model and the entanglement entropies. In Section 3, an exact variance formula for the Tsallis entropy in terms of finite sums of terminating hypergeometric functions is derived, which is the main result of this paper. As a byproduct, we provide in Appendix A another proof of the recently proven [4,10] Vivo-Pato-Oshanin conjecture [9] on the variance of the von Neumann entropy. In Section 4, the derived variance formula of the Tsallis entropy is further simplified to explicit expressions in the special cases of the quadratic entropy and of small subsystem dimensions. We summarize the main results and point out a possible approach to studying the higher moments in Section 5.

Bipartite System and Entanglement Entropy
We consider a composite quantum system consisting of two subsystems A and B of Hilbert space dimensions m and n, respectively. The Hilbert space H_{A+B} of the composite system is given by the tensor product of the Hilbert spaces of the subsystems, H_{A+B} = H_A ⊗ H_B. A random pure state (as opposed to a mixed state) of the composite system is written as a linear combination of random coefficients x_{i,j} and the complete bases {|i^A⟩} of H_A and {|j^B⟩} of H_B,

|ψ⟩ = ∑_{i=1}^{m} ∑_{j=1}^{n} x_{i,j} |i^A⟩ ⊗ |j^B⟩.

The corresponding density matrix ρ = |ψ⟩⟨ψ| has the natural constraint tr(ρ) = 1. This implies that the m × n random coefficient matrix X = (x_{i,j}) satisfies

tr(XX†) = 1. (1)

Without loss of generality, it is assumed that m ≤ n. The reduced density matrix ρ_A of the smaller subsystem A admits the Schmidt decomposition

ρ_A = ∑_{i=1}^{m} λ_i |φ_i^A⟩⟨φ_i^A|,

where λ_i is the i-th largest eigenvalue of XX†. The conservation of probability (1) now implies the constraint ∑_{i=1}^{m} λ_i = 1. The probability measure of the random coefficient matrix X is the Haar measure, where the entries are uniformly distributed over all possible values satisfying the constraint (1). The resulting joint density of the ordered eigenvalues of XX† is (see, e.g., [1])

h(λ) = c δ(1 − ∑_{i=1}^{m} λ_i) ∏_{1≤i<j≤m} (λ_i − λ_j)² ∏_{i=1}^{m} λ_i^{n−m}, (2)

where δ(·) is the Dirac delta function and the constant

c = Γ(mn) / ∏_{j=1}^{m} Γ(n − j + 1) Γ(j). (3)

The random matrix ensemble (2) is also known as the (unitary) fixed-trace ensemble.

The above-described quantum bipartite model is useful in modeling various quantum systems. For example, in [1], the subsystem A is a black hole, and the subsystem B is the associated radiation field. In another example [14], the subsystem A is a set of spins, and the subsystem B represents the environment of a heat bath.

The degree of entanglement of quantum systems can be measured by the entanglement entropy, which is a function of the eigenvalues of XX†. The function should monotonically increase from the separable state (λ_1 = 1, λ_2 = · · · = λ_m = 0) to the maximally-entangled state (λ_1 = λ_2 = · · · = λ_m = 1/m).
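To make the ensemble concrete, the following minimal numerical sketch (assuming NumPy; the dimensions m, n and the seed are illustrative) draws a random pure state, forms the reduced density matrix XX†, and checks the constraint on its spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4  # subsystem dimensions, m <= n

# Coefficient matrix X of a random pure state: i.i.d. complex Gaussian
# entries, normalized so that tr(XX†) = tr(ρ) = 1.
Y = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
X = Y / np.sqrt(np.trace(Y @ Y.conj().T).real)

# Spectrum of the reduced density matrix XX† (the Schmidt coefficients λ_i).
lam = np.linalg.eigvalsh(X @ X.conj().T)

print(np.isclose(lam.sum(), 1.0))   # conservation of probability: sum λ_i = 1
print(bool(np.all(lam >= -1e-12)))  # a valid density-matrix spectrum
```

Both checks print `True`: the normalization enforces the fixed-trace constraint exactly, and XX† is positive semi-definite by construction.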
The most well-known entanglement entropy is the von Neumann entropy

S = − ∑_{i=1}^{m} λ_i ln λ_i, (4)

which attains its minimum S = 0 at the separable state and its maximum S = ln m at the maximally-entangled state. A one-parameter generalization of the von Neumann entropy is the Tsallis entropy [11]

T = (1/(q − 1)) (1 − ∑_{i=1}^{m} λ_i^q), (5)

which, by l'Hôpital's rule, reduces to the von Neumann entropy (4) as the non-zero real parameter q approaches one. The Tsallis entropy (5) attains its minimum T = 0 at the separable state and its maximum T = (m^{q−1} − 1)/((q − 1) m^{q−1}) at the maximally-entangled state. In some aspects, the Tsallis entropy provides a better description of entanglement. For example, it overcomes the inability of the von Neumann entropy to model systems with long-range interactions [12]. The Tsallis entropy also has a definite concavity for any q, i.e., it is convex for q < 0 and concave for q > 0. We also point out that by studying the moments of the Tsallis entropy (5) first, one may recover the moments of the von Neumann entropy (4) in a relatively simpler manner as opposed to directly working with the von Neumann entropy. The advantage of this indirect approach has been very recently demonstrated in the works [4,15]. In the same spirit, we will also provide in Appendix A another proof of the variance of the von Neumann entropy starting from the relation to the Tsallis entropy.
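The two entropies and their limiting relation can be checked numerically; a minimal sketch (assuming NumPy; the helper names and the spectrum are illustrative) evaluates both at the maximally-entangled spectrum λ_i = 1/m.

```python
import numpy as np

def von_neumann(lam):
    """S = -sum_i lam_i ln lam_i, with the convention 0 ln 0 = 0."""
    lam = lam[lam > 0]
    return -np.sum(lam * np.log(lam))

def tsallis(lam, q):
    """T = (1 - sum_i lam_i^q) / (q - 1), for real q != 1."""
    return (1.0 - np.sum(lam ** q)) / (q - 1.0)

m = 4
lam = np.full(m, 1.0 / m)  # maximally-entangled spectrum

# Maximum of T at the maximally-entangled state: (m^{q-1} - 1) / ((q-1) m^{q-1}).
q = 2.0
t_max = (m ** (q - 1) - 1) / ((q - 1) * m ** (q - 1))
print(np.isclose(tsallis(lam, q), t_max))  # True

# As q -> 1 the Tsallis entropy approaches the von Neumann entropy S = ln m.
print(np.isclose(tsallis(lam, 1.0 + 1e-6), von_neumann(lam), atol=1e-4))  # True
```

The second check illustrates the l'Hôpital limit: for q close to one the two entropies agree to within the discretization error of the limit.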
In the literature, the first moment of the von Neumann entropy E_f[S] (the subscript f emphasizes that the expectation is taken over the fixed-trace ensemble (2)) was conjectured by Page [1]. Page's conjecture was proven independently by Foong and Kanno [5], Sánchez-Ruiz [6], Sen [7], and Adachi-Toda-Kubotani [8]. Recently, an expression for the variance of the von Neumann entropy V_f[S] was conjectured by Vivo, Pato, and Oshanin (VPO) [9], and subsequently proven by the author [10]. Bianchi and Donà [4] very recently provided an independent proof of VPO's conjecture, where they also derived the third moment. For the Tsallis entropy, the first moment E_f[T] was derived by Malacarne, Mendes, and Lenzi [12]. The task of the present work is to study the variance of the Tsallis entropy V_f[T].

Exact Variance of the Tsallis Entropy
Similar to the case of the von Neumann entropy [1,10], the starting point of the calculation is to convert the moments defined over the fixed-trace ensemble (2) to the well-studied Laguerre ensemble, whose correlation functions are explicitly known. Before discussing the moments-conversion approach, we first set up the necessary definitions relevant to the Laguerre ensemble.

By construction (1), the random coefficient matrix X is naturally related to a Wishart matrix YY† as

X = Y / √(tr(YY†)), (6)

where Y is an m × n (m ≤ n) matrix of independent and identically distributed complex Gaussian entries (a complex Ginibre matrix). The joint density of the eigenvalues 0 < θ_m < · · · < θ_1 < ∞ of YY† equals [16]

g(θ) = (c/Γ(mn)) ∏_{1≤i<j≤m} (θ_i − θ_j)² ∏_{i=1}^{m} θ_i^{n−m} e^{−θ_i}, (7)

where c is the same as in (3); the above ensemble is known as the Laguerre ensemble. The trace of the Wishart matrix

r = tr(YY†) = ∑_{i=1}^{m} θ_i (8)

follows a gamma distribution with density [9]

h_{mn}(r) = e^{−r} r^{mn−1} / Γ(mn), r ∈ [0, ∞). (9)

The relation (6) induces the change of variables

λ_i = θ_i / r, i = 1, . . . , m, (10)

which leads to a well-known relation (see, e.g., [1]) among the densities (2), (7), and (9),

g(θ) dθ = h(λ) h_{mn}(r) dλ dr. (11)

This implies that r is independent of each λ_i, i = 1, . . . , m, since their densities factorize. For the von Neumann entropy (4), the relation (11) has been exploited to convert the first two moments [1,10] from the fixed-trace ensemble (2) to the Laguerre ensemble (7). The moments conversion was an essential starting point in proving the conjectures of Page [1,6] and Vivo-Pato-Oshanin [10]. We now show that the moments-conversion approach can also be applied to study the Tsallis entropy. We first define

L = ∑_{i=1}^{m} θ_i^q (12)

as the induced Tsallis entropy of the Laguerre ensemble (7). Here, for the convenience of the discussion, we have defined the induced entropy, which may not have the physical meaning of an entropy.
Using the change of variables (10), the kth power of the Tsallis entropy (5) can be written as

T^k = (1/(q − 1))^k (1 − r^{−q} L)^k = (1/(q − 1))^k ∑_{i=0}^{k} C(k, i) (−1)^i r^{−qi} L^i, (13)

with C(k, i) denoting the binomial coefficient, and thus we have

E_f[T^k] = (1/(q − 1))^k ∑_{i=0}^{k} C(k, i) (−1)^i E_f[(∑_{j=1}^{m} λ_j^q)^i]. (14)

Each expectation on the right-hand side is computed as

E_f[(∑_{j=1}^{m} λ_j^q)^i] = ∫∫ (∑_{j=1}^{m} λ_j^q)^i h(λ) h_{mn+qi}(r) dλ dr (16)

= (Γ(mn)/Γ(mn + qi)) ∫∫ (∑_{j=1}^{m} (rλ_j)^q)^i h(λ) h_{mn}(r) dλ dr (17)

= (Γ(mn)/Γ(mn + qi)) E_g[L^i], (18)

where the multiplication by an appropriate constant 1 = ∫_0^∞ h_{mn+qi}(r) dr gives (16), the fact that r^{−qi} h_{mn+qi}(r) = Γ(mn) h_{mn}(r)/Γ(mn + qi) leads to (17), and the last equality (18) is established by the change of measures (11) together with (10). Inserting (18) into (14), the kth moment of the Tsallis entropy (5) is written as a sum involving the first k moments of the induced Tsallis entropy (12),

E_f[T^k] = (1/(q − 1))^k ∑_{i=0}^{k} C(k, i) (−1)^i (Γ(mn)/Γ(mn + qi)) E_g[L^i]. (19)

With the above relation (19), the computation of moments over the less tractable correlation functions of the fixed-trace ensemble (2) is converted to one over the Laguerre ensemble (7), which will be calculated explicitly. In particular, computing the variance V_f[T] = E_f[T²] − E_f²[T] requires the cases k = 1 and k = 2 of (19),

E_f[T] = (1/(q − 1)) (1 − (Γ(mn)/Γ(mn + q)) E_g[L]), (20)

E_f[T²] = (1/(q − 1))² (1 − 2(Γ(mn)/Γ(mn + q)) E_g[L] + (Γ(mn)/Γ(mn + 2q)) E_g[L²]), (21)

where the first moment relation (20) has also appeared in [12]. It is seen from (20) and (21) that the essential task now is to compute E_g[L] and E_g[L²]. Before proceeding to the calculation, we point out that in the limit q → 1, the derived second moment relation (21) leads to a new proof of the recently proven variance formula of the von Neumann entropy [10], with details provided in Appendix A.
The computation of E_g[L] and E_g[L²] involves the densities of one and two arbitrary eigenvalues of the Laguerre ensemble, denoted respectively by g_1(x_1) and g_2(x_1, x_2), as

E_g[L] = m ∫_0^∞ x^q g_1(x) dx, (22)

E_g[L²] = m ∫_0^∞ x^{2q} g_1(x) dx + m(m − 1) ∫_0^∞ ∫_0^∞ x_1^q x_2^q g_2(x_1, x_2) dx_1 dx_2. (23)

In general, the joint density of N arbitrary eigenvalues g_N(x_1, . . . , x_N) is related to the N-point correlation function X_N(x_1, . . . , x_N) = det(K(x_i, x_j))_{i,j=1}^{N} as [16]

g_N(x_1, . . . , x_N) = ((m − N)!/m!) X_N(x_1, . . . , x_N), (24)

where det(·) is the matrix determinant and the symmetric function K(x_i, x_j) is the correlation kernel. In particular, we have

g_1(x_1) = (1/m) K(x_1, x_1), (25)

g_2(x_1, x_2) = (1/(m(m − 1))) (K(x_1, x_1) K(x_2, x_2) − K²(x_1, x_2)), (26)

and the correlation kernel K(x_i, x_j) of the Laguerre ensemble can be explicitly written as [16]

K(x_i, x_j) = √(w(x_i) w(x_j)) ∑_{k=0}^{m−1} (k!/Γ(k + n − m + 1)) L_k^{(n−m)}(x_i) L_k^{(n−m)}(x_j), (27)

where

w(x) = x^{n−m} e^{−x}, (28)

with

L_k^{(n−m)}(x) = ∑_{i=0}^{k} (−1)^i C(k + n − m, k − i) x^i/i! (29)

the (generalized) Laguerre polynomial of degree k. The Laguerre polynomials satisfy the orthogonality relation [16]

∫_0^∞ x^{n−m} e^{−x} L_k^{(n−m)}(x) L_l^{(n−m)}(x) dx = (Γ(n − m + k + 1)/k!) δ_{kl}, (30)

where δ_{kl} is the Kronecker delta function. It is known that the one-point correlation function X_1(x) = K(x, x) admits a more convenient representation than the kernel sum in (27); see [6,16] and (31). We also need an integral identity, due to Schrödinger [17], that generalizes the orthogonality integral (30) to the case where the power of x differs from the parameter of the Laguerre polynomials; see (32).

With the above preparation, we now proceed to the calculation of E_g[L] and E_g[L²]. Inserting (25) and (31) into (22), and introducing the shorthand defined in (33), one obtains by using (32) the first moment expression (34), which is valid for q > −1. The first moment in this form has been obtained in [12], and we continue to show that it can be compactly written as a terminating hypergeometric function of unit argument. Indeed, the sum can be rearranged, where the second equality follows from the change of variable k → m − 1 − k; (38) is then obtained by repeated use of the identity Γ(a + n)/Γ(a) = (a)_n, with (a)_n being Pochhammer's symbol, and (39) follows from the series definition of the hypergeometric function, which reduces to a finite sum if one of the parameters a_i is a negative integer. Inserting (39) into (20), we arrive at a compact expression for the first moment of the Tsallis entropy. We now calculate E_g[L²].
Inserting (25) and (26) into (23), one has (43), with the integrals I_1 and I_2, where we have used the result (39). The integral I_1 can be read off from the steps that led to (39) by replacing q with 2q, giving (47). Inserting (27), the integral I_2 is written as (48), where by using (32) and (40) we obtain (49), and similarly (52) and (53). Finally, by inserting (47), (49), (52), and (53) into (43), we arrive at (54), where the symmetric function L(i, j) = L(j, i) is given in (55). With the derived first two moments (39) and (54) and the relations (20) and (21), an exact variance formula for the Tsallis entropy is obtained.

Special Cases
Though the derived results (39) and (54) may not admit further simplification for arbitrary m, n, and q, we will show that explicit variance expressions can be obtained in some special cases of practical relevance.

Quadratic Entropy q = 2
In the special case q = 2, the Tsallis entropy (5) reduces to the quadratic entropy

T = 1 − ∑_{i=1}^{m} λ_i², (56)

which was first considered in physics by Fermi [12]. The quadratic entropy (56) is the only entropy among all possible q values that satisfies the information invariance and continuity criterion [18]. By the series representations (38) and (51), the first two moments in the case q = 2 are directly computed as

E_g[L] = mn(m + n),

E_g[L²] = mn(mn³ + 2m²n² + 4n² + m³n + 10mn + 4m² + 2).
By (20) and (21), we immediately have

E_f[T] = 1 − (m + n)/(mn + 1),

E_f[T²] = 1 − 2(m + n)/(mn + 1) + (mn³ + 2m²n² + 4n² + m³n + 10mn + 4m² + 2)/((mn + 1)(mn + 2)(mn + 3)),

which lead to the variance of the Tsallis entropy for q = 2 as

V_f[T] = 2(m² − 1)(n² − 1)/((mn + 1)²(mn + 2)(mn + 3)). (61)

Finally, we note that explicit variance expressions for other positive integer values of q can be similarly obtained.
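A quick Monte Carlo cross-check of the q = 2 case (assuming NumPy; the mean below is Lubkin's classical purity result, and the variance is the expression implied by (20), (21), and the q = 2 moments; sample sizes and tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, trials = 2, 3, 60000

# Batched random pure states via normalized complex Wishart spectra.
Y = rng.standard_normal((trials, m, n)) + 1j * rng.standard_normal((trials, m, n))
theta = np.linalg.eigvalsh(Y @ Y.conj().transpose(0, 2, 1))
lam = theta / theta.sum(axis=1, keepdims=True)

t = 1.0 - np.sum(lam ** 2, axis=1)  # quadratic (q = 2) Tsallis entropy

N = m * n
mean_exact = 1 - (m + n) / (N + 1)  # Lubkin's mean purity result
var_exact = 2 * (m**2 - 1) * (n**2 - 1) / ((N + 1) ** 2 * (N + 2) * (N + 3))

print(abs(t.mean() - mean_exact) < 0.005)
print(abs(t.var() - var_exact) < 0.15 * var_exact)
```

For m = 2, n = 3 the exact values are 2/7 ≈ 0.2857 for the mean and 48/3528 ≈ 0.0136 for the variance, both well resolved at this sample size.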

Subsystems of Dimensions m = 2 and m = 3
We now consider the cases in which the dimension m of the smaller subsystem is small. This is a relevant scenario for subsystems consisting of, for example, only a few entangled particles [14]. For m = 2 with any n and q, the series representations (38) and (51) directly lead to the results (62) and (63). In the same manner, for m = 3 with any n and q, we obtain (64) and (65). The corresponding variances are obtained by keeping in mind the relations (20) and (21). For m ≥ 4, explicit variance expressions can be similarly calculated. However, it does not seem promising to find an explicit variance formula valid for any m, n, and q.

Summary and Perspectives on Higher Moments
We studied the exact variance of the Tsallis entropy, which is a one-parameter (q) generalization of the von Neumann entropy. The main result is an exact variance expression (54), valid for q > −1, given as finite sums of terminating hypergeometric functions. For the degenerate case q → 1, we find a short proof of the variance formula of the von Neumann entropy in Appendix A. For the special cases of practical importance q = 2, m = 2, and m = 3, explicit variance expressions have been obtained in (61), (63), and (65), respectively.
We end this paper with some perspectives on the higher moments of the Tsallis entropy. In principle, the higher moments can be calculated by integrating over the correlation kernel (27) as demonstrated for the first two moments. In practice, the calculation becomes progressively complicated as the order of moments increases. Here, we outline an alternative path that may systematically lead to the moments of any order in a recursive manner.
We focus on the induced Tsallis entropy L as defined in (12), since the moments conversion (19) is available. The starting point is the generating function of L,

τ_m(t, q) = E_g[e^{tL}] ∝ det( ∫_0^∞ x^{i+j+n−m} e^{−x + t x^q} dx )_{i,j=0}^{m−1}, (67)

which is a two-parameter (t and q) deformation of the Laguerre ensemble (7). Compared to the weight function w(x) = x^{n−m} e^{−x} of the Laguerre ensemble, the deformation induces a new weight function

w(x; t, q) = x^{n−m} e^{−x + t x^q}, (68)

which generalizes the Toda deformation [19] w(x) = x^{n−m} e^{−x+tx} by the parameter q. The basic idea for producing the moments systematically is to find differential and difference equations satisfied by the generating function τ_m(t, q). The theory of integrable systems [16] may provide the possibility of obtaining differential equations for the Hankel determinant (67) with respect to the continuous variables t and q, as well as difference equations with respect to the discrete variable m. In particular, when q is a positive integer, the deformation (68) is known as a multi-time Toda deformation [19], where much of the integrable structure is known [19].
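As a small consistency check of this generating-function route (a sketch under the assumptions that τ_m is proportional to the Hankel determinant of the deformed-weight moments and that L = ∑_i θ_i^q; the dimensions are illustrative), the log-derivative d/dt ln τ_m(t, q) at t = 0 should reproduce E_g[L]. For integer q, the t-derivatives of the moments are gamma functions, so the check is exact:

```python
import numpy as np
from math import gamma

m, n, q = 2, 3, 2  # subsystem dimensions and an integer Tsallis parameter

# Hankel moment matrix of the undeformed Laguerre weight x^{n-m} e^{-x}:
# M_ij = ∫ x^{i+j+n-m} e^{-x} dx = Γ(i+j+n-m+1); the t-derivative of the
# deformed moments at t = 0 picks up an extra factor x^q:
# M'_ij = Γ(i+j+n-m+q+1).
M = np.array([[gamma(i + j + n - m + 1) for j in range(m)] for i in range(m)])
Mp = np.array([[gamma(i + j + n - m + q + 1) for j in range(m)] for i in range(m)])

# By Jacobi's formula, d/dt ln det M(t) |_{t=0} = tr(M^{-1} M'), which should
# equal E_g[L] = E[tr (YY†)^q]; for q = 2 this is the known value mn(m+n).
log_deriv = np.trace(np.linalg.solve(M, Mp))
print(np.isclose(log_deriv, m * n * (m + n)))  # True
```

For m = 2, n = 3, q = 2 the trace evaluates to 30 = mn(m + n), matching the first moment used in Section 4.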
Funding: This research received no external funding.

Conflicts of Interest: The author declares no conflict of interest.

Appendix A. A New Proof to the Variance Formula of the von Neumann Entropy
Vivo, Pato, and Oshanin recently conjectured that the variance of the von Neumann entropy (4) in a random pure state (2) is [9]

V_f[S] = −ψ_1(mn + 1) + ((m + n)/(mn + 1)) ψ_1(n) − (m + 1)(m + 2n + 1)/(4n²(mn + 1)), (A1)

where

ψ_1(x) = d² ln Γ(x)/dx² = ∑_{k=0}^{∞} 1/(x + k)² (A2)

is the trigamma function. The conjecture was proven in [4,10], and here we provide another proof starting from the relation (21). To resolve the indeterminacy in the limit q → 1, we apply l'Hôpital's rule twice on both sides of (21), where f′ = df/dq denotes differentiation with respect to q. Define a family of induced entropies of the Laguerre ensemble (7), with R_1 further denoted by

R = R_1 = ∑_{i=1}^{m} θ_i ln θ_i.

The right-hand side of (A4) can be evaluated by using several standard facts together with the definitions of the digamma function ψ_0(x) = d ln Γ(x)/dx and the trigamma function (A2), which give (A14). In (A14), the first two moments of r are given by

E_r[r] = mn, E_r[r²] = mn(mn + 1), (A15)

which are obtained from the kth moment expression (cf. (9))

E_r[r^k] = Γ(mn + k)/Γ(mn). (A16)

The first two moments of the induced von Neumann entropy over the Laguerre ensemble, E_g[R] and E_g[R²], in (A14) have been computed in [6,7] as

E_g[R] = mn ψ_0(n) + m(m + 1)/2, (A17)

and in [10] as

E_g[R²] = mn(m + n)ψ_1(n) + mn(mn + 1)ψ_0²(n) + m(m²n + mn + m + 2n + 1)ψ_0(n) + (1/4)m(m + 1)(m² + m + 2), (A18)

respectively. The remaining task is to calculate E_g[rR], E_g[R_2], and E_g[rR_2] in (A14). This relies on the repeated use of the change of variables (10) and measures (11), which exploit the independence between r and λ. Indeed, we have

E_g[rR] = E_r[r² ln r] − E_g[r² S] (A20)

= (Γ(mn + 2)/Γ(mn)) ψ_0(mn + 2) − E_r[r²] E_f[S] (A21)

= mn(mn + 1) ψ_0(n) + (1/2)m(m + 1)(mn + 1) + mn, (A22)

where (A21) is obtained by (11) and the identity

E_r[r^k ln r] = (Γ(mn + k)/Γ(mn)) ψ_0(mn + k),

and (A22) is obtained by (A16) and the mean formula of the von Neumann entropy [1,5-8]

E_f[S] = ψ_0(mn + 1) − ψ_0(n) − (m + 1)/(2n).
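The mean and variance formulas discussed in this appendix can be probed by simulation; a sketch assuming NumPy, with the polygamma values at integer arguments computed from their elementary series. The mean below is Page's formula as quoted above; the variance is the Vivo-Pato-Oshanin expression as reconstructed in (A1) and should be treated as such. Sample sizes are illustrative.

```python
import numpy as np

def psi0(k):  # digamma at a positive integer: ψ0(k) = −γ + Σ_{j<k} 1/j
    return -np.euler_gamma + sum(1.0 / j for j in range(1, k))

def psi1(k):  # trigamma at a positive integer: ψ1(k) = π²/6 − Σ_{j<k} 1/j²
    return np.pi ** 2 / 6 - sum(1.0 / j ** 2 for j in range(1, k))

rng = np.random.default_rng(4)
m, n, trials = 2, 3, 60000

# Random pure states via normalized complex Wishart spectra.
Y = rng.standard_normal((trials, m, n)) + 1j * rng.standard_normal((trials, m, n))
theta = np.linalg.eigvalsh(Y @ Y.conj().transpose(0, 2, 1))
lam = theta / theta.sum(axis=1, keepdims=True)
s = -np.sum(lam * np.log(lam), axis=1)  # von Neumann entropy samples

N = m * n
# Page's mean formula: E_f[S] = ψ0(mn+1) − ψ0(n) − (m+1)/(2n).
mean_exact = psi0(N + 1) - psi0(n) - (m + 1) / (2 * n)
# VPO variance, per (A1): −ψ1(mn+1) + (m+n)/(mn+1) ψ1(n) − (m+1)(m+2n+1)/(4n²(mn+1)).
var_exact = (-psi1(N + 1) + (m + n) / (N + 1) * psi1(n)
             - (m + 1) * (m + 2 * n + 1) / (4 * n ** 2 * (N + 1)))

print(abs(s.mean() - mean_exact) < 0.005)
print(abs(s.var() - var_exact) < 0.15 * var_exact)
```

For m = 2, n = 3 the exact values are E_f[S] = 0.45 and V_f[S] ≈ 0.0333, both comfortably within the stated Monte Carlo tolerances.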