Solving Second-Order Linear Differential Equations with Random Analytic Coefficients about Regular-Singular Points

Abstract: In this contribution, we construct approximations for the density associated with the solution of second-order linear differential equations whose coefficients are analytic stochastic processes about regular-singular points. Our analysis combines a random Frobenius technique with the random variable transformation technique, assuming mild probabilistic conditions on the initial conditions and coefficients. The new results complete the ones recently established by the authors for the same class of stochastic differential equations, but about regular points. In this way, this new contribution allows us to study, for example, the important randomized Bessel differential equation.


Introduction and Motivation
As a main difference with respect to deterministic (or classical) differential equations, solving a random differential equation (RDE) does not only consist of determining, exactly or approximately, its solution stochastic process (SP), say X(t), but also of computing its main statistical properties, such as the mean and the variance. Even more, the approximation of the first probability density function (1-PDF), say f_1(x, t), of the solution SP is a more ambitious and useful goal, since from it one can calculate, via integration, the mean and the variance of X(t), as well as its higher one-dimensional statistical moments (such as asymmetry, kurtosis, etc.),

E[(X(t))^k] = ∫_R x^k f_1(x, t) dx,  k = 1, 2, 3, . . . ,    (1)

provided they exist. Moreover, given a fixed time instant t, the 1-PDF permits calculating key information such as the probability that the solution lies in any specific set of interest, say [a, b], just via its integration,

P[a ≤ X(t) ≤ b] = ∫_a^b f_1(x, t) dx.

The computation of the 1-PDF in the setting of RDEs and their applications is currently a cutting-edge topic for which a number of interesting advances have been achieved in the recent literature. Here, we highlight the following contributions that are strongly related to the goal of the present paper [1-4]. The main aim of this paper is to contribute to advancing the field of RDEs by extending important classical results to the stochastic context. Specifically, we address the problem of constructing reliable approximations for the 1-PDF of the solution SP to second-order linear differential equations whose coefficients are analytic SPs depending on a random variable (RV) about a regular-singular point and whose initial conditions (ICs) are also RVs. Therefore, we are dealing with what is often called RDEs having a finite degree of randomness [5] (p. 37). In this manner, we complete the analysis performed in our previous contribution [6], where we dealt with the ordinary point case.
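Once the 1-PDF f_1(x, t) is available, the moments and interval probabilities above reduce to one-dimensional integrals. The following minimal Python sketch illustrates this numerically for a hypothetical Gaussian density standing in for f_1(·, t) at a fixed t; the grid, the density, and the interval [0, 2] are illustrative choices, not data from this paper:

```python
import numpy as np

def moment(f1, k, x):
    """Approximate E[(X(t))^k] = integral of x^k * f1(x, t) dx by a Riemann sum."""
    dx = x[1] - x[0]
    return np.sum(x**k * f1(x)) * dx

# Hypothetical 1-PDF at a fixed time t: a Gaussian with mean 1 and variance 1
f1 = lambda x: np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2.0 * np.pi)
x = np.linspace(-10.0, 12.0, 22001)

mean = moment(f1, 1, x)               # E[X(t)]
var = moment(f1, 2, x) - mean**2      # V[X(t)] = E[X(t)^2] - (E[X(t)])^2
mask = (x >= 0.0) & (x <= 2.0)
prob = np.sum(f1(x[mask])) * (x[1] - x[0])  # P[0 <= X(t) <= 2]
```

The same quadrature pattern applies to any 1-PDF produced later in the paper, once it is tabulated on a grid in x.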
It is important to emphasize that the analysis of second-order linear random differential equations has been performed in previous contributions using the random mean square calculus [5] (Ch. 4) (see [7-12] for the Airy, Hermite, Legendre, Laguerre, Chebyshev, and Bessel equations, respectively) and using other approaches, like the random homotopy method [13,14], the Adomian method [15], the differential transformation method [16], the variational iteration method [17], etc. However, in all these contributions, only approximations for the first two statistical moments (mean and variance) of the solution were calculated. In contrast, in [6] and in the present paper, we deal with the general form of the aforementioned second-order linear RDEs, and furthermore, we provide additional key information on the solution via the computation of approximations for the 1-PDF, which, as previously indicated, permits calculating not only the mean and the variance, but also higher moments of the solution, as well as further relevant information such as the probability that the solution lies in specific sets of interest. To the best of our knowledge, these results for the 1-PDF of second-order linear RDEs about regular-singular points are new, and thus they contribute to advancing the setting of this important class of RDEs. At this point, it is important to underscore the main differences between our previous contribution [6] and the present paper. In [6], we provided a comprehensive study of second-order linear differential equations with random analytic coefficients about ordinary points. Now, we propose a natural continuation of [6] by extending the analysis to the same class of differential equations, but about regular-singular points, whose mathematical nature is completely distinct.
Our aim is to complete the stochastic analysis for this important type of differential equation, inspired by the well-known development of the deterministic setting, which deals first with ordinary points and then with regular-singular points by applying the Frobenius theorem. It is important to point out that, although the two problems appear strongly similar, they concern differential equations of a different nature. In this regard, this new contribution allows us to study, for example, the important randomized Bessel differential equation, which does not fall within the mathematical setting of the previous contribution [6].
For the sake of completeness, below we summarize the main results of the deterministic theory of second-order linear differential equations about ordinary and regular-singular points. These results will be very useful for the subsequent development. Let us then consider the second-order linear differential equation:

a_0(t) ẍ(t) + a_1(t) ẋ(t) + a_2(t) x(t) = 0,    (2)

where the coefficients a_i(t), i = 0, 1, 2, are analytic functions at a certain point, say t_0, i.e., they admit convergent Taylor series expansions about t_0 (in practice, these coefficients are often polynomials, which are analytic everywhere). To study when this equation admits a power series solution centered at the point t_0, say x(t) = ∑_{n≥0} x_n (t − t_0)^n (where the coefficients x_n, n = 0, 1, 2, . . . , must be determined so that this series satisfies the differential equation), it is convenient to recast Equation (2) in its standard or canonical form:

ẍ(t) + p(t) ẋ(t) + q(t) x(t) = 0,    (3)

where p(t) = a_1(t)/a_0(t) and q(t) = a_2(t)/a_0(t). The key question is how we must pick the center of the previous expansion, t_0, because this choice fully determines the region of convergence of the power series. To this end, t_0 is classified as an ordinary or a singular point. Specifically, t_0 is called an ordinary point of the differential Equation (3) (or equivalently, (2)) if the coefficients p(t) and q(t) are both analytic at t_0; otherwise, t_0 is termed a singular point. Recall that the quotient of analytic functions is also an analytic function provided the denominator is distinct from zero. Therefore, if a_0(t_0) ≠ 0 in (2) (and using that its coefficients are analytic functions about t_0), then t_0 is an ordinary point. In the particular case that a_i(t), i = 0, 1, 2, are polynomials without common factors, then t_0 is an ordinary point of (2) if and only if a_0(t_0) ≠ 0.
It is well known that, when t_0 is an ordinary point of the differential Equation (2), its general solution can be expressed as x(t) = η_1 x_1(t) + η_2 x_2(t), where η_1, η_2 are free constants and x_1(t) and x_2(t) are two linearly independent series solutions of (2) centered at the point t_0. These series are also analytic about t_0, i.e., they converge in a certain common interval centered at t_0: |t − t_0| < ρ, where ρ = min(ρ_1, ρ_2), ρ_i being the radius of convergence of x_i(t), i = 1, 2, respectively. In the important case that the coefficients a_i(t), i = 0, 1, 2, are polynomials, the radius of convergence ρ > 0 of both series is at least as great as the distance from t_0 to the nearest root of a_0(t) = 0. As a consequence, if the leading coefficient a_0(t) is constant, then a power series solution expanded about any point can be found, and this power series will converge on the whole real line. In [6], we solved, in the probabilistic sense previously explained, the randomization of Equation (3) by assuming that the coefficients p(t) = p(t; A) and q(t) = q(t; A) depend on a common RV, denoted by A, together with two random ICs fixed at the ordinary point t_0. The study of Equation (3) (or equivalently, (2)) about a singular point requires further distinguishing between regular-singular points and irregular-singular points. In this paper, we shall deal with the former case, which happens when p(t) approaches infinity no more rapidly than 1/(t − t_0) and q(t) no more rapidly than 1/(t − t_0)^2, as t → t_0. In other words, p(t) and q(t) have only weak singularities at t_0; i.e., writing Equation (3) in the form:

ẍ(t) + (p̃(t)/(t − t_0)) ẋ(t) + (q̃(t)/(t − t_0)^2) x(t) = 0,    (4)

where:

p̃(t) = (t − t_0) p(t),   q̃(t) = (t − t_0)^2 q(t),    (5)

then p̃(t) and q̃(t) are analytic about t_0. Otherwise, the point t_0 is called an irregular-singular point.
In the case that p̃(t) and/or q̃(t) defined in (4) become indeterminate forms at t_0, the situation is determined by the limits:

p̃_0 = lim_{t→t_0} p̃(t),   q̃_0 = lim_{t→t_0} q̃(t).    (6)

If p̃_0 = 0 = q̃_0, then t = t_0 may be an ordinary point of the differential equation (t − t_0)^2 ẍ(t) + (t − t_0) p̃(t) ẋ(t) + q̃(t) x(t) = 0 (or equivalently, dividing it by (t − t_0)^2 and taking into account (5), of the differential Equation (3), i.e., ẍ(t) + p(t)ẋ(t) + q(t)x(t) = 0). Otherwise, if both limits in (6) exist and are finite (and distinct from zero), then t = t_0 is a regular-singular point, while if either limit fails to exist or is infinite, then t = t_0 is an irregular-singular point. As has been previously underlined, the most common case in applications, in dealing with differential equations of the form (4), is when p̃(t) and q̃(t) are both polynomials. In such a case, p̃_0 and q̃_0 are simply the coefficients of the terms (t − t_0)^0 of these polynomials, if they are expressed in powers of t − t_0, so t_0 is a regular-singular point. In dealing with the case that t_0 is a regular-singular point, once the differential Equation (2) is written in the form (4), we are formally working in a deleted (or punctured) neighborhood of t_0, say 0 < |t − t_0| < ρ, ρ > 0. The solution of (4) (equivalently, of (2)) is then sought via generalized power series (often called Frobenius series) centered at the point t_0,

x(t) = |t − t_0|^r ∑_{n≥0} x_n (t − t_0)^n,  0 < |t − t_0| < ρ,

with x_0 ≠ 0, where r is a (real or complex) value to be determined, as well as the coefficients x_n, n = 0, 1, 2, . . . , by imposing that this series satisfies the differential Equation (4) (equivalently, (2)). This leads to the fact that the parameter r must satisfy the following quadratic equation:

r(r − 1) + p̃_0 r + q̃_0 = 0.    (7)

This equation is called the indicial equation of the differential Equation (8),

(t − t_0)^2 ẍ(t) + (t − t_0) p̃(t) ẋ(t) + q̃(t) x(t) = 0,    (8)

which is obtained after multiplying (4) by (t − t_0)^2, and its two roots, r_1 and r_2 (possibly equal), are called the exponents of the differential equation at the regular-singular point t_0.
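To make the classification concrete, consider the Bessel-type coefficients p(t) = 1/t and q(t) = (t^2 − ν^2)/t^2 at t_0 = 0 (the value ν = 2 is an illustrative choice). A short Python sketch approximates the limits (6) by evaluating p̃ and q̃ near t_0:

```python
# Classify t0 = 0 for an equation in standard form (3) with
# p(t) = 1/t and q(t) = (t^2 - nu^2)/t^2, so that by (5)
# p~(t) = t * p(t) = 1 and q~(t) = t^2 * q(t) = t^2 - nu^2.
nu = 2.0

def p_tilde(t):  # (t - t0) * p(t)
    return t * (1.0 / t)

def q_tilde(t):  # (t - t0)^2 * q(t)
    return t**2 * ((t**2 - nu**2) / t**2)

# Approximate the limits (6) by evaluating close to t0 = 0
p0 = p_tilde(1e-8)
q0 = q_tilde(1e-8)
# Both limits are finite (p~0 = 1, q~0 = -nu^2 = -4) and not both zero,
# so t0 = 0 is a regular-singular point.
```

This numeric check mirrors the symbolic computation: p̃_0 = 1 and q̃_0 = −ν^2 are exactly the constant terms of the polynomials p̃(t) and q̃(t).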
The full solution of the differential Equation (8) (or equivalently, (2)) is given in the following theorem in terms of the nature of r_1 and r_2.
Theorem 1 (Frobenius method) [18] (p. 240). Let us consider the differential Equation (2), whose coefficients are analytic about t_0, and assume that p̃(t) and q̃(t) defined in (4) and (5) are analytic about t_0. Let us assume that t_0 is a regular-singular point of the differential Equation (2). Let r_1 and r_2 be the roots of the indicial equation associated with (8) at the point t_0:

r(r − 1) + p̃_0 r + q̃_0 = 0,    (9)

where p̃_0 and q̃_0 are defined in (6). Without loss of generality, we assume that Re(r_1) ≥ Re(r_2), where Re(·) stands for the real part. Then, the differential Equation (2) has two linearly independent solutions, x_1(t) and x_2(t), in a certain deleted neighborhood centered at t_0, of the following form:
(a) If r_1 − r_2 is not a non-negative integer (i.e., r_1 − r_2 ≠ 0, 1, 2, . . .),

x_1(t) = |t − t_0|^{r_1} ∑_{n≥0} x_{1,n} (t − t_0)^n,   x_2(t) = |t − t_0|^{r_2} ∑_{n≥0} x_{2,n} (t − t_0)^n,

where x_{1,0} ≠ 0 and x_{2,0} ≠ 0.
(b) If r_1 − r_2 is a positive integer (i.e., r_1 − r_2 = 1, 2, . . .),

x_1(t) = |t − t_0|^{r_1} ∑_{n≥0} x_{1,n} (t − t_0)^n,   x_2(t) = |t − t_0|^{r_2} ∑_{n≥0} x_{2,n} (t − t_0)^n + c x_1(t) ln |t − t_0|,

where x_{1,0} ≠ 0, x_{2,0} ≠ 0, and c is a constant, which may or may not be distinct from zero.
(c) If r_1 = r_2,

x_1(t) = |t − t_0|^{r_1} ∑_{n≥0} x_{1,n} (t − t_0)^n,   x_2(t) = |t − t_0|^{r_1} ∑_{n≥1} x_{2,n} (t − t_0)^n + x_1(t) ln |t − t_0|,

where x_{1,0} ≠ 0.
As was previously indicated, in this contribution, the objective is to extend the analysis performed in [6] for the aforementioned randomization of the differential Equation (3) (equivalently, of (2)) about an ordinary point t_0 to the case that t_0 is a regular-singular point. Specifically, we will consider the following second-order linear RDE:

Ẍ(t) + p(t; A) Ẋ(t) + q(t; A) X(t) = 0,    (10)

together with the following random ICs:

X(t_1) = Y_0,   Ẋ(t_1) = Y_1,    (11)

fixed at an initial instant t_1 belonging to a certain deleted interval centered at t_0, T̂_0 = {t : 0 < |t − t_0| < ρ}, ρ > 0. Then, we will construct approximations of the 1-PDF, f_1(x, t), of the solution SP, X(t), to this random initial value problem (IVP) via a random Frobenius series centered at the regular-singular point t_0. In (10), p(t; A) and q(t; A) are SPs depending on the common RV denoted by A and satisfying certain hypotheses, which will be stated later. Here, we assume that we are working on a complete probability space (Ω, F, P), to which the real RVs A, Y_0, and Y_1 belong.
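For a quick check of the exponents at a regular-singular point, the indicial Equation (9) can be solved numerically. The sketch below uses the Bessel values p̃_0 = 1, q̃_0 = −1/4 (i.e., order ν = 1/2) as an illustrative choice:

```python
import numpy as np

def indicial_roots(p0, q0):
    """Roots of the indicial equation r(r - 1) + p0*r + q0 = 0,
    i.e. r^2 + (p0 - 1)*r + q0 = 0, ordered so that Re(r1) >= Re(r2)."""
    roots = np.roots([1.0, p0 - 1.0, q0])
    return sorted(roots, key=lambda z: z.real, reverse=True)

# Bessel equation of order nu = 1/2 at t0 = 0: p~0 = 1, q~0 = -1/4
r1, r2 = indicial_roots(1.0, -0.25)
# Here r1 - r2 = 1 is a positive integer, so Case (b) of Theorem 1 would apply;
# for an absolutely continuous random order A, such coincidences occur with
# probability zero, which is exploited in Section 2.
```

The sorted output makes the convention Re(r_1) ≥ Re(r_2) explicit, matching the statement of the theorem.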
Furthermore, we assume that these three RVs have a joint density, i.e., the random vector (A, Y 0 , Y 1 ) is absolutely continuous.
As usual, observe that, in (10) and (11), we write RVs with capital letters, for instance, Z : Ω → D_Z ⊂ R (here, D_Z is referred to as the range of Z). Moreover, each realization of an RV, say Z, will be denoted by z(ω), ω ∈ Ω, or simply z = z(ω) ∈ D_Z. To be as general as possible, in the subsequent analysis, A, Y_0, and Y_1 are assumed to be dependent absolutely continuous RVs, and f_{A,Y_0,Y_1}(a, y_0, y_1) will denote their joint PDF. Notice that, if A, Y_0, and Y_1 are independent RVs, then their joint PDF can be factorized as the product of their respective marginal PDFs, i.e.,

f_{A,Y_0,Y_1}(a, y_0, y_1) = f_A(a) f_{Y_0}(y_0) f_{Y_1}(y_1).    (12)

Based on our previous contribution [6], together with Theorem 1 and further reasons that will be apparent later, hereinafter we will assume the following hypotheses:
H0: The RV A is bounded, i.e., there exists M_A > 0 such that |a(ω)| ≤ M_A for all ω ∈ Ω.
H1: f_{A,Y_0,Y_1}(a, y_0, y_1) is continuous in the second component and bounded, i.e., there exists M_f > 0 such that f_{A,Y_0,Y_1}(a, y_0, y_1) ≤ M_f for all (a, y_0, y_1).
In addition, we will assume that the SPs p̃(t; A) and q̃(t; A) are analytic about (t_0; a_0(ω)) for every a_0(ω) ∈ D_A, ω ∈ Ω, i.e.:
H2: There exists a common neighborhood N_{p̃,q̃}(t_0; a_0(ω)) ⊂ T̂_0 × D_A where p̃(t; A) and q̃(t; A) admit convergent random power series expansions about t_0. Here, T̂_0 denotes the deleted interval centered at t_0 that has been previously defined. Recall that, under H2, the SPs p̃(t; A) and q̃(t; A) are analytic in a common neighborhood, which will be denoted by N_{p̃,q̃}(t_0; a_0(ω)). In practice, this neighborhood is determined by intersecting the domains of analyticity of p̃(t; A) and q̃(t; A). A characterization of the analyticity of second-order SPs (those having finite variance) in terms of the analyticity of the correlation function has been stated in the literature. Moreover, to ensure that the IVP (10) and (11) has a unique solution, both SPs p(t; A) and q(t; A) are assumed to satisfy all the needed conditions (see [5] (Th. 5.1.2), for instance).
According to the Frobenius method (see Theorem 1) and under the analyticity condition assumed in Hypothesis H2, the solution SP of the RDE (10) about a regular-singular point, t_0, can be written as a linear combination of two uniformly convergent linearly independent random series, X_1(t; A) and X_2(t; A), where t ∈ T̂_0, and the coefficients K_0(A, Y_0, Y_1) and K_1(A, Y_0, Y_1) can be obtained in terms of the random ICs given in (11), which are established at the time instant t_1 ∈ T̂_0. In the subsequent development, we will take advantage of the above representation, along with the random variable transformation (RVT) technique [5] (p. 25), [6] (Th. 1), to construct the 1-PDF, f_1^N(x, t), corresponding to approximations, X^N(t), of the solution SP, X(t), about the regular-singular point t_0. We shall provide sufficient conditions so that f_1^N(x, t) converges to the exact 1-PDF, f_1(x, t), as N → +∞. The RVT method has been successfully applied to obtain exact or approximate representations of the 1-PDF of the solution SP to random ordinary/partial differential and difference equations and systems [1-4]. In particular, the RVT technique has also been applied to conduct the study of the random IVP (10) and (11) in the case that t_0 is an ordinary point [6]. Thus, the present contribution can be considered a natural continuation of our previous paper [6]. This justifies that, in our subsequent development, we directly refer to [6] when applying a number of technical results already stated in that contribution. In this manner, we avoid repeating, in particular, the multidimensional random variable transformation technique [6] (Th. 1), Poincaré's expansion [6] (Th. 2), as well as several results related to uniform convergence that can be inferred from classical real analysis [6] (Props. 1-4). For the sake of clarity, we point out that we keep the notation identical in both contributions. These results were also extensively applied in [6,19-21].
The paper is organized as follows. In Section 2, the 1-PDF, f_1^N(x, t), of the approximate solution SP, X^N(t), to the random IVP (10) and (11) is formally constructed. This function is obtained by applying the RVT method to X^N(t), which follows from truncating the random generalized power series solution derived after applying the Frobenius method stated in Theorem 1. Section 3 is devoted to rigorously proving the convergence of the approximations f_1^N(x, t) to the exact 1-PDF, f_1(x, t), associated with the solution SP, X(t). In Section 4, several illustrative examples are shown to demonstrate the usefulness of the theoretical results established in Sections 2 and 3. Our main conclusions are drawn in Section 5.

Computation of the 1-PDF of the Truncated Solution SP
As was indicated in the Introduction, by the Frobenius method (Theorem 1), the solution SP of the random IVP (10) and (11) can be written as:

X(t) = K_0(A, Y_0, Y_1) X_1(t; A) + K_1(A, Y_0, Y_1) X_2(t; A),    (13)

where t ∈ T̂_0, and X_1(t; A) and X_2(t; A) are determined taking into account the values of the random roots of the associated indicial Equation (9). Since A is assumed to be an absolutely continuous RV, Cases (b) and (c) in Theorem 1 have null probability of occurring, because point probabilities of continuous RVs are zero. Thus, by (a), for each t ∈ T̂_0, the solution SP is given by Expression (13), with:

X_1(t; A) = |t − t_0|^{r_1(A)} ∑_{n≥0} X_{1,n}(A) (t − t_0)^n,   X_2(t; A) = |t − t_0|^{r_2(A)} ∑_{n≥0} X_{2,n}(A) (t − t_0)^n,    (14)

where r_1(A) and r_2(A) are the random roots of the random indicial Equation (9). The random coefficients X_{1,n}(A) and X_{2,n}(A) are recursively determined in practice. At this point, it is convenient to underline that the factors |t − t_0|^{r_1(A)} and |t − t_0|^{r_2(A)} are the distinctive and major difference in dealing with the analysis of RDEs about regular-singular points with respect to the case of RDEs about ordinary points presented in [6]. By imposing that Expression (13) satisfies the ICs given in (11), we obtain the following random algebraic system:

K_0 X_1(t_1; A) + K_1 X_2(t_1; A) = Y_0,   K_0 Ẋ_1(t_1; A) + K_1 Ẋ_2(t_1; A) = Y_1.

Solving for K_0 and K_1, one gets:

K_0(A, Y_0, Y_1) = (Y_0 Ẋ_2(t_1; A) − Y_1 X_2(t_1; A)) / W(A),   K_1(A, Y_0, Y_1) = (Y_1 X_1(t_1; A) − Y_0 Ẋ_1(t_1; A)) / W(A),    (15)

where W(A) = X_1(t_1; A) Ẋ_2(t_1; A) − X_2(t_1; A) Ẋ_1(t_1; A) denotes the Wronskian, at t_1, of the solutions defined in (14).
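For one realization a = a(ω), the random algebraic system above is a 2×2 linear system whose determinant is the Wronskian W(a). A minimal sketch (all numerical values are hypothetical):

```python
def solve_ics(x1, dx1, x2, dx2, y0, y1):
    """For one realization, solve
        K0*x1(t1) + K1*x2(t1)  = y0
        K0*x1'(t1) + K1*x2'(t1) = y1
    by Cramer's rule via the Wronskian W = x1*x2' - x2*x1',
    which is nonzero by linear independence of x1 and x2."""
    W = x1 * dx2 - x2 * dx1
    k0 = (y0 * dx2 - y1 * x2) / W
    k1 = (y1 * x1 - y0 * dx1) / W
    return k0, k1

# Hypothetical values of the solutions and derivatives at t1
k0, k1 = solve_ics(x1=1.0, dx1=0.5, x2=2.0, dx2=3.0, y0=4.0, y1=5.0)
assert abs(k0 * 1.0 + k1 * 2.0 - 4.0) < 1e-12   # first IC recovered
assert abs(k0 * 0.5 + k1 * 3.0 - 5.0) < 1e-12   # second IC recovered
```

Applying this map realization by realization is exactly what turns the random ICs (Y_0, Y_1) into the random coefficients K_0 and K_1.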

Remark 1.
Since A is a continuous RV, then:

P[{ω ∈ Ω : W(a(ω)) = 0}] = 0.

Moreover, there exists a positive constant, m_W, such that:

|W(a(ω))| ≥ m_W > 0,  ω ∈ Ω,

this lower bound m_W being independent of ω ∈ Ω since, according to Hypothesis H0, the RV A is bounded.
Hence, according to Remark 1, the functions (15) are well defined. Summarizing, the solution SP (13) and (14) can be written in the form:

X(t) = Y_0 S_1(t; A) + Y_1 S_2(t; A),    (16)

where Y_0 and Y_1 are the random ICs and:

S_1(t; A) = (Ẋ_2(t_1; A) X_1(t; A) − Ẋ_1(t_1; A) X_2(t; A)) / W(A),   S_2(t; A) = (X_1(t_1; A) X_2(t; A) − X_2(t_1; A) X_1(t; A)) / W(A).    (17)

From an analytical point of view, it could be infeasible to obtain the explicit expression of both series, X_1(t; A) and X_2(t; A). This fact makes it more advisable to consider the approximate solution SP that comes from truncating both series:

X^N(t) = Y_0 S_1^N(t; A) + Y_1 S_2^N(t; A),    (18)

where:

S_1^N(t; A) and S_2^N(t; A) are defined as in (17), with X_1(t; A) and X_2(t; A) replaced by X_1^N(t; A) and X_2^N(t; A),    (19)

and X_1^N(t; A) and X_2^N(t; A) result from the truncation of X_1(t; A) and X_2(t; A), defined in (14), at a common order N, i.e.,

X_i^N(t; A) = |t − t_0|^{r_i(A)} ∑_{n=0}^{N} X_{i,n}(A) (t − t_0)^n,  i = 1, 2.    (20)

In the context of the RVT method [6] (Th. 1), and following the same methodology as in the contribution [6], we compute the 1-PDF of the truncated solution SP (18) and (19), obtaining:

f_1^N(x, t) = ∫∫_{D_{A,Y_1}} f_{A,Y_0,Y_1}( a, (x − y_1 S_2^N(t; a)) / S_1^N(t; a), y_1 ) (1 / |S_1^N(t; a)|) dy_1 da,    (21)

for every t ∈ T̂_0 arbitrary.
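Formula (21) can be evaluated by two-dimensional quadrature over (a, y_1) for each pair (x, t). The sketch below uses illustrative stand-ins for S_1^N and S_2^N and independent inputs A ~ U(1, 2), Y_0 ~ N(0, 1), Y_1 ~ U(1, 2); none of these choices come from the paper. A natural sanity check is that the resulting f_1^N(·, t) integrates to approximately one:

```python
import numpy as np

# Illustrative stand-ins for the truncated series S1^N(t; a) and S2^N(t; a)
S1 = lambda t, a: np.exp(-a * t)
S2 = lambda t, a: t * np.exp(-a * t)

# Marginal PDFs of the independent inputs A ~ U(1,2), Y0 ~ N(0,1), Y1 ~ U(1,2)
f_A = lambda a: np.where((a >= 1.0) & (a <= 2.0), 1.0, 0.0)
f_Y0 = lambda y: np.exp(-0.5 * y**2) / np.sqrt(2.0 * np.pi)
f_Y1 = lambda y: np.where((y >= 1.0) & (y <= 2.0), 1.0, 0.0)

def f1N(x, t, n=120):
    """Riemann-sum evaluation of (21): evaluate the joint PDF at the inverse
    image y0 = (x - y1*S2^N)/S1^N, weight by 1/|S1^N|, integrate over (a, y1)."""
    a = np.linspace(1.0, 2.0, n)
    y1 = np.linspace(1.0, 2.0, n)
    A, Y1 = np.meshgrid(a, y1, indexing="ij")
    s1, s2 = S1(t, A), S2(t, A)
    y0 = (x - Y1 * s2) / s1
    integrand = f_A(A) * f_Y0(y0) * f_Y1(Y1) / np.abs(s1)
    return np.sum(integrand) * (a[1] - a[0]) * (y1[1] - y1[0])

t = 0.5
xs = np.linspace(-6.0, 6.0, 601)
dens = np.array([f1N(x, t) for x in xs])
total = np.sum(dens) * (xs[1] - xs[0])  # should be close to 1
```

In practice, once the truncated coefficients X_{i,n}(A) have been computed recursively, the same routine applies with the true S_i^N in place of the stand-ins.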

Study of the Convergence
In this section, we give sufficient conditions to guarantee the convergence of the approximation, f_1^N(x, t), given in (21), to the exact function f_1(x, t) when N tends to infinity, i.e., we study when:

lim_{N→+∞} f_1^N(x, t) = f_1(x, t),    (22)

where:

f_1(x, t) = ∫∫_{D_{A,Y_1}} f_{A,Y_0,Y_1}( a, (x − y_1 S_2(t; a)) / S_1(t; a), y_1 ) (1 / |S_1(t; a)|) dy_1 da,    (23)

S_1(t; a) and S_2(t; a) being the realizations of the random series S_1(t; A) and S_2(t; A) defined in (17). Now, we include here some remarks about the convergence and boundedness of the series involved in (23), which will play a key role in our subsequent developments. Briefly, Remarks 2 and 3 establish that S_1^N(t; a) and S_2^N(t; a) converge uniformly to S_1(t; a) and S_2(t; a), respectively, while Remark 4 guarantees the existence of positive constants m_{S_1}, M_{S_1}, and M_{S_2} such that:

0 < m_{S_1} ≤ |S_1^N(t; a)| ≤ M_{S_1},   |S_2^N(t; a)| ≤ M_{S_2},    (24)

for all N ≥ 0.
To commute the limit as N → +∞ and the above double integral, we apply [6] (Prop. 1). We assume the following hypothesis:
H3: D_{A,Y_1} is a Lebesgue measurable set of R^2 with finite measure such that |y_1| ≤ M̂ < +∞ for all (a, y_1) ∈ D_{A,Y_1}.
Then, we shall prove that the sequence of functions:

k_N(a, y_1) = f_{A,Y_0,Y_1}( a, (x − y_1 S_2^N(t; a)) / S_1^N(t; a), y_1 ) (1 / |S_1^N(t; a)|)

converges uniformly on D_{A,Y_1} to:

k(a, y_1) = f_{A,Y_0,Y_1}( a, (x − y_1 S_2(t; a)) / S_1(t; a), y_1 ) (1 / |S_1(t; a)|).

By Remark 4, considering the lower bound given in (24) for |S_1^N(t; a)|, one gets:

k_N(a, y_1) ≤ M_f / m_{S_1},  (a, y_1) ∈ D_{A,Y_1}.
Therefore, as f_{A,Y_0,Y_1} is a PDF, we conclude that k_N(a, y_1) is Lebesgue integrable in D_{A,Y_1}, i.e., {k_N(a, y_1) : N ≥ 0} ⊂ L^1(D_{A,Y_1}). To prove that lim_{N→+∞} k_N(a, y_1) = k(a, y_1) uniformly in D_{A,Y_1}, we will apply the same argument as in the previous contribution [6]. We demonstrate that, for every ε > 0, there exists N_0 such that, for all N ≥ N_0:

|k_N(a, y_1) − k(a, y_1)| < ε,  ∀(a, y_1) ∈ D_{A,Y_1}.
Adding and subtracting the term f_{A,Y_0,Y_1}( a, (x − y_1 S_2^N(t; a)) / S_1^N(t; a), y_1 ) (1 / |S_1(t; a)|) and applying the triangle inequality, one gets:

|k_N(a, y_1) − k(a, y_1)| ≤ f_{A,Y_0,Y_1}( a, (x − y_1 S_2^N(t; a)) / S_1^N(t; a), y_1 ) | 1/|S_1^N(t; a)| − 1/|S_1(t; a)| | + (1 / |S_1(t; a)|) | f_{A,Y_0,Y_1}( a, (x − y_1 S_2^N(t; a)) / S_1^N(t; a), y_1 ) − f_{A,Y_0,Y_1}( a, (x − y_1 S_2(t; a)) / S_1(t; a), y_1 ) |.

Let (x, t) ∈ R × (T̂_0 ∩ [t_1, +∞[) be fixed, let N ≥ 0 be an arbitrary non-negative integer, and consider (t, a) = (t, a(ω)) ∈ N*_X(t_0; a_0(ω)), for all a_0(ω) ∈ D_A, ω ∈ Ω. For the first term, we apply [6] (Prop. 2) to z_1 = t fixed, z_2 = a ∈ D_A arbitrary, D = {(t, a) ∈ R^2 : a ∈ D_A}, g_N(t, a) = 1, g(t, a) = 1, h_N(t, a) = |S_1^N(t; a)|, and h(t, a) = |S_1(t; a)|. Note that, by Remark 4, h_N(t, a) h(t, a) ≠ 0 for all (t, a) ∈ D. In addition, it is obvious that g_N(t, a) converges uniformly to g(t, a) on D. Regarding the real function h_N(t, a), it converges uniformly to h(t, a) by the arguments shown in Remarks 2 and 3. As g_N(t, a) = 1, its absolute value is bounded. By Remark 4, there exist constants m_h = m_{S_1} and M_h = M_{S_1} such that 0 < m_h ≤ |h_N(t, a)| ≤ M_h, ∀a ∈ D_A. Taking into account the uniform convergence, the bounds, and Hypothesis H1 (boundedness of the PDF f_{A,Y_0,Y_1}(a, y_0, y_1)), for every ε > 0, there exists N_0 (which depends on ε) such that the first term is smaller than ε/2 for all N ≥ N_0, independently of the values of (a, y_1) ∈ D_{A,Y_1}. Now, we obtain an analogous result for the second term. First, assuming Hypothesis H3, we apply [6] (Prop. 3) to u = a, v = −y_1, c = x, l_N(a) = S_2^N(t; a), l(a) = S_2(t; a), b_N(a, y_1) = x − y_1 S_2^N(t; a), and b(a, y_1) = x − y_1 S_2(t; a), with t ∈ T̂_0 ∩ [t_1, +∞[ fixed, and such that (t, a) = (t, a(ω)) ∈ N*_X(t_0; a_0(ω)), ω ∈ Ω. By Remark 3, l_N(a) converges uniformly to l(a) on N*_X(t_0; a_0(ω)); then, b_N(a, y_1) converges uniformly to b(a, y_1). Secondly, we apply [6] (Prop. 2), taking z_1 = a, z_2 = y_1, g_N(a, y_1) = x − y_1 S_2^N(t; a), g(a, y_1) = x − y_1 S_2(t; a), h_N(a, y_1) = S_1^N(t; a), and h(a, y_1) = S_1(t; a). Note that we have previously proven, by applying [6] (Prop. 3), that g_N(a, y_1) converges uniformly to g(a, y_1).
In addition, by Remark 3, h_N(a, y_1) converges uniformly to h(a, y_1). Let M_g = |x| + M̂ M_{S_2} and M_h = M_{S_1}, these bounds being the ones established in Hypothesis H3 and Remark 4; then |g_N(a, y_1)| ≤ M_g and |h_N(a, y_1)| ≤ M_h for all (a, y_1) ∈ D_{A,Y_1}, so that, by [6] (Prop. 2), g_N(a, y_1)/h_N(a, y_1) converges uniformly to g(a, y_1)/h(a, y_1). The next step is to apply [6] (Prop. 4) with the following identification: z_1 = a, z_2 = y_1, γ_N(a, y_1) = (x − y_1 S_2^N(t; a)) / S_1^N(t; a), γ(a, y_1) = (x − y_1 S_2(t; a)) / S_1(t; a), and φ(y_0) = f_{A,Y_0,Y_1}(a, y_0, y_1). Finally, as was previously shown, the sequence γ_N(a, y_1) is uniformly convergent to γ(a, y_1), and, according to Hypothesis H1, the mapping φ is continuous in y_0. Therefore, taking into account the lower bound of |S_1(t; a)| given in Remark 4 and applying [6] (Prop. 4), for every ε > 0, there exists N_0, which depends on ε, such that the second term is smaller than ε/2 for all N ≥ N_0, independently of the values of (a, y_1). Hence, we have proven that k_N(a, y_1) converges uniformly to k(a, y_1), and then, by [6] (Prop. 1) and Hypothesis H3, we can commute the limit and the integral. Therefore, applying the continuity of the joint PDF f_{A,Y_0,Y_1}(a, y_0, y_1) with respect to the second variable (see Hypothesis H1) and taking into account Expression (23), one gets:

lim_{N→+∞} f_1^N(x, t) = ∫∫_{D_{A,Y_1}} k(a, y_1) dy_1 da = f_1(x, t).

Summarizing, we establish the following result.

Theorem 2.
Let us consider the random IVP (10) and (11), and assume that:
(i) The RV A satisfies Hypothesis H0.
(ii) The joint PDF f_{A,Y_0,Y_1}(a, y_0, y_1) of the random vector (A, Y_0, Y_1) satisfies Hypothesis H1.
(iii) The coefficients p(t; A) and q(t; A) satisfy Hypothesis H2.
(iv) The domain D_{A,Y_1} of the random vector (A, Y_1) satisfies Hypothesis H3.
Then, the 1-PDF f_1^N(x, t) of the truncated solution SP X^N(t), given in (21), converges to the 1-PDF f_1(x, t) of the solution SP of the random IVP (10) and (11), given in (23), as N → +∞.

Examples
In this section, we present two examples to illustrate the theoretical results previously established.
In both examples, we calculate the 1-PDF of the approximate solution SP, and from it, we also approximate the mean and the variance of the solution by taking advantage of the expressions given in (1).
First, we check that Hypotheses H0-H3 hold:
• H0: Since A ∼ U([1, 2]), it is bounded. With the notation of Hypothesis H0, we can take M_A = 2.
• H1: Since A, Y_0, and Y_1 are assumed to be independent RVs, their joint PDF is given by the product of the PDF of each RV; see Expression (12). In this case, proving the continuity of the joint PDF with respect to the second variable, y_0, is equivalent to proving the continuity of the PDF of the RV Y_0. Since Y_0 has a Gaussian distribution, its PDF is continuous and bounded on the whole real line. As the PDFs of A and Y_1 are also bounded, Hypothesis H1 holds.
• H2: As was previously indicated, p(t; A) and q(t; A) are not analytic functions at t_0 = 0, but p̃(t; A) and q̃(t; A) are, since both are polynomials as functions of t for every a = a(ω), ω ∈ Ω.
• H3: The domain D_{A,Y_1} of the random vector (A, Y_1) is a Lebesgue measurable set of R^2 with finite measure, and M̂ = 1 < ∞. Therefore, Hypothesis H3 is fulfilled.
In this case, the solution SP of the IVP (27) is given by:

X(t) = Y_0 S_1(t; A) + Y_1 S_2(t; A),    (28)

where S_i(t; A), i = 1, 2, are defined in (17), with coefficients involving terms of the form ((−1)^n / (n! A^n)) t^n (29); see Appendix A for further details. In this case, we can compute, via Mathematica®, the exact value of both power series, where Γ(z) and Γ(b, z) are the gamma and the incomplete gamma functions defined, respectively, by:

Γ(z) = ∫_0^∞ s^{z−1} e^{−s} ds,   Γ(b, z) = ∫_z^∞ s^{b−1} e^{−s} ds.    (31)

Notice that, in the above expression of X_1(t; A), these gamma functions must be interpreted as random functions [22].
In Figures 1-3, the 1-PDF of the exact solution SP, f_1(x, t), and the 1-PDF of the truncation, f_1^N(x, t), are plotted for different values of the truncation order N ∈ {1, 2, 3, 4, 5} at the time instants t ∈ {1.1, 1.5, 2}, respectively. In order to graphically check the convergence, in all of these figures, we plotted the approximation corresponding to the highest order (N = 5), from Expression (21), together with the exact 1-PDF. In this latter case, the 1-PDF can be obtained by applying the RVT technique to the exact representation of the solution SP given by (28), (30), and (31) and using the software Mathematica®. From this latter plot, we can observe, in the three cases (t ∈ {1.1, 1.5, 2}), that f_1(x, t) and f_1^N(x, t), with N = 5, practically coincide, thus illustrating fast convergence. For the sake of completeness, in Table 1, we show the values of the error measure e_N(t), defined in (32), which has been calculated for each t ∈ {1.1, 1.5, 2} using different orders of truncation N ∈ {1, 2, 3, 4, 5}. From the values collected in Table 1, we observe that, for N fixed, the error increases as t moves away from the initial value t = 1 (except for N = 1, since the approximations are still rough), while, for t fixed, the error decreases as the order of truncation N increases, as expected. In Figure 4, we compare the expectation and the variance of the exact solution SP on the interval t ∈ [1, 2] with the corresponding approximations for different orders of truncation, N ∈ {1, 2, 3, 4, 5}. We measure the accuracy of the approximations of these two moments via the errors defined in (33). In Table 2, we collect the values of these errors. For both moments, we observe that the corresponding error decreases as the order of truncation increases, as expected.
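The definition of the error measure e_N(t) in (32) is not reproduced above; assuming it is of the integrated-absolute-difference form e_N(t) = ∫ |f_1^N(x, t) − f_1(x, t)| dx (a hedged reading, not confirmed by the text), it can be approximated on a grid as follows, here with two nearby Gaussian densities standing in for the exact 1-PDF and a truncation:

```python
import numpy as np

def eN(f_exact, f_approx, x):
    """Assumed error measure: integral of |f1^N(x,t) - f1(x,t)| dx,
    approximated by a Riemann sum on a uniform grid x."""
    return np.sum(np.abs(f_approx - f_exact)) * (x[1] - x[0])

x = np.linspace(-10.0, 10.0, 4001)
gauss = lambda m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

# Two nearby densities standing in for f1(x, t) and f1^N(x, t)
err = eN(gauss(0.0), gauss(0.1), x)  # small but nonzero
```

Tabulating this quantity for each t and each N reproduces the structure of Tables 1 and 3.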
We now consider the randomized Bessel differential equation:

t^2 Ẍ(t) + t Ẋ(t) + (t^2 − A^2) X(t) = 0,   X(t_1) = Y_0,   Ẋ(t_1) = Y_1,    (34)

where Y_0 and Y_1 denote RVs fixed at the time instant t_1 = 0.5. In [12], the authors provided sufficient conditions on the random input data (A, Y_0, and Y_1) to extend the classical analysis of the Bessel equation to the random setting in the so-called mean square sense [5]. In the aforementioned contribution, approximations for the mean and the variance of the solution SP to the Bessel RDE were given. In this example, we go further since, based on the results established in the previous sections, we construct approximations for the 1-PDF of the solution SP of this important RDE. To this end, let us first observe that, according to [12], the solution is given by X(t) = Y_0 S_1(t; A) + Y_1 S_2(t; A), where S_i(t; A), i = 1, 2, are defined by (17), with X_1(t; A) and X_2(t; A) given in (35). Hereinafter, we assume that A, Y_0, and Y_1 are independent RVs with the following distributions:
• A has a gamma distribution with shape and scale parameters α = 2 and β = 0.5, respectively, truncated on the interval I = [2.25, 2.75], i.e., A ∼ Ga_I(2; 0.5).
• Y_0 has a beta distribution with parameters α = 4 and β = 3, i.e., Y_0 ∼ Be(4; 3).
• Y_1 has a uniform distribution on the interval [1, 2], i.e., Y_1 ∼ U([1, 2]).
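The convergence pattern of the truncations can be checked on the deterministic Bessel series itself: for a fixed realization a of A, J_a(t) = ∑_{n≥0} (−1)^n / (n! Γ(n + a + 1)) (t/2)^{2n+a}, and its partial sums converge rapidly on compact intervals. A sketch follows; the order 2.5 and the point t = 1.5 are illustrative choices, and this deterministic series stands in for, but is not, the random series (35):

```python
import math

def jv_partial(a, t, N):
    """Partial sum of order N of the Frobenius series of the Bessel function
    J_a(t) = sum_{n>=0} (-1)^n / (n! * Gamma(n + a + 1)) * (t/2)^(2n + a)."""
    return sum((-1) ** n / (math.factorial(n) * math.gamma(n + a + 1))
               * (t / 2.0) ** (2 * n + a) for n in range(N + 1))

a, t = 2.5, 1.5
reference = jv_partial(a, t, 40)  # 40 terms: effectively converged
errors = [abs(jv_partial(a, t, N) - reference) for N in (1, 2, 3, 4, 5)]
# The truncation error shrinks monotonically as N grows
```

The same rapid decay of the factorial denominators is what drives the fast convergence of f_1^N(x, t) observed below.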
Similarly to Example 1, we first check that Hypotheses H0-H3 are fulfilled:
• H0: Since A is truncated on the interval I = [2.25, 2.75], it is bounded; with the notation of Hypothesis H0, we can take M_A = 2.75.
• H1: Since A, Y_0, and Y_1 are independent, their joint PDF is the product of their marginal PDFs; see Expression (12). From this expression, it is clear that this function is continuous with respect to its second argument, y_0, and it is bounded. As we deal with a rectangular domain where the first factor is a decreasing function and the second factor, y_0^3 (1 − y_0)^2, attains a maximum value below 0.035 at y_0 = 0.6, the maximum of f_{A,Y_0,Y_1}(a, y_0, y_1) is obtained taking this value and evaluating the first factor at a = 2.25. Therefore, with the notation of H1, we can take, for instance, M_f = 6.1, and Hypothesis H1 is fulfilled.
• H2: Notice that t_0 = 0 is a regular-singular point of the Bessel RDE (34). On the one hand, observe that, according to Equation (2), a_0(t) = t^2, a_1(t) = t, and a_2(t) = t^2 − A^2, with a_0(t_0) = 0. On the other hand, according to (4) and (5), p(t; A) = t/t^2 = 1/t and q(t; A) = (t^2 − A^2)/t^2, which are not analytic functions at t_0 = 0, while p̃(t; A) = 1 and q̃(t; A) = t^2 − A^2 are both analytic at t_0 = 0 inasmuch as they are polynomials as functions of t, for every a = a(ω), ω ∈ Ω.
• H3: Since A and Y_1 are assumed to be independent RVs, the domain of the random vector (A, Y_1) is D_{A,Y_1} = [2.25, 2.75] × [1, 2]. Hence, D_{A,Y_1} is a Lebesgue measurable set of R^2 with finite measure, and, with the notation of Hypothesis H3, we can take M̂ = 2 < ∞. Therefore, Hypothesis H3 holds.
The 1-PDFs of the truncated solution SP, f_1^N(x, t), for different orders of truncation, N ∈ {1, 2, 3, 4, 5}, at the time instants t ∈ {1.5, 2.5, 3.5}, are plotted in Figures 5-7, respectively. As in Example 1, using the RVT method and the exact representation of the series X_1(t; A) and X_2(t; A), defined in (35), in terms of the Bessel function J_A(t) built into Mathematica® and the gamma function Γ(z) defined in (31), we can obtain the exact 1-PDF, f_1(x, t). On the one hand, taking this benchmark, in the left panels of Figures 5-7, we show the convergence of the approximations, f_1^N(x, t), to the exact 1-PDF, f_1(x, t), as N increases (N ∈ {1, 2, 3, 4, 5}) for the different values of t ∈ {1.5, 2.5, 3.5}. On the other hand, to better illustrate the convergence, in the right panels of the aforementioned figures, we show the approximation corresponding to the highest order of truncation, f_1^N(x, t) with N = 5, and the exact 1-PDF, f_1(x, t). We observe that both plots overlap, thus showing fast convergence.
As in Example 1, we use the error e_N(t) defined in (32) to measure the quality of the approximations f_1^N(x, t). In Table 3, we collect, for each t ∈ {1.5, 2.5, 3.5}, the values of this error for the orders of truncation N ∈ {1, 2, 3, 4, 5}. Notice that, for each t, the error diminishes as the order of truncation increases, while the error increases as we move farther from the initial instant t_1 = 0.5, as expected. Taking f_1^N(x, t), given by (21), as an approximation of f_1(x, t) in the expressions (1), we can calculate approximations of the mean, E[X^N(t)], and the variance, V[X^N(t)], of the solution of the random IVP (34). In Figure 8, we plot the above-mentioned approximations on the interval t ∈ [0.5, 3.5] for different orders of truncation, N ∈ {1, 2, 3, 4, 5}, as well as their respective exact values, E[X(t)] and V[X(t)], computed using for f_1(x, t) the expression obtained via the RVT method. In these plots, we can clearly see that the approximations quickly improve on the whole interval as N increases. To finish, we calculate the errors of these approximations by means of the expressions defined in (33). The values of the errors for the mean, e_N^E, and the variance, e_N^V, for each truncation are shown in Table 4. From these values, we observe that the errors decrease as N increases.
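The qualitative behavior reported in Table 4 (errors in the mean and variance decreasing with N) can be reproduced in miniature by sampling. The model below is an illustrative stand-in, with an exponential-type truncated series and uniform/Gaussian inputs, not the Bessel IVP (34):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
M = 200_000

# Hypothetical independent inputs (the distributions are illustrative choices)
A = rng.uniform(1.0, 2.0, M)
Y0 = rng.normal(0.0, 1.0, M)
Y1 = rng.uniform(1.0, 2.0, M)

t = 0.4

def X_trunc(N):
    """Truncated solution X^N(t) = Y0*S1^N + Y1*S2^N with the illustrative
    series S1^N(t; a) = sum_{n<=N} (-a*t)^n / n!  (-> e^(-a*t)) and S2^N = t*S1^N."""
    S1N = sum((-A * t) ** n / math.factorial(n) for n in range(N + 1))
    return Y0 * S1N + Y1 * t * S1N

X_exact = Y0 * np.exp(-A * t) + Y1 * t * np.exp(-A * t)
err_mean = [abs(X_trunc(N).mean() - X_exact.mean()) for N in (1, 2, 3, 4, 5)]
err_var = [abs(X_trunc(N).var() - X_exact.var()) for N in (1, 2, 3, 4, 5)]
# Both error sequences shrink as the truncation order N grows
```

Because the same samples are used for the truncated and exact solutions, the Monte Carlo noise largely cancels and the decay of the truncation error is visible directly.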

Conclusions
In this paper, we presented a methodology to construct reliable approximations of the first probability density function of the solution of second-order linear differential equations about a regular-singular point with a finite degree of randomness in the coefficients, also assuming that both initial conditions are random variables. Therefore, we studied this class of differential equations assuming randomness in all their input data, which provides great generality to our analysis. The results complete the ones already presented in a previous paper, where the same class of random initial value problems was solved about an ordinary point. In this manner, the results obtained in both contributions permit solving, from a probabilistic standpoint, a number of important randomized differential equations that appear in the field of mathematical physics, such as the Airy, Hermite, Legendre, Laguerre, Chebyshev, and Bessel equations. Under our approach, we can compute the first probability density function of the solution. This is particularly advantageous since, apart from allowing us to determine the mean and the variance, the knowledge of the first probability density function also permits determining higher one-dimensional moments, as well as the probability that the solution lies in any interval of interest, which might be key information in many practical problems.

Conflicts of Interest:
The authors declare no conflict of interest.