
Mathematics Division, Los Angeles Harbor College/West LA College, and School of International Business, California International University, USA

- *Corresponding Author:
- Oepomo TS

Science, Technology, Engineering, and Mathematics Division

Los Angeles Harbor College/West LA College, and

School of International Business, California International University

1301 Las Riendas Drive, Suite 15, La Habra, CA 90631, USA

**Tel:** 310-287-4216

**E-mail:** Oepomot@wlac.edu; oepomotj@lattc.edu; oepomots@lahc.edu

**Received Date:** November 10, 2016; **Accepted Date:** December 23, 2016; **Published Date:** December 27, 2016

**Citation:** Oepomo TS (2016) An Alternating Sequence Iteration’s Method for Computing Largest Real Part Eigenvalue of Essentially Positive Matrices: Collatz and Perron-Frobernius’ Approach. J Appl Computat Math 5:334. doi: 10.4172/2168-9679.1000334

**Copyright:** © 2016 Oepomo TS. This is an open-access article distributed under
the terms of the Creative Commons Attribution License, which permits unrestricted
use, distribution, and reproduction in any medium, provided the original author and
source are credited.


**Abstract**

This paper describes a new numerical method for computing the eigenvalue of largest real part of an essentially positive matrix. The process consists of computing lower and upper bounds that monotonically approach the eigenvalue. A numerical discussion derives the required number of arithmetic operations of the new method, and comparisons are made between the new method and several well-known ones, such as the power and QR methods.

**Keywords:** Collatz’s theorem; Perron-Frobenius theorem; Eigenvalue

**AMS subject classifications:** 15A48; 15A18; 15A99; 65F10; 65F15

**Introduction**

A variety of numerical methods for finding eigenvalues of non-negative irreducible matrices have been reported over the last decades, and the mathematical and numerical aspects of most of these methods are well documented [1-24]. A recent article [19] presented the mathematical aspects of Collatz’s eigenvalue inclusion theorem for non-negative irreducible matrices; the purpose of this manuscript is to present a numerical implementation of [19]. Indeed, there is hope that developing a new numerical method could reveal properties leading to better methods for finding and estimating eigenvalues of non-negative irreducible matrices. Birkhoff and Varga [2] observed that the results of the Perron-Frobenius theory, and consequently Collatz’s theorem, can be slightly generalized by allowing the matrices considered to have negative diagonal elements. They introduced the terms “essentially non-negative” for matrices whose off-diagonal elements are non-negative, and “essentially positive” for essentially non-negative, irreducible matrices. The only significant change is that wherever the Perron-Frobenius theory and Collatz’s theorem refer to the spectral radius of a non-negative matrix A, the corresponding quantity for an essentially non-negative matrix Ã is the (real) eigenvalue of maximal real part in the spectrum of Ã, denoted by Λ[Ã]. Of course, Λ[Ã] need not be positive, and it is not necessarily dominant among the eigenvalues in absolute value.

**Definition**

If A is an essentially positive matrix (a real matrix with positive off-diagonal entries), then A + aI has positive entries for sufficiently large positive a, so A + aI has a Perron eigenvalue p with a corresponding eigenvector v having positive entries. Moreover, p is the only eigenvalue of A + aI for which there is a corresponding eigenvector with positive entries. Thus p − a is the only eigenvalue of A with a corresponding eigenvector having all its entries non-negative; p − a is real but need not be positive (since a could be greater than p). In this manuscript, therefore, the eigenvalue corresponding to a positive eigenvector is real. Related terms in the literature, such as Z-matrix, tau-matrix, M-matrix, and Metzler matrix, all refer to matrices whose off-diagonal entries have the same sign, but with extra conditions.
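The shift argument above can be sketched in NumPy. This is only an illustration of the definition, not the paper's method; the function name `largest_real_eigenvalue` and the particular choice of shift are ours.

```python
import numpy as np

def largest_real_eigenvalue(A, shift=None):
    """Return Lambda[A], the eigenvalue of largest real part of an
    essentially positive matrix A, via the shift A + a*I.

    For large enough a > 0, A + a*I has positive entries, so it has a
    Perron eigenvalue p (real, with a positive eigenvector), and
    Lambda[A] = p - a.  The shift choice below is one convenient option,
    not the paper's prescription."""
    A = np.asarray(A, dtype=float)
    if shift is None:
        # any a exceeding the most negative diagonal entry makes A + aI positive
        shift = max(0.0, -A.diagonal().min()) + 1.0
    B = A + shift * np.eye(A.shape[0])
    p = max(np.linalg.eigvals(B), key=lambda z: z.real).real  # Perron root of B
    return p - shift

# essentially positive: off-diagonal entries positive, diagonal negative
A = np.array([[-5.0, 1.0],
              [1.0, -5.0]])
print(largest_real_eigenvalue(A))   # eigenvalues of A are -4 and -6
```

Note that the result (−4 here) is real but negative, as the text anticipates.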

**Background**

Let A be an n × n essentially positive matrix. The new method can be used to numerically determine the eigenvalue λ_{A} with the largest real part and the corresponding positive eigenvector *x*[A] of such a matrix. This numerical method is based on a previous manuscript [16]. A matrix *A*=(*a _{ij}*) is essentially positive if *a _{ij}* > 0 for all *i* ≠ *j*.

Let *x* > 0 be any positive n-component vector [19]. Let

*f _{i}*(*x*) = (*Ax*)_{i} / *x _{i}*, *i* = 1, 2, …, *n* (1)

*m*(*x*) = min_{i} *f _{i}*(*x*) (2)

*M*(*x*) = max_{i} *f _{i}*(*x*) (3)

Δ(*x*) = *M*(*x*) – *m*(*x*) (4)

By Collatz’s theorem, *m*(*x*) ≤ *λ _{A}* ≤ *M*(*x*) for every *x* > 0.

In Ostrowski [25], *m*(*x*) is defined as:

(5)

The following theorem is an application of Corollary 2.3 from [16] to the design of a numerical method using the Perron-Frobenius-Collatz minimax principle for the calculation of *x*[*A*] [10].
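The quantities m(x), M(x), and Δ(x) used throughout can be sketched as follows; the component-ratio form f_i(x) = (Ax)_i / x_i is our reading of equations (1)-(4), consistent with Collatz's inclusion theorem.

```python
import numpy as np

def collatz_bounds(A, x):
    """Collatz inclusion bounds for an essentially positive matrix A and a
    positive vector x:
        f_i(x) = (Ax)_i / x_i,
        m(x)   = min_i f_i(x)  <=  lambda_A  <=  M(x) = max_i f_i(x),
        Delta(x) = M(x) - m(x)  (the accuracy measure used in the tables)."""
    f = (A @ x) / x                  # component ratios f_i(x)
    return f.min(), f.max(), f.max() - f.min()

# essentially positive example; its eigenvalues are 1 and -4
A = np.array([[-1.0, 2.0],
              [3.0, -2.0]])
m, M, delta = collatz_bounds(A, np.array([1.0, 2.0]))
print(m, M, delta)                   # bounds bracket lambda_A = 1
```

For any positive trial vector the interval [m(x), M(x)] contains λ_A; the method narrows this interval.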

Let {*x ^{p}*} (p=0, 1, 2,……) be a sequence of positive vectors and .

*Theorem* 1 *If the sequence* {*x ^{p}*} (

We will now define a group of sequences, the “decreasing-sequence.”

**Decreasing-sequence**

Let *Y ^{r}*(

(6)

Here *Ω _{r}*(

• *Ω _{r}*(

• *Ω _{r}*(

• *F _{r}*(

• Equality in c) may be applicable only if *Y ^{r}*(

• If for some x > 0 *Ω _{r}*(

. This will imply that *f _{n}*(

Then n-component vector valued function *Y ^{r}*(

*x ^{p}*

Where *Y ^{k}*(x)=

*r* ≡ *p* + 1 (mod *n*) (9)

Such a sequence will be called a decreasing maximum ratio sequence or briefly decreasing-sequence.

*Corollary* 1:*Any decreasing-sequence converges to x _{A}*.

**Proof:** From equation (6) and 0 ≤ Ω_{r}(x) ≤ *x _{r}*, it follows that 0 ≤

Note: An increasing sequence that is bounded above always has a finite limit; it converges to its supremum. The same holds for a decreasing sequence that is bounded below: it converges to its infimum. We must be careful with the term bounded, which means bounded both from above and from below. These statements hold for sequences of real numbers. We will now define a second group of sequences, the “increasing-sequence”.

**Increasing-sequence**

Let *y ^{r}*(

(10)

Here *ω _{r}*(

*ω _{r}*(

*ω _{r}*(

*f _{r}*(*y _{r}*(*x*)) ≥ *m*(*x*) (11)

Equality in c) may be applicable only if *y ^{r}*(

If for some x > 0 *ω _{r}*(

. This will imply that *f _{n}*(

Then n-component vector valued function *Y ^{r}*(

*x ^{p}*

Where *y ^{k}*(

*Corollary* 2:*Any Increasing-sequence converges to x _{A}*.

Proof: Since *ω _{r}*(*x*) ≥ *x*, we have *y _{r}*(*x*) ≥ *x*. This implies that, starting with *x*^{0} ≥ 0, we have *x _{p}*

Numerical tests indicate that alternating the application of the decreasing and increasing sequences converges faster than either sequence applied separately. Therefore, we will define a sequence of vectors {*x ^{p}*} constructed by alternating the decreasing- and increasing-type functions.
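The paper's component-wise update functions Ω_r and ω_r are not reproduced here, so the following is only a stand-in sketch of the bracketing idea: for the positive matrix B = A + aI, iterating x ← Bx makes the lower bound m(x) non-decreasing and the upper bound M(x) non-increasing, and both converge to Λ[B], giving a monotone bracket on Λ[A] = Λ[B] − a.

```python
import numpy as np

def bracket_eigenvalue(A, tol=1e-10, max_iter=500):
    """Monotone lower/upper bounds on Lambda[A] for an essentially positive A,
    via Collatz bounds along power iteration on B = A + a*I (a simplified
    stand-in for the paper's alternating decreasing/increasing sequences)."""
    n = A.shape[0]
    a = max(0.0, -A.diagonal().min()) + 1.0   # make B = A + a*I positive
    B = A + a * np.eye(n)
    x = np.ones(n)
    m = M = 0.0
    for _ in range(max_iter):
        y = B @ x
        r = y / x                              # component ratios
        m, M = r.min(), r.max()                # m increases, M decreases
        if M - m < tol:
            break
        x = y / y.max()                        # normalize to avoid overflow
    return m - a, M - a                        # bracket on Lambda[A]

lo, hi = bracket_eigenvalue(np.array([[-1.0, 2.0], [1.0, -2.0]]))
print(lo, hi)                                  # Lambda[A] = 0 for this matrix
```

The gap hi − lo plays the role of the error measure Δ(x).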

We will describe a sequence of n steps which generate *x ^{j}*

*x ^{k}*

Where k=0 corresponds to the input vector. The first iteration may be in either the increasing or the decreasing mode. We also define the sequences of real numbers {*t _{?}*} and {

(14)

Where (*?*=1, 2, 3…) are indexes of the iteration. If inequalities (14- 1) and (14-2) are met, we may set

(15)

And the mode or sequence of the next (*?* + 1)^{st} iteration will differ from that of the *?*^{th} iteration, i.e. the sequence of the *?*^{th} iteration is different from the (*?* + 1)^{st} iteration. If either inequality (14-1) or (14-2) is not satisfied, then the mode or sequence of the (*?* + 1)^{st} iteration is the same as that of the *?*^{th} iteration, and we set: *t _{?}*=

A sequence having the above mentioned properties will be called the alternating sequence iteration.

*Corollary* 3:*Any alternating sequence iteration converges to x _{A}*.

Proof: If inequalities (14-1) and (14-2) are satisfied, it means that the estimated result is located between the upper bound, *M*(*x ^{n}*

Corollary 1, 2, and 3 described above lay the foundation of the procedure of an iterative method for the determination of the positive eigenvector of essentially positive matrices. The choices of the functions Ω_{r}(*x*), *ω _{r}*(

*Let H _{r}*(

*and equality may hold on either side of equation* (16) *only if m*(*x*)=*M*(*x*)=*λ _{A}*.

*For r* *ε N* (*N*=1, 2, 3,…, *n*), *let* Ω* _{r}*(

*Where* *H _{r}* ≡

**Proof:** *We will first show that if* *f _{r}* <

as *f _{r}* <

For an essentially positive matrix the off-diagonal elements cannot all be zero, and consequently for any vector . Therefore (18) becomes 0 <*H _{r}*–

It is clear from equations (17) and (19) that Ω* _{r}*(

*θ _{r}*=

The above inequalities are equivalent to *θ _{r}*=max[

*θ _{r}* is the maximum of two continuous functions and is therefore continuous. By definition

Equation (20) implies *θ _{r}* >

From equations (22), (23) therefore Ω* _{r}*(

Equation (22) is equivalent to (24)

As stated before for an essentially positive matrix, *z _{r}* is positive, and therefore it is obvious from (22) that

Thus Ω* _{r}*(

Finally suppose that Ω* _{r}*(

Since *M*(*x*) ≥ *Hr* by definition, we have *M*(*x*)=*H _{r}*. By assumption this is possible only if

*Let h _{r}* (

(25)

*and equality may hold on either side of equation* (24) *only if* *m*(*x*)=*M*(*x*)=*λ _{A}*,

*w _{r}*(

*If* *f _{r}* ≤

(27)

*If* *otherwise Then the function w _{r} (together with an x*

**Proof:** We will first show that if *f _{r}* >

For an essentially positive matrix, the off-diagonal elements cannot all be zero, and accordingly for any other vector ; therefore equation (28) becomes 0 >*h _{r}*–

It is clear from equation (26) and (28) that *ω _{r}*(

*f _{r}*≤

The above inequalities are equivalent to:

*θr*=min[*f _{r}*,

*θ _{r}* is the minimum of two continuous functions and is thus continuous. By definition

(31)

Equation (29) implies

*θ _{r}*≤

From equation (31), (32) therefore *ω _{r}*(

Thus *ω _{r}*(

(33)

As stated before, for an essentially positive matrix, zr is positive, and hence it is obvious from (31) that *θ _{r}* <

The last inequality is equivalent to . Since *m*(*x*)=*h _{r}* by definition, we have

The functions *H _{r}*(

a) where r ∈ N (N=1, 2, 3,….,n) and *M*^{*}(*x*)=min(*m*(*x*),*M*_{1}) and *M*_{1} is an upper estimate of the eigenvalue *λ _{A}*, e.g., . In [8], m(x) is defined as and

b) For full matrices, a reasonable choice for *H _{r}*(

c) Another simple choice for *hr*(*x*); *H _{r}*(

*υ*(*x*) can also be defined in the following manner: (34)

A step of the alternating sequence iteration method consists in modifying a single component *x _{r}* of

and *i*=1, 2, 3, …, *n*

(35)

These steps will be referred to as the updating iteration. The updating equations can be obtained easily from equations 1 through 5. To prevent the accumulation of round-off errors, after a number of iterations the variables have to be recalculated rather than updated. If we are working in double precision, our previous experience indicates that it is more than sufficient to recalculate after every twenty-five iterations.
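The incremental-update-with-periodic-recalculation pattern can be sketched as follows. The specific update equations were not recoverable here, so the single-component change `delta` is hypothetical; the point is the O(n) update of the cached product y = Ax and the full recomputation every 25 steps.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.random((n, n)) + np.eye(n)   # a positive matrix, for illustration
x = np.ones(n)
y = A @ x                            # cached product y = A x

for k in range(200):
    r = k % n                        # component modified at this step
    delta = 0.01 * x[r]              # hypothetical single-component change
    x[r] += delta
    y += delta * A[:, r]             # O(n) incremental update of y = A x
    if (k + 1) % 25 == 0:
        y = A @ x                    # full O(n^2) recalculation every 25
                                     # steps, flushing accumulated round-off
```

Updating costs O(n) per step versus O(n²) for recomputation, which is what makes the per-step bound updates cheap.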

From various choices for functions *H _{r}*(

(36)

As is well known, γ is the over-relaxation factor, with 1 ≤ γ ≤ 2. Equation (36) may be useful in the case of banded matrices. The over-relaxation method contains the following cases:

• γ=1 for simultaneous over-relaxation method, and

• 1 < γ < 2 for over-relaxation method.

**Error vector:** In all methods, the quantity Δ(*x*)=*M*(*x*)–*m*(*x*), as indicated in (4), is used as a measure of accuracy.

Before we go any further, the following issues should be understood. Are both eigenvalues and eigenvectors required, or are eigenvalues alone enough? Is only the eigenvalue of largest absolute value of interest? Does the matrix have special characteristics, such as being real symmetric or essentially positive? If all eigenvalues and eigenvectors are required, then this new method cannot be used.

If a matrix (*A*) is essentially positive and the positive eigenvector (*x _{A}*) and the corresponding eigenvalue (λ

For our numerical comparisons, all three methods (the power method, the new method, and the QR method) were used to compute the eigenvalue of the following matrices:

All three methods were used to estimate the eigenvalue of Hilbert matrices of various orders. Let *H _{n}* be the Hilbert matrix of order n, whose elements are defined by the relation *H _{ij}* = 1/(*i* + *j* – 1), 1 ≤ *i*, *j* ≤ *n*.
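For reference, the Hilbert matrix and a plain power iteration (the baseline the tables compare against) can be sketched as follows. This is our own minimal implementation; the operation counts in the tables are the paper's measurements, not this code's.

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix of order n: H[i, j] = 1 / (i + j - 1), 1-based."""
    i, j = np.indices((n, n)) + 1
    return 1.0 / (i + j - 1)

def power_method(A, tol=1e-10, max_iter=10000):
    """Plain power iteration for the dominant eigenvalue of a matrix with a
    positive dominant eigenvector (such as a Hilbert matrix)."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = y.max()            # Rayleigh-like estimate for positive x
        x = y / lam_new              # normalize by the estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam

H = hilbert(40)
lam = power_method(H)
print(lam)                           # dominant eigenvalue of H_40
```

Hilbert matrices are symmetric and positive, so the power method converges; they are also famously ill-conditioned at the small end of the spectrum, which is why they are a useful stress test.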

The results of the 3 methods can be seen in **Tables 1**-**3**.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
1640 | 1.423 | 0.3528 |
4921 | 1.41 × 10^{-1} | -1.958 |
8200 | 1.39 × 10^{-2} | -4.275 |
11490 | 1.3441 × 10^{-3} | -6.611 |
14764 | 1.296 × 10^{-4} | -8.949 |
18040 | 1.25 × 10^{-5} | -11.288 |
21322 | 1.208 × 10^{-6} | -13.626 |
24600 | 1.116 × 10^{-7} | -15.965 |
27880 | 1.123 × 10^{-8} | -18.304 |

**Table 1:** Hilbert matrix *H*_{40}, new method.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
4800 | 1.016 × 10^{-1} | -2.288 |
9600 | 3.71 × 10^{-3} | -5.597 |
14400 | 8.549 × 10^{-5} | -9.368 |
19200 | 3.068 × 10^{-6} | -12.693 |
24000 | 7.061 × 10^{-8} | -16.467 |
28800 | 2.536 × 10^{-9} | -19.289 |
29700 | 9.977 × 10^{-10} | -20.628 |

**Table 2:** Hilbert matrix *H*_{40}, power method.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
17400 | 2.423 × 10^{-2} | -3.21 |
23400 | 1.007 × 10^{-4} | -9.202 |
26600 | 8.263 × 10^{-9} | -18.614 |
332000 | 2.422 × 10^{-2} | -3.722 |
442000 | 1.0096 × 10^{-4} | -9.212 |
506000 | 8.239 × 10^{-9} | -18.615 |

**Table 3:** Hilbert matrix *H*_{40}, QR method.

• We would like to determine the efficiency of the three numerical methods when a matrix has eigenvalues of nearly the same modulus. It was therefore decided to pick a matrix of order n that is almost cyclic (*C _{n}*). Consider the matrix below.

. The elements of *A*_{1, 2} and *A*_{2, 1} were defined as follows,

, *A*_{1, 2} is a (8, 12) matrix, and *A*_{2, 1} is a (12, 8) matrix.

The elements of *A*_{1, 1} and *A*_{2, 2} were defined in the following respects, , *A*_{1, 1} is a (12, 12) matrix, and *A*_{2, 2} is a (8, 8) matrix.

If the elements of *A*_{1, 1} and *A*_{2, 2} were replaced by zero, then the matrix would be cyclic. For comparison, the results of the three methods can be seen in **Tables 4**-**6**.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
3780 | 3.091 × 10^{-1} | -1.174 |
7980 | 2.089 × 10^{-1} | -1.566 |
1188 | 1.421 × 10^{-1} | -1.950 |
16390 | 9.706 × 10^{-2} | -2.334 |
20580 | 6.6381 × 10^{-2} | -2.714 |
24780 | 4.5461 × 10^{-2} | -3.092 |
28990 | 3.111 × 10^{-2} | -3.478 |
33190 | 2.1341 × 10^{-2} | -3.848 |
37380 | 1.4651 × 10^{-2} | -4.228 |

**Table 4:** Almost cyclic matrix *C*_{40}, power method.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
1400 | 2.76 × 10^{-2} | -3.589 |
2800 | 7.607 × 10^{-4} | -7.187 |
4200 | 2.293 × 10^{-5} | -10.699 |
5600 | 6.3492 × 10^{-7} | -14.277 |
7000 | 1.9246 × 10^{-8} | -17.774 |
8400 | 5.3279 × 10^{-10} | -21.358 |

**Table 5:** Almost cyclic matrix *C*_{40}, new method.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
2400 | 3.091 × 10^{-1} | -6.622 |
3260 | 2.089 × 10^{-1} | -8.308 |
4000 | 1.421 × 10^{-1} | -14.178 |
44680 | 9.706 × 10^{-2} | -21.798 |
21340 | 6.6381 × 10^{-2} | -6.620 |
29340 | 4.5461 × 10^{-2} | -8.30892 |
37300 | 3.111 × 10^{-2} | -14.178 |
41340 | 2.1341 × 10^{-2} | -21.798 |

**Table 6:** Almost cyclic matrix *C*_{40}, QR method.

• Introducing a proper shift of origin can speed up the convergence of the power method [9]. It was therefore decided to try a matrix for which introducing a shift of origin would not help the speed of convergence. Such a matrix of order *n* (*Q _{n}*) can be given by the following relations.

1≤ i, j ≤ n. and 1≤i ≤ n.

The results of our tests, including the Arnoldi method, are shown in **Tables 7**-**10**.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
4200 | 6.506 × 10^{3} | 8.788 |
8.788 | 1.0023 × 10^{3} | 4.608 |
3780 | 6.196 | 1.824 |
4.608 | 1.065 | 0.0609 |
7990 | 2.1967 × 10^{-1} | -1.518 |
1.824 | 4.985 × 10^{-2} | -2.988 |
12190 | 1.160 × 10^{-2} | -4.458 |
0.0609 | 2.703 × 10^{-3} | -5.916 |
16390 | 6.294 × 10^{-4} | -7.372 |

**Table 7:** Matrix *Q*_{40}, power method.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
2400 | 3.150 | 1.134 |
3260 | 2.089 × 10^{-4} | -8.472 |
4000 | 4.778 × 10^{-7} | -14.558 |
44680 | 3.124 × 10^{-10} | |

**Table 8:** Matrix *Q*_{40}, new method.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
4260 | 1.429 × 10^{1} | 2.6591 |
5260 | 1.322 × 10^{-2} | -4.3429 |
5870 | 4.664 × 10^{-5} | -9.9722 |
41300 | 1.4294 × 10^{1} | 2.6588 |
51300 | 1.324 × 10^{-2} | -4.3262 |
55300 | 4.668 × 10^{-5} | -9.9722 |

**Table 9:** Matrix *Q*_{40}, new method.

Operations | Δ(x) | LogΔ(x) |
---|---|---|
4350 | 1.439 × 10^{1} | 2.5591 |
4500 | 1.322 × 10^{-2} | -6.473 |
5000 | 4.5778 × 10^{-7} | -15.558 |
54678 | 4.134 × 10^{-10} | |

**Table 10:** Matrix *Q*_{40}, Arnoldi method.

We will assume that we are interested in the positive eigenvector and the corresponding eigenvalue of an essentially positive matrix. From our trials, it is obvious that in all three cases the rate of convergence of the new method is better than, or at least as fast as, that of the power method [9]. The QR method [26] converges very slowly in the last two cases, when the separation between the eigenvalues is poor. Consider the results of case b, where the matrix is nearly cyclic. A cyclic matrix has some eigenvalues of equal modulus, so for a matrix that is “nearly cyclic” it is plausible to assume the separation between the moduli is very poor. The new method takes about 5,700 multiplications and divisions to reach an accuracy of 8 digits; with about 5 times that many computations, the power method and the QR method reach accuracies of only 2 and 4 digits, respectively. We should remember that the QR method is not specifically designed to calculate just one eigenvalue; therefore, comparable efficiency cannot be expected. Thus, from our experience, we can conclude that the new method shows a good speed of convergence even when the separation of the eigenvalues is poor. However, in the case of banded matrices the new method converges slowly. The new method was tried on various banded matrices arising from finite difference approximations to boundary value problems for ordinary differential equations. A computer code was written specially for banded matrices, to take advantage of the large number of zero elements in a banded matrix. We summarize here the results of our computer runs with the following (20, 20) matrix

*a _{ii}*= –2 if 1 ≤ *i* ≤ 20

The over-relaxation method, as described in equation (36), was tried on the previously mentioned matrix with values of *γ* ranging from 1 to 1.99. The speed of convergence did not show a remarkable dependence on *γ*. Eight digits of accuracy were obtained in 168 iterations for *γ*=1.73, whereas for full matrices the new method gave 9 digits of accuracy in 21 steps.

We will now return our attention to full matrices. Let *R _{n}* be a matrix (of order n) with pseudo-random entries. The new method and the power method were tried on each family of matrices (

No convergence rates are presented, but the experiments shown indicate a linear convergence comparable to classical methods, such as the power method. Of course, other methods, such as Rayleigh quotient iteration, exhibit faster convergence rates (usually cubic).
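For contrast with the linear convergence above, Rayleigh quotient iteration can be sketched as follows. This is a generic textbook version (shown here for a symmetric matrix, where the cubic local convergence holds), not part of the paper's method; it buys its speed by solving a shifted linear system at every step.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, max_iter=50, tol=1e-12):
    """Rayleigh quotient iteration for a symmetric matrix A: each step
    refines the shift mu = x^T A x and solves (A - mu*I) y = x.
    Local convergence is typically cubic."""
    x = x0 / np.linalg.norm(x0)
    mu = x @ A @ x                   # Rayleigh quotient of the start vector
    for _ in range(max_iter):
        try:
            y = np.linalg.solve(A - mu * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break                    # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        mu_new = x @ A @ x
        if abs(mu_new - mu) < tol:
            return mu_new, x
        mu = mu_new
    return mu, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # eigenvalues (5 ± sqrt(5)) / 2
mu, x = rayleigh_quotient_iteration(A, np.array([1.0, 1.0]))
print(mu)
```

Each step costs a full linear solve (O(n³) dense), whereas the methods compared in this paper spend only matrix-vector work per step, so the per-iteration costs are not directly comparable.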

Computational results are presented for a special class of dense matrices, namely Hilbert matrices and others of similar structure. The newly proposed method is about 10% faster (in number of operations) than the power method (compare 24,000 operations in **Table 2** with 27,880 operations in **Table 1** for an error of the order of 10^{–8}). For sparse matrices stemming from standard PDE problems, the new method is inferior.

**Acknowledgments**

The author wishes to thank Prof. Schneider of the Mathematics Department at the University of Wisconsin-Madison for his suggestions and criticism during the writing of the author’s earlier manuscript; his criticism and suggestions led to the development of this method. The author acknowledges indebtedness to him and to his stimulating comments during the review of the earlier article [19]. The author would also like to thank Professor Tsuyoshi Ando at Hokkaido University for his proofs of the convergence of corollaries 1, 2, and 3, which led the author to pursue most of the research collected herein, and for his enthusiasm and help. Lastly, the author expresses his gratitude to Prof. Thomas Laffey of the School of Mathematical Sciences, University of Dublin, for useful comments clarifying essentially positive matrices and refining the proofs of corollaries 1 and 3.

**References**

- Van ATF (1971) Reduction of a positive matrix to a quasistochastic matrix by a similar variation method. USSR Computational Mathematics and Mathematical Physics 11: 255-262.
- Birkhoff G, Varga RS (1958) Reactor criticality and nonnegative matrices. Journal of the Society for Industrial and Applied Mathematics 6: 354-377.
- Brauer A (1957) The Theorems of Ledermann and Ostrowski on Positive Matrices. Duke Math J 24: 265-274.
- Brauer A (1957) A Method for the Computation of the Greatest Root of a Positive Matrix. J Soc Indust Appl Math 5: 250-253.
- Bunse W (1981) A class of diagonal transformation methods for the computation of the spectral radius of a nonnegative irreducible matrix. SIAM J Numer Anal 18: 693-704.
- Collatz L (1942) Einschliessungssatz fuer die charakteristischen Zahlen von Matrizen. Math Z 48: 221-226.
- Elsner L (1971) Verfahren zur Berechnung des Spektralradius nichtnegativer irreducibler Matrizen. Computing 8: 32-39.
- Elsner L (1972) Verfahren zur Berechnung des Spektralradius nichtnegativer irreducibler Matrizen II. Computing 9: 69-73.
- Fan K (1958) Topological Proofs for Certain Theorems on Matrices with Non-Negative Elements. Monatsh. Math. 62: 219-237.
- Frobenius GF (1909) Ueber Matrizen aus Positiven Elementen I and II. Sitzungsber Preuss Akad Wiss, Berlin, pp: 471-476.
- Hall CA, Porsching TA (1969) Bounds for the Maximal Eigenvalue of a Non-Negative Irreducible Matrix. Duke Math J 36: 159-164.
- Hall CA, Porsching TA (1968) Computing the Maximal Eigenvalue and Eigenvector of a Positive Matrix. SIAM J Numer Anal 5: 269-274.
- Hall CA, Porsching TA (1968) Computing the Maximal Eigenvalue and Eigenvector of a Non-Negative Irreducible Matrix. SIAM J Numer Anal 5: 470-474.
- Higham NJ (2002) Accuracy and Stability of Numerical Algorithms. SIAM.
- Ledermann W (1950) Bounds for the Greatest Latent Roots of a Positive Matrix. J London Math Soc 25: 265-268.
- Ma W (2015) A Backward Error for the Symmetric Generalized Inverse Eigenvalue Problem, Linear Algebra and its Applications 464: 90-99.
- Markham TL (1968) An Iterative Procedure for Computing the Maximal Root of a Positive Matrix. Math Comput 22: 869-871.
- Matejas J, Hari V (2015) On High Relative Accuracy of the Kogbetliantz Method. Linear Algebra and its Applications 464: 100-129.
- Oepomo TS (2003) Contribution to Collatz’s Eigenvalue Inclusion Theorem for Nonnegative Irreducible Matrices. ELA 10: 31-45.
- Ostrowski AM (1952) Bounds for the Greatest Latent Root of a Positive Matrix. J London Math Soc 27: 253-256.
- Ostrowski AM, Schneider H (1960) Bounds for the Maximal Characteristic Root of a Non-Negative Irreducible Matrix. Duke Math J 27: 547-553.
- Schneider H (1958) Note on the Fundamental Theorem on Irreducible Non-Negative Matrices. Proc Edinburgh Math Soc 11: 127-130.
- Stor NJ, Slapnicar I, Barlow JL (2015) Accurate Eigenvalue Decomposition of Real Symmetric Arrowhead Matrices and Applications. Linear Algebra and Its Applications. 464: 62-89.
- Wilkinson JH (1966) Convergence of LR, QR and Related Algorithms. Comp Jour 8: 77.
- Faddeev DK, Faddeeva VN (1973) Computational Methods of Linear Algebra. WH Freeman and Company, San Francisco, USA.
- Wielandt H (1967) Topics in the Analytical Theory of Matrices. Lecture Notes prepared by R. Meyer, Department of Mathematics, University of Wisconsin, Madison.
