The matrix equation AXBH=E with SX=XR or PX=sXQ constraint is considered, where S, R are Hermitian idempotent, P, Q are Hermitian involutory, and s=±1. By the eigenvalue decompositions of S and R, the equation AXBH=E with SX=XR constraint is equivalently transformed to an unconstrained problem whose coefficient matrices contain the corresponding eigenvectors, from which the constrained solutions are constructed. The involved eigenvectors are then eliminated via Moore-Penrose generalized inverses, and eigenvector-free formulas of the general solutions are presented. By choosing suitable matrices S and R, we also obtain eigenvector-free formulas of the general solutions to the matrix equation AXBH=E with PX=sXQ constraint.
1. Introduction
In [1], Chen called a square matrix X reflexive or antireflexive with respect to P if
(1)PX=XP or PX=-XP,
where the matrix P∈𝒞n×n is Hermitian involutory. He also pointed out that these matrices possess special properties and have wide applications in engineering and scientific computations [1, 2]. Solving a matrix equation, or a system of matrix equations, under such constraints is therefore of interest [3–14]. In this paper, we consider the matrix equation
(2)AXBH=E
with constraint
(3)PX=sXQ or SX=XR,
where the matrices A∈𝒞m×n, B∈𝒞p×n, E∈𝒞m×p, the Hermitian involutory matrices P,Q∈𝒞n×n, the Hermitian idempotent matrices S,R∈𝒞n×n, and the scalar s=±1.
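The structured matrices in the constraint (3) are easy to generate numerically. The following NumPy sketch (our illustration, not part of the paper) builds a Hermitian involutory P as a Householder reflector and a matrix X that is reflexive with respect to P via the symmetrization X=(M+PMP)/2; the names `reflector` and `M` are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Hermitian involutory P: a Householder reflector I - 2*u*u^H for a unit vector u
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u /= np.linalg.norm(u)
P = np.eye(n) - 2 * np.outer(u, u.conj())

assert np.allclose(P, P.conj().T)      # Hermitian
assert np.allclose(P @ P, np.eye(n))   # involutory

# A reflexive X with respect to P: symmetrize an arbitrary M as X = (M + P M P)/2,
# so that P X = X P holds by construction
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = (M + P @ M @ P) / 2
assert np.allclose(P @ X, X @ P)
```

Antireflexive matrices are obtained analogously from X=(M-PMP)/2, which gives PX=-XP.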
Equation (2) with different constraints, such as symmetry, skew-symmetry, and PX=±XP, was discussed in [9–11, 15–21], where existence conditions and the general solutions to the constrained equation were presented. Using the generalized singular value decomposition (GSVD) [22, 23], the authors of [15–17] simplified the matrix equation by diagonalizing the coefficient matrices, partitioned the new variable matrices into several blocks, imposed the constraint on the subblocks, and then determined the unknown subblocks separately for (2) with a symmetry constraint. A similar strategy was used in [18], where the authors obtained symmetric, skew-symmetric, and positive semidefinite solutions to (2) by the quotient singular value decomposition (QSVD) [24, 25]. Moreover, in [20], the CCD [26] was used to establish a formula for the general solutions to (2) with a diagonal constraint.
In [19], we presented an eigenvector-free solution to the matrix equation (2) with constraint PX=±XP, where the general solution and the existence condition were represented by g-inverses of the matrices A, B, and P. Note that g-inverses are in general not unique, and they can be replaced by Moore-Penrose generalized inverses. Moreover, the constraint that guarantees the eigenvector-free expressions can perhaps be relaxed further. So, in this paper, we focus on (2) with the generalized constraint PX=sXQ or with the constraint SX=XR; our ideas are based on the following observations.
If we set
(4)S=(1/2)(I+P),R=(1/2)(I+sQ),
then S and R are both Hermitian idempotent. This fact implies that PX=sXQ is a special case of SX=XR. So, we only discuss (2) with the SX=XR constraint and construct the PX=sXQ constrained solution by choosing the matrices S, R as in (4).
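The reduction (4) can be checked directly: for any X, SX-XR=(PX-sXQ)/2, so the two constraints coincide. A small NumPy sketch (our illustration; the helper `herm_involutory` is a hypothetical name, built from Householder reflectors):

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 5, -1

def herm_involutory(rng, n):
    # Hypothetical helper: random Householder reflector, Hermitian and involutory
    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    u /= np.linalg.norm(u)
    return np.eye(n) - 2 * np.outer(u, u.conj())

P, Q = herm_involutory(rng, n), herm_involutory(rng, n)
S, R = (np.eye(n) + P) / 2, (np.eye(n) + s * Q) / 2

# S, R are Hermitian idempotent
assert np.allclose(S, S.conj().T) and np.allclose(S @ S, S)
assert np.allclose(R, R.conj().T) and np.allclose(R @ R, R)

# For any X, SX - XR = (PX - sXQ)/2, so SX = XR iff PX = sXQ
X = rng.standard_normal((n, n))
assert np.allclose(S @ X - X @ R, (P @ X - s * X @ Q) / 2)
```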
With the eigenvalue decompositions (EVDs) of the Hermitian matrices S, R, a matrix X satisfying the SX=XR constraint can be rewritten in terms of two (lower dimensional) free variables X^ and Y^, and the corresponding constrained problem can be equivalently transformed to an unconstrained equation
(5)A^1X^B^1H+A^2Y^B^2H=E,
with given coefficient matrices A^i, B^i, i=1,2 (see Section 2 for the details of this discussion).
The general solutions and existence conditions of (5) can be represented by the Moore-Penrose generalized inverses of A^i, B^i, i=1,2 [15, 20, 27–29]. However, these formulas are perhaps not the simplest, because the coefficient matrices contain the eigenvectors of S, R. In fact, since the matrices S, R are Hermitian idempotent, each has only two distinct eigenvalues (1 and 0), and the eigenvectors appearing in the expression of the general solutions and in the existence conditions can easily be represented by S, R themselves. So we present a simple, eigenvector-free formulation of the constrained general solution.
The rest of this paper is organized as follows. In Section 2, we give the general solutions and the existence condition to (2) with SX=XR constraint by the EVDs of S, R. In Section 3, we present the corresponding eigenvector-free representations. Equation (2) with PX=sXQ constraint is regarded as the special case of (2) with SX=XR constraint, and its eigenvector-free representation is given in Section 4. Numerical examples are given in Section 5 to display the effectiveness of our theorems.
We will use the following notation in the rest of this paper. Let 𝒞m×n denote the space of complex m×n matrices. For a matrix A, AH and A† denote its conjugate transpose and Moore-Penrose generalized inverse, respectively. The matrix In is the identity matrix of order n; Om×n refers to the m×n zero matrix, and On is the zero matrix of order n. For any matrix A∈𝒞m×n, we also denote
(6)𝒫A=AA†,KA=Im-𝒫A.
So,
(7)𝒫AH=A†A,KAH=In-𝒫AH.
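The projectors in (6) and (7) translate directly into NumPy via `numpy.linalg.pinv` (our illustration; the variable names are ours): 𝒫A=AA† projects onto the range of A, and KA=Im-𝒫A annihilates it.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 6
A = rng.standard_normal((m, n))

Apinv = np.linalg.pinv(A)     # Moore-Penrose generalized inverse A†
PA  = A @ Apinv               # 𝒫_A = A A†, orthogonal projector onto range(A)
KA  = np.eye(m) - PA          # K_A = I_m - 𝒫_A, projector onto range(A)^⊥
PAH = Apinv @ A               # 𝒫_{A^H} = A† A, projector onto range(A^H)

assert np.allclose(PA @ PA, PA) and np.allclose(PA, PA.T)  # idempotent, symmetric
assert np.allclose(KA @ A, 0)                              # K_A annihilates columns of A
assert np.allclose(A @ PAH, A)                             # A 𝒫_{A^H} = A
```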
2. Solution to (<xref ref-type="disp-formula" rid="EEq1.1">2</xref>) with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M78"><mml:mi>S</mml:mi><mml:mi>X</mml:mi><mml:mo mathvariant="bold">=</mml:mo><mml:mi>X</mml:mi><mml:mi>R</mml:mi></mml:math></inline-formula> Constraint by the EVDs
For the Hermitian idempotent matrices S, R, let
(8)S=Udiag(Ik,On-k)UH,R=Vdiag(Il,On-l)VH
be their two eigenvalue decompositions with unitary matrices U, V, respectively. Then SX=XR holds if and only if
(9)diag(Ik,On-k)X~=X~diag(Il,On-l),
where X~=UHXV. And the constrained solution X can be expressed as
(10)X=Udiag(X^,Y^)VH,X^∈𝒞k×l,Y^∈𝒞(n-k)×(n-l).
Partitioning U=[U1,U2], V=[V1,V2] and using the transformation (10), we find that (2) with the SX=XR constraint is equivalent to the following unconstrained problem:
(11)A^1X^B^1H+A^2Y^B^2H=E,
where
(12)A^1=AU1,B^1=BV1,A^2=AU2,B^2=BV2.
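The transformation (10)–(12) can be verified numerically. The following real-valued NumPy sketch (our illustration; variable names like `Xh`, `A1h` for X^, A^1 are ours) builds Hermitian idempotent S, R from orthonormal columns, forms X=Udiag(X^,Y^)VH, and confirms both the constraint SX=XR and the equivalence of (2) with (11).

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p, k, l = 4, 6, 5, 2, 3

# Hermitian idempotent S, R of ranks k, l: S = U1 U1^H, R = V1 V1^H
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
U1, U2 = U[:, :k], U[:, k:]
V1, V2 = V[:, :l], V[:, l:]
S, R = U1 @ U1.T, V1 @ V1.T

# Any X = U diag(Xh, Yh) V^H satisfies S X = X R
Xh = rng.standard_normal((k, l))
Yh = rng.standard_normal((n - k, n - l))
X = U1 @ Xh @ V1.T + U2 @ Yh @ V2.T
assert np.allclose(S @ X, X @ R)

# The constrained equation A X B^H = E becomes A^1 Xh B^1^H + A^2 Yh B^2^H = E
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))
A1h, A2h, B1h, B2h = A @ U1, A @ U2, B @ V1, B @ V2   # as in (12)
E = A @ X @ B.T
assert np.allclose(A1h @ Xh @ B1h.T + A2h @ Yh @ B2h.T, E)
```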
For the unconstrained problem (11), we recall results on its existence conditions and the expression of its solutions.
Lemma 1.
Given A∈𝒞m×n, B∈𝒞p×q, C∈𝒞m×r, D∈𝒞s×q, and E∈𝒞m×q, the linear matrix equation AXB+CYD=E is consistent if and only if
(13)𝒫GKAE𝒫DH=KAE,𝒫CEKBH𝒫JH=EKBH,
or, equivalently, if and only if
(14)KGKAE=0,KAEKDH=0,KCEKBH=0,EKBHKJH=0,
where G=KAC and J=DKBH. And a representation of the general solution is
(15)Y=G†KAED†+T-𝒫GHT𝒫D,X=A†(E-CYD)B†+Z-𝒫AHZ𝒫B,
with
(16)T=(CKGH)†(Im-CG†KA)EKBHJ†+W-𝒫(CKGH)HW𝒫J,
where the matrices W∈𝒞r×s and Z∈𝒞n×p are arbitrary.
The lemma is easy to verify; see [27] for details. The difference is that we replace the g-inverses in the theorem of [27] by the corresponding Moore-Penrose generalized inverses, which makes the expression of the solutions relatively more complicated. However, in contrast to the multiplicity of g-inverses, the representation involving Moore-Penrose generalized inverses is unique and fixed.
Applying Lemma 1 to the unconstrained problem (11), we have the following theorem.
Theorem 2.
The matrix equation AXBH=E with constraint SX=XR is consistent if and only if
(17)𝒫G^KA^1E𝒫B^2=KA^1E,𝒫A^2EKB^1𝒫J^H=EKB^1,
where
(18)G^=KA^1A^2,J^=B^2HKB^1.
In the meantime, a general solution is given by
(19)Y^=G^†KA^1EB^2H†+(A^2KG^H)†(Im-A^2G^†KA^1)EKB^1J^†-𝒫G^H(A^2KG^H)†(Im-A^2G^†KA^1)EKB^1J^†𝒫B^2H+W-𝒫G^HW𝒫B^2H-𝒫(A^2KG^H)HW𝒫J^+𝒫G^H𝒫(A^2KG^H)HW𝒫J^𝒫B^2H,X^=A^1†(E-A^2Y^B^2H)B^1H†+Z-𝒫A^1HZ𝒫B^1H,
where the matrices W and Z are arbitrary.
In order to separate Y^ from X^ in the second equality of (19), we substitute Y^ into X^. Let
(20)Y*=G^†KA^1EB^2H†+(A^2KG^H)†(Im-A^2G^†KA^1)EKB^1J^†-𝒫G^H(A^2KG^H)†(Im-A^2G^†KA^1)EKB^1J^†𝒫B^2H,X*=A^1†EB^1H†-A^1†A^2Y*B^2HB^1H†,
together with
(21)B^2†B^2B^2H=(B^2†B^2)HB^2H=B^2H,A^2KG^H(A^2KG^H)†A^2KG^H=A^2KG^H.
Then (19) can be rewritten as
(22)Y^=Y*+W-𝒫G^HW𝒫B^2H-𝒫(A^2KG^H)HW𝒫J^+𝒫G^H𝒫(A^2KG^H)HW𝒫J^𝒫B^2H,X^=X*+Z-𝒫A^1HZ𝒫B^1H-A^1†A^2KG^HWKJ^B^2HB^1H†.
3. Eigenvector-Free Formulas of the General Solutions to (<xref ref-type="disp-formula" rid="EEq1.1">2</xref>) with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M124"><mml:mi>S</mml:mi><mml:mi>X</mml:mi><mml:mo mathvariant="bold">=</mml:mo><mml:mi>X</mml:mi><mml:mi>R</mml:mi></mml:math></inline-formula> Constraint
The existence conditions and the expression of the general solution given in Theorem 2 contain the eigenvector matrices of S, R, which implies that the eigenvalue decompositions are needed. In this section, we eliminate the involved eigenvectors from the detailed expressions. With the eigenvalue decompositions (8), we have
(23)U1U1H=S,U2U2H=In-S,V1V1H=R,V2V2H=In-R.
Note that Ui(AUi)† is the Moore-Penrose generalized inverse of AUiUiH, which gives
(24)𝒫A^i=A^iA^i†=(AUiUiH)(AUiUiH)†=AiAi†=𝒫Ai,
where
(25)A1=AU1U1H=AS,A2=AU2U2H=A(In-S).
Then
(26)KA^i=Im-𝒫A^i=Im-𝒫Ai=KAi,G^U2H=KA1A2.
Set
(27)B1=BV1V1H=BR,B2=BV2V2H=B(In-R),
and denote
(28)G=KA1A2,J=B2HKB1.
It is not difficult to verify that
(29)V2J^=J,G^U2H=G,
together with
(30)𝒫G^=G^U2H(G^U2H)†=𝒫G,𝒫J^H=(V2J^)†(V2J^)=𝒫JH.
Then the first equality of (17) can be rewritten as
(31)𝒫GKA1E𝒫B2=KA1E,
and the other can be rewritten as
(32)𝒫A2EKB1𝒫JH=EKB1.
Now, we consider the simplification of the general solution X given by (10), which can be rewritten as
(33)X=U1X^V1H+U2Y^V2H.
Note that
(34)U2G^†=(G^U2H)†=G†,KG^HU2H=U2HKGH,U2A^2†=A2†.
Together with (26),
(35)U2Y*V2H=U2(G^†KA^1EB^2H†+(A^2KG^H)†(Im-A^2G^†KA^1)EKB^1J^†-𝒫G^H(A^2KG^H)†(Im-A^2G^†KA^1)EKB^1J^†𝒫B^2H)V2H=G†KA1EB2H†+(A2KGH)†(Im-A2G†KA1)EKB1J†-𝒫GH(A2KGH)†(Im-A2G†KA1)EKB1J†𝒫B2H,
so we can represent U2Y*V2H by a given expression in Ai, Bi, E. Let
(36)f(A1,A2,B1,B2,E)=G†KA1EB2H†+(A2KGH)†(Im-A2G†KA1)EKB1J†-𝒫GH(A2KGH)†(Im-A2G†KA1)EKB1J†𝒫B2H.
Hence, we have
(37)U2Y*V2H=f(A1,A2,B1,B2,E),U1X*V1H=A1†EB1H†-A1†A2U2Y*V2HB2HB1H†=A1†EB1H†-A1†A2f(A1,A2,B1,B2,E)B2HB1H†.
Since
(38)V2KJ^=V2(In-l-𝒫J^)=(In-V2J^(V2J^)†)V2=KJV2,
then
(39)U1(Z-𝒫A^1HZ𝒫B^1H-A^1†A^2KG^HWKJ^B^2HB^1H†)V1H=U1ZV1H-𝒫A1HU1ZV1H𝒫B1H-A1†A2KGHU2WV2HKJB2HB1H†,U2(W-𝒫G^HW𝒫B^2H-𝒫(A^2KG^H)HW𝒫J^+𝒫G^H𝒫(A^2KG^H)HW𝒫J^𝒫B^2H)V2H=U2WV2H-𝒫GHU2WV2H𝒫B2H-𝒫(A2KGH)HU2WV2H𝒫J+𝒫GH𝒫(A2KGH)HU2WV2H𝒫J𝒫B2H.
Letting
(40)U1ZV1H+U2WV2H=F,
it is not difficult to verify that SF=FR. Together with
(41)A2U1=0,A1U2=0,V2HB1†=0,V1HB2†=0,
the following equality holds:
(42)𝒫A1HU1ZV1H𝒫B1H+𝒫GHU2WV2H𝒫B2H=(𝒫A1H+𝒫GH)(U2WV2H+U1ZV1H)(𝒫B1H+𝒫B2H)=(𝒫A1H+𝒫GH)F(𝒫B1H+𝒫B2H).
Note that
(43)GU1=0,A2KGHU1=0.
Then
(44)A2KGHU2WV2H=A2KGH(U2WV2H+U1ZV1H)=A2KGHF.
Hence,
(45)A1†A2KGHU2WV2HKJB2HB1H†=A1†A2KGHFKJB2HB1H†,𝒫(A2KGH)HU2WV2H𝒫J-𝒫GH𝒫(A2KGH)HU2WV2H𝒫J𝒫B2H=𝒫(A2KGH)HF𝒫J-𝒫GH𝒫(A2KGH)HF𝒫J𝒫B2H.
Substituting the expressions above into (33) yields that
(46)X=A1†EB1H†+f(A1,A2,B1,B2,E)-A1†A2f(A1,A2,B1,B2,E)B2HB1H†+F-(𝒫A1H+𝒫GH)F(𝒫B1H+𝒫B2H)-A1†A2KGHFKJB2HB1H†-𝒫(A2KGH)HF𝒫J+𝒫GH𝒫(A2KGH)HF𝒫J𝒫B2H.
We have the following theorem.
Theorem 3.
Let
(47)A1=AS,A2=A(In-S),B1=BR,B2=B(In-R).
The matrix equation (2) with constraint SX=XR is consistent if and only if
(48)𝒫GKA1E𝒫B2=KA1E,𝒫A2EKB1𝒫JH=EKB1,
with
(49)G=KA1A2,J=B2HKB1.
In the meantime, a general solution is given by
(50)X=A1†EB1H†+f(A1,A2,B1,B2,E)-A1†A2f(A1,A2,B1,B2,E)B2HB1H†+F-(𝒫A1H+𝒫GH)F(𝒫B1H+𝒫B2H)-A1†A2KGHFKJB2HB1H†-𝒫(A2KGH)HF𝒫J+𝒫GH𝒫(A2KGH)HF𝒫J𝒫B2H,
where the arbitrary matrix F satisfies SF=FR and f(A1,A2,B1,B2,E) is determined by (36).
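As a numerical sanity check of Theorem 3 (our illustration, assuming the theorem as stated, with real matrices and ours variable names), the following sketch computes the particular solution obtained by setting F=0 and verifies the residual AXBH=E and the constraint SX=XR.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, p, k, l = 4, 6, 5, 2, 3
pinv = np.linalg.pinv

# Hermitian idempotent S, R (orthogonal projectors of ranks k, l)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
S = U[:, :k] @ U[:, :k].T
R = V[:, :l] @ V[:, :l].T

A = rng.standard_normal((m, n)); B = rng.standard_normal((p, n))
# Consistent right-hand side: E = A X* B^H with S X* = X* R
M = rng.standard_normal((n, n))
Xstar = S @ M @ R + (np.eye(n) - S) @ M @ (np.eye(n) - R)
E = A @ Xstar @ B.T

# Eigenvector-free quantities of Theorem 3, as in (47)-(49)
A1, A2 = A @ S, A @ (np.eye(n) - S)
B1, B2 = B @ R, B @ (np.eye(n) - R)
KA1 = np.eye(m) - A1 @ pinv(A1)
KB1 = np.eye(p) - B1 @ pinv(B1)
G, J = KA1 @ A2, B2.T @ KB1
A2KGH = A2 @ (np.eye(n) - pinv(G) @ G)     # A2 K_{G^H}

# f(A1, A2, B1, B2, E) as in (36)
core = (np.eye(m) - A2 @ pinv(G) @ KA1) @ E @ KB1 @ pinv(J)
f = (pinv(G) @ KA1 @ E @ pinv(B2.T)
     + pinv(A2KGH) @ core
     - pinv(G) @ G @ pinv(A2KGH) @ core @ (B2.T @ pinv(B2.T)))
# Particular solution of (50) with the free matrix F = 0
X = pinv(A1) @ E @ pinv(B1.T) + f - pinv(A1) @ A2 @ f @ B2.T @ pinv(B1.T)

assert np.allclose(A @ X @ B.T, E)   # residual error
assert np.allclose(S @ X, X @ R)     # SR-commuting error
```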
4. Eigenvector-Free Formulas of the General Solutions to (<xref ref-type="disp-formula" rid="EEq1.1">2</xref>) with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M167"><mml:mi>P</mml:mi><mml:mi>X</mml:mi><mml:mo mathvariant="bold">=</mml:mo><mml:mi>s</mml:mi><mml:mi>X</mml:mi><mml:mi>Q</mml:mi></mml:math></inline-formula> Constraint
For this constraint, if we set S and R as in (4), it is not difficult to verify that S, R are Hermitian idempotent and that the constraint PX=sXQ is equivalent to
(51)SX=XR.
By Theorem 3, we have the following theorem.
Theorem 4.
Let
(52)A1=12A(In+P),A2=12A(In-P),B1=12B(In+sQ),B2=12B(In-sQ).
The matrix equation (2) with constraint PX=sXQ is consistent if and only if
(53)𝒫GKA1E𝒫B2=KA1E,𝒫A2EKB1𝒫JH=EKB1,
with
(54)G=KA1A2,J=B2HKB1.
In the meantime, a general solution is given by
(55)X=A1†EB1H†+f(A1,A2,B1,B2,E)-A1†A2f(A1,A2,B1,B2,E)B2HB1H†+F-(𝒫A1H+𝒫GH)F(𝒫B1H+𝒫B2H)-A1†A2KGHFKJB2HB1H†-𝒫(A2KGH)HF𝒫J+𝒫GH𝒫(A2KGH)HF𝒫J𝒫B2H,
where the arbitrary matrix F satisfies PF=sFQ and f(A1,A2,B1,B2,E) is determined by (36).
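The reduction from Theorem 4 to Theorem 3 amounts to the identities A(In-S)=A(In-P)/2 and B(In-R)=B(In-sQ)/2 under (4). A short NumPy check (our illustration; the helper `reflector` is a hypothetical name):

```python
import numpy as np

rng = np.random.default_rng(6)
n, s = 6, -1

def reflector(rng, n):
    # Hypothetical helper: real Hermitian involutory matrix (Householder reflector)
    u = rng.standard_normal(n); u /= np.linalg.norm(u)
    return np.eye(n) - 2 * np.outer(u, u)

P, Q = reflector(rng, n), reflector(rng, n)
S, R = (np.eye(n) + P) / 2, (np.eye(n) + s * Q) / 2   # as in (4)

# A1, A2, B1, B2 from (52) coincide with those of Theorem 3 under (4)
A = rng.standard_normal((4, n)); B = rng.standard_normal((5, n))
assert np.allclose(A @ S, 0.5 * A @ (np.eye(n) + P))
assert np.allclose(A @ (np.eye(n) - S), 0.5 * A @ (np.eye(n) - P))
assert np.allclose(B @ R, 0.5 * B @ (np.eye(n) + s * Q))
assert np.allclose(B @ (np.eye(n) - R), 0.5 * B @ (np.eye(n) - s * Q))
```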
5. Numerical Examples
In this section, we present some numerical examples to illustrate the effectiveness of Theorems 3 and 4. For simplicity, we set m=n=p and restrict the coefficient matrices A, B and the right-hand side matrix E to ℛn×n. The coefficient matrices A, B are randomly constructed by
(56)A=Udiag(σ1,…,σn)VT,
where the orthogonal matrices U and V are constructed as follows:
(57)[U,temp]=qr(1-2rand(n)),[V,temp]=qr(1-2rand(n)),
and the singular values {σi} are chosen from the interval (0,1). For the computed solution X of (2) with constraint PX=sXQ or SX=XR, the residual error ϵX, the PQ-commuting error ϵPQ, the SR-commuting error ϵSR, and the consistency error Conderr are defined by
(58)ϵX=∥E-AXBH∥F,ϵPQ=∥PX-sXQ∥F,ϵSR=∥SX-XR∥F,Conderr=max{∥𝒫GKA1E𝒫B2-KA1E∥F,∥𝒫A2EKB1𝒫JH-EKB1∥F}.
Example 1.
In this example, we test the solutions to (2) with the SX=XR constraint by Theorem 3. The coefficient matrices A, B are constructed as in (56), and the right-hand side matrix E is constructed as follows:
(59)E=AX*BH,
where X* satisfies
(60)SX*=X*R,
and S, R are symmetric idempotent. This implies that the constrained equation (2) is consistent, so the residual error ϵX and the consistency error Conderr should be (numerically) zero for the computed solution X.
For different n, the residual error ϵX, the SR-commuting error ϵSR, and the consistency error Conderr all reach a precision of about 10-09 or better and seem not to depend strongly on the matrix size n, while the CPU time grows quickly as n increases. In Table 1, we list the CPU time, ϵX, ϵSR, and Conderr, respectively.
Table 1: Variant matrix sizes n for the solutions to (2) with SX=XR constraint.

n     CPU (s)   ϵX          ϵSR         Conderr
100   0.38      1.14*10-12  6.53*10-13  7.12*10-12
300   1.34      3.23*10-12  4.43*10-13  5.63*10-12
500   5.62      4.12*10-10  4.76*10-13  2.24*10-11
700   14.55     3.91*10-10  7.54*10-13  5.43*10-11
900   29.63     2.31*10-09  3.13*10-12  1.37*10-11
1100  55.34     9.36*10-09  6.64*10-12  2.19*10-11
Example 2.
We test the solutions to (2) with the PX=XQ constraint by Theorem 4. The coefficient matrices A, B are constructed as in (56), and the right-hand side matrix E is constructed as
(61)E=AX*BH,
where X* satisfies
(62)PX*=X*Q,
and P, Q are symmetric involutory.
For different n, the numerical results are similar to those of Example 1; that is, the residual error ϵX, the PQ-commuting error ϵPQ, and the consistency error Conderr all reach a precision of about 10-09 or better and seem not to depend strongly on the matrix size n. However, the CPU time grows quickly as n increases. In Table 2, we list the CPU time, ϵX, ϵPQ, and Conderr, respectively.
Table 2: Variant matrix sizes n for solutions to (2) with PX=XQ constraint.

n     CPU (s)   ϵX          ϵPQ         Conderr
100   0.42      6.11*10-13  5.61*10-13  2.31*10-11
300   2.83      2.07*10-10  9.73*10-13  4.34*10-10
500   8.21      5.85*10-10  1.55*10-12  3.61*10-10
700   14.53     1.17*10-10  2.24*10-12  5.37*10-09
900   28.54     2.60*10-09  4.61*10-11  8.18*10-09
1100  52.81     5.35*10-09  4.92*10-11  6.53*10-09
6. Conclusion
In this paper, we consider (2) with two special constraints, PX=sXQ and SX=XR, where P,Q∈𝒞n×n are Hermitian involutory, S,R∈𝒞n×n are Hermitian idempotent, and s=±1. We represent the general solutions to the constrained equation by the eigenvalue decompositions of S, R, eliminate the involved eigenvectors via Moore-Penrose generalized inverses, and obtain eigenvector-free formulas of the general solutions.
Acknowledgments
The author is grateful to the referees for their enlightening suggestions. Moreover, the research was supported in part by the Natural Science Foundation of Zhejiang Province and National Natural Science Foundation of China (Grant nos. Y6110639, LQ12A01017, and 11201422).
References
[1] H.-C. Chen, "Generalized reflexive matrices: special properties and applications."
[2] H.-C. Chen and A. H. Sameh, "A matrix decomposition method for orthotropic elasticity problems."
[3] F. Li, X. Hu, and L. Zhang, "The generalized reflexive solution for a class of matrix equations (AX=B, XC=D)."
[4] C. Meng, X. Hu, and L. Zhang, "The skew-symmetric orthogonal solutions of the matrix equation AX=B."
[5] C. J. Meng and X. Y. Hu, "An inverse problem for symmetric orthogonal matrices and its optimal approximation."
[6] Z.-Y. Peng, "The inverse eigenvalue problem for Hermitian anti-reflexive matrices and its approximation."
[7] Z.-Y. Peng and X.-Y. Hu, "The reflexive and anti-reflexive solutions of the matrix equation AX=B."
[8] Y. Qiu, Z. Zhang, and J. Lu, "The matrix equations AX=B, XC=D with PX=sXP constraint."
[9] Q.-W. Wang, S.-W. Yu, and C.-Y. Lin, "Extreme ranks of a linear quaternion matrix expression subject to triple quaternion matrix equations with applications."
[10] Q.-W. Wang, H.-X. Chang, and C.-Y. Lin, "P-(skew)symmetric common solutions to a pair of quaternion matrix equations."
[11] Q.-W. Wang, J. W. van der Woude, and H.-X. Chang, "A system of real quaternion matrix equations with applications."
[12] Q.-W. Wang and Z.-H. He, "Some matrix equations with applications."
[13] Q. Wang and Z. He, "A system of matrix equations and its applications."
[14] Z.-H. He and Q.-W. Wang, "A real quaternion matrix equation with applications."
[15] K. E. Chu, "Singular value and generalized singular value decompositions and the solution of linear matrix equations."
[16] K. E. Chu, "Symmetric solutions of linear matrix equations by matrix decompositions."
[17] H. Dai, "On the symmetric solutions of linear matrix equations."
[18] Y.-B. Deng, X.-Y. Hu, and L. Zhang, "Least squares solution of BXAT=T over symmetric, skew-symmetric, and positive semidefinite X."
[19] Y. Qiu and C. Qiu, "Matrix equation AXB=E with PX=sXP constraint."
[20] G. Xu, M. Wei, and D. Zheng, "On solutions of matrix equation AXB+CYD=F."
[21] M. Wang, X. Cheng, and M. Wei, "Iterative algorithms for solving the matrix equation AXB+CXTD=E."
[22] C. C. Paige and M. A. Saunders, "Towards a generalized singular value decomposition."
[23] C. C. Paige, "Computing the generalized singular value decomposition."
[24] D. Chu and B. De Moor, "On a variational formulation of the QSVD and the RSVD."
[25] B. D. Moor and G. H. Golub, "Generalized singular value decompositions: a proposal for a standardized nomenclature."
[26] G. H. Golub and H. Y. Zha, "Perturbation analysis of the canonical correlations of matrix pairs."
[27] J. K. Baksalary and R. Kala, "The matrix equation AXB+CYD=E."
[28] A.-P. Liao, Z.-Z. Bai, and Y. Lei, "Best approximate solution of matrix equation AXB+CYD=E."
[29] A. B. Özgüler, "The equation AXB+CYD=E over a principal ideal domain."