Appendix

Indranarain Ramlall (University of Mauritius, Mauritius)

Applied Structural Equation Modelling for Researchers and Practitioners

ISBN: 978-1-78635-883-7, eISBN: 978-1-78635-882-0

Publication date: 8 December 2016

Citation

Ramlall, I. (2016), "Appendix", Applied Structural Equation Modelling for Researchers and Practitioners, Emerald Group Publishing Limited, Leeds, pp. 131-138. https://doi.org/10.1108/978-1-78635-883-720161027

Publisher: Emerald Group Publishing Limited

Copyright © 2017 Emerald Group Publishing Limited


A.1 Matrix Operations

Order of a matrix: Number of rows × Number of columns

Matrix addition/subtraction: add/subtract corresponding elements.

Matrix multiplication

A: (m × n)
B: (n × p)
AB = (m × n)(n × p) = (m × p) matrix
Matrix multiplication is non-commutative; that is, AB ≠ BA.
Proof: BA would be (n × p)(m × n), which is not workable: the number of columns of B (p) need not equal the number of rows of A (m), so the product need not even exist.

Matrix multiplication is associative; that is, A(BC) = (AB)C

Matrix division: A/B = AB⁻¹

To find the inverse of a matrix, one needs to compute the determinant of the matrix (via minors and cofactors).
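A minimal numpy sketch of these operations; the matrices here are invented purely for illustration:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])            # a 2 x 2 matrix
B = np.array([[0, 1],
              [1, 0]])            # another 2 x 2 matrix

print(A + B)                      # element-wise addition
print(A - B)                      # element-wise subtraction
print(A @ B)                      # matrix product AB: [[2, 1], [4, 3]]
print(B @ A)                      # BA: [[3, 4], [1, 2]] -- differs from AB
print(A @ np.linalg.inv(B))       # "division" A/B computed as A B^(-1)
```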

A.2 Determinant of a Matrix

The determinant of a matrix is a number, not a matrix.

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

Determinant of A = ad − cb
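A quick numerical check; the 2 × 2 matrix is illustrative:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])            # a = 1, b = 2, c = 3, d = 4
print(1 * 4 - 3 * 2)              # ad - cb = -2
print(np.linalg.det(A))           # -2.0 (up to floating-point error)
```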

A.3 Matrix of Minors

To find the minor of each element, cross out that element's row and column (draw a vertical and a horizontal line through it) and take the determinant of the 2 × 2 submatrix that remains.

$$A = \begin{pmatrix} 1 & 6 & 3 \\ -2 & 7 & -1 \\ -3 & 1 & 4 \end{pmatrix}$$

$M_{11} = \begin{vmatrix} 7 & -1 \\ 1 & 4 \end{vmatrix} = 28 + 1 = 29$

$M_{12} = \begin{vmatrix} -2 & -1 \\ -3 & 4 \end{vmatrix} = -8 - 3 = -11$

$M_{13} = \begin{vmatrix} -2 & 7 \\ -3 & 1 \end{vmatrix} = -2 + 21 = 19$

$M_{21} = \begin{vmatrix} 6 & 3 \\ 1 & 4 \end{vmatrix} = 24 - 3 = 21$

$M_{22} = \begin{vmatrix} 1 & 3 \\ -3 & 4 \end{vmatrix} = 4 + 9 = 13$

$M_{23} = \begin{vmatrix} 1 & 6 \\ -3 & 1 \end{vmatrix} = 1 + 18 = 19$

$M_{31} = \begin{vmatrix} 6 & 3 \\ 7 & -1 \end{vmatrix} = -6 - 21 = -27$

$M_{32} = \begin{vmatrix} 1 & 3 \\ -2 & -1 \end{vmatrix} = -1 + 6 = 5$

$M_{33} = \begin{vmatrix} 1 & 6 \\ -2 & 7 \end{vmatrix} = 7 + 12 = 19$

$$A_{\text{minors}} = \begin{pmatrix} 29 & -11 & 19 \\ 21 & 13 & 19 \\ -27 & 5 & 19 \end{pmatrix}$$
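The following numpy sketch reproduces this matrix of minors; the minor helper is an illustrative name:

```python
import numpy as np

A = np.array([[ 1, 6,  3],
              [-2, 7, -1],
              [-3, 1,  4]])

def minor(M, i, j):
    """Cross out row i and column j, then take the determinant of the 2 x 2 remainder."""
    sub = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

minors = np.array([[minor(A, i, j) for j in range(3)] for i in range(3)])
print(np.round(minors).astype(int))
# [[ 29 -11  19]
#  [ 21  13  19]
#  [-27   5  19]]
```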

A.4 Matrix of Cofactors

A matrix of cofactors is obtained by multiplying each element of the matrix of minors by (−1)^(i+j), where i and j index the row and column. All odd rows start with a + sign while all even rows start with a − sign, the signs alternating along each row:

$$\begin{pmatrix} + & - & + \\ - & + & - \\ + & - & + \end{pmatrix}$$

$$A_{\text{minors}} = \begin{pmatrix} 29 & -11 & 19 \\ 21 & 13 & 19 \\ -27 & 5 & 19 \end{pmatrix}$$

$$A_{\text{cofactors}} = \begin{pmatrix} 29 & 11 & 19 \\ -21 & 13 & -19 \\ -27 & -5 & 19 \end{pmatrix}$$
→ The matrix of cofactors is very helpful for finding the determinant of a matrix of any size.
$$A = \begin{pmatrix} 1 & 6 & 3 \\ -2 & 7 & -1 \\ -3 & 1 & 4 \end{pmatrix}$$

$$A_{\text{cofactors}} = \begin{pmatrix} 29 & 11 & 19 \\ -21 & 13 & -19 \\ -27 & -5 & 19 \end{pmatrix}$$

Determinant of A $= a_{11}c_{11} + a_{12}c_{12} + a_{13}c_{13} = 1(29) + 6(11) + 3(19) = 152$
This expansion can be carried out along any row to get the same answer.
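A numpy check of this expansion, row by row, using A and its cofactor matrix from above:

```python
import numpy as np

A = np.array([[ 1, 6,  3],
              [-2, 7, -1],
              [-3, 1,  4]])
C = np.array([[ 29,  11,  19],    # the cofactor matrix derived above
              [-21,  13, -19],
              [-27,  -5,  19]])

for i in range(3):                # expansion along each row gives the same value
    print(A[i] @ C[i])            # 152, 152, 152
print(np.linalg.det(A))           # 152.0 (up to rounding)
```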

A.5 Transpose of a Matrix

The transpose of a matrix is obtained by taking the rows of the matrix and placing them into the corresponding columns: A → Aᵀ.

A.6 Inverse of a Matrix

$$A^{-1} = \frac{1}{\det A}\,(A_{\text{cofactors}})^{T} = \frac{1}{152}\begin{pmatrix} 29 & -21 & -27 \\ 11 & 13 & -5 \\ 19 & -19 & 19 \end{pmatrix}$$

$$A A^{-1} = I$$

A matrix multiplied by its inverse generates an identity matrix.

Steps: Minors, Matrix of cofactors, Determinant, Transpose of cofactor matrix and Inverse
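A minimal numpy sketch of these steps, applied to the matrix A above; the function name is illustrative:

```python
import numpy as np

def inverse_via_adjugate(A):
    """Invert a square matrix via minors -> cofactors -> determinant -> transpose."""
    n = A.shape[0]
    minors = np.array([[np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                        for j in range(n)] for i in range(n)])
    signs = np.array([[(-1) ** (i + j) for j in range(n)] for i in range(n)])
    cofactors = signs * minors
    det = (A[0] * cofactors[0]).sum()   # cofactor expansion along the first row
    return cofactors.T / det            # transpose of cofactors, divided by det

A = np.array([[ 1., 6.,  3.],
              [-2., 7., -1.],
              [-3., 1.,  4.]])
A_inv = inverse_via_adjugate(A)
print(np.round(152 * A_inv))            # [[29,-21,-27],[11,13,-5],[19,-19,19]]
print(np.round(A @ A_inv, 10))          # identity matrix: A A^(-1) = I
```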

A matrix is invertible if its determinant is not zero.

If a square matrix is invertible, it is called a non-singular matrix.

An (n × n) matrix A is invertible if there exists an (n × n) matrix B such that AB = BA = I; in this case, B is called the inverse of A.

(Note: only square matrices can be invertible.)
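A quick numerical illustration of singularity; the matrix is invented for illustration:

```python
import numpy as np

S = np.array([[1, 2],
              [2, 4]])            # the second row is twice the first
print(np.linalg.det(S))           # 0.0, so S is singular and has no inverse
```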

A.7 Matrix Formulation of SEM

Measurement model (observed–latent):

(A.1) $Y_j = \mu_j + \lambda_{j1}\omega_1 + \cdots + \lambda_{jq}\omega_q + \varepsilon_j \quad (j = 1, \ldots, p)$

Structural model (latent–latent):

(A.2) $\eta_j = \gamma_{j1}\xi_1 + \cdots + \gamma_{jq_2}\xi_{q_2} + \delta_j \quad (j = 1, \ldots, q_1)$

Y = (Y_1, …, Y_p)ᵀ: (p × 1) vector of observed variables

ω = (ω_1, …, ω_q)ᵀ: (q × 1) vector of latent variables

μ_j = an intercept

λ_jk = factor loadings

ε_j = residual error

η = (η_1, …, η_{q_1})ᵀ: (q_1 × 1) random vector of dependent latent variables

ξ = (ξ_1, …, ξ_{q_2})ᵀ: (q_2 × 1) random vector of explanatory latent variables, where q_2 [or q − q_1] is the number of explanatory latent variables

ω = (ηᵀ, ξᵀ)ᵀ

Matrix Notations/Matrix Versions of Eqs. (A.1) and (A.2):

(A.3) $y = \mu + \Lambda\omega + \varepsilon$

(A.4) $\eta = \Gamma\xi + \delta$

y: (p × 1) random vector of observed variables

μ: (p × 1) vector of intercepts

Λ: (p × q) matrix of factor loadings

ω: (q × 1) vector of latent variables

ε: (p × 1) vector of measurement errors

η: (q_1 × 1) vector of dependent latent variables

Γ: (q_1 × q_2) matrix of regression coefficients

ξ: (q_2 × 1) vector of explanatory latent variables

δ: (q_1 × 1) vector of residual errors

When formulating the measurement equation, it is vital to specify the structure of the factor loading matrix, that is, which parameters are to be made free and which are to be fixed. Such a decision is predominantly based on prior knowledge of the observed and latent variables. The regression coefficients represent the magnitude of the expected change in the dependent variable for a one-unit change in the independent variable.
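As a sketch of Eqs. (A.3) and (A.4), and of the free/fixed distinction just described, the numpy simulation below fixes most loadings at zero and frees the rest; the dimensions, loading pattern and parameter values are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q1, q2 = 6, 1, 2                       # 6 observed variables; 1 dependent, 2 explanatory latents
q = q1 + q2

# Structural model (A.4): eta = Gamma xi + delta
Gamma = np.array([[0.6, 0.4]])            # (q1 x q2) regression coefficients
xi = rng.normal(size=(q2, 1))             # explanatory latent variables
delta = rng.normal(scale=0.3, size=(q1, 1))
eta = Gamma @ xi + delta

# Measurement model (A.3): y = mu + Lambda omega + epsilon
omega = np.vstack([eta, xi])              # omega = (eta', xi')'
mu = np.zeros((p, 1))                     # (p x 1) intercepts
Lambda = np.zeros((p, q))                 # loadings fixed at zero encode prior knowledge
Lambda[0:2, 0] = [1.0, 0.8]               # free loadings: indicators 1-2 measure eta
Lambda[2:4, 1] = [1.0, 0.7]               # indicators 3-4 measure xi_1
Lambda[4:6, 2] = [1.0, 0.9]               # indicators 5-6 measure xi_2
epsilon = rng.normal(scale=0.2, size=(p, 1))
y = mu + Lambda @ omega + epsilon         # one draw of the (p x 1) observed vector
print(y.ravel())
```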

The measurement equation constitutes a confirmatory tool with a specifically defined loading matrix.

SEM enables researchers to recognize the imperfect nature of their measures by explicitly specifying the error.

Defining the Degrees of Freedom in an SEM Model:

Y = B_1 X_1 + B_2 X_2 + B_3 X_3 + ε_1: Measurement model

Z = B_4 Y + ε_2: Structural model

Five variables: Y, X_1, X_2, X_3, Z

Four regression coefficients (B_1, B_2, B_3, B_4)

Five variances of the independent variables (X_1, X_2, X_3 and the two error terms)

Three covariances among the independent variables: defined among the observed independent variables X_1, X_2, X_3

The covariance structure matrix of the SEM will consist of:

  1. Regression coefficients

  2. Variances of the independent variables (including the error variances)

  3. Covariances among the independent variables

All of the above form the model parameters.

Degrees of freedom = number of unique elements in the covariance matrix, p(p + 1)/2, minus the number of model parameters.

Order Condition Assessment for Identification:

Number of unique elements in the variance-covariance matrix of the observed variables

minus

Number of parameters to be estimated (Number of regression coefficients, Number of variances and Number of covariances)

The number of regression coefficients is captured by the number of single-headed arrows.

The number of covariances is captured by the number of double-headed arrows.
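For the example above, p = 5 observed variables give 5(5 + 1)/2 = 15 unique elements, and 4 + 5 + 3 = 12 parameters, leaving 3 degrees of freedom. A minimal Python check of this bookkeeping:

```python
# Order-condition bookkeeping for the example model (Y, X1, X2, X3, Z)
p = 5                               # number of observed variables
unique_elements = p * (p + 1) // 2  # 15 unique variances and covariances
n_coefficients = 4                  # B1..B4: the single-headed arrows
n_variances = 5                     # X1, X2, X3 and the two error terms
n_covariances = 3                   # among X1, X2, X3: the double-headed arrows
df = unique_elements - (n_coefficients + n_variances + n_covariances)
print(df)                           # 15 - 12 = 3 degrees of freedom
```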