
Year : 2022  |  Volume : 23  |  Issue : 2  |  Page : 127-133

Latent profile analysis – An emerging advanced statistical approach to subgroup identification

1 PhD Scholar, Department of Biobehavioral Nursing Science, University of Illinois, Chicago, Illinois, USA; Professor, College of Nursing, Christian Medical College, Vellore, Tamil Nadu, India
2 Professor, Biobehavioral Sciences, University of Illinois, Chicago, Illinois, USA

Date of Submission: 18-Mar-2022
Date of Decision: 20-Oct-2022
Date of Acceptance: 27-Oct-2022
Date of Web Publication: 24-Jan-2023

Correspondence Address:
Dr. Asha Mathew
College of Nursing, Christian Medical College, Vellore, Tamil Nadu


Source of Support: None, Conflict of Interest: None

DOI: 10.4103/ijcn.ijcn_24_22


Latent profile analysis (LPA) is emerging as an advanced statistical clustering approach. It is a type of mixture modeling that uses a person-centred approach to classify individuals from a heterogeneous population into homogenous subgroups. LPA identifies the distinct patterns of responses to a set of observed continuous variables in a sample of individuals, and these response patterns are known as latent profiles. This article presents an overview of LPA with key assumptions, sample size considerations, advantages, and limitations. Using an example of LPA application in research, the article also presents the process of conducting LPA and its implications for nursing research. LPA has valuable potential in nursing and could provide new insights into a particular research concept and offer more nuanced information regarding patterns of responses. Further, researchers could examine the impact of targeted assessment and interventions, identify predictors of subgroup membership and explore differences in outcomes across the profiles.

Keywords: Latent profile analysis, latent variable, mixture modelling, person-centered approach, subgroup identification

How to cite this article:
Mathew A, Doorenbos AZ. Latent profile analysis – An emerging advanced statistical approach to subgroup identification. Indian J Cont Nsg Edn 2022;23:127-33

How to cite this URL:
Mathew A, Doorenbos AZ. Latent profile analysis – An emerging advanced statistical approach to subgroup identification. Indian J Cont Nsg Edn [serial online] 2022 [cited 2023 Feb 3];23:127-33. Available from: https://www.ijcne.org/text.asp?2022/23/2/127/368422

  Introduction

Nurse researchers are often interested in subgroup identification and analysis, examining whether subgroups differ significantly in outcomes, intervention response or other characteristics. Examples of subgroups include males versus females, intervention versus control groups, pre- versus post-treatment groups, or groups with varying levels of disease severity. These examples are based on variables that are observed or measured during the study, i.e., gender captured through a demographic form or disease severity captured through standardised indices. However, there could be differences in the sample attributable to a variable that is unobserved (latent). In other words, there could be 'hidden' groups within the sample that differ from each other on an attribute that is not directly measured.

Advanced statistical methods are used to infer conclusions about a population of interest based on data obtained from a sample drawn from it.[1] Going beyond describing the sample (i.e., descriptive statistics), advanced statistical methods allow researchers to explore relationships, study effects and establish causal links among other purposes.[2] One such emerging advanced method is latent profile analysis (LPA). LPA is a person-centered statistical approach, which focuses on similarities and differences among individuals, instead of relationships among variables. LPA divides a heterogeneous population into subgroups of individuals, such that individuals within a subgroup are similar to each other but are different from individuals in other subgroups.[3] These subgroups (also known as classes or profiles) are categories of an unobserved nominal or ordinal variable. Class membership of individuals is unknown but can be inferred from a set of observed continuous characteristics,[4] and each subgroup possesses a unique set of characteristics that differentiates it from other subgroups.[3] LPA is a more commonly reported type of latent variable mixture modelling, along with latent class analysis (LCA). When the observed variables (or indicators) are on a continuous scale, the subgroups (or latent profiles) are identified using LPA. When the indicators are categorical, the subgroups (or latent classes) are identified using LCA.

This article presents an overview of LPA and discusses its implications for nursing research. The authors provide a non-technical overview of LPA for clarity; however, some technical terms have been used for completeness. These terms are explained separately in [Table 1]. Because the procedures in LPA are extensive, only an overview of an unconditional model is discussed, i.e., a model without predictors and outcome variables added. Further, this article is by no means an exhaustive source of information on LPA; the authors have provided resources at the end of the article for additional reference.

  Latent Profile Analysis – An Overview

LPA is commonly attributed to Lazarsfeld and Henry[5] and is an emerging clustering approach. The goal of LPA is to classify individuals from a heterogeneous population into smaller, more homogenous subgroups based on individuals' values on continuous indicators. LPA identifies the distinct patterns or combinations of responses to a set of observed continuous indicators in a sample of individuals, and these distinct response patterns are known as latent profiles. Thus, LPA is a mixture model that postulates that there is an underlying unobserved categorical variable that divides a population into mutually exclusive and exhaustive latent classes.[4] Identifying meaningful subgroups in this way has prevention and treatment implications. Once the correct number of profiles is identified, researchers can examine how the indicators combine to form the profiles, and how those combinations are differentially associated with predictors and outcomes.[6]

Statistical assumptions in latent profile analysis

As briefly explained earlier, population heterogeneity can be observed or unobserved. Observed population heterogeneity occurs when subgroups within the data can be defined in terms of observed variables (e.g., gender, age, occupation or socio-economic status). Researchers would then use analytic techniques such as t-tests, Chi-square tests or analysis of variance to compare the observed subgroups. On the other hand, unobserved population heterogeneity occurs when the variables that cause the heterogeneity are not observed a priori. In this case, subgroups are latent (or unobserved) and must be inferred from the data. LPA assumes unobserved heterogeneity and the existence of subgroups with specific sub-distributions of variables. Each sub-distribution represents a latent profile, and these profiles can be differentiated by different means and variances of the observed indicators. Thus, LPA assumes that the observed sample is drawn from a heterogeneous population that is a mixture of K profile-specific distributions.[6] In other words, individuals belonging to the same profile are similar to one another in that their observed scores on the continuous variables come from the same probability distribution.[5] LPA also assumes that the observed indicator variables are normally distributed within each latent profile, and the LPA model represents the distribution of the scores on the observed indicator variables (yi, i = 1, 2, …, n) as a function of the probability of membership in latent profile k (k = 1, 2, …, K).

Another assumption is that of local independence, which implies that the indicators are uncorrelated within the identified latent classes.[7],[8] This assumption indicates that latent class membership explains all of the shared variance among the observed indicators. Stated differently, any association among the observed indicators is assumed to be entirely explained by the latent class variable.[7]
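These two assumptions can be made concrete in a short sketch. The following minimal Python example (all parameter values and variable names are hypothetical, chosen only for illustration) computes the density of a single observation under a mixture of K profile-specific normal distributions, with local independence expressed as a product of indicator-wise densities within each profile:

```python
import numpy as np

def normal_pdf(y, mu, sigma):
    # Univariate normal density, evaluated per indicator
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def lpa_density(y, weights, means, sds):
    # Density of one observation vector y under a K-profile LPA model.
    # Local independence: within a profile, the joint density is the
    # product of the indicator-wise normal densities.
    return sum(w * np.prod(normal_pdf(y, m, s))
               for w, m, s in zip(weights, means, sds))

# Hypothetical 2-profile model on 3 indicators (all numbers illustrative)
weights = [0.6, 0.4]                                   # profile prevalences
means = [np.array([1.0, 1.0, 1.0]), np.array([3.0, 3.5, 2.5])]
sds = [np.ones(3), np.ones(3)]                         # profile-specific SDs

density = lpa_density(np.array([1.2, 0.8, 1.1]), weights, means, sds)
```

An observation close to a profile's mean vector receives a high density from that profile's component, weighted by the profile's prevalence in the population.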

Person-centered approach and latent profile analysis

The person-centred approach in LPA is based on three assumptions. First, individual differences exist and are important within a phenomenon. Second, these differences occur in a logical way, which can be examined through patterns or combinations of responses. Third, a small number of patterns are meaningful and occur across individuals.[9] In LPA, membership probabilities in all the estimated latent profiles are calculated for every individual in the data set. A posterior probability [Table 1] is calculated for each individual in each profile, with values closer to 1 indicating a higher probability of membership in that profile. A given individual is assigned to the profile with the highest probability value (e.g., a probability of 0.95 of belonging to profile 1 vs. 0.05 of belonging to profile 2), and each individual in the population has membership in exactly one of the latent profiles. Thus, latent profiles are based on probabilities, which is why LPA is a probabilistic technique.
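As a sketch of how such posterior probabilities arise (a hypothetical two-profile, two-indicator model; the parameter values are invented for illustration), Bayes' rule combines the profile prevalences with the profile-specific densities:

```python
import numpy as np

def posterior_probs(y, weights, means, sds):
    # Posterior membership probabilities by Bayes' rule:
    # P(profile k | y) = pi_k * f_k(y) / sum_j pi_j * f_j(y)
    joint = np.array([w * np.prod(np.exp(-0.5 * ((y - m) / s) ** 2)
                                  / (s * np.sqrt(2 * np.pi)))
                      for w, m, s in zip(weights, means, sds)])
    return joint / joint.sum()

# Hypothetical 2-profile model on 2 indicators
weights = [0.5, 0.5]
means = [np.array([1.0, 1.0]), np.array([4.0, 4.0])]
sds = [np.ones(2), np.ones(2)]

p = posterior_probs(np.array([1.1, 0.9]), weights, means, sds)
modal_class = int(np.argmax(p))   # modal assignment: highest posterior
```

Here an individual scoring near (1, 1) receives a posterior probability close to 1 for the first profile and is modally assigned to it.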

Estimated parameters

In LPA, two sets of parameters are of most interest: (i) profile membership probabilities that describe the distribution of the profiles in the population, i.e., class prevalence or percentage of sample in a given profile and (ii) means (and variances) of the items within each profile, i.e., means (and variances) of each indicator given membership in a particular latent profile. The profile-specific means are used to interpret and label the profiles.

  Sample Size in Latent Profile Analysis

Power analysis in LPA is a developing area. There is currently no formula or criterion for estimating the required sample size in LPA.[10],[11] The required sample size depends on various factors, including the number of profiles and the distance between them, which are unknown in advance and can only be estimated from prior research.[11] The most rigorous way of dealing with sample size issues would be to conduct Monte Carlo simulations to determine the power of specific sample sizes, or to decide on an appropriate sample size for a given power level. Some simulation studies have suggested 300-500 as a minimum sample size, and the most commonly used fit indices for mixture models can be expected to function with a minimum sample of 300.[7],[11]

  Latent Profile Analysis Procedure

LPA involves a model-building process. In LPA, the expectation-maximisation (EM) algorithm is used to obtain maximum likelihood estimates of the parameters [Table 1].[12] The EM algorithm treats the unobserved profile membership of individuals as missing data and creates a complete data set for the model, from which the parameters are estimated. Research questions typically investigated with LPA include: (a) What profiles are present in the data? (b) What is the size of each profile? (c) What factors predict latent profile membership? and (d) How do outcomes differ across profiles? LPA can be approached in a purely exploratory manner (e.g., no assumptions about the number of profiles, size of profiles or potential predictors) or a fully confirmatory manner (e.g., hypotheses about the number of profiles, size of profiles or potential outcomes).[6]
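The E- and M-steps can be sketched for the simplest case, a one-indicator, two-profile model (a minimal Python illustration on simulated data; the seed, sample sizes and starting values are arbitrary assumptions, not part of any published analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated 1-D data from two hypothetical profiles (true means 0 and 4)
y = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(4.0, 1.0, 200)])

pi = np.array([0.5, 0.5])      # starting profile prevalences
mu = np.array([-1.0, 1.0])     # starting profile means
var = np.array([1.0, 1.0])     # starting profile variances

for _ in range(200):
    # E-step: treat profile membership as missing data and compute the
    # posterior responsibility of each profile for each observation
    dens = np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters with responsibilities as weights
    nk = resp.sum(axis=0)
    pi = nk / len(y)
    mu = (resp * y[:, None]).sum(axis=0) / nk
    var = (resp * (y[:, None] - mu) ** 2).sum(axis=0) / nk
```

After convergence, the estimated prevalences, means and variances approximate the profile-specific distributions that generated the data.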

An example from literature

An example of a study that used LPA is provided to walk readers through the process of conducting LPA. This study used LPA to confirm four patterns of attachment among individuals with psychosis.[13] The sample consisted of 588 people who participated in psychosis-related studies across the United Kingdom. The researchers used a 16-item self-report measure, the Psychosis Attachment Measure (PAM), which assesses two dimensions of attachment (anxious and avoidant), and aimed to explore the patterns of response across the PAM items.

Steps of latent profile analysis

The model-building process in LPA generally involves six steps:

  1. Identifying LPA indicators
  2. Data inspection
  3. Specifying LPA models
  4. Estimating LPA models
  5. Evaluating LPA models
  6. Interpreting LPA results.

Identifying latent profile analysis indicators

First, researchers need to choose the variables that will be used to classify individuals into meaningful subgroups. This first step is important because the selected variables that form the profiles should have a strong conceptual basis. LPA application should be theory-driven,[6] and the choice of variables should be based on relevant theories or previous research, to ensure that the identified profiles are theoretically and practically meaningful.

In the example, the 16 items of the PAM were used as the indicators, with responses (0-3) treated as continuous. Because the PAM is the most widely used measure of attachment in psychosis, the researchers hypothesised that the PAM can be used to categorise clients with psychosis into four different attachment groups. To visualise the LPA, the latent profile model for this study is depicted in [Figure 1]. There is a latent categorical variable 'C' (attachment style). Across the latent profiles of 'C' (which are unobserved and are to be identified), the item means of the continuous indicators can vary (implied by the arrows pointing from 'C' to the variables). In other words, the latent profiles predict responses on the 16 indicators.
Figure 1: A visual representation of a latent profile model


Data inspection

As with all analyses, the data should be cleaned for analysis and checked for standard statistical assumptions and missing values. Appropriate procedures should be used to handle missing data. In the example, there was no mention of handling missing data.

Specifying latent profile analysis models

Next, the number of profiles to be examined needs to be determined. The number of profiles may be based on hypotheses derived from theory and previous research, clinical relevance or researchers' expectations.[6] In the example, the hypothesis was that four attachment profiles would be confirmed. Hence, the researchers examined models ranging from two to six profiles, i.e., up to two more profiles than hypothesised.

Estimating latent profile analysis models

This step involves estimating the models using the statistical program of choice. In this step, the researchers also need to determine the specific estimation method (e.g., maximum likelihood estimation or robust maximum likelihood estimation). If the data are non-normally distributed, robust maximum likelihood estimation is generally preferred. The commands used for estimating models vary across statistical programs. Currently, several programs allow the estimation of LPA models, including Mplus, SAS, STATA and R.

Further, there are four variance-covariance structures, based on how the variances and covariances are allowed to vary.[12] The four specifications combine two choices: whether the indicator variances are held equal or allowed to differ across classes, and whether the covariances within classes are fixed (usually to zero) or allowed to vary. The researchers need to estimate and compare models across the full range of these four specifications. For example, 2-profile, 3-profile, 4-profile and 5-profile solutions could each be estimated under the four specifications, requiring a researcher to run 16 models in the statistical program. Also, when estimating LPA models, to ensure that convergence is at the global maximum and not at a local maximum [Table 1], researchers need to use multiple random sets of starting values with the estimation algorithm (a minimum of 50-100 sets of randomly varied starting values).[12]
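The multiple-random-starts idea can be sketched in a few lines (a hypothetical one-indicator, two-profile model in Python; the simulated data, seed and number of starts are illustrative assumptions): each run starts the EM algorithm from a different set of starting means, and the solution with the highest log-likelihood is retained as the presumed global maximum.

```python
import numpy as np

def fit_em(y, mu0, n_iter=100):
    # One EM run for a 1-D 2-profile model from given starting means;
    # returns the final log-likelihood and the parameter estimates
    k = len(mu0)
    pi, mu, var = np.full(k, 1.0 / k), mu0.astype(float), np.full(k, y.var())
    for _ in range(n_iter):
        dens = np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        nk = resp.sum(axis=0)
        pi, mu = nk / len(y), (resp * y[:, None]).sum(axis=0) / nk
        # Small variance floor to guard against degenerate solutions
        var = np.maximum((resp * (y[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-3)
    dens = np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    loglik = np.log((pi * dens).sum(axis=1)).sum()
    return loglik, pi, mu, var

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])

# 50 random sets of starting values; keep the highest-log-likelihood solution
runs = [fit_em(y, rng.choice(y, size=2)) for _ in range(50)]
best_loglik, best_pi, best_mu, best_var = max(runs, key=lambda r: r[0])
```

Dedicated software automates this (e.g., via random-start options), but the principle is the same: many starts, one retained maximum.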

The model-building process begins by estimating a 1-class LPA model, which estimates the overall item means and variances in the sample. This 1-class model serves as a comparative baseline for models with more than one class. Then, the number of classes is increased by one, and the resulting solution is examined to see whether it is conceptually and statistically superior to the previous solution.[7] Estimating additional classes is usually stopped when convergence issues are encountered (generally indicated by error messages from the software) or when the data do not contain enough information to estimate all the model parameters (an underidentified model).[7]

In the example, the researchers estimated two- to six-class models with the maximum likelihood estimation method, using Mplus 7.11, aiming to examine the number and size of the classes and their profiles of responses. There was no information on variance-covariance structure specifications, starting values or convergence issues.

Evaluating latent profile analysis models

This step involves choosing the correct model, i.e., determining the optimum number of profiles based on model fit [Table 1]. Fit statistics are used to evaluate each solution. As successive LPA models are estimated, fit information from each model is collected and summarised in a single table for ease of evaluation. A general rule for evaluating models is that multiple fit values, as well as content decision criteria, should be applied when deciding on the final profile solution.[6] The process of model selection might be the most challenging aspect of LPA for nurse researchers.

The most widely used fit indices are the Akaike information criterion (AIC),[14] the Bayesian information criterion (BIC),[15] the sample size-adjusted BIC (SABIC), the Lo-Mendell-Rubin adjusted likelihood ratio test (LMRT),[16] the bootstrapped likelihood ratio test (BLRT)[17] and entropy.[18] The AIC, BIC and SABIC are goodness-of-fit measures used to compare competing models, and lower values indicate a better fit. The BIC and the adjusted BIC are comparatively better indicators of the number of classes than the AIC.[19],[20] The LMRT and BLRT compare the fit of a target model (k-class model) to a model with one less class (i.e., a k-1 class model), and the P values obtained for the LMRT and BLRT indicate whether the target model is statistically better (P < 0.05) or not (P > 0.05). Entropy is a measure of classification accuracy ranging from 0 to 1,[18] with higher values indicating better classification quality. Higher entropy values indicate more precise assignment of individuals to latent profiles, and values >0.90 indicate that the groups are highly discriminative. Practically, because the BLRT greatly increases computing time, experts suggest using the BIC, entropy and the LMRT as guides to identify plausible models, which can then be reanalysed using the BLRT.[20] Also, not all statistical programs currently allow the examination of all the model fit indices. For instance, STATA 16 allows examination using the AIC, BIC and entropy, while the STATA LCA Plugin also performs the BLRT among other indices.
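For concreteness, the information criteria and a commonly used relative-entropy summary can be computed from a model's log-likelihood, number of free parameters and posterior probability matrix (a minimal Python sketch; the log-likelihood, parameter count and posterior values below are invented for illustration, and the SABIC uses the common (n + 2)/24 sample-size adjustment):

```python
import numpy as np

def fit_indices(loglik, n_params, n, post):
    # post: n x K matrix of posterior class probabilities
    aic = -2 * loglik + 2 * n_params
    bic = -2 * loglik + n_params * np.log(n)
    sabic = -2 * loglik + n_params * np.log((n + 2) / 24)  # sample-size-adjusted BIC
    k = post.shape[1]
    # Relative entropy: 1 = perfect classification, values near 0 = chance-level
    ent = 1 - (-(post * np.log(np.clip(post, 1e-12, 1.0))).sum()) / (n * np.log(k))
    return aic, bic, sabic, ent

# Hypothetical values for a 3-profile model fitted to n = 300 cases
post = np.tile([0.95, 0.03, 0.02], (300, 1))
aic, bic, sabic, ent = fit_indices(loglik=-1234.5, n_params=14, n=300, post=post)
```

In practice these values would be read directly from the software output and tabulated across the competing models.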

Content decisions involve a subjective examination of the identified profiles.[4],[6] An important aspect to look for is profile discrimination. If an additional profile is relatively close to another profile in the prior solution (e.g., only minor differences in all indicators) and thereby adds no meaningful new insights, the new profile might not be retained, for reasons of model parsimony [Table 1]. Models with lower entropy values and redundant profiles that are not of any particular theoretical interest could be rejected. Another aspect is examining the sizes of the derived profiles. Any profile that includes <1% of the total sample size or fewer than 25 cases could be rejected.[6] Classes comprising less than 5% of the sample are typically considered spurious, tend to over-fit the data and could fail to replicate in independent datasets.[21] Thus, in combination with fit indices, aspects such as logical patterns in the profiles, distinctness from the other profiles, parsimony and ease of labelling are considered. Further, the decision about the final profile solution should also consider previous theories and findings, as well as the implications for practitioners, especially when different model fit statistics suggest different solutions.

After selecting the best-fit model, an individual's most likely class membership is assigned based on posterior probabilities [Table 1]. As described earlier, each individual in the sample is assigned to the latent profile for which he or she has the largest posterior class probability (called modal class assignment).

In the example presented, the researchers determined the optimum number of latent classes based on several posterior fit statistics: the AIC, BIC, SABIC, LMRT, BLRT and entropy measures. They also subjectively assessed the additional profiles for being qualitatively different. There was a lack of consistency in what the different model fit statistics suggested. For instance, the AIC and BIC values for the five- and six-class solutions were lower than those for the four-class solution, but: (i) the response patterns in the additional classes appeared qualitatively similar; (ii) the five- and six-class solutions were not substantiated by any meaningful underlying theoretical model and (iii) the number of individuals in the additional classes was relatively low. Hence, they considered the four-class model the best-fit model. The entropy of the four-class model (0.821) indicated good classification of clients into classes (approximately 82% classification accuracy). In this way, the researchers chose the most parsimonious and theoretically meaningful model after looking at fit statistics, relevant theory and model parsimony.

Interpreting latent profile analysis results

After determining the best model based on fit indices and the content of the profiles, researchers need to interpret the profiles. First, to assess class homogeneity, the within-class variances for each indicator are examined across the classes and compared to the total overall sample variance. Classes with smaller within-class variances are more homogenous with respect to an item than classes with larger values. Next, to quantify the degree of class separation between two classes with respect to a particular item, a standardised mean difference (SMD) is computed. A large SMD (>2) corresponds to less than 20% overlap in the distributions and indicates a high degree of separation between the two classes with respect to an item. A small SMD (<0.85) corresponds to more than 50% overlap and a low degree of separation between the two classes with respect to an item.[12] Researchers should also examine whether any discrepancy exists between the model-estimated proportion of individuals in a profile and the proportion of individuals modally assigned to that profile. A larger discrepancy between the two proportions indicates larger class assignment errors.
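The SMD computation itself is straightforward (a Python sketch with invented class-specific means and variances; the pooled standard deviation used here, the square root of the average of the two class variances, is one simple pooling choice):

```python
import numpy as np

def smd(mu1, var1, mu2, var2):
    # Standardised mean difference between two classes on one indicator,
    # using a pooled within-class standard deviation
    return abs(mu1 - mu2) / np.sqrt((var1 + var2) / 2)

# Hypothetical class-specific means and variances for one item
high_separation = smd(1.0, 0.5, 3.2, 0.6)   # SMD > 2: < 20% overlap
low_separation = smd(1.0, 1.0, 1.5, 1.0)    # SMD < 0.85: > 50% overlap
```

Comparing the SMD for each indicator across pairs of classes shows which items drive the separation between profiles.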

Finally, the identified profiles are labelled (or named). Naming the profiles is not necessary but is helpful to differentiate between groups and convey what indicators appear meaningful. This process is more descriptive and is done based on the mean values of the indicators for each profile.

In the example provided, Class 1 was the largest class, with 37% of the sample (n = 219), and was characterised by the lowest mean scores on almost all the PAM items [[Figure 2], sourced from the published article].[13] This class was named the secure attachment group. Class 2 comprised 20% of individuals (n = 120) and was characterised by high mean scores on the items relating to insecure-avoidant attachment and low mean scores on the insecure-anxious items. This class was named the insecure-avoidant attachment group. Class 3 comprised 28% of individuals (n = 166) and was characterised by high mean scores on the items relating to insecure-anxious attachment and low mean scores on the insecure-avoidant items. This class was named the insecure-anxious attachment group. Class 4, the smallest class, comprised 14% of individuals (n = 83), was characterised by high mean scores on all items and was labelled the disorganised attachment group. In this way, each distinct profile reflected a particular pattern of responses to the indicator variables, and the researchers labelled the profiles according to these patterns of responses.
Figure 2: Latent class profile plot for the four classes. Reprinted from Psychiatry Research, Volume 247, Bucci, S., Emsley, R., & Berry, K., Attachment in psychosis: A latent profile analysis of attachment styles and association with symptoms in a large psychosis cohort, 243-249, Copyright 2016, with permission from Elsevier


To conclude, in the above example, the latent categorical variable that was unobserved was the attachment style. This variable represented the four profiles of attachment styles which were identified as secure, insecure-anxious, insecure-avoidant and disorganised. Thus, LPA allowed the researchers to identify patterns and model heterogeneity in the study sample by classifying individuals with psychosis into four subgroups, each with distinct patterns of attachment styles.

Advantages of latent profile analysis over other statistical methods for subgroup identification

Specific advantages of LPA compared to traditional, non-latent clustering methods (e.g., k-means clustering, hierarchical clustering) are that: (a) individuals are classified into clusters based on membership probabilities estimated directly from the model; (b) variables may be continuous, categorical (nominal or ordinal), counts or any combination of these; and (c) demographics and other covariates can be used for profile description.[22] Further, LPA does not require a normal distribution of indicators, and indicators can be included in more than one cluster by modelling them as conditionally independent given the clusters.[22] LPA is also considered superior in dealing with methodological challenges of subgroup analysis such as low statistical power, high Type I error rates and limitations associated with higher-order interactions.[4]

  Implications for Nursing Research

A person-centred approach to analysis is of great value in nursing research. If a sample under study contains subgroups, and the variables of interest combine and relate differently to other variables within these subgroups, person-centred strategies such as LPA can be used to explore these response patterns and identify the subgroups. Traditionally, subgroups in nursing research are identified by using a mean or median composite score to divide the sample into high versus low subgroups, or by using percentiles to divide the sample into subgroups for subgroup analyses. The LPA approach provides researchers with a more robust person-centred strategy for identifying subgroups. It must be noted that the processes of estimating and comparing models make LPA an intensive procedure, and a robust sample size is important. Nevertheless, nurse researchers should move beyond traditional methods of subgroup identification and use LPA to explore distinct patterns of responses in a sample of individuals and identify subgroups.

LPA has been used in other sciences; however, its use in nursing has emerged only recently. It can be used to address research questions in different fields of nursing. Although not exhaustive, some hypothetical examples of research questions that could be addressed through LPA include:

  • Do distinct typologies of academic expectations exist among nursing students after completion of their undergraduate program?
  • Do distinct profiles of motivation and learning strategies exist among nursing students that affect their academic achievement?
  • Do distinct career attitudes and career behaviours exist among staff nurses?
  • Do distinct patterns of environmental factors affect physical activity among patients with heart disease?
  • Do distinct profiles of attachment exist among mothers with preterm babies?

Addressing research questions such as these would require nurse researchers to choose the set of indicators carefully, backed by a sound theoretical and clinical basis. Additionally, research applying LPA should be able to show that the latent profiles contribute to the understanding of the constructs and tell us something that we did not know before.[6] LPA is particularly useful because patterns of shared behaviour between and within samples may be missed when researchers conduct inter-individual, variable-centred analyses.[9] Variable-centred analyses assume that the individuals within the sample all belong to a single profile or population, with no differentiation between latent subgroups. Thus, if meaningful differences exist between subgroups of individuals, LPA provides the opportunity to examine the profiles and what predicts, or is predicted by, membership in the different profiles. To conclude, LPA has valuable potential in nursing: it could provide new insights into a particular research concept and offer more nuanced information regarding patterns of responses through the identified profiles. Once profiles are identified, nurses can better examine the impact of targeted assessment and interventions, identify the predictors of subgroup membership and explore the differences in outcomes across the profiles.

  Resources for Further Reading

  1. Masyn KE. Latent class analysis and finite mixture modelling. In: Little TD, editor. The Oxford Handbook of Quantitative Methods. New York: Oxford University Press; 2013. p. 551-611.
  2. Nylund-Gibson K, Choi AY. Ten frequently asked questions about latent class analysis. Transl Issues Psychol Sci 2018;4:440-61. https://doi.org/10.1037/tps0000176

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

  References

Antonisamy B, Premkumar PS, Christopher S. Principles and Practice of Biostatistics-E-Book. New Delhi: Elsevier Health Sciences; 2017.  Back to cited text no. 1
Foster JJ, Barkus E, Yavorsky C. Understanding and Using Advanced Statistics: A Practical Guide for Students. London: Sage Publications; 2005. p. 1-15.  Back to cited text no. 2
Muthén L, Muthén B. Mplus User's Guide. 8th ed. Los Angeles, CA: Muthén & Muthén; 1998-2017.  Back to cited text no. 3
Lanza ST, Rhoades BL. Latent class analysis: An alternative perspective on subgroup analysis in prevention and treatment. Prev Sci 2013;14:157-68.  Back to cited text no. 4
Lazarsfeld PF, Henry NW. Latent Structure Analysis. New York: Houghton Mifflin; 1968.  Back to cited text no. 5
Spurk D, Hirschi A, Wang M, Valero D, Kauffeld S. Latent profile analysis: A review and “how to” guide of its application within vocational behavior research. J Vocat Behav 2020;120:103445.  Back to cited text no. 6
Nylund-Gibson K, Choi AY. Ten frequently asked questions about latent class analysis. Transl Issues Psychol Sci 2018;4:440-61.  Back to cited text no. 7
Williams GA, Kibowski F. Latent class analysis and latent profile analysis. In: Jason LA, Glenwick DS, editors. Handbook of Methodological Approaches to Community-Based Research: Qualitative, Quantitative, and Mixed Methods. New York: Oxford University Press; 2016. p. 143-51.  Back to cited text no. 8
Sterba SK. Understanding linkages among mixture models. Multivariate Behav Res 2013;48:775-815.  Back to cited text no. 9
Dziak JJ, Lanza ST, Tan X. Effect size, statistical power, and sample size requirements for the bootstrap likelihood ratio test in latent class analysis. Struct Equ Modeling 2014;21:534-52.  Back to cited text no. 10
Tein JY, Coxe S, Cham H. Statistical power to detect the correct number of classes in latent profile analysis. Struct Equ Modeling 2013;20:640-57.  Back to cited text no. 11
Masyn KE. 25 latent class analysis and finite mixture modeling. In: Little TD, Nathan PE, editors. The Oxford Handbook of Quantitative Methods. New York: Oxford University Press; 2013. p. 551-611.  Back to cited text no. 12
Bucci S, Emsley R, Berry K. Attachment in psychosis: A latent profile analysis of attachment styles and association with symptoms in a large psychosis cohort. Psychiatry Res 2017;247:243-9.  Back to cited text no. 13
Akaike H. Factor analysis and AIC. Psychometrika 1987;52:317-32.  Back to cited text no. 14
Schwarz G. Estimating the dimension of a model. Ann Stat 1978;6:461-4.  Back to cited text no. 15
Lo Y, Mendell NR, Rubin DB. Testing the number of components in a normal mixture. Biometrika 2001;88:767-78.  Back to cited text no. 16
McLachlan G, Peel D. Finite mixture models. In: McLachlan G, Peel D, editors. Wiley Series in Probability and Statistics, Applied Probability and Statistics. New York: John Wiley & Sons, Inc.; 2000.  Back to cited text no. 17
Wang MC, Deng Q, Bi X, Ye H, Yang W. Performance of the entropy as an index of classification accuracy in latent profile analysis: A Monte Carlo simulation study. Acta Psychol Sin 2017;49:1473-82.  Back to cited text no. 18
Weller BE, Bowen NK, Faubert SJ. Latent class analysis: A guide to best practice. J Black Psychol 2020;46:287-311.  Back to cited text no. 19
Nylund KL, Asparouhov T, Muthén BO. Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Struct Equ Modeling 2007;14:535-69.  Back to cited text no. 20
Kircanski K, Zhang S, Stringaris A, Wiggins JL, Towbin KE, Pine DS, et al. Empirically derived patterns of psychiatric symptoms in youth: A latent profile analysis. J Affect Disord 2017;216:109-16.  Back to cited text no. 21
Ryan CJ, Vuckovic KM, Finnegan L, Park CG, Zimmerman L, Pozehl B, et al. Acute coronary syndrome symptom clusters: Illustration of results using multiple statistical methods. West J Nurs Res 2019;41:1032-55.  Back to cited text no. 22



