Coh-Metrix

Coh-Metrix is a program that uses natural language processing to analyze discourse. It calculates a large set of linguistic indices related to various aspects of language that can be used to determine the quality, readability, or other specific properties of a written or spoken text. The system analyzes texts at multiple levels in order to provide a multi-dimensional perspective on a text. For instance, it measures simple indices such as word frequency and sentence length as well as more complex indices such as cohesion and syntactic complexity. Coh-Metrix has been used to analyze texts for multiple educational tutoring systems and for a variety of purposes.

To request access to the Coh-Metrix desktop tool, please fill out this form.

T.E.R.A. (Text Ease and Readability Assessor) is a tool that uses the Coh-Metrix program to analyze a text and provide measures of text easability and readability.

T.E.R.A. is currently available at the following link: T.E.R.A Common Core
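As a rough illustration of what a surface readability formula looks like, the classic Flesch-Kincaid Grade Level can be computed from sentence, word, and syllable counts alone. The sketch below is a simplified, hypothetical example rather than the scoring model used by T.E.R.A. or Coh-Metrix, and its vowel-group syllable counter is only an approximation.

    import re

    def count_syllables(word):
        """Very rough syllable estimate: count vowel groups (illustrative only)."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        """Classic Flesch-Kincaid Grade Level computed from surface counts."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * len(words) / len(sentences)
                + 11.8 * syllables / len(words)
                - 15.59)

    print(round(flesch_kincaid_grade(
        "The cat sat on the mat. It was warm there."), 2))

Formulas of this kind rely only on surface features; Coh-Metrix and T.E.R.A. supplement them with deeper indices such as cohesion.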

Coh-Metrix measures

Coh-Metrix provides the user with descriptive indices (such as the length of words, sentences, and paragraphs) related to the text’s basic characteristics. It also provides numerous indices that assess specific factors of text quality and readability.
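For illustration only, the toy sketch below computes a few descriptive indices of this kind (mean word length and mean sentence length) together with a naive adjacent-sentence word-overlap score as a crude stand-in for cohesion. It is a simplified assumption, not the actual Coh-Metrix implementation, which draws on syntactic parsing, latent semantic analysis, and other natural language processing resources.

    import re

    def descriptive_indices(text):
        """Toy surface indices: mean word length, mean sentence length,
        and mean word overlap between adjacent sentences."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
        words = [w for sent in tokenized for w in sent]

        # Share of each sentence's words that also appear in the previous sentence.
        overlaps = [len(set(prev) & set(curr)) / max(1, len(set(curr)))
                    for prev, curr in zip(tokenized, tokenized[1:])]

        return {
            "mean_word_length": sum(len(w) for w in words) / len(words),
            "mean_sentence_length": len(words) / len(sentences),
            "adjacent_sentence_overlap": sum(overlaps) / len(overlaps) if overlaps else 0.0,
        }

    print(descriptive_indices(
        "The engine failed. The engine was old. Repairs were costly."))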

Validation of Coh-Metrix

Various studies have validated Coh-Metrix as a powerful tool for analyzing multiple components of discourse (see McNamara, Graesser, McCarthy, & Cai, 2014). For example, McNamara and colleagues (2006) compared high- and low-cohesion versions of texts and showed that cohesion-related indices are predictive of text readability. Duran and colleagues (2006) found that Coh-Metrix temporal indices predicted psychological measures of time in texts from different domains (e.g., narrative, history, and science). Crossley and colleagues (2007) found that simplified texts (as opposed to authentic texts) are more difficult for beginning second language learners to comprehend because their modified syntactic structure depends more heavily on noun phrases and lacks causal associations. More recently, McNamara et al. (2015) examined how Coh-Metrix, in conjunction with other tools such as the Writing Assessment Tool (WAT) and Linguistic Inquiry and Word Count (LIWC), could be used for automated essay scoring (AES). Models built on these NLP tools achieved 55% exact agreement and 92% adjacent agreement between predicted scores and human ratings.
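For readers unfamiliar with these agreement measures: a predicted score is typically counted as exact when it matches the human rating and as adjacent when it falls within one point of it on the scoring scale. The short sketch below shows how such rates can be computed; the essay scores in it are invented for illustration.

    def scoring_agreement(predicted, human):
        """Exact and adjacent (within one point) agreement rates."""
        pairs = list(zip(predicted, human))
        exact = sum(p == h for p, h in pairs) / len(pairs)
        adjacent = sum(abs(p - h) <= 1 for p, h in pairs) / len(pairs)
        return exact, adjacent

    # Hypothetical scores on a 1-6 essay rubric.
    exact, adjacent = scoring_agreement([4, 3, 5, 2, 4], [4, 4, 5, 1, 2])
    print(f"exact = {exact:.0%}, adjacent = {adjacent:.0%}")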

References/further reading

Duran, N., McCarthy, P.M., Graesser, A.C., & McNamara, D.S. (2006). Using Coh-Metrix temporal indices to predict psychological measures of time. In R. Sun & N. Miyake (Eds.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (pp. 190-195). Austin, TX: Cognitive Science Society.

Bellissens, C., Jeuniaux, P., Duran, N.D., & McNamara, D.S. (2010). A text relatedness and dependency computational model: Using Latent Semantic Analysis and Coh-Metrix to predict self-explanation quality. Studia Informatica Universalis, 8(1), 85-125.

Crossley, S.A., Louwerse, M., McCarthy, P.M., & McNamara, D.S. (2007). A linguistic analysis of simplified and authentic texts. Modern Language Journal, 91, 15-30. 

Crossley, S.A., & McNamara, D.S. (2011). Understanding expert ratings of essay quality: Coh-Metrix analyses of first and second language writing. International Journal of Continuing Engineering Education and Life-Long Learning, 21, 170-191.

Crossley, S.A., Salsbury, T., & McNamara, D.S. (2009). Measuring L2 lexical growth using hypernymic relationships. Language Learning, 59, 307-334.

McNamara, D.S., Crossley, S.A., Roscoe, R.D., Allen, L.K., & Dai, J. (2015). Hierarchical classification approach to automated essay scoring. Assessing Writing, 23, 35-59.

McNamara, D.S., Louwerse, M.M., McCarthy, P.M., & Graesser, A.C. (2010). Coh-Metrix: Capturing linguistic features of cohesion. Discourse Processes, 47, 292-330.

McNamara, D.S., Ozuru, Y., Graesser, A.C., & Louwerse, M. (2006). Validating Coh-Metrix. In R. Sun & N. Miyake (Eds.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (pp. 573-578). Austin, TX: Cognitive Science Society.
 

Please also refer to the 2014 book on Coh-Metrix:

McNamara, D.S., Graesser, A.C., McCarthy, P., & Cai, Z. (2014). Automated evaluation of text and discourse with Coh-Metrix. Cambridge: Cambridge University Press.

For more publications on Coh-Metrix please visit the Publications page.