Computational tools have become increasingly prevalent in the analysis and evaluation of linguistic dimensions of second language (L2) writing in both pedagogy and research. Despite their widespread use, little research has investigated the alignment between computationally derived linguistic features and human assessments of academic writing quality. To fill this gap, this study examined the extent to which computational indices of syntactic and lexical features predict human ratings of narrative writing quality. A total of 104 essays written by Iranian undergraduate learners of English as a Foreign Language (EFL) were analyzed using three computational tools: Coh-Metrix, VocabProfiler, and the Tool for the Automatic Analysis of Cohesion (TAACO). Correlation and regression analyses revealed that computational indices of lexical features were significant predictors of human-rated writing quality, with lexical diversity and sophistication emerging as the strongest. Manual coding of syntactic complexity, by contrast, proved a stronger predictor of writing quality than computational measures of the same feature. These findings underscore the value of computational tools in L2 writing assessment while highlighting their limitations in capturing the multifaceted nature of writing quality. The results also point to an overemphasis on infrequent and diverse vocabulary in current analytic writing rubrics, suggesting that these rubrics should be revised to reflect a more comprehensive view of lexical proficiency in L2 writing pedagogy and assessment.