Recent work in natural language processing probes the capabilities and limitations of large language models (LLMs) in linguistic analysis, including syntactic parsing, grammatical analysis, and language instruction. One focus is evaluating LLMs' pedagogical grammar competence, with specialized benchmarks and frameworks developed to assess their performance. Another is improving LLMs' parsing capabilities, for instance via self-correction methods that leverage grammar rules to detect and repair parsing errors. LLMs and experimental methods are also being applied to the study of linguistic phenomena such as empty categories and subject islands. Notable papers in this area include:
- CPG-EVAL, which introduces a multi-tiered benchmark for evaluating LLMs' pedagogical grammar competence in Chinese language teaching contexts.
- Self-Correction Makes LLMs Better Parsers, which proposes a grammar-rule-guided self-correction method to improve LLMs' parsing capabilities (a minimal sketch of this pattern follows the list).
- Subject islands do not reduce to construction-specific discourse function, which presents evidence for a subject island effect in various constructions, arguing for an account of islands in terms of abstract syntactic representations.
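
The self-correction approach can be read as a generate-validate-refine loop: the model produces a parse, a rule checker flags violations, and the violations are fed back into the prompt for another attempt. The Python sketch below is a hypothetical illustration of that loop, not the paper's implementation; the function names (`self_correct_parse`, `check_grammar`), the prompt wording, and the toy bracket-balance check are all assumptions made for the example.

```python
# Hypothetical sketch of grammar-guided self-correction for LLM parsing.
# The LLM call is stubbed out; in practice it would wrap a model API.
from typing import Callable, List


def check_grammar(parse: str) -> List[str]:
    """Toy validator: report unbalanced brackets in a bracketed parse.

    A real system would check the parse against actual grammar rules.
    """
    violations = []
    depth = 0
    for ch in parse:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                violations.append("unmatched ')'")
                depth = 0
    if depth > 0:
        violations.append(f"{depth} unclosed '('")
    return violations


def self_correct_parse(sentence: str,
                       llm: Callable[[str], str],
                       max_rounds: int = 3) -> str:
    """Generate a parse, then re-prompt with rule violations until clean."""
    parse = llm(f"Parse into a bracketed tree: {sentence}")
    for _ in range(max_rounds):
        violations = check_grammar(parse)
        if not violations:
            break  # parse passes all grammar checks
        feedback = "; ".join(violations)
        parse = llm(f"Your parse `{parse}` violates: {feedback}. "
                    f"Produce a corrected parse for: {sentence}")
    return parse


if __name__ == "__main__":
    # Stub LLM: returns a broken parse first, then a repaired one.
    responses = iter(["(S (NP dogs) (VP bark)",
                      "(S (NP dogs) (VP bark))"])
    print(self_correct_parse("dogs bark", lambda prompt: next(responses)))
```

In a real pipeline, the stubbed `llm` callable would wrap an actual model call, and `check_grammar` would encode the grammar rules the self-correction method relies on rather than a simple bracket-balance check.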