Recent advances in this area have focused predominantly on leveraging large language models (LLMs) and graph neural networks (GNNs) to automate and enhance knowledge extraction and representation tasks. A significant trend is the integration of LLMs with specialized frameworks to construct and enrich knowledge graphs (KGs), particularly in domains such as cybersecurity and e-commerce. These approaches address the challenges of data scarcity and non-isomorphic graph structures through optimized in-context learning and hierarchical entity alignment techniques. There is also a notable shift toward automating the extraction of complex scientific information from full-text documents, aided by the release of new datasets designed to capture fine-grained entity interactions. LLM-based ontology learning has likewise seen innovation, with methods developed to model entire subcomponents of ontologies, improving both semantic accuracy and structural integrity. Overall, the field is moving toward more efficient, scalable, and adaptable solutions that reduce manual effort and improve the precision of knowledge extraction and representation.
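To make the in-context-learning idea concrete, the following is a minimal, illustrative sketch and not the method of any cited paper: a few hand-written demonstrations are packed into a prompt that asks an LLM to emit (subject, relation, object) triples for a cybersecurity sentence, and the model's JSON reply is parsed into candidate KG edges. All relation labels, example sentences, and function names here are assumptions made for illustration.

```python
# Illustrative sketch of few-shot (in-context learning) prompting for
# cybersecurity triple extraction. Demonstrations and relation labels are
# invented for this example; a real system would select demonstrations
# dynamically (e.g., by embedding similarity to the input text).
import json
from typing import List, Tuple

DEMONSTRATIONS: List[Tuple[str, List[dict]]] = [
    (
        "APT29 delivered the WellMess malware through spear-phishing emails.",
        [
            {"subject": "APT29", "relation": "uses", "object": "WellMess"},
            {"subject": "WellMess", "relation": "delivered_via", "object": "spear-phishing"},
        ],
    ),
    (
        "The Log4Shell vulnerability affects Apache Log4j 2.x.",
        [{"subject": "Log4Shell", "relation": "affects", "object": "Apache Log4j 2.x"}],
    ),
]


def build_prompt(text: str) -> str:
    """Assemble a few-shot prompt asking the model to emit JSON triples."""
    parts = ["Extract cybersecurity (subject, relation, object) triples as a JSON list."]
    for demo_text, demo_triples in DEMONSTRATIONS:
        parts.append(f"Text: {demo_text}")
        parts.append(f"Triples: {json.dumps(demo_triples)}")
    parts.append(f"Text: {text}")
    parts.append("Triples:")
    return "\n".join(parts)


def parse_triples(llm_output: str) -> List[dict]:
    """Parse the model's JSON answer; return an empty list on malformed output."""
    try:
        triples = json.loads(llm_output)
    except json.JSONDecodeError:
        return []
    if not isinstance(triples, list):
        return []
    required = {"subject", "relation", "object"}
    return [t for t in triples if isinstance(t, dict) and required <= t.keys()]


if __name__ == "__main__":
    prompt = build_prompt("Lazarus Group exploited CVE-2021-44228 to deploy a backdoor.")
    print(prompt)  # send `prompt` to whatever LLM client is in use, then call parse_triples(reply)
```

The parsed triples would then be merged into the growing graph, for example by linking entity mentions that normalize to the same node, which is where alignment techniques such as the hierarchical entity alignment mentioned above come into play.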
Leveraging LLMs and GNNs for Knowledge Automation
Sources
CTINEXUS: Leveraging Optimized LLM In-Context Learning for Constructing Cybersecurity Knowledge Graphs Under Data Scarcity
SciER: An Entity and Relation Extraction Dataset for Datasets, Methods, and Tasks in Scientific Documents
GraphLSS: Integrating Lexical, Structural, and Semantic Features for Long Document Extractive Summarization