Advancements in LLM and Transformer-Based Models for Industrial and Material Applications

Recent developments in this research area highlight a significant shift toward leveraging large language models (LLMs) and transformer-based architectures for a variety of tasks, from industrial process prediction to material property estimation and fault diagnosis in machinery. These advances are characterized by innovative approaches that address long-standing challenges such as high development cost, poor robustness, training instability, and lack of interpretability.

Notably, integrating domain-specific knowledge retrieval with LLMs has emerged as a powerful strategy for improving model performance and flexibility. Applying pre-trained language models to text-based tasks likewise demonstrates remarkable adaptability and efficiency, reducing the need for extensive retraining and complex feature engineering. The field is also moving toward multimodal and lightweight models that deliver strong performance at lower computational cost, and toward open-vocabulary frameworks for activity recognition that promise better generalization to unseen activities and modalities.
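The knowledge-retrieval strategy mentioned above follows a general retrieve-then-prompt pattern: score a small domain knowledge base against the task query, then prepend the best-matching entries to the LLM prompt. The sketch below is a minimal illustration of that pattern, not the method of any specific paper; the knowledge-base entries, the bag-of-words scorer, and the prompt template are all placeholder assumptions.

```python
from collections import Counter
import math

# Hypothetical domain knowledge base (placeholder entries, not from the papers).
KNOWLEDGE_BASE = [
    "Debutanizer column: butane content correlates with tray temperature and reflux flow.",
    "Induction motors: broken rotor bars produce sideband frequencies near the supply frequency.",
    "Semiconductor band gaps typically narrow as average atomic number increases.",
]

def _vector(text: str) -> Counter:
    """Bag-of-words term counts, lowercased (a crude stand-in for a learned embedder)."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base entries most similar to the query."""
    q = _vector(query)
    return sorted(KNOWLEDGE_BASE, key=lambda doc: _cosine(q, _vector(doc)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved domain knowledge to the task before it is sent to an LLM."""
    context = "\n".join(retrieve(query))
    return f"Domain knowledge:\n{context}\n\nTask: {query}"
```

In the actual systems, the bag-of-words scorer would be replaced by a learned embedding model and the assembled prompt passed to an LLM; the retrieval-then-prompt structure is the same.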

Noteworthy Papers

  • A Soft Sensor Method with Uncertainty-Awareness and Self-Explanation Based on Large Language Models Enhanced by Domain Knowledge Retrieval: Introduces a novel framework leveraging LLMs for soft sensor modeling, achieving state-of-the-art predictive performance and robustness.
  • Text to Band Gap: Pre-trained Language Models as Encoders for Semiconductor Band Gap Prediction: Demonstrates the effectiveness of using a pre-trained RoBERTa model for semiconductor band gap prediction, significantly reducing the need for extensive retraining.
  • A Multimodal Lightweight Approach to Fault Diagnosis of Induction Motors in High-Dimensional Dataset: Presents a transfer-learning-based lightweight DL model for diagnosing induction motor faults, showcasing superior performance and computational efficiency.
  • A Text-Based Knowledge-Embedded Soft Sensing Modeling Approach for General Industrial Process Tasks Based on Large Language Model: Proposes a general framework for soft sensing modeling that integrates natural language modalities, overcoming limitations of pure structured data models.
  • Initial Findings on Sensor based Open Vocabulary Activity Recognition via Text Embedding Inversion: Introduces an open vocabulary framework for human activity recognition, offering robust generalization across unseen activities and modalities.
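The open-vocabulary idea in the last paper above can be sketched as matching in a shared embedding space: a sensor window is projected into the same space as label-text embeddings and assigned the nearest label, so an unseen activity needs only a text embedding of its name, not retraining. The vectors below are hand-made placeholders; a real system would use a pretrained text encoder and a learned sensor-to-text projection.

```python
import math

# Placeholder text embeddings for activity labels (in a real system these would
# come from a pretrained text encoder; the values here are illustrative only).
LABEL_EMBEDDINGS = {
    "walking": [0.9, 0.1, 0.0],
    "running": [0.8, 0.6, 0.0],
    "sitting": [0.0, 0.1, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def classify(sensor_embedding: list[float], labels: dict[str, list[float]]) -> str:
    """Assign the label whose text embedding is most similar to the sensor embedding."""
    return max(labels, key=lambda lbl: cosine(sensor_embedding, labels[lbl]))

# A sensor window projected into the shared embedding space (placeholder vector).
z = [0.85, 0.15, 0.05]
print(classify(z, LABEL_EMBEDDINGS))  # prints "walking"

# An unseen activity is supported by simply embedding its label text:
LABEL_EMBEDDINGS["cycling"] = [0.5, 0.8, 0.1]
```

The same nearest-embedding structure underlies the band-gap paper's use of a pre-trained RoBERTa encoder: text is mapped to a fixed vector and a lightweight head is fit on top, avoiding full retraining.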

Sources

A Soft Sensor Method with Uncertainty-Awareness and Self-Explanation Based on Large Language Models Enhanced by Domain Knowledge Retrieval

Text to Band Gap: Pre-trained Language Models as Encoders for Semiconductor Band Gap Prediction

A Multimodal Lightweight Approach to Fault Diagnosis of Induction Motors in High-Dimensional Dataset

A Text-Based Knowledge-Embedded Soft Sensing Modeling Approach for General Industrial Process Tasks Based on Large Language Model

Pre-Trained Large Language Model Based Remaining Useful Life Transfer Prediction of Bearing

Evaluation of Artificial Intelligence Methods for Lead Time Prediction in Non-Cycled Areas of Automotive Production

Initial Findings on Sensor based Open Vocabulary Activity Recognition via Text Embedding Inversion

Benchmarking Classical, Deep, and Generative Models for Human Activity Recognition
