Recent advances in autonomous-vehicle perception show a clear shift toward multi-modal datasets and robust sensor fusion. Researchers increasingly combine 4D radar, LiDAR, and camera data to improve perception, particularly in adverse weather and other challenging scenarios. This trend is driven by the need for reliable, accurate scene understanding, which is essential to the safety and efficiency of autonomous driving systems. (A minimal late-fusion sketch appears at the end of this section.)

Notably, datasets that span diverse weather conditions and comprehensive sensor suites are paving the way for more robust algorithms. They support not only the training of perception models but also the benchmarking of existing methods, exposing where future improvement is needed.

There is also growing emphasis on the calibration and efficiency of confidence estimation in LiDAR semantic segmentation, which is vital for real-time operation and safety-critical decisions; a short calibration-metric sketch follows the fusion example below. Overall, the field is converging on perception systems that are more integrated, reliable, and efficient across a wide range of conditions and scenarios.
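To make the fusion trend concrete, here is a minimal sketch of a late-fusion head in PyTorch that concatenates per-modality embeddings before classification. The module name, feature dimensions, and class count are illustrative assumptions, not any specific published architecture.

```python
# A minimal late-fusion sketch in PyTorch. Encoder outputs, dimensions,
# and the class count are hypothetical placeholders.
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Fuses per-modality feature vectors by concatenation plus a small MLP."""

    def __init__(self, radar_dim=64, lidar_dim=128, camera_dim=256, num_classes=10):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(radar_dim + lidar_dim + camera_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, radar_feat, lidar_feat, camera_feat):
        # Concatenate the modality embeddings along the feature axis,
        # then classify the fused representation.
        fused = torch.cat([radar_feat, lidar_feat, camera_feat], dim=-1)
        return self.classifier(fused)

# Example: a batch of 4 samples with per-modality embeddings.
head = LateFusionHead()
logits = head(torch.randn(4, 64), torch.randn(4, 128), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 10])
```

Late fusion is only one design point; many recent systems instead fuse at the feature-map or proposal level, but the concatenation pattern above is the simplest way to illustrate combining radar, LiDAR, and camera signals.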
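On the calibration point, a common way to quantify miscalibration is the Expected Calibration Error (ECE), which compares a model's confidence to its empirical accuracy across confidence bins. The sketch below is a minimal NumPy implementation under illustrative assumptions (equal-width bins, max-softmax confidences) and is not tied to any particular paper discussed here.

```python
# A minimal sketch of Expected Calibration Error (ECE) for per-point
# predictions from a LiDAR semantic segmenter. Binning scheme and toy
# inputs are illustrative assumptions.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE over equal-width confidence bins.

    confidences: per-point max softmax probability, shape (N,)
    correct:     per-point boolean, prediction == label, shape (N,)
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    n = len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in this bin
            conf = confidences[mask].mean()  # mean confidence in this bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece

# Toy example: five point predictions and whether each was correct.
conf = np.array([0.9, 0.8, 0.95, 0.6, 0.7])
hit = np.array([True, True, False, True, False])
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

A well-calibrated segmenter drives ECE toward zero, which is what makes confidence estimates usable for downstream safety-critical decisions.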