Recent advances in radar-based perception have substantially improved both indoor sensing and autonomous driving applications. In multi-view radar detection, transformer architectures have yielded measurable gains in object detection and instance segmentation accuracy by addressing challenges specific to the multi-view setting, such as depth prioritization and radar-to-camera transformations, producing more robust and reliable systems. In Synthetic Aperture Radar (SAR) target detection, physics-guided learning paradigms incorporate prior knowledge of target characteristics to strengthen feature representation and instance perception, improving fine-grained classification. In the Bird's-Eye-View (BEV) setting, resource-efficient fusion networks that combine camera images with raw radar data improve object detection while balancing accuracy against computational cost. Together, these developments make radar-based perception a more viable and powerful tool across a wide range of applications.
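To make the camera-radar fusion idea concrete, the sketch below shows one minimal way such a BEV fusion network can be structured: a lightweight encoder for rasterized radar returns, channel-wise concatenation with camera features already lifted to the BEV grid, and a per-cell detection head. This is an illustrative sketch, not the architecture of any specific cited method; the class name `SimpleBEVFusion`, the channel sizes, and the assumption that both modalities share a common BEV grid are ours.

```python
# Hypothetical minimal camera-radar BEV fusion sketch (PyTorch).
# Assumes camera features are already lifted to the same BEV grid as the radar.
import torch
import torch.nn as nn

class SimpleBEVFusion(nn.Module):
    def __init__(self, cam_channels=64, radar_channels=32,
                 fused_channels=64, num_classes=10):
        super().__init__()
        # Lightweight radar branch: raw radar BEV tensor -> feature map.
        self.radar_encoder = nn.Sequential(
            nn.Conv2d(radar_channels, fused_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(fused_channels),
            nn.ReLU(inplace=True),
        )
        # Fusion: concatenate camera and radar BEV features, mix with a 1x1 conv.
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + fused_channels, fused_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Per-cell classification head (box regression omitted for brevity).
        self.head = nn.Conv2d(fused_channels, num_classes, kernel_size=1)

    def forward(self, cam_bev, radar_bev):
        # cam_bev:   (B, cam_channels,   H, W) camera features in BEV
        # radar_bev: (B, radar_channels, H, W) rasterized raw radar returns
        radar_feat = self.radar_encoder(radar_bev)
        fused = self.fuse(torch.cat([cam_bev, radar_feat], dim=1))
        return self.head(fused)

# Usage: fuse a 128x128 BEV grid from both modalities.
model = SimpleBEVFusion()
cam = torch.randn(2, 64, 128, 128)
radar = torch.randn(2, 32, 128, 128)
logits = model(cam, radar)   # (2, 10, 128, 128) per-cell class scores
```

Keeping the radar branch shallow reflects the resource-efficiency motivation above: raw radar tensors are comparatively low-resolution, so most of the representational capacity can remain in the camera branch while radar contributes complementary range and velocity cues.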