The field of artificial intelligence is seeing significant progress in neurosymbolic learning and geometric reasoning. Researchers are combining deep learning with symbolic reasoning to improve data efficiency, interpretability, and generalization. Recent work has focused on frameworks that harness GPUs to accelerate neurosymbolic learning, making tractable problems that were previously infeasible. There is also growing interest in geometric reasoning, with studies showing that graph neural networks and transformers can learn and reason about geometric constraints.

Noteworthy papers include:

- Lobster, a unified framework for neurosymbolic learning that achieves an average speedup of 5.3x over state-of-the-art frameworks.
- CTSketch, a scalable neurosymbolic learning algorithm that extends the range of problems tractable on current hardware.
- GEOPARD, a transformer-based architecture that predicts articulation from a single static snapshot of a 3D shape, yielding state-of-the-art results in articulation inference.

These advances have implications for fields from computer vision to natural language processing, and are expected to shape the development of more capable AI systems.
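To make the combination of deep learning and symbolic reasoning concrete, here is a minimal illustrative sketch in Python (not the API of Lobster or CTSketch, whose interfaces are not described here). In the classic MNIST-sum setup, a neural classifier outputs probability distributions over digit symbols, and a symbolic program (here, addition) is evaluated by marginalizing over all symbol assignments; this marginalization is the step a neurosymbolic framework differentiates through and accelerates on GPUs.

```python
import numpy as np

def sum_distribution(p_a, p_b):
    """Distribution over a + b given two independent distributions over
    digits 0-9. Marginalizing over all digit pairs is the 'symbolic'
    inference step that neurosymbolic frameworks make differentiable."""
    p_sum = np.zeros(19)  # possible sums: 0..18
    for a in range(10):
        for b in range(10):
            p_sum[a + b] += p_a[a] * p_b[b]
    return p_sum

# Toy stand-ins for neural network outputs: the classifier is fairly
# confident the two input images show the digits 3 and 4.
p_a = np.full(10, 0.01); p_a[3] = 0.91
p_b = np.full(10, 0.01); p_b[4] = 0.91

p = sum_distribution(p_a, p_b)
print(p.argmax())  # most likely sum
```

In a real framework the loop over symbol assignments grows exponentially with the number of inputs, which is why GPU-parallel and sketching-based approaches matter at scale.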