Recent work in robotics and AI shows a clear shift toward more human-like, adaptable systems. One key development is the integration of advanced conversational AI into embodied robots, enabling tasks such as interviews conducted with human-like fluency and attentiveness.

Reinforcement learning for robotic manipulation is also advancing: reward machines are now being inferred directly from visual demonstrations, improving the ability to learn complex tasks over extended time horizons. In parallel, multimodal instruction-following agents are leveraging weak supervision and latent variable models to follow diverse instructions across varied environments.

Sim2real transfer is seeing innovative applications in industrial automation, notably forklift operation, where policies trained entirely in simulation are deployed zero-shot on real hardware. Human-like manipulation is another area of progress: inverse reinforcement learning is being used to imitate human actions more faithfully, improving compatibility with human workers in industrial settings. Combining differentiable multiphysics simulation with novel reinforcement learning algorithms is likewise expanding the range of tasks robots can learn and control, including those involving deformable objects.

Finally, there is a strong emphasis on scalable, adaptable humanoid robot control, which leverages large-scale datasets of human motion to improve generalization. Two papers stand out for their innovative approaches: the use of android robots to conduct human-like interviews, and a vision-based forklift learning system achieving zero-shot sim2real transfer.
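To make the reward-machine idea concrete, the following is a minimal sketch of the structure being inferred: a finite-state automaton whose transitions fire on high-level events and whose transitions carry rewards, so long-horizon tasks decompose into stages. The event labels ("grasped", "placed") and the two-step task are illustrative assumptions, not taken from any specific paper.

```python
class RewardMachine:
    """A finite-state reward machine: (state, event) pairs map to a
    successor state and a scalar reward. Unknown events leave the
    state unchanged and yield zero reward."""

    def __init__(self, transitions, rewards, initial_state):
        self.transitions = transitions  # dict: (state, event) -> next state
        self.rewards = rewards          # dict: (state, event) -> reward
        self.state = initial_state

    def step(self, event):
        key = (self.state, event)
        reward = self.rewards.get(key, 0.0)
        self.state = self.transitions.get(key, self.state)
        return reward


# Hypothetical two-stage pick-and-place task: reward is granted only
# when the full event sequence completes, encoding long-horizon credit.
rm = RewardMachine(
    transitions={("u0", "grasped"): "u1", ("u1", "placed"): "u2"},
    rewards={("u1", "placed"): 1.0},
    initial_state="u0",
)
print(rm.step("grasped"))  # 0.0 — progress, but no reward yet
print(rm.step("placed"))   # 1.0 — task complete
```

Inferring such a machine from visual demonstrations amounts to recovering the event labels and transition graph from observed trajectories; the sketch above only shows the target representation.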
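Zero-shot sim2real transfer of the kind described for forklift operation commonly relies on randomizing simulator parameters during training so the real system looks like just another sample. The sketch below shows that pattern only; the parameter names and ranges are invented for illustration and do not come from the forklift system summarized above.

```python
import random


def sample_sim_params(rng):
    """Draw one randomized simulator configuration. Each rollout uses a
    fresh sample so the trained policy cannot overfit to a single
    simulated dynamics/appearance setting. Ranges are assumptions."""
    return {
        "load_mass_scale": rng.uniform(0.8, 1.2),   # payload mass multiplier
        "wheel_friction": rng.uniform(0.5, 1.5),    # ground contact friction
        "camera_jitter": rng.uniform(0.0, 0.05),    # visual noise magnitude
        "light_intensity": rng.uniform(0.7, 1.3),   # rendering brightness
    }


rng = random.Random(0)
for episode in range(3):
    params = sample_sim_params(rng)
    # In a full setup: env.reset() configured with `params`, then a
    # policy rollout; here we only demonstrate the sampling loop.
    print(episode, params)
```

The design intent is that a policy trained across the randomized distribution treats the real forklift's dynamics and camera feed as in-distribution, enabling deployment without real-world fine-tuning.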