Recent developments in large language models (LLMs) have substantially advanced both uncertainty quantification and distributional semantics. A notable trend is enabling LLMs to express and estimate uncertainty more accurately, which is crucial for their reliability in high-stakes applications. This is being pursued through methods such as refinement-based data collection frameworks and two-stage training pipelines that improve a model's ability to express uncertainty in long-form responses. There is also growing attention to mitigating biases in uncertainty estimation, such as sycophancy, by accounting for both model and user uncertainty. In parallel, probabilistic programming and term rewriting are seeing new approaches for modeling and computing probabilities within these systems. Finally, fine-tuning LLMs with semantic entropy, so that they abstain from questions beyond their capabilities, is proving effective at reducing hallucinations: semantic entropy measures how much a model's sampled answers disagree in meaning, and a high value signals that the model should abstain rather than guess. Together, these developments are paving the way for more trustworthy and reliable AI systems, particularly in contexts that require nuanced understanding and accurate uncertainty representation.
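
As a rough illustration of the semantic-entropy-based abstention idea, the sketch below clusters sampled answers with a caller-supplied equivalence check and abstains when the entropy over those clusters is high. This is a minimal sketch, not the published method's implementation: the function names (`semantic_entropy`, `should_abstain`), the toy exact-match equivalence predicate, and the 0.7 threshold are illustrative assumptions; in practice the clustering is typically done with an NLI-style bidirectional-entailment model.

```python
import math

def semantic_entropy(samples, equivalent):
    """Entropy over clusters of semantically equivalent sampled answers.

    samples:    list of answer strings sampled from the model for one question
    equivalent: callable(a, b) -> bool deciding semantic equivalence
                (a stand-in here for an NLI-based bidirectional-entailment check)
    """
    clusters = []  # each cluster collects answers judged equivalent to its first member
    for s in samples:
        for c in clusters:
            if equivalent(s, c[0]):
                c.append(s)
                break
        else:
            clusters.append([s])

    n = len(samples)
    # Probability mass of each semantic cluster, estimated from sample counts.
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

def should_abstain(samples, equivalent, threshold=0.7):
    """Abstain when the sampled answers disagree semantically (high entropy)."""
    return semantic_entropy(samples, equivalent) > threshold

if __name__ == "__main__":
    # Toy equivalence: case-insensitive exact match (assumed placeholder).
    eq = lambda a, b: a.strip().lower() == b.strip().lower()

    consistent = ["Paris", "paris", "Paris", "Paris"]
    scattered = ["Paris", "Lyon", "Marseille", "Nice"]

    print(semantic_entropy(consistent, eq))  # ~0.0 -> answer confidently
    print(should_abstain(scattered, eq))     # True -> abstain
```

In this framing, the entropy is low when the model's samples agree on one meaning even if their surface forms differ, and it approaches log(n) when every sample lands in its own cluster, which is exactly the regime where abstention-oriented fine-tuning would steer the model away from answering.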