Recent work at the intersection of automated fact-checking and large language models (LLMs) has shifted toward more nuanced, fine-grained approaches. Evaluation frameworks increasingly move beyond traditional metrics and closed knowledge sources, which are limited in how well they assess the quality and reliability of generated content. One visible trend is benchmarks that support more granular verification, decomposing claims into sub-claims that are assessed individually. Another is the construction of comprehensive, language-specific benchmarks, especially for low-resource languages, so that LLMs can be evaluated across diverse linguistic contexts. Integrating LLMs with modern search engines for multi-hop evidence pursuit is also emerging as a promising direction: complex claims are verified by iteratively retrieving the evidence that is still missing. Notably, new benchmarks now pair long-context mention resolution with fine-grained fact verification, underscoring the need for models with strong referential capabilities and accurate information synthesis. Together, these developments point toward more sophisticated, context-aware evaluation methodologies for LLMs and automated fact-checking.
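
To make the decompose-then-pursue pattern concrete, the following is a minimal sketch of how sub-claim verification with iterative, multi-hop evidence retrieval might be organized. The `decompose`, `search`, and `assess` callables are hypothetical placeholders (standing in for an LLM decomposition prompt, a search-engine call, and an LLM judge, respectively); none of them correspond to a specific system described above.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

# Hypothetical interfaces (assumptions, not APIs from any cited system):
Decomposer = Callable[[str], List[str]]                     # claim -> sub-claims
Searcher = Callable[[str], List[str]]                       # query -> evidence snippets
Assessor = Callable[[str, List[str]], Tuple[Optional[str], str]]  # -> (verdict or None, follow-up query)


@dataclass
class SubClaimResult:
    sub_claim: str
    verdict: Optional[str]   # e.g. "supported" / "refuted", or None if undecided
    evidence: List[str]


def verify_claim(claim: str,
                 decompose: Decomposer,
                 search: Searcher,
                 assess: Assessor,
                 max_hops: int = 3) -> List[SubClaimResult]:
    """Fine-grained verification: split a claim into sub-claims, then
    iteratively pursue the evidence still missing for each one (multi-hop)."""
    results: List[SubClaimResult] = []
    for sub_claim in decompose(claim):
        evidence: List[str] = []
        query = sub_claim            # first hop: query the sub-claim itself
        verdict: Optional[str] = None
        for _ in range(max_hops):
            evidence.extend(search(query))
            verdict, follow_up = assess(sub_claim, evidence)
            if verdict is not None:  # enough evidence to reach a verdict
                break
            query = follow_up        # next hop targets what is still unknown
        results.append(SubClaimResult(sub_claim, verdict, evidence))
    return results
```

In this sketch, the per-sub-claim loop is what gives the approach its fine granularity, while the follow-up query produced by the assessor is what turns single-shot retrieval into multi-hop evidence pursuit.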