The field of human-AI interaction and generative models is evolving rapidly, with a focus on improving the trustworthiness and effectiveness of AI systems. Recent studies highlight the importance of evaluating the creative capabilities of language models: benchmarks such as NoveltyBench assess whether a model can produce outputs that are both diverse and high quality across repeated generations. The use of AI systems in search and recommendation tasks is also being explored, with an emphasis on understanding how users interact with and come to trust these systems. Notably, research has shown that providing reference links and citations can increase trust in AI-generated search results, while highlighting uncertainty can decrease it. Furthermore, holistic evaluation frameworks for recommender systems powered by generative models are needed to ensure responsible deployment. Overall, the field is moving toward a more nuanced understanding of the complex interactions between humans and AI systems, with a focus on promoting transparency, accountability, and creativity.

Noteworthy papers include:

- Surveying Professional Writers on AI: provides insights into the adoption of AI-driven tools among professional writers.
- NoveltyBench: Evaluating Creativity and Diversity in Language Models: introduces a benchmark for evaluating the creative capabilities of language models.
- Human Trust in AI Search: investigates the factors that influence human trust in AI-generated search results.
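To make the diversity side of such evaluations concrete, the sketch below computes a simple distinct-generation count by greedily grouping sampled outputs into equivalence classes under a similarity threshold. This is a minimal illustrative metric, not NoveltyBench's actual protocol: the `similar` heuristic and the 0.8 threshold are assumptions made for this example.

```python
from difflib import SequenceMatcher


def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Heuristic equivalence check: treat two generations as the same
    idea when their character-level similarity ratio is high.
    (Illustrative stand-in for a stronger equivalence judgment.)"""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def distinct_generations(samples: list[str]) -> int:
    """Greedily assign each sample to an existing equivalence class or
    open a new one; the number of classes is a crude diversity score."""
    representatives: list[str] = []
    for s in samples:
        if not any(similar(s, r) for r in representatives):
            representatives.append(s)
    return len(representatives)


if __name__ == "__main__":
    # Four sampled completions for one prompt; a model that collapses
    # to near-identical outputs scores low on this diversity metric.
    samples = [
        "The knight rode north at dawn.",
        "The knight rode north at dawn!",
        "A dragon slept beneath the lake.",
        "The knight set out northward at first light.",
    ]
    print(distinct_generations(samples))  # -> 3: the two near-duplicates merge
```

A full benchmark would replace the surface string-similarity heuristic with a stronger notion of equivalence and would also score the quality of each output, so that diversity is not rewarded at the expense of usefulness.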