The field of human-AI interaction is moving toward a deeper understanding of the complex dynamics between humans and large language models (LLMs). Recent studies highlight the roles of trust and bias in these interactions, showing that LLMs can develop trust in humans but can also exhibit biases and aversions. Research is increasingly focused on how LLMs weigh human input, make decisions, and interact with people in contexts such as decision-making and clinical triage. Noteworthy papers in this regard include "Investigating LLMs in Clinical Triage: Promising Capabilities, Persistent Intersectional Biases", which found that LLMs exhibit sex-based differences in clinical triage decisions, and "A closer look at how large language models trust humans: patterns and biases", which found that LLM trust development broadly resembles human trust development but is also biased by demographic variables.