FaithLens: Detecting and Explaining Faithfulness Hallucination
arXiv:2512.20182v4 Announce Type: replace-cross Abstract: Recognizing whether outputs from large language models (LLMs) contain faithfulness hallucination is crucial for real-world…
arXiv:2506.20020v2 Announce Type: replace Abstract: Reasoning in humans is prone to biases driven by underlying motivations such as identity protection, that…
arXiv:2604.12617v2 Announce Type: replace-cross Abstract: The post-training pipeline for diffusion models currently has two stages: supervised fine-tuning (SFT) on curated…
arXiv:2604.16207v1 Announce Type: cross Abstract: As new forgery types continually emerge, Incremental Face Forgery Detection (IFFD) has become a…
arXiv:2507.02935v3 Announce Type: replace-cross Abstract: Successful human-agent teaming relies on an agent being able to understand instructions given by a…
arXiv:2601.05201v2 Announce Type: replace-cross Abstract: Large vision-language models (VLMs) are highly capable, yet often hallucinate by favoring textual prompts over…
arXiv:2510.27617v2 Announce Type: replace Abstract: Automation of Register Transfer Level (RTL) design can help developers meet increasing computational demands. Large…
arXiv:2604.14646v2 Announce Type: replace Abstract: Recent advances in reinforcement learning (RL) have improved the reasoning capabilities of large language models…
arXiv:2604.15495v1 Announce Type: new Abstract: Navigating complex, densely packed environments like retail stores, warehouses, and hospitals poses a significant spatial…
arXiv:2604.16241v1 Announce Type: cross Abstract: Large language models have shown strong performance on broad-domain knowledge and reasoning benchmarks, but it…