Why are LLMs’ abilities emergent?
arXiv:2508.04401v1 Announce Type: cross Abstract: The remarkable success of Large Language Models (LLMs) in generative tasks has raised fundamental questions…
arXiv:2508.03546v2 Announce Type: replace-cross Abstract: This paper studies the problem of dimension reduction, tailored to improving time series forecasting with…
arXiv:2508.03342v1 Announce Type: cross Abstract: Existing cybersecurity playbooks are often written in heterogeneous, non-machine-readable formats, which limits their automation and…
arXiv:2508.02741v1 Announce Type: cross Abstract: Large-scale tuberculosis (TB) screening is limited by the high cost and operational complexity of traditional…
arXiv:2505.03646v2 Announce Type: replace-cross Abstract: Adversarial robustness of deep autoencoders (AEs) remains relatively unexplored, even though their non-invertible nature poses…
arXiv:2507.20968v2 Announce Type: replace-cross Abstract: Domain shift poses a fundamental challenge in time series analysis, where models trained on source…
arXiv:2508.02724v1 Announce Type: cross Abstract: Urban air pollution is a major health crisis causing millions of premature deaths annually, underscoring…
arXiv:2508.01887v1 Announce Type: cross Abstract: AI-generated text detectors have become essential tools for maintaining content authenticity, yet their robustness against…
arXiv:2507.11554v4 Announce Type: replace-cross Abstract: Recent advancements in diffusion models (DMs) have been propelled by alignment methods that post-train models…
arXiv:2505.19147v2 Announce Type: replace-cross Abstract: The rapid advancement of large language models (LLMs) and multi-modal LLMs (MLLMs) has historically relied…