Well Begun is Half Done: Low-resource Preference Alignment by Weak-to-Strong Decoding
arXiv:2506.07434v1 Announce Type: cross Abstract: Large Language Models (LLMs) require alignment with human preferences to avoid generating offensive, false, or…
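The title suggests a weak-to-strong decoding scheme in which a small, preference-aligned model drafts the opening of the response and a larger base model completes it ("well begun is half done"). A minimal sketch of that idea follows; the Hugging Face model names, the prefix length, and the greedy decoding settings are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical weak-to-strong decoding sketch: a small aligned model writes the
# opening of the reply, then a larger base model continues from that prefix.
# Model names, prefix length, and decoding settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

WEAK = "Qwen/Qwen2.5-0.5B-Instruct"   # small, preference-aligned drafter (assumed)
STRONG = "Qwen/Qwen2.5-7B"            # large base model (assumed)

weak_tok = AutoTokenizer.from_pretrained(WEAK)
weak_lm = AutoModelForCausalLM.from_pretrained(WEAK)
strong_tok = AutoTokenizer.from_pretrained(STRONG)
strong_lm = AutoModelForCausalLM.from_pretrained(STRONG)

def weak_to_strong_decode(prompt: str, prefix_tokens: int = 64,
                          max_new_tokens: int = 256) -> str:
    # 1) The weak aligned model drafts the beginning of the response.
    ids = weak_tok(prompt, return_tensors="pt").input_ids
    draft = weak_lm.generate(ids, max_new_tokens=prefix_tokens, do_sample=False)
    drafted_text = weak_tok.decode(draft[0], skip_special_tokens=True)

    # 2) The strong base model continues from the drafted prefix.
    ids = strong_tok(drafted_text, return_tensors="pt").input_ids
    out = strong_lm.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    return strong_tok.decode(out[0], skip_special_tokens=True)
```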
arXiv:2506.07435v1 Announce Type: cross Abstract: Computing classical centrality measures such as betweenness and closeness is computationally expensive on large-scale graphs.…
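For context, exact betweenness via Brandes' algorithm costs roughly O(|V||E|) on unweighted graphs, which is what makes large-scale computation expensive and motivates approximation. A brief sketch comparing exact and pivot-sampled betweenness with networkx follows; the random graph and the sample size k are arbitrary choices for illustration.

```python
# Exact vs. sampled betweenness centrality with networkx; graph size and the
# sample size k are arbitrary illustrative choices.
import networkx as nx

G = nx.gnm_random_graph(2000, 10000, seed=0)

exact = nx.betweenness_centrality(G)              # Brandes: O(|V||E|) on unweighted graphs
approx = nx.betweenness_centrality(G, k=100,      # pivot sampling: ~k single-source traversals
                                   seed=0)
closeness = nx.closeness_centrality(G)            # one shortest-path computation per node

top = max(exact, key=exact.get)
print(top, exact[top], approx[top], closeness[top])
```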
arXiv:2506.06278v2 Announce Type: replace-cross Abstract: Current LLM unlearning methods are not robust: they can be reverted easily with a few…
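One way such reversion is commonly probed is a short relearning fine-tune on text related to the supposedly forgotten content, after which the knowledge may resurface. The sketch below illustrates that kind of probe under stated assumptions; the model path, probe data, step count, and learning rate are all placeholders, and this is not necessarily the attack studied in the paper.

```python
# Hypothetical relearning probe: briefly fine-tune an "unlearned" model on text
# related to the forgotten topic and check whether the knowledge resurfaces.
# The model path, probe texts, step count, and learning rate are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "path/to/unlearned-model"   # placeholder for a model after unlearning
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL)
opt = torch.optim.AdamW(lm.parameters(), lr=1e-5)

probe_texts = ["example passage related to the forgotten topic"]  # placeholder data

lm.train()
for step in range(20):                      # only a few gradient steps
    for text in probe_texts:
        batch = tok(text, return_tensors="pt")
        loss = lm(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()
```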
arXiv:2506.06282v1 Announce Type: new Abstract: Effective financial reasoning demands not only textual understanding but also the ability to interpret complex…
arXiv:2506.05167v2 Announce Type: replace-cross Abstract: Large Language Models (LLMs) have shown remarkable performance in Open-Domain Question Answering (ODQA) by leveraging…
arXiv:2506.06018v1 Announce Type: cross Abstract: Watermarking has become one of the pivotal solutions to trace and verify the origin of synthetic…
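A common family of LLM watermarks partitions the vocabulary with a keyed hash into a "green" subset at each step and detects the mark by counting how often generated tokens land in it. The sketch below shows such a detector under assumed parameters; the hash scheme, GAMMA, and the z-score threshold are illustrative, not this paper's method.

```python
# Hypothetical green-list watermark detector: a keyed hash of each (previous,
# current) token pair marks a fraction GAMMA of continuations as "green"; an
# unusually high green rate signals a watermark. All constants are assumptions.
import hashlib
import math

GAMMA = 0.25          # fraction of the vocabulary marked "green" per step
KEY = b"secret-key"   # shared between generator and detector

def is_green(prev_token: int, token: int) -> bool:
    # Keyed hash of the token pair, thresholded so each continuation is green
    # with probability GAMMA.
    h = hashlib.sha256(KEY + prev_token.to_bytes(4, "little") + token.to_bytes(4, "little"))
    return int.from_bytes(h.digest()[:8], "little") / 2**64 < GAMMA

def detect(tokens: list[int], z_threshold: float = 4.0) -> bool:
    n = len(tokens) - 1
    if n <= 0:
        return False
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    # z-score of the green count against the GAMMA baseline expected without a watermark
    z = (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)
    return z > z_threshold
```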
arXiv:2506.06020v1 Announce Type: cross Abstract: Large language models frequently encounter conflicts between their parametric knowledge and contextual input, often resulting…
arXiv:2506.05340v2 Announce Type: replace-cross Abstract: Designing model architectures requires decisions such as selecting operators (e.g., attention, convolution) and configurations (e.g.,…
arXiv:2506.05352v1 Announce Type: new Abstract: This work lays the foundations for a rigorous ontological characterization of love, addressing its philosophical…