Logic Jailbreak: Efficiently Unlocking LLM Safety Restrictions Through Formal Logical Expression
arXiv:2505.13527v4 Announce Type: replace-cross
Abstract: Despite substantial advancements in aligning large language models (LLMs) with human values, current safety mechanisms…
