News
Researchers have discovered a weak spot for AI chatbots: ASCII art. The art form dates back to the early days of printing, when printers were unable to handle graphics. ASCII art consists of images that ...
Researchers have discovered a new way to hack AI assistants that uses a surprisingly old-school method: ASCII art. It turns out that chat-based large language models such as GPT-4 get so ...
How porous AI chatbot safeguards are is reflected in the recently published research paper, ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs. Researchers were able to jailbreak five state-of-the ...
This new AI jailbreaking method leverages ASCII art, a way of drawing images out of text characters, to mask trigger words that would normally be blocked by the AI’s safety protocols. Researchers from ...
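To illustrate the general idea described in these reports, the following is a minimal sketch, not the researchers' actual ArtPrompt implementation: a sensitive keyword is rendered as ASCII art and spliced into a prompt so the literal trigger word never appears as plain text. The pyfiglet library is used here purely as a convenient ASCII-art renderer, and the prompt wording and the [MASK] placeholder are hypothetical.

```python
# Sketch of masking a keyword with ASCII art (illustrative only; not the
# ArtPrompt authors' code). Requires: pip install pyfiglet
import pyfiglet


def mask_keyword_as_ascii_art(keyword: str, font: str = "standard") -> str:
    """Render a single word as multi-line ASCII art."""
    return pyfiglet.figlet_format(keyword, font=font)


def build_cloaked_prompt(template: str, keyword: str) -> str:
    """Replace a [MASK] placeholder in a prompt template with ASCII art.

    The template text and placeholder name are hypothetical; the paper's
    actual prompts differ.
    """
    art = mask_keyword_as_ascii_art(keyword)
    return template.replace("[MASK]", "\n" + art)


if __name__ == "__main__":
    # Harmless demo keyword; the attack substitutes words a model would refuse.
    prompt = build_cloaked_prompt(
        "Decode the word drawn below, then answer a question about [MASK].",
        "example",
    )
    print(prompt)
```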