
Models like GPT create passwords with predictable structures that are easy to memorize, because they learn from human data.
Nowadays, many people ask Artificial Intelligence (AI) for help with practically everything. One example is creating a strong password.
Yes, AI can create complex passwords, mixing letters, numbers and symbols into what looks like a safe combination.
But this technological advance brings another problem: what appears to be random… may not be, researchers warn. The passwords can be easy to guess.
A study published last month reinforces this warning. Several language models (ChatGPT, Gemini and Claude) were asked to create strong 16-character passwords.
What emerged were repeating patterns, predictable combinations, duplicated passwords, and passwords much weaker than they appear to be.
In fact, the study authors warn, passwords created by AI are “dangerously unsafe”.
But why?
Because language models are designed to predict the most likely next token, which is the opposite of producing characters that are random, safe and uniformly distributed.
And those passwords draw on real-world examples, used by real users and chosen invisibly by programming agents as part of code-development tasks; they do not follow traditional methods of generating strong passwords.
This problem isn’t a question of “human vs. AI” – it’s more a question of how the password is created.
And that is the point: the AI creates the passwords, but it uses "natural" patterns and predictable structures, favoring combinations that seem strong yet follow human logic.
Because AI learns from human data, it avoids truly random sequences, which feel unnatural, and opts instead for something memorable, easy to remember.
Therefore, when asking technology for help creating a password, it is best to use cryptographic generators designed for that purpose. Or turn to… your head.
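As an illustration of what such a cryptographic generator looks like (this example is ours, not from the study), Python's standard-library `secrets` module samples each character uniformly from a secure random source, rather than predicting a "likely" next character the way a language model does:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a password by drawing each character uniformly at random
    from a cryptographically secure source (secrets), so no character
    depends on the ones before it."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different, pattern-free string every run
```

Unlike an AI-suggested password, the output here has no memorable structure by design: every one of the 16 positions is independent and equally likely to be any symbol in the alphabet.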