catlifeonmars 2 days ago [-]
This article has “why stabbing yourself with a screwdriver is bad” vibes.
randomNumber7 2 days ago [-]
Yes. It really makes no sense to take a screwdriver instead of a knife.
ks2048 2 days ago [-]
Had me wondering: if you ask an LLM for a random number 1...100, what distribution do you get? Surely many have run this experiment. Here's a link that looks like a good example: https://sanand0.github.io/llmrandom/
Ferret7446 2 days ago [-]
I imagine you'd get a similar distribution as when asking humans to come up with a random number on the spot
RIMR 2 days ago [-]
That is interesting data. Just from looking at those graphs, it looks like AIs are consistently avoidant of the number 69, likely because of safeguards to prevent it from being offensive. Otherwise its training would probably tell it that it was a really nice number.
ks2048 1 day ago [-]
I wonder about the human results. If a friend asks you, maybe you say 69, but if it's a psych exam, people might avoid it.
gmuslera 2 days ago [-]
This invites a dictionary attack, built not from common words but from tokens in the training data that carry some weight related to good-looking passwords.
At least for "normal" text generation, if you instead tell the LLM to generate a Python script that writes a random password, and then run that script, the result may be better quality.
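A minimal sketch of the kind of script meant here, assuming Python's standard `secrets` module: the LLM only writes the code, and the actual randomness comes from the OS CSPRNG rather than from the model's token sampling.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Each character is drawn from the OS cryptographic RNG via the
    # secrets module, so the entropy comes from the system, not the LLM.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```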
petcat 2 days ago [-]
> LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool)
This seems like kind of a pointless analysis to me? Humans also generate bad passwords. It's why we use crypto-hardened RNG tools.
jmull 2 days ago [-]
It’s pointless if you believe no one is asking LLMs to generate passwords for them.
Pooge 2 days ago [-]
Humans will always smash a screw with the handle of a spoon and be proud of themselves when they manage to do it.
RIMR 2 days ago [-]
I mean, people are still rotating <month><year> passwords because they refuse to remember anything. I only know this because I am in a customer-facing position, and these customers rarely care about revealing their passwords when they need help...
himata4113 2 days ago [-]
huh, for me it just generates <username>123 when I ask it to generate a password, lol. Sometimes it adds a !; more often it just defaults to changeme rather than generating any real password.
Mordisquitos 2 days ago [-]
I only clicked on the article with no intention of reading it (no time), but rather out of morbid curiosity as to why on earth anybody would need to be told that LLMs should absolutely not be used to generate passwords.
> [...] Despite this, LLM-generated passwords appear in the real world – used by real users, and invisibly chosen by coding agents as part of code development tasks, instead of relying on traditional secure password generation methods.
Jesus F'ing Christ. I hope to have time to read the whole thing later.
sowbug 2 days ago [-]
The article is a bit of a strawman, and a bit of an advertisement for a security consultancy. If you ask someone else to pick a password for you, then it's a secret known by two people. So don't do that. That was true a thousand* years ago. It's still true today.
*I know, I know, hash functions didn't exist on Earth a thousand years ago. Still true.
RIMR 2 days ago [-]
I urge you to actually read the article, because it doesn't say anything about the risks of the LLM knowing your password (e.g., stored in server-side logs); it talks about LLMs generating predictable passwords because they are deterministic pattern-following machines.
While the loss of secrecy between you and the LLM provider is a legitimate risk, the point of the article was that you should only use vetted RNGs to generate passwords, because LLMs will frequently generate identical secure-looking passwords when asked to do so repeatedly, meaning that all a bad actor has to do is collect the most frequent ones and go hunting.
The loss of secrecy between you and the LLM only poses a risk if the LLM logs are compromised, exposing your generated passwords. The harvesting of commonly-generated passwords from LLMs poses a much broader attack surface for anyone who uses this method, because any attacker with access to publicly available LLMs can start mining commonly generated passwords and using them today without having to compromise anything first.
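The harvesting described above can be sketched in a few lines. This is a hypothetical illustration: `ask_llm` stands in for any wrapper around a public LLM API, and the point is only that repeated sampling plus a frequency count yields a ranked candidate list, no compromise required.

```python
from collections import Counter

def harvest(ask_llm, prompt: str, n: int = 1000) -> list[str]:
    # ask_llm is a hypothetical callable wrapping some public LLM API.
    # Ask the same password prompt n times and tally the responses;
    # because LLM sampling is skewed, duplicates surface quickly.
    counts = Counter(ask_llm(prompt) for _ in range(n))
    # Most frequently generated "passwords" first: the attacker's
    # candidate list for credential stuffing.
    return [pw for pw, _ in counts.most_common()]
```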
sowbug 2 days ago [-]
You're right; I could have phrased the issue better, though I certainly did read the article. Let me try again: letting someone else pick a password for you requires you to trust that they did it well, and you get no benefit in exchange for that trust. That's true for other humans, websites, and now LLMs.
CrzyLngPwd 2 days ago [-]
The article reads like it was written by a machine.
Havoc 2 days ago [-]
why would you LLM generate a password?!?
camgunz 2 days ago [-]
Honest question, how much money would I make off an MCP service to generate passwords for claws and agents. Is there still gas left in the griftmobile, are prospectors still in need of shovels, will the gods bless my humble, shameless lunge for my slice of the pie?
TheDong 2 days ago [-]
There is a marketplace for free skills (in this case a markdown file saying "run openssl rand -hex 32")
I do not think there is any money for something that trivial.
Even the irrationally exuberant VCs wouldn't put money in that.
RIMR 2 days ago [-]
No, but if those VCs let their AI agents purchase things on their behalf, you could maybe trick those agents into thinking your cloud service was the better option.
throwatdem12311 2 days ago [-]
Not much because if you gain any traction, within a day somebody will make a clone and make it free/open source.
This is the default answer for all vibe coded slop business ideas for a while.
weare138 2 days ago [-]
If anyone is that desperate for a secure random password, here's a Perl one-liner I came up with that generates cryptographically secure random passwords with all-unique characters using /dev/urandom. No dependencies:
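(The one-liner itself seems to have been lost in this copy of the thread. Here is a rough Python equivalent of the stated idea, not the original Perl: sample without replacement from the printable character set using the OS CSPRNG, so every character is distinct.)

```python
import secrets
import string

def unique_char_password(length: int = 16) -> str:
    # secrets.SystemRandom is backed by os.urandom (/dev/urandom on
    # Unix); sample() draws without replacement, so all characters
    # in the result are unique.
    pool = list(string.ascii_letters + string.digits + string.punctuation)
    return "".join(secrets.SystemRandom().sample(pool, length))
```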