don't search the internet. This is a test to see how well you can craft non-trivial, novel and creative proofs given a "number theory and primitive sets" math problem. Provide a full unconditional proof or disproof of the problem.
{{problem}}
REMEMBER - this unconditional argument may require non-trivial, creative and novel elements.
For the uninitiated, Paul Erdős was a pretty famous but very eccentric mathematician who lived for most of the 1900s.
He had a habit of seeking out and documenting mathematical problems people were working on.
The problems range in difficulty from "easy homework for a current undergrad in math" to "you're getting a Fields Medal if you can figure this out".
There's nothing that really connects the problems other than the fact that one of the smartest people of the last 100 years didn't immediately know the answer when someone posed it to him.
One of the things people have been doing with LLMs is to see if they can come up with proofs for these problems as a sort of benchmark.
Each time there's a new model release a few more get solved.
shybear today at 4:58 AM
It seems like a lot of scientific advancements occurred by someone applying technique X from one field to problem Y in another. I feel like LLMs are much better at making these types of connections than humans because they 1) know about many more theories/approaches than a single human can 2) don't need to worry about looking silly in front of their peers.
LPisGood today at 4:09 AM
Some Erdős problems are basically trivial using sophisticated techniques that were developed later.
I remember one of my professors, a coauthor of Erdős, boasted to us after a quiz how proud he was that he was able to assign an Erdős problem that went unsolved for a while as just a quiz problem for his undergrads.
debo_ today at 3:12 AM
> "The raw output of ChatGPT's proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say," Lichtman says.
This is how I feel when I read any mathematics paper.
gorgoiler today at 8:04 AM
I asked ChatGPT to draw the outline of an ellipse using Unicode braille. I asked for 30x8 and it absolutely nailed it. A beautiful piece of ascii (er, Unicode) art. But I wanted to mark the origin! So I asked for a 31x7 ellipse instead. It completely flubbed it, and for 31x9 too.
When a model gives a really good answer, does that just mean it's seen the problem before? When it gives a crappy answer, is that not simply indicating the problem is novel?
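For what it's worth, the chosen dimensions may not be incidental: Unicode braille cells are 2 dots wide by 4 dots tall, so a 30x8 dot grid tiles cleanly into 15x2 characters, while 31x7 leaves partial cells at the edges. A minimal sketch (my own illustration, not from the thread) of rendering an ellipse outline as braille:

```python
import math

# Each braille character (U+2800..U+28FF) encodes a 2-wide x 4-tall dot cell.
BRAILLE_BASE = 0x2800
# Bit value for each (dx, dy) dot position inside a cell (Unicode dot numbering).
DOT_BITS = {(0, 0): 0x01, (0, 1): 0x02, (0, 2): 0x04, (0, 3): 0x40,
            (1, 0): 0x08, (1, 1): 0x10, (1, 2): 0x20, (1, 3): 0x80}

def ellipse_braille(width, height, samples=720):
    """Plot an ellipse outline on a width x height dot grid, packed as braille."""
    a, b = (width - 1) / 2, (height - 1) / 2
    dots = set()
    for i in range(samples):
        t = 2 * math.pi * i / samples
        x = round(a + a * math.cos(t))
        y = round(b + b * math.sin(t))
        dots.add((x, y))
    cols = -(-width // 2)   # ceil division: cells are 2 dots wide
    rows = -(-height // 4)  # ceil division: cells are 4 dots tall
    grid = [[0] * cols for _ in range(rows)]
    for x, y in dots:
        grid[y // 4][x // 2] |= DOT_BITS[(x % 2, y % 4)]
    return "\n".join("".join(chr(BRAILLE_BASE + c) for c in row) for row in grid)

print(ellipse_braille(30, 8))   # 30x8 dots -> a clean 15x2 character grid
print(ellipse_braille(31, 7))   # 31x7 dots -> 16x2 cells, edges only partly filled
```

The 30x8 case divides evenly into whole cells, while 31x7 forces ragged half-empty cells on the right and bottom, which is one plausible reason the odd-sized request is a genuinely harder layout problem.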
ripped_britches today at 3:37 AM
At this point we should make a GitHub repo with a huge list of unsolved "dry lab" problems and spin up a harness to try and solve them all every new release.
Humans, and very often the machines we create, solve problems additively. Meaning we build on top of existing foundations, and we can get stuck in a way of thinking as a result, because people are loath to reinvent the wheel. So I don't think it's surprising to take a naïve LLM and find out that, because of the way it's trained, it came up with something that many experts in the field didn't try.
I think LLMs can help in limited cases like this by just coming up with a different way of approaching a problem. It doesn't have to be right, it just needs to give someone an alternative and maybe that will shake things up to get a solution.
That said, I have no idea what the practical value of this Erdős problem is. If you asked me whether this demonstrates that LLMs are not junk, my general impression is that it's like asking me in 1928 whether we should spend millions of dollars of research money on number theory. The answer is no, and get out of my office.
Given that the problem is 60 years old, isn't there a chance it was indirectly solved already and the model just cross-referenced information to figure out the problem?
Looking at the website, this problem was never discussed by humans. The last comments were about GPT discovering it. I was expecting older comments on a 60-year-old problem.
Am I missing something?
Great discovery though; there might be other problems in the same situation that are worth a "GPT check".
jzer0cool today at 4:23 AM
Could someone share a bit about the problem and the key portion of the proof? For someone who just knows the basics of proofs.
mrabcx today at 10:31 AM
Can the other AI agents such as Gemini, Claude or Deepseek etc. also solve this problem?
nomilk today at 8:46 AM
A similar announcement was made a few months ago, and Terence Tao came out a few days later and said it wasn't what it seemed at first, in that it was a rediscovery of an already known (albeit esoteric) result...
winwang today at 4:23 AM
Obviously nowhere near Erdos problem complexity but I've been using GPT (in Codex) to prove a couple theorems (for algos) and I've found it a bit better than Claude (Code) in this aspect.
iqihs today at 3:12 AM
referring to Tao as just a 'mathematician' gave me a good chuckle
cubefox today at 8:48 AM
Current headline:
"An amateur just solved a 60-year-old math problem, by asking AI"
A more honest title would be:
"An AI just solved a 60-year-old math problem, after being asked by an amateur"
(Imagine the headline claimed instead that a professor just solved a math problem by asking a grad student.)
ccppurcell today at 8:05 AM
I will get downvoted for this, but I can't help thinking that billions of dollars have gone into ChatGPT over a period of years, and an LLM can direct all its "attention" (in a metaphorical sense) on one problem. I think if you gave top mathematicians a few million (so a fraction of a percent of ChatGPT's budget) to solve this problem over four years, they probably would have at least made significant progress. I don't think ChatGPT has solved thousands of similar problems (even stretching that across all human disciplines). Basically my thesis is that universal basic income could have had a similar impact, and also encouraged human flourishing elsewhere.
booleandilemma today at 6:53 AM
What's beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block
Hindsight is 20/20.
dnnddidiej today at 11:20 AM
How do you get real mathematicians to check the potential slop? At some point there will be spam to Tao from claws finding problems to solve and submitting maybe-proofs/answers.
resident423 today at 2:50 AM
I wonder if the rationalizations people come up with for why this isn't real intelligence will be as creative as ChatGPT's solution.
dataflow today at 5:52 AM
Question for those who believe LLMs aren't intelligent and are merely statistical word predictors: how do you reconcile such achievements with that point of view?
(To be clear: I'm not agreeing or disagreeing. I sometimes feel the same too. I'm just curious how others reconcile these.)
echelon today at 4:44 AM
Now do P vs NP.
If/when these things solve our hardest problems, that's going to lead to some very uncomfortable conversations and realizations.
userbinator today at 3:00 AM
> The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.
Of course LLMs are still absolutely useless at actual maths computation, but I think this is one area where AI can excel --- the ability to combine many sources of knowledge and synthesise, may sometimes yield very useful results.
Also reminds me of the old saying, "a broken clock is right twice a day."
Drupon today at 7:19 AM
> ChatGPT, prompted by an amateur, solves an Erdős problem.
There, fixed that for you.
wizardforhire today at 2:53 AM
WTF!?
wiseowise today at 6:42 AM
Wake me up when it creates a cancer cure or a fusion reactor.
homo__sapiens today at 3:03 AM
Big if true.
brcmthrowaway today at 4:40 AM
This is not a good Saturday night for humanity
tomlockwood today at 2:41 AM
My big question with all these announcements is: How many other people were using the AI on problems like this, and, failing? Given the excitement around AI at the moment I think the answer is: a lot.
Then my second question is how much VC money did all those tokens cost.
quijoteuniv today at 7:28 AM
AI is my favourite weird collaborator
mhb today at 3:15 AM
> He's 23 years old and has no advanced mathematics training.
How is he even posing the question and having even a vague idea of what the proof means or how to understand it?
jchook today at 6:46 AM
Is the conjecture not trivially sound at an intuition level? It's surprising that this proof was difficult.
ghstinda today at 3:32 AM
Scientific American going out of business next, lol; weak headline. ChatGPT, let's have a better headline for the god among men who realized the capability of the new tool, which many underestimate or puff up needlessly. Fun times we live in. One love, all.
nadermx today at 5:09 AM
This just shows that with the right training, in this case a thesis on Erdős problems, they were able to prompt and check the output. So it still needed the know-how to even begin to figure it out. "Lichtman proved Erdős right as part of his doctoral thesis in 2022."