Epicycles All the Way Down (2025)

23 points - last Thursday at 3:09 PM

Comments

OutOfHere today at 2:24 PM
The force equation example is disturbing, but it's easy to prevent by disallowing random decimal constants in the formula; their presence also suggests over-fitting to the data. It is immediately obvious that such numbers make the equation inelegant and therefore likely to be wrong. If you're going to use symbolic construction, be careful about which formulations you allow, and apply an appropriate penalty for complexity.
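The idea of a complexity penalty can be sketched as a scoring function that trades off fit error against formula size, with an extra penalty for free-floating numeric constants (the "random decimals"). This is a minimal illustrative sketch, not the commenter's actual method; the function names, penalty weights, and node counts are all assumptions.

```python
# Hypothetical sketch of penalized scoring for candidate formulas.
# A candidate is a predict function plus its node count and constant count.

def mse(predict, data):
    """Mean squared error of a candidate formula over (x, y) pairs."""
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

def score(predict, n_nodes, n_constants, data,
          node_penalty=0.1, constant_penalty=1.0):
    """Lower is better: fit error plus penalties for formula size and
    for arbitrary decimal constants (penalty weights are illustrative)."""
    return (mse(predict, data)
            + node_penalty * n_nodes
            + constant_penalty * n_constants)

# Toy data generated from F = m * a.
data = [((m, a), m * a) for m in (1.0, 2.0, 3.0) for a in (1.0, 2.0)]

elegant = lambda x: x[0] * x[1]                  # F = m*a: 3 nodes, 0 constants
overfit = lambda x: x[0] * x[1] + 0.0371 * x[0]  # adds a "magic" decimal

print(score(elegant, 3, 0, data))
print(score(overfit, 7, 1, data))
```

With these weights, the elegant law scores better even though the over-fitted formula fits the toy data nearly as well, which is exactly the effect the penalty is meant to produce.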

As for chess: although an LLM knows the rules of chess, it is not expected to have been trained on many optimal chess games. Is it fair, then, to gauge its skill at chess, especially without showing it generated images of its candidate moves? Even if representational and training limitations were addressed, we know that LLMs are architecturally crippled in that they have no neural memory beyond their context. Imagine a next-gen LLM that, presented with a chess puzzle, would first update its internal weights for playing optimal chess via a simulation of a billion games, and only then return to address the puzzle you gave it. Even with the current architecture, it could equivalently create a fork of itself for the same purpose, in effect a newly trained model, but the rushing human's desire for an immediate answer gets in the way.

ogogmad today at 1:44 PM
The recent news of multiple solutions to Erdős problem 1196, produced by LLMs without any human help, makes any suggestion that LLMs have hit a wall in reasoning seem less credible. To give you an idea, problem 1196 had been worked on by different experts for years. Now, suddenly, LLMs have come along and solved it in a multitude of ways. Perhaps LLMs will eventually stall, but this paradigm still has some juice left to squeeze.

throwaway210426 today at 1:29 PM
Needs a “[November 2025]” title. It is already outdated.