Mathematicians disagree on the essential structure of the complex numbers (2024)
134 points - today at 4:36 PM
As the "evidence" piles up, in further mathematics, physics, and the interactions of the two, I still never got to the point at the core where I thought complex numbers were a certain fundamental concept, or just a convenient tool for expressing and calculating a variety of things. It's more than just a coincidence, for sure, but the philosophical part of my mind is not at ease with it.
I doubt anyone could make a reply to this comment that would make me feel any better about it. Indeed, I believe real numbers to be completely natural, but far greater mathematicians than I found them objectionable only a hundred years ago, and demonstrated that mathematics is rich and nuanced even when you assume that they don't exist in the form we think of them today.
It feels a bit like the article is trying to stretch a legitimate debate (whether fixing i versus -i is natural) into promoting this other definition as an equal contender, but hardly any support is offered. I suspect the last-place 28% poll showing, if it reflects serious mathematicians at all, reflects those who treat the topological structure as a given, or who didn't think much about the implications of leaving it out.
> I was astounded to see that the Google AI overview in effect takes a stand amongst three conceptions
Uh oh. Hype alert. Should I continue reading?
I showed various colleagues. Each one would ask me to demonstrate the equivalence to their preferred presentation, then assure me there was "nothing to see here, move along!" and that I should stick to their convention instead.
Then I met with Bill Thurston, the most influential topologist of our lifetimes. He had me quickly describe the equivalence between my form and every other known form, effectively adding my node to a complete graph of equivalences he had in his muscle memory. He then suggested some generalizations, and proposed that circle packings would prove to be important to me.
Some mathematicians are smart enough to see no distinction between any of the ways to describe the essential structure of a mathematical object. They see the object.
The algebraic conception, with its wild automorphisms, exhibits a kind of multiplicative chaos: small changes in perspective (which automorphism you apply) cascade into radically different views of the structure. Any transcendental can be carried to any other by some automorphism; the bare field structure cannot distinguish e from π. Meanwhile, the analytic/smooth conception, by fixing the topology, tames this chaos into something with only two symmetries. The topology acts as a damping mechanism, converting multiplicative sensitivity into additive stability.
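For reference, the standard facts behind this, stated without proof (they assume the axiom of choice): as a bare field, C has an enormous automorphism group, every element of which fixes Q pointwise but can send any given transcendental to any other; adding the topology collapses the group to just the identity and conjugation.

```latex
\[
  \lvert \operatorname{Aut}(\mathbb{C}) \rvert = 2^{2^{\aleph_0}},
  \qquad
  \operatorname{Aut}_{\mathrm{cont}}(\mathbb{C})
    = \{\, z \mapsto z,\ z \mapsto \bar{z} \,\}.
\]
```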
I'll just add that if transformers are implementing a renormalization-group flow, then the models' failure on the automorphism question is predictable: systems trained on compressed representations of mathematical knowledge will default to the conception with the lowest "synchronization" cost, i.e. the one most commonly used in practice.
https://www.symmetrybroken.com/transformer-as-renormalizatio...
Ever since then I have been deeply unsettled. I started questioning taking integrals to (+/-) infinity, and so I became unsettled with R too.
If C exists to fix R, then why does R even exist? Why does R need to be fixed? Why does the choice of the upper or lower half-plane for contour integration not matter? I can show mathematically why, but why do we have a choice at all?
This blog post formally articulates things that have bothered me for years.
anyhow. I'm a bit of an odd one in that I have no problems with imaginary numbers but the reals always seemed a bit unreal to me. that's the real controversy, actually. you can start looking up definable numbers and constructivist mathematics, but that gets to be more philosophy than maths imho.
So we define i as conforming to ±i = sqrt(-1). The element i itself has no need for a sign, so no sign needs to be chosen. Yet having defined i, we know that i = (+1)*i = +i, by the multiplicative identity.
We now have an unsigned base element for complex numbers i, derived uniquely from the expansion of <R,0,1,+,*> into its own natural closure.
We don't have to ask if i = +i, because it does by definition of the multiplicative identity.
TLDR: Any square root of -1, reduced to a single value, involves a choice; but the definition of unsigned i does not require a choice. It is a unique, unsigned element. And as a result, there is only one automorphism, the identity automorphism.
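For contrast, a minimal numerical sketch (not a proof) of the i <-> -i symmetry the rest of the thread appeals to: conjugation preserves addition and multiplication, so no polynomial identity with real coefficients can tell the two square roots of -1 apart.

```python
# Spot-check on random samples that conjugation is a ring
# homomorphism of C: it respects both + and *.
import random

random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert (z + w).conjugate() == z.conjugate() + w.conjugate()
    assert abs((z * w).conjugate() - z.conjugate() * w.conjugate()) < 1e-9

print("conjugation respects + and *")
```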
Basically C comes up in the chain R \subset C \subset H (quaternions) \subset O (octonions) by the so-called Cayley-Dickson construction. There is a lot of structure.
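A minimal sketch of that doubling, under one common sign convention (details vary by source): each level represents numbers as pairs over the previous one, reals -> complex -> quaternions -> octonions.

```python
# Cayley-Dickson doubling, with elements as nested pairs over floats.

def conj(x):
    # conjugation: negate the second component, recurse on the first
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x  # a real number is its own conjugate

def neg(x):
    if isinstance(x, tuple):
        return (neg(x[0]), neg(x[1]))
    return -x

def add(x, y):
    if isinstance(x, tuple):
        return (add(x[0], y[0]), add(x[1], y[1]))
    return x + y

def mul(x, y):
    # Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*),
    # where * denotes conjugation at the previous level
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

i = (0.0, 1.0)                # complex i as a pair of reals
print(mul(i, i))              # (-1.0, 0.0), i.e. i*i = -1

I = ((0.0, 1.0), (0.0, 0.0))  # quaternion units as pairs of complexes
J = ((0.0, 0.0), (1.0, 0.0))
print(mul(I, J))              # K  = ((0, 0), (0, 1)) up to signed zeros
print(mul(J, I))              # -K = ((0, 0), (0, -1)): non-commutative
```

The same mul applied to pairs of quaternions gives octonions, where associativity fails as well.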
This disagreement seems over the heads of non-mathematicians, including those (like me) with some familiarity with complex numbers.
Conjugation isn’t complex-analytic, so the symmetry of i -> -i is broken at that level. Complex manifolds have to explicitly carry around their almost-complex structure largely for this reason.
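Concretely, the standard one-line check (added for reference): writing f(z) = conj(z) as f(x + iy) = x - iy, so u = x and v = -y, the Cauchy-Riemann equations fail at every point:

```latex
\[
  \frac{\partial u}{\partial x} = 1
  \;\neq\;
  -1 = \frac{\partial v}{\partial y},
\]
```

so z -> conj(z) is nowhere holomorphic, even though it is a perfectly good field automorphism and an isometry.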
Functions are defined as relations on two sets such that each element in the first set is in relation to at most one element in the second set. And suddenly we abandon that very definition without ever changing the notation! Complex logarithms suddenly have infinitely many values! And yet we say complex expressions are equal to something.
Madness.
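A quick standard-library illustration of the multivaluedness being objected to: every branch log(z) + 2*pi*i*k exponentiates back to z, so "log z" picks out a set of values, while cmath quietly returns only the principal branch.

```python
import cmath

z = 1 + 1j
principal = cmath.log(z)  # cmath returns the principal branch only
for k in range(-2, 3):
    branch = principal + 2j * cmath.pi * k
    # every branch is a legitimate "logarithm" of z
    assert abs(cmath.exp(branch) - z) < 1e-9
    print(f"k={k:+d}: branch = {branch:.4f}")
```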
> Starting 2022, I am now the John Cardinal O’Hara Professor of Logic at the University of Notre Dame.
> From 2018 to 2022, I was Professor of Logic at Oxford University and the Sir Peter Strawson Fellow at University College Oxford.
Also interesting:
> I am active on MathOverflow, and my contributions there (see my profile) have earned the top-rated reputation score.
I feel like the problem is that we just assume e^(pi*i) = -1 as a given, which makes i "feel" like a number, which gives some validity to other interpretations. But I would argue that that equation is not actually valid. It arises from the Taylor-series equivalence between e^x, sin and cos, but a Taylor series is simply an approximation of a function obtained by matching its derivatives around a certain point, namely x = 0. And just because you take two functions and see that their approximations around a certain point are equal doesn't mean that the functions are equal. Even more so, that definition completely bypasses what it means to take derivatives into the imaginary plane.
If you try to prove this any way other than by Taylor-series expansion, you really can't, because the concept of raising something to an "imaginary" power doesn't tie into any other definitions.
As such, there is nothing really special about e itself either. The only reason it's in there is a pattern artifact in math: the derivative of e^x is itself, while cos and sin follow cyclic patterns. If you were to replace e with any other number, anything you ever want to do with complex numbers would work out identically; you don't really use the value of e anywhere, all you really care about is r and theta.
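For reference, the series identification in question, evaluated numerically at z = i*pi using only the standard library; the partial sums converge to -1 (the exponential series in fact converges for every z, not just near 0):

```python
import math

z = complex(0, math.pi)          # z = i*pi
term, total = complex(1), complex(0)
for n in range(60):
    total += term                # partial sum of sum_n z^n / n!
    term *= z / (n + 1)

print(total)                     # ~ -1 + 0j
print(abs(total - (-1)))         # error on the order of 1e-16
```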
So if you drop the assumption that i is a number and just treat i as an attribute of a number, like a negative sign, complex numbers are basically just 2D numbers written in a special way. And of course, the rotations extend easily into 3D space through quaternions, which use i, j, and k in much the same way.
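A small sketch of that last remark, with illustrative helper names: rotating a 3D vector v by the unit quaternion q via the sandwich product q v q*.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(v, axis, angle):
    # unit quaternion encoding a rotation by `angle` about `axis`
    h = angle / 2
    n = math.sqrt(sum(c*c for c in axis))
    q = (math.cos(h), *(math.sin(h) * c / n for c in axis))
    p = (0.0, *v)  # embed the vector as a pure quaternion
    w, x, y, z = qmul(qmul(q, p), qconj(q))
    return (x, y, z)

# 90-degree rotation about the z-axis sends (1, 0, 0) to
# approximately (0, 1, 0), up to rounding
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```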