People go all dopey-eyed about "frequency space", but that's a red herring. The takeaway should be that a problem-centric coordinate system is enormously helpful.
After all, what Copernicus showed is that the mind-bogglingly complicated motion of the planets becomes a whole lot simpler if you change the coordinate system.
The Ptolemaic model of epicycles was an ad hoc form of Fourier analysis: decomposing periodic motions into circles upon circles.
Back to frequencies: there is nothing obviously frequency-like in real-space Laplace transforms.* The real insight is that differentiation and integration become simple if the coordinates used are exponential functions, because exponential functions remain (scaled) exponentials when passed through those operations.
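That point can be checked in a few lines of NumPy: in the exponential (Fourier) basis, differentiation is just multiplication by iω. A minimal sketch, assuming a periodic, band-limited signal on a uniform grid:

```python
import numpy as np

# Differentiate sin(t) by multiplying its spectrum by i*omega.
# Exact for band-limited periodic signals, which is the whole point:
# the exponential basis diagonalizes d/dt.
n = 256
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(t)

omega = 2 * np.pi * np.fft.fftfreq(n, d=t[1] - t[0])  # angular frequencies
df = np.fft.ifft(1j * omega * np.fft.fft(f)).real

assert np.allclose(df, np.cos(t))  # matches the analytic derivative
```

In the original coordinates differentiation is a limit of difference quotients; in these coordinates it is a diagonal multiply.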
For digital signals, what helps is the Walsh-Hadamard basis. Its basis functions are not like frequencies, and they are not simply the square-wave analogue of sinusoidal waves. People call the transformed domain "sequency space", a well-justified pun.
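For the curious, the Walsh-Hadamard basis is easy to build with the Sylvester recursion. A sketch in plain NumPy (natural/Hadamard ordering; the 8-point size and sample signal are arbitrary), showing the ±1 basis vectors and the "sequency" count of sign changes that plays the role frequency plays for sinusoids:

```python
import numpy as np

# Sylvester construction: H_{2n} = [[H, H], [H, -H]], starting from [[1]].
H = np.array([[1]])
for _ in range(3):                 # 8x8 Walsh-Hadamard matrix
    H = np.block([[H, H], [H, -H]])

# Rows take only the values +1/-1 and are mutually orthogonal,
# so applying H twice recovers the signal (up to a factor of 8).
x = np.array([4.0, 2.0, 2.0, 0.0, 0.0, 2.0, 2.0, 4.0])
coeffs = H @ x / 8                 # forward Walsh-Hadamard transform
assert np.allclose(H @ coeffs, x)  # H is its own inverse up to scale

# "Sequency": the number of sign changes along each basis vector -
# the Walsh-world analogue of frequency.
sequency = [(np.diff(row) != 0).sum() for row in H]
```

The transform needs only additions and subtractions, which is part of why it appeals for digital signals.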
My suspicion is that we are in a Ptolemaic state as far as GPT-like models are concerned. We will understand them better once we figure out the right coordinate system in which to think about their dynamics.
* There is a connection though, through the exponential form of complex numbers, or more prosaically, when multiplying rotation matrices the angles combine additively. So angles and logarithms have a certain unity, or character.
I’d argue that most if not all of the math that I learned in school could be distilled down to analyzing problems in the correct coordinate system or domain! The actual manipulation isn’t that esoteric once you get in the right paradigm. And those professors never explained things at that kind of higher theoretical level, all I remember was the nitty gritty of implementation. What a shame. I’m sure there’s higher levels of mathematics that go beyond my simplistic understanding, but I’d argue it’s enough to get one through the full sequence of undergraduate level (electrical) engineering, physics, and calculus.
My favorite story about the Fourier transform is that Carl Friedrich Gauss stumbled upon the Fast Fourier Transform algorithm over a century before Cooley and Tukey’s 1965 publication (which itself revolutionized digital signal processing).[1] He was apparently studying the motion of the asteroids Pallas and Juno and wrote the algorithm down in his notes, but it never made it into public knowledge.
There is a saying about Gauss: when another mathematician came to show him a new result, Gauss would remark that he had already worked on it, open a drawer in his desk, and pull out a pile of papers on the same topic.
One of the things I admire about many top mathematicians today like Terence Tao is that they are clearly excellent mentors to a long list of smart graduate students and are able to advance mathematics through their students as well as on their own. You can imagine a half-formed thought Terence Tao has while driving to work becoming a whole dissertation or series of papers if he throws it to the right person to work on.
In contrast, Gauss disliked teaching and also tended to hoard those good ideas until he could go through all the details and publish them in the way he wanted. Which is a little silly, as after a while he was already widely considered the best mathematician in the world and had no need to prove anything to anyone - why not share those half-finished good ideas like Fast Fourier Transforms and let others work on them! One of the best mathematicians who ever lived, but definitely not my favorite role model for how to work.
Well, in that time it was more or less how mathematics worked. It was a way of showing off, and often it would be a case of "Hey I've solved this problem, bet no-one else can". It was only later it became a lot more collaborative (and a bit more focused on publishing proofs).
You're correct that the culture of mathematics has changed a lot, and has become much more collaborative. The rise of the modern doctoral training system in Germany later in the 19th century is also relevant. So really Gauss's example points primarily to how much mathematics has changed. But at the same time, I think you could reasonably take Gauss to task even applying the standards of his own era - compare him with Euler, for example, who was much more open with publication and generous with his time and insights, frequently responding to letters from random people asking him mathematical questions, rather like Tao responding to random comments on his blog (which he does). I admire Euler more, and he was born 70 years before Gauss.
Of course, irascible brilliance and eccentricity has an honorable place in mathematics too - I don't want to exclude anyone. (Think Grigori Perelman and any number of other examples!)
There's also this notion of holding themselves to their own standards.
They, Newton included, would often feel that their work was not good enough, that it was not completed and perfected yet and therefore would be ammunition for conflict and ridicule.
Gauss did not publicize his work on complex numbers because he thought he would be attacked for it. To us that may seem weird, but there is no dearth of examples of people who were attacked for their mostly correct ideas.
Deadly or life-changing attacks notwithstanding, I can certainly sympathize. There's no drama in figuring things out, but the process of communicating it can be full of tediousness and drama that one may be tempted to do without.
> There is a saying about Gauss: when another mathematician came to show him a new result, Gauss would remark that he had already worked on it, open a drawer in his desk, and pull out a pile of papers on the same topic.
As if PhD students need more impostor syndrome to deal with. On a serious note, I wonder what conditions allow such minds to grow. I guess a big part is genetics, but I am curious whether the "epi" is relevant, and how much.
Gauss's notes and margins are riddled with proofs he didn't bother to publish - he was wild.
Not sure if true, but allegedly he insisted his son not go into maths, as he would simply end up in his father's shadow; Gauss deemed it utterly impossible for him to surpass his brilliance in maths :'D
> as he would simply end up in his father's shadow as he deemed it utterly Impossible to surpass his brilliance in maths
Definitely true but also bad parenting. Gauss was somewhat of a freak of nature when it came to math. He and Euler are two of the most unreasonably productive mathematicians of all time.
But granting that what he deemed was true, was this really bad parenting? It could have been to head off competition, or it could have been brutal realism to head off future depression.
Nepotism has existed since time immemorial, but for a mathematical genius, what was the nepotistic deliverable for the child? A sinecure placement at a university?
It’s unusual to tell others not to do something because you’re projecting they’re secretly doing it to compete with you, or that they’ll be depressed when they don’t do what you did.
Doubly so when the rationale is “I’m so fucking awesome”
Triply so when it’s something you’re passionate about, presumably inherently.
Quadruply so when it’s your child. It’s tough as a kid hearing your parents come up with elongated excuses for why you can’t dream and work towards a future.
When you let people find their own way, you might even learn something from it (e.g., a 70-year-old Gauss learns he didn’t need to tie his mental state to his work, because his son doesn’t suddenly become depressed from not matching dad’s output).
Re: the second half, that sounds about right, though I’m confused about the relevance (is the idea that the child would only do it to pursue nepotistic spoils, and an additional reason is that the spoils aren’t even good?)
I posit Gauss knew he was a GOAT and had ego. But I also posit he loved his children.
So, a nepotistic deliverable was beneficial for his family, and advising his son to seek excellence outside the shadow cast by Gauss himself wasn't stamping on dreams (in my view); it was seeking the happiest outcome.
Without overdoing it, the suicide rate for rich kids with famous parents isn't nothing. There are positive examples; Stella McCartney comes to mind. She isn't Wings.
For what it’s worth, his children were quite successful by all accounts. Two of the boys became successful businessmen after emigrating to the US and one of the boys became a director of the railway network in Hannover. Seems as though they weren’t harmed by their upbringing.
I only have 5 kids, and I am also not nearly as productive as Gauss, but to a certain degree it feels like responsibility tries to force me to be more effective.
A signal cannot be both time- and frequency-band-limited. Many years ago I was amazed when I read that this fact I learned as an undergraduate is equivalent to the Uncertainty Principle!
On a more mundane note: my wife and I always argue about whose method of loading the dishwasher is better: she goes slowly and meticulously while I do it fast. It occurred to me we were optimizing for the frequency and time domains, respectively, i.e., I was minimizing time spent while she was minimizing the number of washes :-)
Signals can be approximately frequency and time bandlimited, though, meaning the set of values such that the absolute value exceeds any epsilon is compact in both domains. A Gaussian function is one example.
For those who don't get this comment: the Heisenberg uncertainty principle applies to any two quantities that are connected in QM via a Fourier transform, such as position and momentum, or time and energy. It is really a mathematical theorem that there is a lower bound on the variance of a function times the variance of its Fourier transform.
That lower bound is the uncertainty principle, and that lower bound is hit by normal distributions.
Thank you for that reminder/clarification. I forget sometimes how much we think we have clear pictures of how things like that work, when really we're just listening to someone trying to explain what the math is doing and we're adding in detail.
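The variance bound mentioned above is easy to poke at numerically. A sketch (assuming a grid wide and fine enough that truncation error is negligible) that computes the time/frequency spread product for a sampled Gaussian and lands on the Heisenberg minimum of 1/2:

```python
import numpy as np

def spread_product(f, t):
    """sigma_t * sigma_omega for a signal f(t), via numerical second moments."""
    dt = t[1] - t[0]
    p = np.abs(f) ** 2
    p /= p.sum() * dt                                # |f|^2 as a density in t
    mu_t = (p * t).sum() * dt
    var_t = (p * (t - mu_t) ** 2).sum() * dt

    F = np.fft.fftshift(np.fft.fft(f))
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, dt))
    dw = w[1] - w[0]
    q = np.abs(F) ** 2
    q /= q.sum() * dw                                # |F|^2 as a density in omega
    mu_w = (q * w).sum() * dw
    var_w = (q * (w - mu_w) ** 2).sum() * dw
    return np.sqrt(var_t * var_w)

t = np.linspace(-20, 20, 4096)
gauss = np.exp(-t ** 2 / 2)
print(spread_product(gauss, t))  # ~0.5: the Gaussian saturates the bound
```

Feeding in any other shape (a boxcar, a two-sided exponential) gives a product strictly above 0.5, which is the theorem in action.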
Once you start looking at the world through the lens of the frequency domain, a lot of neat tricks become simple. I have some demo code that uses a Fourier transform on webcam video to read a heart rate off a person's face, basically looking for which frequency holds peak energy.
It's effectively the underpinning of all modern lossy compression algorithms. The DCT, which underlies codecs like JPEG, H.264, and MP3, is really just a modified FFT.
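As a toy illustration of why the DCT family compresses so well (a sketch of the energy-compaction idea, not any codec's actual pipeline - the 8-sample block and the keep-half "quantization" are made up for illustration):

```python
import numpy as np
from scipy.fft import dct, idct

# One 8-sample "row of pixels": smooth content with a little texture.
block = np.array([52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0])

coeffs = dct(block, norm='ortho')   # DCT-II; energy piles into low frequencies
coeffs[4:] = 0                      # crude "quantization": drop the top half
approx = idct(coeffs, norm='ortho')

# Half the coefficients are gone, yet the reconstruction stays close.
print(np.max(np.abs(approx - block)))
```

Real codecs quantize rather than zero coefficients, and (as the sibling comment notes) spend most of their cleverness on prediction before the transform ever runs.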
Inter/intra-prediction is more important than the DCT. H.264 and later use simpler degenerate forms of it because that's good enough and they can define it with bitwise accuracy.
> Once you start looking at the world through the lens of the frequency domain, a lot of neat tricks become simple.
Not the first time I've heard this on HN. I remember a user commenting once that it was one of the few perspective shifts in his life that completely turned things upside down professionally.
My dumb ass sat there for a good bit looking at the example in the first link thinking "How does a 30-60 Hz webcam have enough samples per cycle to know it's 77 BPM?". Then it finally clicked in my head beats per minute are indeed not to be conflated with beats per second... :).
It is, I've done it live on a laptop and via the front camera of a phone. I actually wrote this thing twice, once in Swift a few years back, and then again in Python more recently because I wanted to remember the details of how to do it. Since a few people seem surprised this is feasible maybe it's worth posting the code somewhere.
It is, but there's a lot of noise on top of it (in fact, the noise is kind of necessary to avoid it being 'flattened out' and disappearing). The fact that it covers a lot of pixels and is relatively low-bandwidth is what allows for this kind of magic trick.
The frequency resolution must be pretty bad though. You need 1 minute of samples for a resolution of 1/60 Hz. Hopefully the heartrate is staying constant during that minute.
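The trick upthread can be sketched end to end with synthetic data standing in for the mean skin-pixel brightness (the pulse amplitude, noise level, band limits, and 30 s window are all assumptions). It also makes the resolution point concrete: the bin spacing is fs/N, so a 30-second window resolves heart rate only to within 2 BPM.

```python
import numpy as np

fs = 30.0                          # webcam frame rate, Hz
t = np.arange(0, 30, 1 / fs)       # 30 seconds of frames
bpm_true = 77

# Stand-in for the mean green-channel brightness of the face region:
# a faint periodic pulse buried in sensor noise.
rng = np.random.default_rng(42)
signal = 0.5 * np.sin(2 * np.pi * (bpm_true / 60) * t) \
    + rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Search only the plausible heart-rate band, 40-180 BPM.
band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
bpm_est = 60 * freqs[band][np.argmax(spectrum[band])]

# Bin spacing is fs/N = 1/30 Hz = 2 BPM, so expect ~76 or ~78, not exactly 77.
print(bpm_est)
```

Real implementations track the peak over a sliding window, which trades resolution for responsiveness to a changing heart rate.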
Funny. You might want to do that modulo capitalization, and perhaps some other common substitutions (LLM/LLMs/Large Language Model/Large Language Models, it's/ it is, what's/what is, I am/I'm), but they change the number of words, so better opt for the shortest alternative.
It's a play on the famous 1960 essay "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".
> The unreasonable effectiveness of The Unreasonable Effectiveness title?
I agree this is getting old after 75 years. Not least because it seems slightly manipulative to disguise a declarative claim ("The Fourier transform is unreasonably effective."), which could be false, as a noun phrase ("The unreasonable effectiveness of the Fourier transform"), which doesn't look like a thing that can be wrong.
Also, most of the articles with this kind of title (those posted on HN at least) are about computational/logical processes, which are, by definition, reasonable.
FTs are actually very reasonable, in the sense that they are easy to reason about, conceptually and in practice.
There's another title referenced in that link which is equally asinine: Eugene Wigner's original discussion, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".
Like, wtf?
Mathematics is the language of science; science would not compound, or be explainable, communicable, or modelable in code, without mathematics.
It's actually both plainly obvious that mathematics is extremely effective (which it is) and evidently reasonable why; ergo, it is not unreasonably effective.
Also, the slides are just FTs 101, the same material as in any basic course.
Hi, original presenter here :) The beginning is FTs 101. The end gets more application-centric around OFDM, and is why it feels 'unreasonably effective' to me. If it feels obvious, there are a couple of slides at the end that are food-for-thought jumping-off points. And if those are obvious to you too, let's collab on building an open source LTE modem!
If one wants to contribute to an open-source LTE modem, the best place to start may be OpenLTE: https://en.wikipedia.org/wiki/OpenLTE The core of any LTE modem is software, even if it is written for DSPs or other low-level software.
So, biology and medicine are not sciences? Or are only sciences to the extent they can be mathematically described?
The scientific method and models are much more than math. Equating reality with the math has led to myriad misconceptions, like vanishing cats.
And silly is good for a title -- descriptive and enticing -- to serve the purpose of eliciting the attention without which the content would be pointless.
They are still capable of being described with math; we are just not capable of doing the math. Or, probably better put, there are diminishing returns to formalising those systems, since our cognitive abilities are limited and we couldn't reason about such models. That leaves us with very approximate models, based on human-language descriptions, that can be reasoned about.
Which means, the language of some fields can’t be math.
However, I don’t think the original presenter was asserting those fields aren’t science; that’s an unreasonable interpretation. More so, ideally they would use math, as it is a language that would help prevent the silly argument “so, Y is not X? Or is Y only X provided Y is in the subset of X that excludes Z?”
(Even in Engineering, we hit this cognitive limit, and all sorts of silliness emerges about why things are or are not formalised)
I find it hard to parse the middle of your post. Are you saying Wigner's article, which is what all the "unreasonable effectiveness" titles reference, is silly?
If that is what you are saying, I suggest that you actually go back and read it. Or at least the Wiki article: https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...
By means of contrast: I think it's clear that mathematics is, for example, not unreasonably effective in psychology. It's necessary and useful and effective at doing what it does, but not surprisingly so. Yet in the natural sciences it often has been. This is not a statement about mathematics but about the world.
(As Wittgenstein put it some decades earlier: "So too the fact that it can be described by Newtonian mechanics asserts nothing about the world; but this asserts something, namely, that it can be described in that particular way in which as a matter of fact it is described. The fact, too, that it can be described more simply by one system of mechanics than by another says something about the world.")
Yeah, it's silly; I don't mean it in any mean-spirited way.
> Wigner's first example is the law of gravitation formulated by Isaac Newton. Originally used to model freely falling bodies on the surface of the Earth, this law was extended based on what Wigner terms "very scanty observations"[3] to describe the motion of the planets, where it "has proved accurate beyond all reasonable expectations."
So despite 'very scant observations' they yielded a very effective model. Okay fine. But deciding they should be 'unreasonably' so is just a pithy turn of phrase.
Something can be effective, and can be unreasonably so if it's somehow unexpected, but I basically disagree that FTs or mathematics in general are unreasonably so since we have so much prior information to expect that these techniques actually are effective, almost obviously so.
I am not discussing the FT case. But as regards Wigner's article, the core thing he points out is that, while we are used to the effectiveness of maths centuries after Newton, there are in fact no prior grounds to expect this effectiveness.
And no, this is unrelated to whether math is invented or discovered. If anything this is related to the extreme success of reductionism in physics.
As a general point of reflection: If an influential article by a smart person seems silly to you, it's good practice to entertain the question if you missed something, and to ask what others are seeing in it that you're missing.
It is likewise unreasonable to look down on any kind of world model from the past. Remember that you, in 2026, are benefitting from millions of aggregate improvements to a world model that you've absorbed passively through participation in society, and not through original thought. You have a different vantage point on many things as a result of the shoulders of giants you get to stand on.
I mean... this one's actually a pretty good paper, but we also had Linus Pauling pontificate on Vitamin C, so maybe we should cool it with the appeals to Nobel authority alone.
It's not easy to separate cause and effect from direct and strong correlations that we experience.
The job of a scientist is not to give up on a hunch with a flippant "correlation is not causation" but pursue such hunches to prove it this way or that (that is, prove it or disprove it). It's human to lean a certain way about what could be true.
> FTs are actually very reasonable, in the sense that they are easy to reason about conceptually and in practice.
ok but it's not the FTs that are unreasonable, it's the effectiveness
I think we all understand at this point that "unreasonable effectiveness" just means "surprisingly useful in ways we might not have immediately considered"
So he explains OFDM in a way that implicitly does amplitude-shift keying.
I guess if you want to use different modulations you treat the complex number corresponding to the subcarrier as an IQ point in quadrature. So you take the same symbols, but read them off in the frequency domain instead of the time domain.
And I guess this works out to be equivalent to conventionally modulating these symbols at properly offset frequencies (just by the superposition principle).
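That's exactly it: the transmitter's IFFT superposes one complex sinusoid per subcarrier, each scaled by that subcarrier's IQ point. A loopback sketch (no channel or noise; the 64 subcarriers, 16-sample cyclic prefix, and QPSK mapping are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 64                                   # subcarriers per OFDM symbol

# Random QPSK symbols: one complex IQ point per subcarrier.
bits = rng.integers(0, 2, size=(n_sub, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: one IFFT turns 64 parallel "slow" symbol streams into a
# time-domain waveform; a cyclic prefix guards against multipath.
tx = np.fft.ifft(symbols)
tx_cp = np.concatenate([tx[-16:], tx])       # 16-sample cyclic prefix

# Receiver: strip the prefix, FFT back to frequency, read off the IQ points.
rx = np.fft.fft(tx_cp[16:])
assert np.allclose(rx, symbols)
```

With a real channel, the cyclic prefix turns linear convolution into circular convolution, so equalization becomes a single complex multiply per subcarrier - another case of the right coordinate system making the operation diagonal.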
Anybody who does anything in the real world with Fourier transforms uses the fast Fourier transform operating on windowed data. This eliminates all of that infinite support and infinite resolution of frequencies.
To be more precise, when working with uniformly sampled data you use the Discrete-Time Fourier Transform (DTFT), not the Fourier transform! Nonetheless, you still end up with an approximate spectrum, which is the signal spectrum convolved with the window function's spectrum.
In my view the Fourier Transform is still useful in the real world. For example you can use it to analytically derive the spectrum of a given window.
But I think the parent is hinting at a wavelet basis.
If you are from the ML/data science world, the analogy that finally unlocked the FFT for me is dimensionality reduction using Principal Component Analysis. In both cases, you project data into a new, "better" coordinate system ("time to frequency domain"), filter out the basis vectors that have low variance ("ignore high-frequency waves"), and project the data back to the original space from those truncated dimensions ("IFFT: inverse transform to the time domain").
Of course some differences exist (e.g. basis vectors are fixed in FFT, unlike PCA).
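The analogy can be made concrete: project onto the Fourier basis, truncate, project back - the FFT analogue of keeping only the top PCA components. A sketch with made-up data (the two tones, noise level, and 20-bin cutoff are all arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# "Project, truncate, back-project": keep only the low-frequency bins.
coeffs = np.fft.rfft(noisy)
coeffs[20:] = 0                    # discard everything above 20 cycles/interval
denoised = np.fft.irfft(coeffs, n=t.size)

# Noise lives in all bins; the signal lives in a few, so truncation helps.
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

The difference noted above matters here: PCA learns its basis from the data's covariance, while the Fourier basis is fixed in advance - which is exactly why the FFT can be so fast.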
This talk was given at Crowd Supply’s 2025 Teardown conference, which, after going for the first time last year, I highly recommend to anyone interested in hardware development. I met a lot of super cool people and made my ticket price back 4x in free dev boards lol
Learning about the Fourier transform in my Signals and Systems class was mind-opening. The idea that you can represent any periodic function with sinusoidal functions would never have occurred to me - I would have said it wasn't possible.
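It's worth seeing once in code: summing odd harmonics with 1/k amplitudes really does converge to a square wave, away from the jumps where the Gibbs overshoot persists. A small sketch (grid size and number of terms chosen arbitrarily):

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * t))   # target: a +/-1 square wave

# Classic Fourier series of a square wave: odd harmonics, 1/k amplitudes.
approx = np.zeros_like(t)
for k in range(1, 50, 2):
    approx += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)

# Away from the discontinuities the 25-term partial sum is already close;
# near the jumps the ~9% Gibbs overshoot never goes away.
interior = (np.abs(t - 0.5) > 0.1) & (t > 0.1) & (t < 0.9)
assert np.max(np.abs(approx[interior] - square[interior])) < 0.1
```

Adding more terms narrows the ringing region around each jump but never shrinks the overshoot's height - the standard caveat to "any periodic function".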
Too bad -- the article doesn't mention Gauss. The Fourier transform is best presented to students in its original mathematical form, then coded in the FFT form. It also serves as a practical introduction to complex numbers.
As to the listed patent, it moves uncomfortably close to being a patent on mathematics, which isn't permitted. But I wouldn't be surprised to see many outstanding patents that have this hidden property.
Pretty sure in the USA you can patent mathematics if it is an integral part of the realisation of a physical system.* There is a book, "Math You Can't Use", that discusses this.
* not a legal definition, IANAL.
[1] https://www.cis.rit.edu/class/simg716/Gauss_History_FFT.pdf
> his children were quite successful by all accounts
There's a spread of farmers, railroad and telegraph directors, and high-level practical information-management skills among the children.
> I was minimizing time spent while she was minimizing number of washes
I'm probably just slow, but I'm not following. Do you mean that because you went fast, you had to run another cycle to clean everything properly?
If you haven't already, you should watch the Technology Connections series on dishwashers.
https://www.youtube.com/watch?v=jHP942Livy0
https://github.com/giladoved/webcam-heart-rate-monitor
https://medium.com/dev-genius/remote-heart-rate-detection-us...
The Reddit comments on that second one have examples of people doing it with low quality webcams: https://www.reddit.com/r/programming/comments/llnv93/remote_...
It's honestly amazing that this is doable.
Non-paywalled version of the second link https://archive.is/NeBzJ
https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampli...
https://news.mit.edu/2014/algorithm-recovers-speech-from-vib...
That mathematics can model science so well reduces to the core question in the philosophy of mathematics: whether it is invented or discovered. https://royalinstitutephilosophy.org/article/mathematics-dis...
Metaphorical language compels them to microrebuttals.
Why "The \"Unreasonable Effectiveness\" Title Considered Harmful" Matters
The Unreasonable Effectiveness of "\"Why \\\"The \\\\\\\"Unreasonable Effectiveness\\\\\\\" Title Considered Harmful\\\" Matters\" Considered Harmful"
Ironically a very relevant and accurate title.
More here https://news.ycombinator.com/item?id=46553398