Can we ever know anything?
How I eventually convinced myself that knowledge wasn't impossible
July 2022
I should note, as a preamble, or rather, a pre-ramble,
that this is the first time I have written any kind of extended prose since
probably the beginning of the pandemic, so please forgive the unwieldiness of
the ensuing words.
-
Exam week. Second year. You failed last year, and your
director of studies has made it abundantly clear to you that you will be
expected to study at a different university if you fail again this time.
Thoughts, as such, are all echoing and sympathetically amplifying in the isolated
cavity of your sleep-deprived, overstretched brain, since the low-stimulus
environment of your own desk offers no alternative for your attention. Suddenly
you, in a glimpse of clarity appearing as if by magic or hidden variable,
manage to simultaneously gather your entire mental presence onto a quote from
the 2008 Iron Man film starring Robert Downey Jr.: “Then this is a very
important week for you, isn’t it?”, as if the complex colloid which forms your
mind had somehow, by sheer astronomical chance, managed to unmix itself. It is
an important week, although not just one, you realize. You frantically pack
your things, forgetting half of what you need, and lunge for the door, lest the
brief spark of inspiration be extinguished.
This approximate sequence of events led me, 90 minutes
later, to find myself working some vector calculus problems opposite a good
friend of mine, Wren, who studies biological natural sciences (bionatsci), and
is, like me, in his second year at Homerton College. An inherited wisdom from
many academics is that one can best gauge one’s understanding of a topic when
attempting to explain it to someone in a different field. Wren does this a lot,
and I have come to learn a great range, perhaps too great a range, of
information about various bodily systems, including - and mostly limited to -
the mammaries. The discussions between (would-be) biologists and physicists
typically converge on arguments of scientific accuracy and repeatability, with
the general conclusion being “both of us make bullshit assumptions, physics can
get away with it more often, but hey, no need to point fingers at each other
when neither of us are as bad as the psychologists!”, which, I suppose, is not
entirely fair to the people responsible for a significant mitigation of human
suffering. Whilst we sat there, taking a small sanity break in between
attacking past papers, Wren mentioned a phrase which, much later, spurred this
whole essay. I had been, in return for his virtuous narrative on endotherms and
the other thing, telling him a little about what I understood of the evolution
of particle physics, and how progressively we had narrowed down the precise,
linearly-algebraic nature of states and how they relate to each other, and the
information that can be known in a quantum system. His response to this
comparatively half-assembled expository jumble of mine was the following:
“All models are wrong, but some are useful”
I forget exactly which example he used to illustrate
this point (the aphorism is usually credited to the statistician George Box),
but it made me notice not only that biology and physics were
significantly more alike than 17-year-old me would ever have believed, but that,
at their core, all fields of knowledge, with the possible
exception of pure mathematics, emerge from this core tenet: that all models are
wrong, but some are useful. I would, in principle, extend this to a slightly
different statement of thesis, which is that all models in all fields
asymptotically approach a minimum degree of wrongness as they are developed,
and that models can change this limiting value of wrongness as they are
iterated upon by the introduction of new axioms, but I don’t want to get
knee-deep into philosophical/mathematical arguments (yet), as I am currently
just three years smarter than the arrogant schoolboy who belittled the “lesser
subjects” for the same lack of rigour which is emblematic of undergraduate
physics.
We will start with what was briefly alluded to
earlier: classical and quantum physics, as I have come to learn them. We’re
talking all the way up to the formulation of Hamiltonian mechanics and the
Schrödinger and Heisenberg pictures of time-dependent quantum states,
respectively. In two years, the university brought us up to speed, after we had
left school almost as stupid as would still be permissible for entry into a
physics degree. I felt good, after a pretty exhausting first year, about my
learned ability to attack any problem within statistical
thermodynamics, electrodynamics, special relativity or introductory condensed
matter, until I really sat back and began to think about what I had been taught
to do. None of it would be possible if differentiation weren’t linear.
Methods of Green’s functions? Linear algebra. Maxwell’s equations? Linear.
Fourier series solutions? Only useful if your operator is linear. Measurements
in quantum mechanics? Define your inner product space and think of these things
called wave functions, which are elements of a Hilbert space (the most general
stage for concepts of linearity), where a measurement corresponds, per a
postulate of quantum mechanics, to a Hermitian operator acting on elements
thereof. The number of times we decomposed things into a sum of
eigenfunctions, which could be summed after applying the operator, by linearity,
is staggering. It is pretty much the only thing I know. What on earth would I
do if the world wasn’t made of nice straight lines all of a sudden? It was with
this inquiry that I realized I was standing at the edge of a cliff. On its face
was written, in graduates’ sweat and blood, “Non-linear Systems” (upside down
so that it could be read from above, of course), illuminated by a series of
small fires at the base, which were arranged in the shape of a
skull-and-crossbones and, for some reason, that emoji of the Easter Island
head. You know the one I mean.
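To spell out the property that everything above leans on (standard definitions, nothing specific to any one course): an operator $\hat{L}$ is linear if, for any functions $f, g$ and constants $a, b$,

$$ \hat{L}(af + bg) = a\hat{L}f + b\hat{L}g, $$

which differentiation satisfies, and which, together with $\frac{\mathrm{d}}{\mathrm{d}x} e^{ikx} = ik\, e^{ikx}$, is why decomposing everything into complex exponentials is such a profitable habit.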
Well, that isn’t entirely true. We had seen
anharmonicity. We had seen what happens when you try to solve a classical
system of three or more bodies. However, the approach we were taught is to just
move them a little bit so you can pretend all the potentials are quadratic,
and, by extension, your operator is linear. The reason for this is simple. We
just can’t do better with normal algebraic functions, as we seek analytic
solutions to sets of coupled differential equations describing positions of
particles which don’t behave as independent masses in an external potential.
Instead, when one particle moves, it moves its neighbour, since there is a force
between them. The relative distances between them are now different, so the
effect of one particle on another feeds back into itself again. The position of
one particle in a group depends non-trivially on itself; the critical number of
particles at which this happens is three, since only then does the feedback from
one particle spread to more than one other particle. Now, all of a sudden you have
to tally up multiple places where the energy can be, and you have to start
introducing considerations of thermodynamics, and as the run-on nature of this
sentence should suffice to portray, this becomes quite hairy very quickly. So,
instead, just move the particles by such a small amount that, in the context of
the changing potential, they are functionally not moving, but still experience
a small oscillation about an equilibrium point. Find the modes at which this
can happen, which amounts to solving for the eigenvalues of a mass/tension matrix.
And all the inherent problems associated with not being able to unambiguously
decompose solutions into a basis vanish with a couple of reasonable
assumptions.
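As a minimal sketch of what "find the modes" cashes out to (the two-mass setup, masses and spring constants here are all invented purely for illustration):

import numpy as np

# Toy setup: two equal masses coupled to each other and to two walls by
# three identical springs, linearised about equilibrium. Units arbitrary.
m, k = 1.0, 1.0

# Linearised equations of motion: m * x'' = -K @ x, with stiffness matrix K.
# The off-diagonal -k entries are the coupling between the two masses.
K = np.array([[2.0 * k, -1.0 * k],
              [-1.0 * k, 2.0 * k]])

# A normal mode is the ansatz x(t) = v * cos(omega * t), which turns the
# coupled ODEs into the eigenvalue problem (K / m) v = omega^2 v.
omega_sq, modes = np.linalg.eigh(K / m)

print("mode frequencies:", np.sqrt(omega_sq))  # [1.0, 1.732...] = [1, sqrt(3)]
print("mode shapes (columns):")
print(modes)                                   # in-phase and out-of-phase motion

All of the interaction has been hidden in the off-diagonal entries of K; linearity is what allows the eigendecomposition to finish the job.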
Even in the field of complex numbers, the fundamental
idea is to find a number system where, if you express varying quantities in
terms of it, you can very easily pick out the decomposition of said
quantities in terms of things which are eigen under differentiation and
double-differentiation. Sturm-Liouville theory does exactly that when attempting
to solve a second-order ODE; you assume such a decomposition exists, and then check
what happens to the solution when you start truncating the series at its
small-eigenvalue terms. The Green’s function, which is the weighted sum of the
outer products of all the SL eigenfunctions with themselves (assuming
completeness), is essentially the decomposition of the solution to the equation
into pieces which are eigen with respect to position once the operator is
applied. This approach is very general, owing to some very useful theorems that
say that there always exists an integrating factor which can change a general
second-order linear operator into an SL operator with the corresponding
boundary conditions. None of this
really makes much sense without equations to illustrate the points, but to
someone who has seen these fields of mathematics before, the point I am trying
to make should be clear; linearity is general and useful for studying the
physical world, but does not actually appear anywhere in nature, which is
unfortunate, since it is the only thing we know how to do perfectly. It is
something we made up; a crutch of sorts, but one that has laser harpoons and
can shape-shift into whatever will best solve the problem to first order. The
physicist’s job, in this instance, is that of solving a problem by proxy; we
can’t do the real thing, but we can do something which is close enough to the
real thing that it may as well not matter that they are different (see
perturbation theory).
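Since I have just complained about the absence of equations, here is the one that matters, in standard notation (weight function suppressed for brevity): for an SL operator $\mathcal{L}$ with orthonormal eigenfunctions $y_n$ and eigenvalues $\lambda_n$,

$$ G(x, \xi) = \sum_n \frac{y_n(x)\, y_n(\xi)}{\lambda_n}, \qquad \mathcal{L}y = f \;\Rightarrow\; y(x) = \int G(x, \xi)\, f(\xi)\, \mathrm{d}\xi, $$

so that truncating the sum is precisely the act of keeping only the modes which contribute appreciably to the solution.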
Another particularly impressive example of extremely
insightful ends from seemingly rudimentary first principles (such as linearity)
is the entire field of group theory, but I feel my four weeks of having learnt it
to shallow depth hardly qualify me to speak on it here; all I know is that it
really blew my mind when first encountered, and also gave me a bit of a degree
crisis as I realized I also had a lot of fun playing the mathematician. Prof. John
Ellis, of notable Cavendish fame, introduced us also to a 1960 paper titled “The
Unreasonable Effectiveness of Mathematics in the Natural Sciences” by Eugene
Wigner, in which this almost magical process of reaching a lot from very simple
first principles is brought into question. He writes that the inclusion of
mathematical rules in physical phenomena is not due to their inherent
simplicity, but rather due to a selection bias of the lowly physicist in
drawing connections between that which is observed, and the frames of logic
developed by the mathematician long before. As a side note, he also names, as a
defining quality of all physicists, that they “believe that a union of the two
theories [of general relativity and quantum field theory] is inherently possible
and that we shall find it”, which I thought an uncharacteristically absolute statement.
Wren conveyed to me the point that in biology, there
is an organism for anything. To elaborate, there is some special animal, plant
or other weird thing, whose characteristic behaviour or physiology can be used
to test any theory about any system. The systems present in many organisms are,
in a small handful of them, greatly exaggerated in their form, size,
complexity, or are otherwise unusual in a useful way, and what is learned from
these ‘freaks’ can often be applied to many other biological creatures. Such
organisms are referred to as model organisms. For example, measurements of
nerve action potentials were impossible due to the diminutive size of the axon
and lumen, until the late 1930s, when Hodgkin and Huxley used the squid giant
axon as a model, which was large enough for such measurements and allowed for analysis
of signal speed, action potentials, effective currents, etc. with respect to
the nerve length, diameter and other factors. The only enabling condition for
this research was that, for some reason, a submarine species had developed
weird, big nerves. Wren will often tell stories of how strange squid anatomy
really is, and it must be quite strange since the words pertaining to it exceed
a scary number of letters in length. The history of biology is full of many
more such examples, and it seems a central axiom that researchers will never
run out of model organisms.
This, despite my juvenile attempts at invoking the
‘lack of rigour’ (which has, by now, completely vanished from my perception of
the biological sciences), pointed out to me just how similar
our sciences really are. At first, it seemed so strange to me that new
knowledge had to wait on the discovery of some unusual, rare, in subjective
terms ‘grotesque’ living thing, rather than just looking at what we had already
discovered. But then I was reminded, by my lecture notes, of a certain
experiment at the beginning of the 20th century which changed the
face of physics by demonstrating, once and for all, that angular momentum is
quantised. Conceived of by Otto Stern, and first conducted by Walther Gerlach a
year later in 1922, the Stern-Gerlach experiment showed that the distribution of
the magnetic momenta of electrons was firstly discretised, and secondly obeyed
a statistical law implying that the remaining components were unknowable once
one component was known arbitrarily precisely. This was strange.
Grotesque, even. Something this relevant passed into the meme-pool because some
scientists threw tiny particles through a weird, irregular magnet in a very
specific way, and they happened to land in two clearly disjoint blotches on
some film. I began to accept that our brains formed a very homogeneous
science-machine across different disciplines, since we physicists have been
looking for similarly miraculous ‘freak’-particles for just as long as the
biologists have been looking for their freak animals. It is no more rigorous a
process.
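In the language I would use now (standard quantum mechanics, not anything available to Stern and Gerlach at the time), the strangeness boils down to the spin components failing to commute,

$$ [\hat{S}_x, \hat{S}_y] = i\hbar \hat{S}_z, $$

so that no state is a simultaneous eigenstate of two different components, and pinning one of them down arbitrarily precisely leaves the others maximally uncertain.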
This similarity is best exemplified by the excursions
of the great modern physicists into the adjacent fields; for instance, the
likes of Erwin Schrödinger, whose lecture
course ‘What is Life?’, delivered during his tenure in Dublin, served to
popularise the field of molecular biology. He writes in many of his other works of
dilettantism, and how, counter to views held by his contemporaries in
philosophy, he sees it not as the highest sin of the intellectual, but rather
as a necessary evil, conjured up in response to fields of study narrowing and
deepening. I have read some different accounts of this trend in science, and
personally hold the opinion that those denouncing this ‘dilettantism’, a term I
find somewhat derogatory, have greatly underestimated the complexity of the
natural world. Regardless, Schrödinger’s analyses of why macroscopic life forms
must be so large compared to the atoms and molecules of the surroundings (in
thermodynamic terms), and his accounts of the changes in biology at the time of
Franklin, Watson and Crick’s discoveries, serve to show that the mind of a
theoretical physicist can apply itself to unfamiliar fields with some success,
and stand as a great inspiration and source of optimism for the future of the
increasingly abstract mutations of the science he would have known and loved.
From the prior discussions and examples, I might
propose a very loose, qualitative model for the behaviour of
correctness-convergence of theories in different systems of knowledge over
time. I think in general, ignoring some outlying blips (see N-rays), we can see
that accepted knowledge has a negative exponential convergence towards whatever
‘is’. Furthermore, what varies from field to field is the time constant with
which this happens, a type of purely imaginary ‘theory-frequency’, if you will.
This is related to what some humanities-sceptics may refer to as the degree of
‘rigour’, but I prefer to associate it with the degree of simplicity of the
fundamental objects of study in the field in question. For example, the
fundamental objects of Mathematics, such as sets, functions and graphs, are
extremely simple ideas which have some very complex emergent behaviours. Ideal
pendula, orbiting masses, quantum particles etc., such as are found in the
physical sciences, are likewise simple, but must adhere to non-trivial
observations of reality. It is very easy to develop the idea of one quantum
harmonic oscillator in isolation from its surroundings, at least when compared
to Avogadro’s number of them thermalizing with each other to form the matter
that surrounds us, and especially when compared to what one human brain can do,
let alone 8 billion of them. So, while progress is made everywhere
simultaneously, the rate of progress, expressed as some decay constant with
which knowledge and ideas and theories converge on what is ‘true’, depends on
the complexity of what is studied. This rate constant in physics may be the
rate at which weird unexplained phenomena are found, and in biology, the rate
of new species discovery, as this is proportional to the rate at which those
special key-species for specific systems are found. The existence of such a
rate constant implies that there is limiting behaviour, which in turn implies
the existence of the truth, though ‘implies’ here is not meant in the
mathematical sense of the word. I suppose that this kind of decay is piecewise
continuous, with the jumps in the general characters of the accepted theories
being what epistemologists call a paradigm-shift, but surely the decay
constants remain the same across the different jumps (not that they could ever
be measured, so this proposition is not really instructive of anything).
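Written out, purely as a sketch in notation I am inventing on the spot, the picture is something like

$$ W(t) = W_\infty + (W_0 - W_\infty)\, e^{-t/\tau}, $$

where $W(t)$ is the wrongness of the accepted theory at time $t$, $\tau$ is the field-dependent time constant, and a paradigm shift swaps in a new, lower limiting wrongness $W_\infty$ (and a fresh $W_0$) while, per the conjecture above, leaving $\tau$ untouched.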
We evanescently converge on the truth, and we begin,
as we become more involved, to stretch the resources of our mental faculties
thin over the infinite surface of what there is to know. It is important for us
scientists to remain, at our core, philosophers, because the only defensible
option in the face of our nearly infinitely complicated world, is modesty and
submission to our own ignorance. Those who overestimate their own ability to
make progress will falter in most cases; a scientist is, by default,
unconfident and unsure. However, if we allow ourselves to lose faith in the
existence of the truth, then we also permit the degradation of our efforts to
pursue it. Why even do all this thinking and working and studying, if there is
no end? In conclusion, among other things, I believe that despite our enjoyment
of the works of Philip K. Dick, we should all be cautious not to read too many
of them.
Schwiening CJ. A brief historical perspective: Hodgkin and Huxley. J Physiol. 2012 Jun 1;590(11):2571-5. doi: 10.1113/jphysiol.2012.230458. PMID: 22787170; PMCID: PMC3424716.