I’ve spent a lot of time over the last couple of years thinking about what it is we actually do when we build mathematical models. What makes a good model? In an epistemic sense, are there any rules we must heed? Are there any guidelines to follow or patterns to strive toward?

At the end of the day, all of science™ is just a big model that’s come to be commonly accepted, one of many ways to conceptualize our experience of the world. Importantly, we judge the quality of our theories mainly by

  1. the predictions they’re able to make
  2. their explanatory power

After all, I may posit that it’s invisible massless goombas, not gravity, that induce the motion of the celestial bodies. I might even concoct one fantastic lie after another to defend my theory, each being perfectly consistent with our experience of reality. But no one would take me seriously unless my goombas predicted something falsifiable and true or somehow provided an explanation more satisfying than our current understanding.

When I started, I was incredibly starry-eyed. I looked at the magic of my favourite work (the theory of the heart or classical neuron models, for instance), the incredibly clever shifts of perspective that made it all click, and was eager to recreate it.

At the same time, there was a certain flavour of models that left a bad taste in my mouth. I wrote (unnecessarily pompously) in an early entry:

In my experience, there are two varieties of math models. The first draws from the system it hopes to represent, understands the relationships between its intrinsic characteristics, and summarizes these relationships in the form of equations. In contrast, the second takes mathematical quirks and broad non-specific relationships, exploits their similarities to the system's behaviour, and creates a model that may very well reproduce the system's behaviour but fails to account for its underlying nature.

I don't deny that models of the second kind have their uses. However, the primary use of any model is to draw conclusions inherent to the system, and any conclusion drawn from a model that does not rely on the inherent characteristics of its system, or that makes too many errant assumptions, is hardly trustworthy.

This, I think, is only modelling for the sake of modelling, not for understanding. We should be wary of models that use complex mathematical techniques to cloud their underlying ill-formed epistemic justification.
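To make the contrast concrete, here’s a minimal sketch in Python. Everything in it is a made-up illustration, not an example from any particular paper: hypothetical cooling data, fit once with a mechanistic model (Newton’s law of cooling, whose parameters each mean something physical) and once with a polynomial that merely traces the same squiggle.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: noisy cooling measurements, generated here for illustration.
t = np.linspace(0, 10, 25)
rng = np.random.default_rng(0)
temp = 80 * np.exp(-0.35 * t) + 20 + rng.normal(0, 1, t.size)

# Variety 1: a mechanistic model (Newton's law of cooling).
# Each parameter means something: T_env is the ambient temperature,
# k is a cooling rate set by the physics of the system.
def cooling(t, T0, T_env, k):
    return T_env + (T0 - T_env) * np.exp(-k * t)

(T0, T_env, k), _ = curve_fit(cooling, t, temp, p0=(100, 20, 0.5))

# Variety 2: a curve-matching model. A degree-9 polynomial reproduces the
# same squiggle, but its coefficients say nothing about the system, and it
# extrapolates nonsensically beyond the range of the data.
coeffs = np.polyfit(t, temp, deg=9)

print(f"mechanistic: ambient ≈ {T_env:.1f}, cooling rate ≈ {k:.2f}")
print(f"polynomial at t=20: {np.polyval(coeffs, 20):.0f}  (physically absurd)")
```

Both fits look equally good over the observed range; only the first one tells you anything about the system, or can be trusted a step outside the data.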

Of course, we shouldn’t be striving just to make one squiggly line look like another. Over time, however, it became clear that more than subservience to the subject matter was important. In a later entry:

There lies a much more fundamental issue at the heart of modelling: a fine balance between truth to the physical world and the understandability/utility of results. On one hand, there is a system of a hundred complex equations describing simple dynamics by mapping each and every physical quantity; on the other hand, an overly reductive model of a diverse physical phenomenon. The former provides results so abstruse they lose their utility (a numerical soup), whereas the latter bears no resemblance to the system it proudly claims to describe.

I realized the reason I loved some of those early papers was exactly their balance between physical accuracy and understandability. They seemed to assume just the right things to make everything fit. More than that, they broke the mould and worked outside of cookie-cutter conventions. This type of work is still out there and is something to strive towards; all it takes is some heart and a touch of ingenuity.

A famous quote rings true:

“Everything should be made as simple as possible, but no simpler.” - Big Al