The article comments on our not being able to prove we are not in a simulation, but doesn't really talk about looking for evidence that we might be.
There is a comment about drift in the fundamental constants - but really, so what?
But it never really asks the question,
"If we were simulating a universe, and machine resources were an issue, then what sort of computational tricks would we use, and how might their effects be observable?"
The first thing we did when we got digital computers was to adopt fixed word-length floats rather than arbitrary precision. This limit on the precision of the data also limits the volume of data, and the processing width, for a given computational volume. So why is everything in physics discretized?
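To make the trade-off concrete, here's a toy Python sketch (my own illustration, not anything from the article): a fixed-width float silently rounds away everything beyond its word length, while arbitrary precision is available only at an open-ended cost per value.

```python
import sys
from decimal import Decimal, getcontext

# A 64-bit IEEE-754 float carries roughly 15-16 significant decimal digits;
# anything finer than that is silently rounded away.
x = 0.1 + 0.2
print(x)                       # 0.30000000000000004 -- the "grain" shows through
print(sys.float_info.epsilon)  # ~2.22e-16, the smallest resolvable relative step

# Arbitrary precision is available, but the storage and compute cost per
# value is no longer fixed.
getcontext().prec = 50
print(Decimal(1) / Decimal(3))  # 0.333... to 50 places, at correspondingly higher cost
```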
In a very large problem space, synthetic boundary conditions allow us to reduce the necessary computational volume to that in "close" proximity to the area of interest. If the full problem domain has artefacts at a distance that are expected to have an influence on the model, then one may introduce a synthetic boundary model of the distant artefacts (generally heuristic) to keep the computational volume small while still allowing those external effects to be taken into account. Of course this means that the distant artefacts may not behave according to the detailed computational rules as seen from inside the computational volume. So how do we explain pulsars this week?
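As a rough sketch of the trick (mine, with made-up numbers, not the article's): a 1-D wave model on a small computational volume whose edge cells follow a heuristic absorbing rule rather than the interior physics, purely so that outgoing waves leave instead of reflecting back in.

```python
import numpy as np

# 1-D wave equation on a small "computational volume" with a simple
# first-order absorbing (Mur) boundary standing in for everything outside it.
nx, nt = 200, 400
c, dx, dt = 1.0, 1.0, 0.5          # Courant number C = c*dt/dx = 0.5
C = c * dt / dx

x = np.arange(nx)
u = np.exp(-0.01 * (x - nx / 2) ** 2)   # a smooth pulse in the middle, starting at rest
u_prev = u.copy()

for _ in range(nt):
    u_next = np.zeros(nx)
    # interior cells follow the "real" physics (discrete wave equation)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + C**2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    # edge cells follow a heuristic rule whose only job is to let waves
    # escape without reflecting -- they do NOT obey the interior physics
    u_next[0]  = u[1]  + (C - 1) / (C + 1) * (u_next[1]  - u[0])
    u_next[-1] = u[-2] + (C - 1) / (C + 1) * (u_next[-2] - u[-1])
    u_prev, u = u, u_next

print(np.abs(u).max())  # the pulse has left the box; only a small residue remains
```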
Then we would want to limit the scope of the calculation that we need to solve at each time step - just as we would do in any finite element or finite difference model that has a time dynamic. An adequate relationship between the distance increment and the time increment is necessary to get a stable model without synthetic damping, but it does give a reasonable model with vastly less computation than one where a full set of future values is included in each step's solution. It does this, however, at the cost of introducing an absolute limit to the speed at which information can propagate. So why do we have a speed of light?
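A sketch of why the stencil itself imposes the speed limit (again my own toy example, not anything from the article): with a nearest-neighbour explicit update, cell i at the next step depends only on cells i-1, i and i+1 at the current step, so no influence can travel more than one cell per time step no matter what the underlying equations would "like" to do.

```python
import numpy as np

# Explicit nearest-neighbour update: after n steps, a disturbance that
# started at cell 50 can have reached at most cells 50-n .. 50+n.
nx = 101
field = np.zeros(nx)
field[50] = 1.0                 # a single disturbance in the middle

def step(f):
    new = f.copy()
    new[1:-1] = f[1:-1] + 0.25 * (f[2:] - 2 * f[1:-1] + f[:-2])  # simple diffusion stencil
    return new

for n in range(1, 11):
    field = step(field)
    reached = np.nonzero(field)[0]
    print(n, reached.min(), reached.max())   # prints n, 50-n, 50+n: one cell per step
```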
It seems that to produce an interesting (non-deterministic) universe, we need some randomness, and it appears that in our universe this is expressed as randomness at the individual particle level. It seems to have worked OK here, so let's go for that in our own model. But that means that we really do have to be willing to calculate the whole shebang down to the particle level. Yet we believe that this randomness only impacts larger scale events under special circumstances, like in your brain. So now we would like to limit the scope of the calculation: perform all the small scale calculations when they could have a large scale effect, but not when they could not. But to predict, in advance, which small scale events can have a large scale impact is probably impractical. I like the computational concept of Actors. This is a functional-language idea where every function returns immediately, but returns a token (or Actor) representing its result rather than the result itself. That token can in turn be embedded in the internal state of the token returned by an outer level function, and so on, and at each stage the actual unravelling of the delayed calculation can be postponed unless the internal detail has a realistic probability of impacting the larger scale problem to which the token is being applied. And if the token becomes unbound while it has no realistic chance of having a significant impact, then the unnecessary calculation was never done. So why do we struggle with the idea of "collapsing the wave function" and defining what constitutes "observation"?
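Here's a toy version of that deferred-token idea (plain Python, not any particular actor or futures library): the fine-grained calculation sits inside a token and only runs if something downstream actually forces it; a token that is dropped unforced never costs anything.

```python
# A toy deferred-result token: the detailed calculation is only performed
# if something downstream actually forces the value.  A token that is
# discarded before anyone looks inside it never does the work at all.
class Deferred:
    def __init__(self, compute):
        self._compute = compute     # the postponed fine-grained calculation
        self._value = None
        self._forced = False

    def force(self):                # "observe" the token: now the detail must exist
        if not self._forced:
            self._value = self._compute()
            self._forced = True
        return self._value


def expensive_particle_detail():
    print("  ...running the particle-level calculation...")
    return 42


# Two tokens representing small-scale results embedded in a larger calculation.
a = Deferred(expensive_particle_detail)
b = Deferred(expensive_particle_detail)

# Only token 'a' turns out to matter for the large-scale outcome, so only
# its calculation ever runs:
print("large-scale result:", a.force() + 1)

# Token 'b' is dropped without ever being forced -- its calculation never happened.
del b
```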
OK, this last one is stretching it a bit, but... I think that some aspects of the relational database are a bit of fundamental math rather than a current fashion. If I'm going to model the universe down to the particle level, the first thing that I'm going to do is to put all of the particles, or at least their current state, into some sort of relational store. In order to handle them, I'm going to need a unique key, and in order to avoid the keys taking significant storage, and to avoid possible key-generation contention, I'm going to do the standard trick of using the object's own properties as its key. So it looks like maybe type, position and velocity. But this implies a synthetic restriction that that set of parameters must be unique. Now how did that exclusion principle go again?
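A small sketch of the point, using Python's sqlite3 purely for illustration (nothing here is a claim about how a real simulator would store anything): make the particle's own state the composite primary key, and the store itself refuses a second particle with an identical (type, position, velocity) tuple.

```python
import sqlite3

# The particle's own state is its key: a composite PRIMARY KEY on
# (type, position, velocity).  The store then cannot hold two particles
# in exactly the same state -- an "exclusion" that falls out of the
# storage scheme rather than out of any physics.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE particle (
        type     TEXT NOT NULL,
        position REAL NOT NULL,
        velocity REAL NOT NULL,
        PRIMARY KEY (type, position, velocity)
    )
""")

db.execute("INSERT INTO particle VALUES ('electron', 1.0, 0.5)")
try:
    db.execute("INSERT INTO particle VALUES ('electron', 1.0, 0.5)")  # same state again
except sqlite3.IntegrityError as err:
    print("excluded:", err)   # UNIQUE constraint failed
```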
We may not be able to see outside the model, but perhaps we can see footprints in it.