Paul,
sure, you are right: that is the general spirit of the approach, which is in itself already quite controversial.
But let me take this opportunity to point out that, within this general approach, several models are available that, in my opinion, should not be considered equivalent, and not only because of the mentioned issue of global synchronization. I am referring to what Ed Fredkin calls 'the tyranny of computational universality': many models of computation are universal, that is, they can simulate a universal Turing machine and perform any algorithm. So why bother choosing one in particular for the foundations of physics?
Well, when model B tries to simulate a computation of model A using its own mode of operation, it usually needs to perform additional 'spurious' steps, which have to be filtered away in order to retain just the original steps (not to mention the fact that the original input has to be encoded before being fed to the simulator). Fredkin suggests that there should be a one-to-one correspondence between the 'states and function' of the model and those observed in the physical universe: so the choice of a specific (universal) model is indeed relevant, because we would select one, or the one, whose features have a clear physical counterpart, and vice versa.
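Just to make the filtering idea concrete, here is a minimal Python sketch under entirely made-up assumptions (the two toy 'models' and the two-steps-per-step overhead are hypothetical, chosen only for illustration): model A increments a counter one event per step, while a simulating model B spends an extra bookkeeping step for each step of A, so B's trace must be filtered before it matches A's computation one-to-one.

```python
# Hypothetical illustration: model B simulates model A with spurious steps.

def run_model_a(n):
    """Model A: increment a counter n times, one event per step."""
    trace, counter = [], 0
    for _ in range(n):
        counter += 1
        trace.append(("A-step", counter))
    return trace

def run_model_b_simulating_a(n):
    """Model B: each step of A costs two steps of B
    (a bookkeeping step plus the 'real' one)."""
    trace, counter = [], 0
    for _ in range(n):
        trace.append(("B-bookkeeping", counter))   # spurious step
        counter += 1
        trace.append(("B-real", counter))          # corresponds to an A-step
    return trace

def filter_spurious(trace_b):
    """Keep only the events of B that map one-to-one onto events of A."""
    return [("A-step", value) for kind, value in trace_b if kind == "B-real"]

if __name__ == "__main__":
    a = run_model_a(3)
    b = run_model_b_simulating_a(3)
    assert filter_spurious(b) == a   # same computation, but only after filtering
    print(len(a), "steps in A vs", len(b), "steps in B's simulation of A")
```

The point of the toy example is only that B's raw trace is not in one-to-one correspondence with A's events; the correspondence appears only after discarding the simulation overhead.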
In my work I have adopted this nice, economical idea of Fredkin's, specializing it to a one-to-one correspondence between the events of physical spacetime and those of a causal set derived from a formal computation. And, again, the choice among the different causal sets produced by different models is far from irrelevant: for example, some causal sets end up being totally ordered, or admit nodes of unbounded degree, while others do not. These properties clearly have an impact on the emergent physics, as the sketch below tries to illustrate.
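As a rough illustration of the kind of structural difference I mean, here is a small Python sketch under assumptions of my own (the two example causal sets, a chain and a 'fan', are made up and not taken from any specific model of computation): a causal set is represented as a set of directed edges between events, and we check whether the causal order is total and what the maximum node degree is.

```python
# Hypothetical illustration: two toy causal sets with different structural properties.
from itertools import combinations

def ancestors(edges, node):
    """All events that causally precede 'node' (transitive closure of the edges)."""
    preds = {a for a, b in edges if b == node}
    frontier = set(preds)
    while frontier:
        new = {a for a, b in edges if b in frontier} - preds
        preds |= new
        frontier = new
    return preds

def is_totally_ordered(nodes, edges):
    """True if every pair of events is causally related (a total order)."""
    return all(y in ancestors(edges, x) or x in ancestors(edges, y)
               for x, y in combinations(nodes, 2))

def max_degree(nodes, edges):
    """Largest number of edges incident to a single event."""
    return max(sum(1 for e in edges if n in e) for n in nodes)

# A chain: 0 -> 1 -> 2 -> 3 (totally ordered, node degree at most 2).
chain_nodes, chain_edges = {0, 1, 2, 3}, {(0, 1), (1, 2), (2, 3)}

# A 'fan': event 0 directly causes all the others (only a partial order, high degree).
fan_nodes, fan_edges = {0, 1, 2, 3}, {(0, 1), (0, 2), (0, 3)}

print(is_totally_ordered(chain_nodes, chain_edges))  # True
print(is_totally_ordered(fan_nodes, fan_edges))      # False
print(max_degree(fan_nodes, fan_edges))              # 3, and it grows with the fan's width
```

Whether the causal sets emerging from a given model look more like the chain or more like the fan is exactly the kind of property that, in my view, makes the choice of model physically meaningful.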
Regards
Tommaso