Hi Israel. Thanks for the comments.
Suppose one 'borrows' some constants (for example, Planck's constant h, or the gravitational constant G) from existing theories and uses them in a new theory such that:
(1) the predictions of the 'old' theories are confirmed by the new theory, yielding even better agreement with experimental results, and
(2) the new theory explains and predicts, with high accuracy, additional phenomena that fall outside the domain of applicability of the 'old' ones.
What's wrong with that? The idea is that the new theory 'absorbs' the old theories as special cases -- of more limited applicability and lower accuracy. I do not see the inheritance of physical constants from theory to theory as a problem, but as a welcome feature of scientific progress.
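A familiar instance of this absorption (standard textbook material, offered only as a reminder): general relativity inherits G from Newtonian gravitation, and its value is fixed by demanding that the weak-field, slow-motion limit reproduce Newton's theory:

\[ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \;\;\longrightarrow\;\; \nabla^2 \Phi = 4\pi G \rho \quad \text{(weak-field limit)} \]

Here the 'old' Poisson equation survives as a special case inside the 'new' field equations, with the same constant carried over.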
But perhaps you are addressing the problem of whether a theory is autonomously capable of justifying or determining the values of its own constants?
I am indeed fascinated by this problem, although it is somewhat outside the scope of this contest. In my opinion, the most ambitious form of ToE (if it exists) should be able to do without any physical constant: all of them should be derivable -- they should emerge from the rules of the game. It is nice to think that those values did not have to be chosen and fine-tuned by Someone before switching on the Universe... And I believe that theories fundamentally based on a discrete substratum, on computation, and on emergence -- the type I discuss in my essay -- have a much better chance of achieving this goal, almost by definition.
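Just as a toy illustration of what 'emerging from the rules of the game' can mean (a deliberately simple analogy of my own, not a claim about any actual ToE, and the function name and parameters below are invented for the example): even a purely discrete, rule-based process can have a definite constant fixed by its rules alone, with nothing tuned by hand. The probability that two random integers are coprime is 6/pi^2, so pi is already 'written into' the arithmetic rules and can be recovered by simulation:

```python
import random
from math import gcd, sqrt

def emergent_constant(samples=1_000_000, seed=0):
    """Toy example: a constant (pi) 'emerging' from a purely
    discrete rule, with no tunable physical input.

    Rule: pick two independent random integers and check whether
    they are coprime. Number theory fixes that probability at
    6/pi^2, so pi is determined by the rule itself, not chosen."""
    rng = random.Random(seed)
    coprime = 0
    for _ in range(samples):
        a = rng.randint(1, 10**6)
        b = rng.randint(1, 10**6)
        if gcd(a, b) == 1:
            coprime += 1
    p = coprime / samples          # estimates 6/pi^2
    return sqrt(6 / p)             # recover pi from the estimate

print(emergent_constant())  # ~3.1416, fixed by the rules of the game
```

Of course, deriving actual physical constants from a computational substratum is an incomparably harder problem; the sketch only shows the spirit of the idea: the value is a theorem of the rules, not an input.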