Hi Sean! Yes, of course I remember you, and thanks for your very thoughtful comments.
Your GR points are all perfectly valid, and you're right that the LSU-style "thick sandwich problem" is unsolved (specifically, the question of whether a unique classical solution exists for a given closed-hypersurface boundary metric, and how to find it). But the "problematic questions" that I had in mind were issues involving the intersection of GR with QM... namely, the question of whether one can even impose all of the needed boundary data without violating the HUP. (If not even the Big Bang can beat the HUP, having a formal solution of NSU-style GR for a HUP-violating initial boundary seems sort of useless to me, even leaving aside whether "lapse functions" are natural to include in an initial boundary to begin with.)
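Just to pin down the contrast I'm drawing (schematically, and glossing over the constraint equations and gauge subtleties): the standard initial-value formulation of GR wants a 3-metric together with its extrinsic curvature (which carries the conjugate momentum) on a single slice,

\[ (h_{ij},\, K_{ij}) \quad \text{on } \Sigma, \]

and since $h_{ij}$ and $K_{ij}$ form a conjugate pair, demanding both of them sharply is exactly the sort of thing the HUP frowns on. The thick-sandwich problem instead asks only for the 3-metrics on two slices,

\[ h_{ij}\big|_{\Sigma_1} \quad \text{and} \quad h_{ij}\big|_{\Sigma_2}, \]

with the "momenta" (and the lapse and shift) left to be determined by the solution itself, which is part of why I find that style of boundary data so much more promising.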
Even though the thick-sandwich problem is clearly in the LSU camp, the way it's normally phrased makes it clear that it was developed in an NSU mindset. After all, who says there has to be only one solution? (Especially given what we know about QM.) I'd be far more interested in a result that showed *many* solutions for a given boundary, of which our actual universe would be merely one possibility. (And yet, if this result were discovered, I think many physicists would view it as a failure.) At the end of the day, though, it's not the job of LSU approaches to recover classical results, or even NSU-like results. Hopefully they will lead to *new* insights that actually look like our GR+QM universe.
As for the measurement problem, the clearest discussion I've written so far is my entry in the previous FQXi contest, but even after reading that you'll probably still have many of the same questions. A stronger argument will require a fully fleshed-out toy model, one that I'm still plugging away on, but for now I'll leave you with the following thoughts.

If I'm right that configuration spaces live in our heads, not in reality, then we need to delve deeper into the foundations of the path integral to find a realistic LSU story. The good news is that the action itself lives in spacetime. Sinha and Sorkin (1991) pointed out that by doubling the particle path integral (folding the paths back on themselves in time), you no longer need to square the integral to get out probabilities (I'll put a schematic version of the bookkeeping below) -- and at that point it almost looks like the configuration spaces used in stat mech (which I assume you'll agree would be perfectly acceptable for a realistic theory). But those particle paths still need negative probabilities to get interference. Fields, however, can interfere in spacetime, so extending these ideas to fields can arguably solve this problem (although it will require further alterations of the path integral, with some master restriction on field configurations to make it mathematically well-defined).

The last piece of the puzzle -- how to objectively define an external measurement in the first place -- is addressed in the previous contest entry. The key is that a measurement on a subsystem is not merely the future boundary constraint itself, but also the chain of correlations that links it to the cosmological boundary. Thus, in a quantum eraser experiment, the future chain is broken, and all observers eventually concur that no "measurement" was ever made in the first place. Note that this only works in an LSU; in a realistic NSU theory, one needs to know "right away" whether a given interaction is a measurement or not.
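Here's that schematic version of the Sinha-Sorkin "doubling" (just standard path-integral bookkeeping, nothing beyond what's already in their paper): for a particle going from $a$ at $t_i$ to $b$ at $t_f$,

\[ P(a \to b) \;=\; \left| \int \mathcal{D}x \; e^{\,i S[x]/\hbar} \right|^2 \;=\; \int \mathcal{D}x \int \mathcal{D}x' \; e^{\,i\,(S[x]-S[x'])/\hbar}, \]

where both $x$ and $x'$ run from $a$ to $b$. If you read each pair $(x, x')$ as a single path that runs forward to $b$ and then folds back on itself in time, the probability comes straight out of one (doubled) sum, with no squaring step afterward. The catch, as I said, is that the weights on these doubled paths aren't positive in general for particles -- that's the negative-probability problem that pushes me toward fields.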
For your final idea, about some "Oracle" choosing amongst possibilities, I'm entirely on board with that notion and am growing more confident about it all the time. But it only makes sense for the choice to happen *once*: some global choice that picks one reality out of all possible universes (given all the boundary data, and some master constraint on the total Lagrangian density). Maybe it's now clearer why I'd prefer lots of solutions to the thick-sandwich problem. From our perspective, individual quantum experiments might each seem to involve a different random choice, but really it's all one big choice that manifests itself as computable patterns across similar experiments (computable from the sizes of the different solution spaces). Getting the probabilities right will be the ultimate test of all this, but if you read my conclusion again with this in mind, you might see what I'm going for.
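Said a bit more explicitly (and this is just shorthand for the idea, not a worked-out rule): if a given experimental arrangement, together with all the relevant boundary data, admits several distinct classes of global solutions $S_1, S_2, \ldots$, one for each macroscopic outcome, then the hope is that the apparent probabilities track the relative sizes of those classes,

\[ P(\text{outcome } k) \;\approx\; \frac{\mu(S_k)}{\sum_j \mu(S_j)}, \]

for some natural measure $\mu$ on the solution space -- and finding a $\mu$ that actually reproduces the quantum probabilities is exactly where the hard part lies.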
I'm looking forward to reading your own essay... And I hope we cross paths again soon!
Cheers,
Ken