Dear Steve Dufourny,

Thank you very much for the nice words! Yes, I also think that a convincing, vital search for dark matter has to be performed in particle physics, and it is quite puzzling that all searches so far have not led to a significant, clear detection.

All the best for your research as well, to shed further light on the dark side of quantum gravity!

You are welcome, Dr Wagner,

I agree, we need results and proofs. It is not easy to prove, and technologically it is very difficult, since dark matter does not interact with our ordinary matter. Its detection will be an exciting moment for the scientific community; I await this detection.

Thanks also for your words.

Best regards,

8 days later

Interesting account of the relationship between what's 'really' going on and what can reliably be inferred about it in the context of gravitational lensing. The fact that the mass distribution of the lens is severely underdetermined reminds me of similar problems in biophysics/systems biology; there, one might be trying to infer a gene regulatory network whose behavior and parameters are severely underconstrained by experiment.

Your central insight (which I understood to be that you can only reliably infer the parts of the mass distribution for which you have enough data) seems like the right takeaway. At the end of the day, if you don't have the data to support your claims, there's no way to know whether you're right or wrong. Though it seems like common sense, it's a point that's easy to forget when working on tough scientific problems. I see people making strong claims based on bad data all the time.

By the way, do you think this situation will ever improve? Do we just have to wait for new telescopes/observations/sources of data to constrain these lens models? Or will even those not be enough to reliably understand galaxy-scale mass distributions, without some additional (possibly hard to verify) assumptions?

John

    Dear John,

    Thanks for the nice words! Yes, you correctly understood the message that I am trying to convey. My usual slogan is "Lensing of '69 -- use data not models". You are right that systems biology is definitely another good example of under-constrained problems, where a lot of results are highly model-based (judging from my experience working on quality control for a special peptide array assembly method).

    Although I am focussing on the data-driven way of doing science, I think that, at early stages of research in a field, a courageous, bold assumption/model is needed to start with. Without some concrete claim to test, it seems hard to establish a rough idea about a phenomenon and gather enough useful observational evidence for the data-driven approach to be set up. Maybe systems biology still needs some time to grow into this state. Luckily, in gravitational lensing, we are on the verge of being able to shift from the model-driven to the data-driven approach:

    For galaxy clusters as gravitational lenses, we expect a multitude of multiple images in the near future (about 1000 per cluster, as estimated for JWST observations), complemented by ongoing X-ray surveys that will deliver a lot of additional data to break degeneracies.

    For galaxies as gravitational lenses, the number of multiple images per galaxy is not expected to grow as much, but, on the other hand, the multiple images we already have give us a lot of information about such lenses. The lens morphology of a single galaxy is much simpler than that of an entire cluster of galaxies. For these lenses, increasing the resolution of telescopes to resolve small-scale features in the multiple images will allow us to infer properties on sub-galaxy scales, on top of the knowledge we already have on the scale of the entire galaxy lens, e.g. the masses enclosed by the giant-arc images.

    Best regards,

    Jenny

    4 days later

    Hi Jenny!

    This is a fantastic physical example of something that would seem unknowable (an object hidden behind a large object in space) but actually is knowable (thanks to gravitational lensing). I had never thought of this example before, so I enjoyed reading your essay a lot! Also, I love this point you make: "Beyond that, incorporating new evidence into the model to tighten the prediction or refute underlying assumptions is a computationally intensive endeavor." This is extremely true, and relates to the idea behind Kolmogorov complexity. Basically, it says that data created by a small computer program (or algorithm or physical law) is not complex, but data that could only be created by a very large program is very complex. Because we humans are computationally limited, we can only make so many energetically feasible observations, which forces us to be more algorithmically minded.
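    As a minimal sketch of this idea (Kolmogorov complexity itself is uncomputable, so compressed size serves only as a crude upper bound; the example data and the choice of zlib are purely my own illustration):

```python
import os
import zlib

def complexity_proxy(data: bytes) -> int:
    """Compressed size as a crude upper bound on Kolmogorov complexity.

    The true Kolmogorov complexity is uncomputable; a general-purpose
    compressor can only witness that a shorter description exists."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 5000        # generated by a tiny "program" (a two-byte rule)
random_ = os.urandom(10000)   # no short description is expected to exist

print(complexity_proxy(regular))  # small: a short program explains the data
print(complexity_proxy(random_))  # close to 10000: essentially incompressible
```

    The regular string is fully captured by a two-byte rule repeated 5000 times, whereas the random bytes admit no description much shorter than themselves.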

    Cheers!

    Alyssa

      Dear Alyssa,

      Thank you very much for the inspiring and motivating words to keep going in this research direction!

      All the best for your essay as well!

      Cheers!

      Jenny

      Dear Jenny Wagner!

      Thanks for the interesting information.

      We are happy to inform you that we really liked your essay "The Cosmological Cheshire Cat: Predictable and Unpredictable Dark Matter Properties".

      We also believe that cosmology and astrophysics ignore problems of the methodology of science; therefore, arbitrary constructions arise. Pavel Poluian published a monograph in Russia, "Death of dark matter: philosophical principles in physical knowledge".

      We wish you successful research!

      Truly yours,

      Pavel Poluian and Dmitry Lichargin,

      Siberian Federal University.

        Dear Professor Poluian and Professor Lichargin,

        Thank you very much for the encouraging words and the very positive ranking!

        All the best for your essay in this contest as well, and let's hope that we will live to see clear evidence for or against dark matter!

        Sincerely,

        Jenny

        Hi Jenny,

        Interesting essay. The basic concept of gravitational lensing is simple, but you nicely describe the complexity of modeling the source and lens structures to match observations. I completely agree with you that model assumptions should be driven by empirical data.

        In my essay, I argued that assumptions should be driven by the empirical data, and that we should eliminate assumptions that are consistent with empirical data but not logically implied by the data.

        In the case you describe, eliminating unnecessary assumptions that are not supported by empirical evidence, and therefore do not (currently) make testable predictions, has the highly practical result of greatly reducing the computational complexity of modeling empirical observations.

        Nicely done.

        Harrison

          Dear Harrison,

          Thanks a lot for the motivating words and the good synopsis of the relationship between data and model assumptions.

          I just saw that you also replied to my questions below your essay, so let's continue our discussion there.

          Best wishes,

          Jenny

          11 days later

          Hello Jenny,

          How does your essay address the question of the limits imposed by Gödel's and Turing's findings? You do address the practical computability of gravitational-lensing models, but am I missing the theoretical connection to Gödel and Turing?

          Please let me know,

          Best,

          Luis F Patino

            Dear Luis,

            Interesting question! I do not directly mention Gödel and Turing, but the theoretical uncomputabilities are tackled in Section 4: using model assumptions to obtain equations that are no longer under-constrained, we run into inconsistencies when comparing the lens reconstructions obtained with different models. Trying to resolve which lens reconstruction is closest to the real lens, we find that these model assumptions are often hard to support by observational evidence. Hence, I conclude that the best way to resolve the inconsistencies is to compute the lens properties only at positions where data is available. This implies that the lens contains regions in which we cannot compute its properties. The lens properties there are theoretically uncomputable, and all we can get are model-based predictions from models that we might never know whether they apply to these cases. So, the theoretical uncomputabilities boil down to a missing-data issue.
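            As a toy illustration of "compute only where data is available" (my own linear-algebra sketch, not the actual nonlinear lens reconstruction): in an under-constrained inversion, any component of the solution that lies in the null space of the data matrix is arbitrary, so only the data-constrained combinations are truly computable.

```python
import numpy as np

# Toy under-constrained inversion: 2 observations, 3 unknown "lens" parameters.
# Purely illustrative; real lens reconstructions are nonlinear and much larger.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # the data constrain only the first two parameters
d = np.array([2.0, 3.0])          # observed data

# The minimum-norm solution is well-defined where data exist ...
x = np.linalg.lstsq(A, d, rcond=None)[0]
print(x)  # [2. 3. 0.] -- the third component is a convention, not a measurement

# ... but any null-space vector can be added without changing the fit:
null_vec = np.array([0.0, 0.0, 1.0])
print(A @ (x + 5.0 * null_vec))  # still reproduces d exactly
```

            Reporting only the first two components, where the data actually constrain the solution, avoids presenting the convention-dependent third component as a result.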

            Best regards,

            Jenny
