Steven Andresen
Thanks for providing a summary by ChatGPT-4. It looks good overall, but clearly it does not have a mathematical mind. It does not really understand the credit system I proposed in the essay, in particular anything related to the mathematical details and their merits, even though only very elementary math is involved. That is one of the shortcomings of the new AI system.

    Wanpeng Tan Yes, ChatGPT-4 has its limits. I think, though, that it provides an indication of what is likely to come. I don't think those shortcomings will remain for much longer; whether we have to wait months or a few years, I couldn't say.

    But even in its current form it is still useful for science review. It can already identify logic and provide an assessment of documents on that basis. My essay puts ChatGPT-4 through its paces in a novel conversation, testing its responses. The result impressed me.

    6 days later

    quote
    Using an analogy with the capitalist economy, we examine the issues within modern basic science research as innovation drives both evolutionary cycles of the economy and research. In particular, we delve into the topics of peer review, academic monopolies and startups, the tenure system, and academic freedom in detail. To improve science research with a mature paradigm, a comprehensive solution is proposed, which involves implementing a credit system within a robust community structure for all scientists. Members can earn credit by contributing to the community through commenting, reviewing, and rating academic activities of submitted manuscripts, grant applications, and up to five achievements from each member. As members accumulate experience and credit, they can progress in their roles within the community, from commenter, reviewer, moderator, up to board member (serving in governing committees). High-achieving individuals are evaluated by the community for the quality, rather than the quantity, of their academic accomplishments. High-risk, high-reward projects from academic startups will be properly funded, and a healthy feedback and ecosystem will make the scientific community prosper in future innovative cycles in a self-sustaining way
    end of quote
    What strikes me is that this is what the Ivy League schools already do in their science departments. The problem is that in smaller schools, in more isolated environments, the judging community lacks the gravitas and the ranking to do this. It is a clever idea, but it will naturally favor the Harvards, Stanfords, MITs, and Caltechs, with many other schools cut out. I.e., this could actually increase the centralization of output of presumed-quality papers and have the reverse of the impact the author desires.

      Andrew Beckwith

      You quoted the whole abstract, so I am not exactly sure which part your comments refer to. But I assume you meant the part on evaluating up to five achievements from each individual. Then I respectfully disagree, or it seems that you did not quite understand the measures the essay proposes. Indeed, the problem for smaller schools is that they don't have ways to truly evaluate academic achievements, so in practice they often favor big brand names over actual merit. What is proposed in the essay is meant exactly to solve this issue: it is the entire community, rather than any individual institution, that performs such evaluation. Here the community means the whole field of participating scientists across the nation, maybe even the whole world, definitely not some small isolated community. That is, it should consist of all relevant scientists across all regional barriers.

      In addition, such an evaluation system could also prevent undue interference from other factors such as personal relations, elite-circle membership, politics, and many other unspoken rules.

      I did understand what you were driving at from the beginning, but you actually made my point:
      A. Smaller schools lack the BENCH of reference points to implement your proposal.
      B. Large schools, and/or very famous ones, would have a crushing advantage.
      C. The only way out of what I see as a difficulty would be to have a NATION-wide endeavor as the evaluative BENCH.

      If the entire nation-state were not involved with this, it would merely be a case of the rich getting richer and the poor getting poorer.

      The only way to preserve the good points would be to have a NATION-wide evaluative bench. You highlighted that in your reply, so you inadvertently made my point.

        Andrew Beckwith
        Thanks for your interest. Now I see where the confusion came from. Nowhere does the essay suggest that the proposed credit/role system be implemented at the INSTITUTIONAL level. Rather, it must be based on the entire community of all scientists in a given field (see, for example, the 2nd paragraph of Sect. 4 on page 7). In fact, such structures are readily available in some fields. For example, arXiv.org has been used by almost all physicists in the world and would be an ideal INTERNATIONAL base for implementing the proposed credit/role system. The only obstacle is the bigotry of its top administration.

        This community-based system is supposed to (largely) replace the institution-based tenure system, as part of the proposed solution. As a matter of fact, all schools, large and small, will ultimately benefit, yet none of them would administer the system, which should be implemented FIELD-wise rather than institution-wise.

        The life sciences, as another example, may be ready for such a system. They have recently established an international platform, reviewcommons.org, supported by both the European (EMBO) and American (ASAPbio) communities, as well as many preprint and publication services in their field. They may beat us physicists to the punch and be the first to implement a similar system.

        There is no need to build a national or international community structure from scratch first. For any basic science field, a widely used preprint platform would be a perfect starting point. We don't need to implement everything all at once, but the basic credit/role mechanism as proposed should be in place first. We could start with the review of preprints/publications, then add the review of grant proposals, and finally the evaluation of individual achievements (mostly for synthetic ones, as single-paper achievements are automatically evaluated in the first step).

        More details can be seen in the full version of this essay submitted as an anonymous preprint to OpenReview.net: https://openreview.net/forum?id=E144GC5Vgw6

        If it were done on the national level, then your proposal would have a fighting chance.

        Vladimir Rogozhin
        Thanks a lot for your support. I'd like to take the opportunity to elaborate on it a bit more, hopefully encouraging the community to give it a try.

        The proposed community-based credit/role system for peer review could be implemented either on large preprint servers or on dedicated review platforms. Most existing services and platforms, including the most recent ones, either lack incentives or do not provide the right incentives - the monetary/honorary incentive some of them have used does not really work for reviewers. The best incentive, in my opinion, is to increase their role in the community in exchange for their quality review work. This will motivate most scientists to review each other's work more.
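
        To make that concrete, here is a minimal Python sketch of the core credit/role mechanism. The role ladder (commenter, reviewer, moderator, board member) follows the essay, but the credit amounts, quality weighting, and promotion thresholds are purely illustrative placeholders of my own, not values prescribed in the essay.

        ROLE_THRESHOLDS = [      # (minimum accumulated credit, role); placeholder values only
            (0, "commenter"),
            (50, "reviewer"),
            (200, "moderator"),
            (500, "board member"),
        ]

        class Member:
            """A community member who earns credit for quality contributions."""
            def __init__(self, name):
                self.name = name
                self.credit = 0.0

            def earn(self, base_credit, quality_rating):
                # Credit for a comment/review is weighted by how the community
                # rates its quality (quality_rating on a 0-1 scale).
                self.credit += base_credit * quality_rating

            @property
            def role(self):
                # The member's current role is determined by accumulated credit.
                current = "commenter"
                for threshold, role in ROLE_THRESHOLDS:
                    if self.credit >= threshold:
                        current = role
                return current

        # Example: eight reviews rated 0.9 earn 72 credits and a promotion to reviewer.
        alice = Member("Alice")
        for _ in range(8):
            alice.earn(base_credit=10, quality_rating=0.9)
        print(alice.credit, alice.role)   # 72.0 reviewer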

        In addition to the right incentives, such a system could actually make the platform financially self-sustaining. For example, it could receive donations or fees by supporting overlay journals built on peer-reviewed preprints; by assisting funding agencies, especially private foundations, in reviewing proposals; and by providing academic institutions with more reliable merit evaluations of candidates for their hiring and promotion decisions.

        By gradually implementing the system, from paper review and proposal review to achievement evaluation, the credit/role system will eventually make its host platform the most attractive one for all scientists and researchers, especially up-and-coming young ones. The most important factor for the growth of a platform is the size of its user base, and this will deliver it. In addition, a successful platform could further expand its scope, for example into organizing conferences (determining topics and who should be invited) and reviewing proposals for experimental facilities. That would be a dream come true for me.

        Regarding the downsides, I have received criticism of the quantitative system, which I should defend a little more. The development of science itself is an evolutionary process of becoming more and more quantitative and rigorous. If we never try to make a field more quantitative, then it stands no chance of becoming part of science. I dare to propose an initial endeavor to quantify measures that could be applied in peer review, in the hope of establishing a first quantitative paradigm for it. It would be a pity if rigorous science could not be assessed in a quantitative manner.

          There are many aspects I agree with in this essay, in particular the need for a comprehensive approach to scientific advancement. However, the analogy to a Capitalist Economy needs to take into account the problems of that economy, especially monopolies and power struggles. Adam Smith’s Invisible Hand of a free market system has shown itself to be quite visible to those in power. While the statement is made “Fortunately, the capital economy has addressed this issue by establishing antitrust laws and regulations to prevent monopolies.”, the statement presumes an entire governmental structure imposed on the Capitalist Economy such that it can direct and control that economy. And, as can be seen in the world today, that governmental structure can be corrupted, which then allows the capitalist economy to be corrupted.
          In parts of your essay, you appear to be discussing the governmental side more than the capitalist economy side - without explicitly acknowledging the importance of an entity that can enforce rules and antitrust laws in a capitalist economy.

          There is also the concern that capitalist economies run on the presumption of consumerism, that people will buy ‘things’ (any ‘things’). One of the difficulties of science is that only technology produces ‘things’ that the general populace can consume. ‘Ideas’, as concepts, are not marketable; they are too ethereal to have intrinsic value. So it appears the analogy is that research papers and articles become the ‘things’ that science consumes. The reference to peer-reviewed journals and arXiv (and viXra) would appear to support papers and articles as the ‘things’ the scientific economy would run on. Are these items adequate ‘scientific things’ that can be associated with value and transferred between people, in analogy with a capitalist economy? (What else could there be?)

          I think your problem with identifying ‘start-ups’ comes from this issue of identifying what can be consumed and provide value in the scientific economy. Technology is required for the general population; however, identifying the ‘scientific things’ that can provide value to other scientists would appear to need some attention. Furthermore, this media age has greatly expanded the possible ‘scientific things’ beyond papers and articles (e.g., videos, animated simulations, annotated digital twins). Insisting on only preprints, publications, and grant proposals may be a starting point, but it misses the changes happening in the larger world today.

          Returning to the traditional ‘scientific things’ of preprints and publications: how could your credit system work for this essay contest? For FQXi? I will note that a couple of years ago an attempt was made to rate FQXi contest essays on a scale, which, in my opinion, failed through people gaming the system and down-grading essays that competed with their own or with which an author disagreed.
          I would be quite interested in attempting a system similar to what you propose; it just brings me back to the governmental structure that is required to ensure the fairness that a free market system lacks. Unless self-correcting feedback systems can ensure fairness (economists have made suggestions), I believe such an enforcement structure is needed.

            Vladimir Rogozhin
            It is not a single project per se. The credit/role system should be implemented field-wise, so there could be many projects. Everyone is welcome to start experiments with the proposed quantitative system, including small eprint servers and dedicated peer-review platforms, whether for-profit or non-profit. I don't have the resources to play any of the leading roles, but I can definitely serve as a consultant for any of these projects.

            I have just updated the full preprint posted at OpenReview.net:
            https://openreview.net/forum?id=E144GC5Vgw6
            You may find relevant new discussions at the end of the article.

            Donald Palmer
            Thanks a lot for your detailed comments. You may take a look at my just updated full preprint posted at OpenReview.net:
            https://openreview.net/forum?id=E144GC5Vgw6
            You may find many of your questions addressed in the 2nd section of the appendix. Here is a recap.

            What is the point of the analogy to a capitalist economy? The basis of the analogy is that both systems are driven by cycles of innovation. However, there are fundamental differences between the tangible products of a capitalist economy and the much less tangible outcomes of research. These differences necessitate a complex peer review process for evaluating scientific progress, which is the central focus of this article. Nonetheless, addressing this unique peer review procedure requires considering potential issues associated with start-ups and monopolies that are common to both and that could impede innovation. The point is that capitalist economies have been more successful in dealing with these issues than scientific research, which is both unfortunate and inexplicable.

            How can the scientific community do better than the governmental structure that is meant to prevent monopolies in a capitalist economy, which can often be corrupted? While it is true that the capitalist economy and its regulatory government are not flawless, they do have a democratic mechanism in place that promotes fair competition and fosters healthy innovation. In contrast, the scientific community can be perceived as more authoritarian than democratic, i.e., not even up to the level of the capitalist economies. As a result, unorthodox ideas in science often face gatekeeping barriers to recognition and acceptance.

            The proposed system advocates the principles of democracy and diversity that can truly preserve the freshness and vitality of the driving force of innovation. Considering the rigorous nature of scientific research, there is good reason to believe that a properly implemented system could significantly enhance the self-regulating structure within the scientific research community, making it more robust.

            In scientific research, ideas embodied in preprints, publications, and proposals can be considered the "products". But are there more tangible products in science, comparable to the marketable goods produced in the economy? In the realm of basic science research, ideas are undoubtedly the most important output, and the proposed review system is particularly well suited for evaluating such results. By contrast, when it comes to research focused on applications and technology, more tangible products emerge, some of which even lead to the creation of start-up companies in the capitalist economy. We argue that applied research progresses more effectively than basic research precisely because of the existence of such tangible products. The real challenges lie in basic science research, where progress seems to be increasingly stagnant, and cases of meaningless work or fraud have become all too common.

            Simple rating systems like what FQXi has used are flawed and easy to game. Fortunately, the proposed system has built-in measures to counteract such attempts. In particular, the credit-rewarding process is dynamic and can be continuously fine-tuned, rendering any gaming efforts ultimately ineffective. In addition, with the aid of advanced machine learning techniques, we can further enhance the system's performance and robustness as we accumulate a larger dataset of statistical information.
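
            As a purely illustrative example of such dynamic fine-tuning (the deviation-based weighting and the damping factor below are my own placeholder choices, not formulas specified in the essay), ratings from members whose past ratings systematically deviate from the eventual community consensus could simply be given less weight:

            from statistics import mean

            def rater_weight(past_ratings):
                # past_ratings: list of (rating_given, consensus_rating) pairs on a 0-1 scale.
                # Members whose ratings historically track the consensus keep full weight;
                # those who deviate a lot (e.g., coordinated down-voting) are damped toward zero.
                if not past_ratings:
                    return 1.0                        # new raters start at full weight
                avg_dev = mean(abs(g - c) for g, c in past_ratings)
                return 1.0 / (1.0 + 5.0 * avg_dev)    # the factor 5 is an arbitrary placeholder

            def weighted_score(ratings_and_weights):
                # ratings_and_weights: list of (rating, weight) pairs; returns the weighted mean.
                total = sum(w for _, w in ratings_and_weights)
                return sum(r * w for r, w in ratings_and_weights) / total if total else 0.0

            # Example: an honest rater keeps weight ~0.8, a habitual down-voter drops to ~0.25,
            # so the combined score (~0.71) stays close to the honest rating.
            honest = rater_weight([(0.8, 0.75), (0.6, 0.65)])
            gamer = rater_weight([(0.1, 0.7), (0.2, 0.8)])
            print(weighted_score([(0.9, honest), (0.1, gamer)]))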

            Your response has addressed several items I pointed out - thank you.
            I am all for finding a better way - and it will likely take some stumbles in the process.
            Having been in a few 'develop a concept and implement it' situations, I have found the 'Big Bang' approach, where everything is changed all at once, quite ineffective. So aiming initially at arXiv might not be the better approach. Might there be a small 'test' or proof-of-concept situation to try it out on?

              Donald Palmer
              Thanks again. I actually think that smaller new platforms could do better. Here are the relevant paragraphs of discussion quoted from my preprint (Sect. B2 on page 21):

              Ideally, the most suitable place to implement the system would be on large preprint service platforms that are widely used within a given field. However, due to its entrenched dominance and inertia, the largest eprint server, arXiv.org, does not allow comments or reviews, let alone a quantitative review system. Although this path would have been the most efficient, it appears to be a long shot.

              Conversely, emerging smaller preprint servers like bioRxiv and medRxiv are more willing to try new ideas and could play a more significant role in the adoption of the proposed system. In addition, newly established dedicated review platforms such as PREreview.org, ReviewCommons.org (non-profit), and ReviewerCredits.com (for-profit) could gain increased recognition and significantly expand their user base by implementing the new system. Interestingly, a for-profit company called ScienceOpen.com, which offers both preprint/publishing and peer review services, has already implemented most of the required structures except for the new credit/role system. It may soon demonstrate the desired effect through a relatively straightforward integration of the new quantitative system.

              Any of the aforementioned platforms would be suitable for starting experiments with the new system. There is no need to first build a national or international community structure from scratch. Nor is it necessary to implement all aspects simultaneously. However, it is crucial to first establish the basic credit/role mechanism as proposed.

              I've just gotten in contact with these platforms, and some immediately showed interest. Hopefully at least some of the proposed ideas will be tried out soon.

              An interesting solution of a community-based structure for all members with proper scientific training, though I wonder about equal participation. The analogy to a capitalist economy is also interesting in terms of evolutionary cycles and the formation of monopolies. Your reference to corporations and the business community's disruptive innovation seems similar to the Externalities essay. Yours is a more structured approach, with a community-based solution driven by the scientific community itself. The scope of your solution does not mention the competing command economies mentioned in the Externalities essay, where a community-based solution would not be tolerated. Your essay does focus on how science could be different and does cite the influence of capitalist development, but not its overpowering influence. Both your essay and the Externalities essay mention the role scientists must play in adopting change. I will rate your essay highly for its direct focus on the problem posed by the contest.

                IvoryLungfish
                I look forward to watching (at least) these platforms you mention. Thank you.

                      The author elaborates on a system similar to a capitalist economy. We need a change in science now, as it has become far too bureaucratic.
                      In my essay, I also mentioned a comparison with the capitalist economy and the communist economy in a few sentences, but the author has made a full analysis. I hope that some changes will be made one day.
                      But we should know that such a system is only the beginning and needs to be corrected and upgraded. For instance, not all A0 and A-1 ratings will deserve this assessment; a big majority will, but some will not, etc.

                    8 days later

                    I really appreciate your proposal, and I too have often thought of a similar system capable of guaranteeing due recognition to the work of a scientist, also in analogy with the economic credit system that you propose. However, I always get stuck on the following problems, some of which have already been highlighted in this forum.
                    1) A credit-based system would bring into science all the distortions of economics, including many injustices. Today the good scientist tries to do good work, perhaps innovative and groundbreaking, hoping to get due recognition. A credit-based system would instead push the scientist to act for "profit", and upstarts would be incentivized. I give an example. In 2011 an article was published on arXiv (not peer-reviewed) proclaiming the experimental discovery of superluminal neutrinos. In the following weeks we saw on arXiv an indecent proliferation of preprints with the most bizarre theories of violation of general relativity. All these authors had wide visibility in those days and perhaps even managed to publish in journals, while an honest scientist would rightly have waited for the peer review of the research before embarking on such speculations to collect "credit". According to your model, these authors would have earned credit even though their works turned out to be rubbish when put to the test of fact.
                    2) Credit monopolies and credit trading would be created (I have personally already noted the existence of a not-always-honest citation trade in the scientific literature). We are all familiar with the opportunistic nature of man. Authors or institutions amassing a huge volume of credits would crush any competitor, regardless of the validity of the proposed ideas.
                    3) The economy is extremely short-sighted and is only interested in the profit it will make tomorrow, without worrying about the catastrophes that will happen the day after tomorrow (see the responsibility of the consumerist system for global warming and its inability to limit itself to avoid it). Science, on the other hand, is absolutely forward-looking. "Why, sir, there is every probability that you will soon be able to tax it!" said Faraday to William Gladstone, the Chancellor of the Exchequer, when asked about the practical worth of electricity. Peter Higgs had to wait 50 years to see his theory "confirmed", Einstein had to wait years before being noticed, and some scientists were unable to see the fruit of their ideas before their death. A credit-based mechanism could reduce science to a profit calculus, obscuring the thirst for knowledge and the unconditional intellectual curiosity that should instead be a scientist's main motivation.

                    arXiv can be considered a very partial realization of your project, if you consider that each preprint on arXiv (or arXiv moderation) corresponds to credits. Thus in recent years we have seen a growing increase in the volume of publications under the logic of "publish or perish", rarely accompanied by real scientific progress or new ideas (I would say that this has actually prevented new ideas). Thanks to arXiv, theoretical physics is stuck 20 or 30 years in the past.
                    All these distortions I talk about, and the negative consequences they are bringing to science, are mentioned in my essay "The Name of the arXiv: when too much zeal is an obstacle to science".

                      James Hoover
                      Thanks a lot for your interest, and I also wish you good luck in the competition. My goal is not so much winning this essay contest as gaining the attention needed to implement the proposed system in the real world.