Using an analogy with the capitalist economy, we examine the issues within modern basic science research as innovation drives both evolutionary cycles of the economy and research. In particular, we delve into the topics of peer review, academic monopolies and startups, the tenure system, and academic freedom in detail. To improve science research with a mature paradigm, a comprehensive solution is proposed, which involves implementing a credit system within a robust community structure for all scientists. Members can earn credit by contributing to the community through commenting, reviewing, and rating academic activities of submitted manuscripts, grant applications, and up to five achievements from each member. As members accumulate experience and credit, they can progress in their roles within the community, from commenter, reviewer, moderator, up to board member (serving in governing committees). High-achieving individuals are evaluated by the community for the quality, rather than the quantity, of their academic accomplishments. High-risk, high-reward projects from academic startups will be properly funded, and a healthy feedback ecosystem will help the scientific community prosper through future innovation cycles in a self-sustaining way.


    The main merits of the credit system are summarized as follows:

    Early-Bird Encouragement – earlier contributions are credited with more points.
    High-Quality Attention – more activity is attracted to higher-quality papers, reviews, and comments.
    Robust Against Gaming – careless or irresponsible behavior rarely gains credit and may result in losing it instead.
    All-Member Participation – democracy and diversity are ensured by the participation of the entire community.
    Rewarding Positive Activities – a reliable role-increasing mechanism is integrated into the self-regulating ecosystem.
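    The merits above can be illustrated with a toy scoring rule. Everything in this sketch is my own illustrative assumption (function names, base points, decay constant); the essay's actual formulas live in its technical endnotes. It only shows how early-bird weighting and quality adjustment could interact.

```python
def review_credit(rank, base=10.0, decay=0.5):
    """Credit for the rank-th review of a paper (rank 1 = earliest).

    Earlier contributions earn more points ("early-bird encouragement"):
    credit falls off geometrically with rank. The base and decay values
    are illustrative, not the essay's.
    """
    return base * decay ** (rank - 1)

def quality_adjusted(credit, community_rating):
    """Scale raw credit by the community's rating of the review (0..1).

    A careless review rated below 0.5 loses credit instead of gaining it,
    which is one way to make gaming costly ("robust against gaming").
    """
    return credit * (2 * community_rating - 1)

# First reviewer with a well-rated review: full positive credit.
assert quality_adjusted(review_credit(1), 0.9) > 0
# Late, poorly rated review: credit is lost, not gained.
assert quality_adjusted(review_credit(4), 0.3) < 0
```

    Under this sketch, being early only pays off if the community also rates the contribution well, which couples the "early-bird" and "high-quality" merits.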

    The goals of a complete and comprehensive solution, which should spark a revolution in the entire structure of scientific research, are as follows:
    Principle of Democracy – An all-participating community open to all properly trained researchers.
    Principle of Diversity – Scientific start-ups and high-risk, high-reward projects must be adequately funded.
    Quantitative Credit System – Rigorous science requires a rigorous credit system for scientific evaluation.
    Quality-Based Achievement Rewards – Quality, rather than quantity, should be emphasized in rewards for individual achievement.
    Contribution-Based Role System – Members’ contributions to the community should determine their roles in the community.
    Healthy Self-Sustaining Ecosystem – If all the above requirements are met, this follows automatically.

    Hi IvoryLungfish
    Interesting name they have given you 😉 Here is a summary and review of your essay performed by the AI ChatGPT-4. I've provided it to demonstrate the theme of my essay, that AI will play a role in the future of peer review. My essay is titled "Age of Knowledge".
    I hope you enjoy this review of your work.

    ChatGPT-4

    This essay discusses the evolution of science as a progression from vague and descriptive ideas to rigorous, scientific fields. It elaborates on the revolutionary cycles that lead to paradigm shifts and the importance of normal science in pushing exploration to the limits of the current paradigm. The essay also categorizes scientific achievements into four classes, with the first two categories representing the majority of research activities. Furthermore, the essay draws an analogy between scientific research and capitalist economy, highlighting the importance of healthy competition and innovation. It discusses issues in scientific research, such as the need for a comprehensive approach to ensure the healthy advancement of science, and the potential emergence of monopolies. Finally, the essay discusses the review processes for proposals and publications and the need for open review practices. Overall, this essay provides a comprehensive discussion of the evolution of science and its relationship to capitalist economy, and the quality of the writing is excellent.

    The essay proposes a solution to the problems of the current peer review system in basic science research. It discusses the issues of over-moderation, bias, prejudice, and gatekeeping at arXiv and how the current peer review system fails to prevent fraudulent papers from being published. The essay proposes the principles of open peer review to address these issues. However, the lack of incentives or rewarding systems for reviewers has not attracted sufficient review activities. The essay also discusses the problems with the current grant application review process, which favors mainstream research and has limited funding for high-risk, high-reward projects. The proposed solution is to establish a robust community structure for all members with proper scientific training in a given scientific field. The proposed solution suggests using a unique research ID for connecting researchers with their works, such as ORCID. The essay also proposes the creation of a platform to incentivize reviewers and funding agencies to prioritize high-risk, high-reward projects. The proposed solution aims to create a fair and unbiased review process that encourages innovation and advances in basic science research.

    The essay provides a comprehensive discussion of the problems with the current peer review system and proposes a potential solution. However, the proposed solution is only tentative, and the details are still subject to debate. The essay would benefit from more concrete examples of how the proposed solution would work in practice. Overall, the essay provides valuable insights into the problems with the current peer review system and offers a thought-provoking solution.

    ChatGPT-4

      Steven Andresen
      Thanks for providing a summary by ChatGPT-4. It looks good overall, but clearly it does not have a mathematical mind. It does not really understand the credit system I proposed in the essay, in particular its merits and the related mathematical details, even though only very elementary math is involved. That is one of the shortcomings of the new AI systems.

        Wanpeng Tan Yes, ChatGPT-4 has its limits. I think, though, that it provides an indication of what is likely to come. I don't think those shortcomings will last much longer; whether we have to wait months or a few years, I couldn't say.

        But even in its current form it's still useful for science review. It can already follow logic and assess documents on that basis. My essay puts ChatGPT-4 through its paces in a novel conversation, testing its responses. The result impressed me.

        6 days later

        quote
        Using an analogy with the capitalist economy, we examine the issues within modern basic science research as innovation drives both evolutionary cycles of the economy and research. In particular, we delve into the topics of peer review, academic monopolies and startups, the tenure system, and academic freedom in detail. To improve science research with a mature paradigm, a comprehensive solution is proposed, which involves implementing a credit system within a robust community structure for all scientists. Members can earn credit by contributing to the community through commenting, reviewing, and rating academic activities of submitted manuscripts, grant applications, and up to five achievements from each member. As members accumulate experience and credit, they can progress in their roles within the community, from commenter, reviewer, moderator, up to board member (serving in governing committees). High-achieving individuals are evaluated by the community for the quality, rather than the quantity, of their academic accomplishments. High-risk, high-reward projects from academic startups will be properly funded, and a healthy feedback and ecosystem will make the scientific community prosper in future innovative cycles in a self-sustaining way
        end of quote
        What strikes me is that this is what the Ivy League schools do in science departments already. The problem is that in smaller schools, in more isolated environments, the judging community lacks the gravitas and the ranking to do this. It is a clever idea, but it will naturally favor the Harvards, Stanfords, MITs, and Caltechs, with many other schools cut out. I.e., this could actually increase the centralization of output of presumed-quality papers and have the reverse of the impact desired by the author.

          Andrew Beckwith

          You quote the whole abstract, so I am not exactly sure what you meant in your comments. But I assume that you meant the part on the evaluation of individual accomplishment, up to five pieces of achievement. Then I respectfully disagree; it seems that you did not quite understand the measures the essay proposes. Indeed, the problem for smaller schools is that they don't have ways to truly evaluate academic achievements, so in practice they often favor big brand names instead of actual merit. What is proposed in the essay is exactly to solve this issue: it is the entire community, rather than individual institutions, that does such evaluation. Here the community is meant to be the whole field of participating scientists across the nation, maybe even the whole world, definitely not some isolated small community. That is, it should consist of all relevant scientists across all regional barriers.

          In addition, such an evaluation system could also prevent undue interference from other factors such as personal relations, elite-circle membership, politics, and many other unspoken rules.

          I did understand what you were driving at from the beginning, but you actually made my point:
          A. Smaller schools lack the BENCH of reference points to implement your proposal.
          B. Large schools, and/or very famous ones, would have a crushing advantage.
          C. The only way out of what I see as a difficulty would be to have a NATION-wide endeavor as to the evaluative BENCH.

          If the entire nation-state were not involved with this, it would merely be a case of the rich getting richer and the poor getting poorer.

          The only way to secure the good points would be to have a NATION-wide evaluative bench. You highlighted that in your reply, so you inadvertently made my point.

            Andrew Beckwith
            Thanks for your interest. Now I see where the confusion came from. Nowhere in the essay does it suggest that the proposed credit/role system be implemented at the INSTITUTIONAL level. Rather, it must be based on the entire community of all scientists in a given field (see, for example, the 2nd paragraph of Sect. 4 on page 7). In fact, such structures are readily available in some fields. For example, arXiv.org has been used by almost all physicists in the world and will be an ideal INTERNATIONAL base for implementing the proposed credit/role system. The only obstacle is the bigotry of its top administration.

            This community-based system is supposed to (largely) replace the institution-based tenure system as part of the proposed solution. As a matter of fact, all schools, large and small, will ultimately benefit, but none of them would administer the system, which should be implemented FIELD-wise rather than institution-wise.

            The life sciences, as another example, may be ready for such a system. They have recently established an international platform, reviewcommons.org, supported by both the European (EMBO) and American (ASAPbio) communities, as well as many preprint and publication services in their field. They may beat our physicists to the punch and be the first to implement a similar system.

            There is no need to build a national or international community structure first from scratch. For any basic science field, a widely used preprint platform would be a perfect starting point. We don't need to implement everything all at once, but the basic credit/role mechanism as proposed should be in place first. We could start with the review of preprints/publications, then add the review of grant proposals, and finally the evaluation of individual achievements (mostly synthetic ones, since single-paper achievements are automatically evaluated in the first step).

            More details can be seen in the full version of this essay submitted as an anonymous preprint to OpenReview.net: https://openreview.net/forum?id=E144GC5Vgw6

            If it were done on the national level, then your proposal would have a fighting chance.

            Vladimir Rogozhin
            Thanks a lot for your support. I'd like to take the opportunity to elaborate on it a bit more, hopefully encouraging the community to give it a try.

            The proposed community-based credit/role system for peer review could be implemented either on large preprint servers or on dedicated review platforms. Most existing services and platforms, including the most recent ones, either lack incentives or do not provide the right ones - the monetary or honorary incentives some of them have used do not really work for reviewers. The best incentive, in my opinion, is to increase reviewers' roles in the community in exchange for quality review work. This will motivate most scientists to review each other's work more.
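            As a rough illustration of this role-increasing incentive: the ladder of commenter, reviewer, moderator, and board member comes from the essay, but the credit thresholds below are hypothetical numbers of my own, sketched only to show how accumulated credit could map onto community roles.

```python
# Hypothetical role ladder: role names follow the essay's abstract;
# the credit thresholds are illustrative assumptions.
ROLE_THRESHOLDS = [
    (0, "commenter"),
    (100, "reviewer"),
    (1000, "moderator"),
    (10000, "board member"),
]

def role_for(credit):
    """Return the highest role whose credit threshold the member has reached."""
    current = ROLE_THRESHOLDS[0][1]
    for threshold, role in ROLE_THRESHOLDS:
        if credit >= threshold:
            current = role
    return current

assert role_for(50) == "commenter"
assert role_for(2500) == "moderator"
```

            The design point is that promotion depends only on accumulated, community-rated contributions, so the reward for reviewing is built into the platform itself rather than paid out in money or honors.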

            In addition to the right incentives, such a system could actually make the platform financially self-sustaining. For example, it could receive donations or fees by supporting overlay journals built on peer-reviewed preprints; by assisting funding agencies, especially private foundations, in reviewing proposals; and by providing academic institutions with more reliable merit evaluation of candidates for their hiring and promotion decisions.

            By gradually extending the system from paper review to proposal review to achievement evaluation, the credit/role system will eventually make its host platform the most attractive one for all scientists and researchers, especially up-and-coming young ones. The most important factor for the growth of a platform is the size of its user base, and this will drive it. In addition, a successful platform could further expand its scope, for example, into organizing conferences (determining topics and who should be invited), reviewing proposals for experimental facilities, etc. That would be a dream come true for me.

            Regarding the downsides, I have received criticism of the quantitative system that I should address a little more. The development of science itself is an evolutionary process of becoming more and more quantitative and rigorous. If we never try to make a field more quantitative, then it stands no chance of becoming part of science. I dare to propose an initial endeavor to quantify measures that could be applied in peer review, in the hope of establishing a first quantitative paradigm for it. It would be a pity if rigorous science could not be assessed in a quantitative manner.

              There are many aspects I agree with in this essay, in particular the need for a comprehensive approach to scientific advancement. However, the analogy to a Capitalist Economy needs to take into account the problems of that economy, especially monopolies and power struggles. Adam Smith’s Invisible Hand of a free market system has shown itself to be quite visible to those in power. While the statement is made “Fortunately, the capital economy has addressed this issue by establishing antitrust laws and regulations to prevent monopolies.”, the statement presumes an entire governmental structure imposed on the Capitalist Economy such that it can direct and control that economy. And, as can be seen in the world today, that governmental structure can be corrupted, which then allows the capitalist economy to be corrupted.
              In parts of your essay, you appear to be discussing the governmental side more than the capitalist economy side - without explicitly acknowledging the importance of an entity that can enforce rules and antitrust laws in a capitalist economy.

              There is also the concern that capitalist economies run on the presumption of consumerism - that people will buy ‘things’ (any ‘things’). One of the difficulties of science is that only technology produces ‘things’ that the general populace can consume. ‘Ideas’, as concepts are not marketable - they are too ethereal to have intrinsic value. So it appears the analogy is that research papers and articles become the ‘things’ that science consumes. The reference to peer-reviewed journals and arXiv (and viXra) would appear to support papers and articles as the ‘things’ the scientific economy would run on. Are these items adequate ‘scientific things’ that can be associated with value and transferred between people, as analogy with a capitalist economy? (What else could there be?)

              I think your problem with identifying ‘start-ups’ comes from this issue of identifying what can be consumed and provide value in the scientific economy. Technology is required for the general population; however, identifying the ‘scientific things’ that can provide value to other scientists would appear to need some attention. Furthermore, this media age has greatly expanded the possible ‘scientific things’ beyond papers and articles (e.g., videos, animated simulations, annotated digital twins). To insist on only preprints, publications, and grant proposals may be a starting point, but it misses the changes happening in the larger world today.

              Returning to the traditional ‘scientific things’ of preprints and publications - how could your credit system work for this essay contest? For FQXi? I will note that a couple of years ago an attempt was made to rate FQXi essay-contest entries on a scale - which, in my opinion, failed through people gaming the system and down-rating essays that competed with their own or with which an author disagreed.
              I would be quite interested in attempting a system similar to what you propose; it just brings me back to the governmental structure required to ensure the fairness that a free-market system lacks. Unless self-correcting feedback systems can ensure fairness (economists have made suggestions), I believe such an enforcement structure is needed.

                Vladimir Rogozhin
                It is not a single project per se. The credit/role system should be implemented field-wise, so there could be many projects. Everyone is welcome to start experimenting with the proposed quantitative system, including small eprint servers and dedicated peer review platforms, for-profit or non-profit. I don't have the resources to play any of the leading roles, but I can definitely be a consultant for any of these projects.

                I have just updated the full preprint posted at OpenReview.net:
                https://openreview.net/forum?id=E144GC5Vgw6
                You may find relevant new discussions at the end of the article.

                Donald Palmer
                Thanks a lot for your detailed comments. You may take a look at my just updated full preprint posted at OpenReview.net:
                https://openreview.net/forum?id=E144GC5Vgw6
                You may find many of your questions addressed in the 2nd section of the appendix. Here is a recap.

                What is the point of the analogy to a capitalist economy? The basis of the analogy is that both systems are driven by cycles of innovation. However, there are fundamental differences between the tangible products of a capitalist economy and the much less tangible outcomes of research. These differences necessitate a complex peer review process for evaluating scientific progress, which is the central focus of this article. Nonetheless, addressing this unique peer review procedure requires considering potential issues associated with start-ups and monopolies that are common to both and that could impede innovation. The point is that capitalist economies have been more successful in dealing with these issues than scientific research, which is both unfortunate and inexplicable.

                How can the scientific community do better than the governmental structure that is meant to prevent monopolies in a capitalist economy, which can often be corrupted? While it is true that the capitalist economy and its regulatory government are not flawless, they do have a democratic mechanism in place that promotes fair competition and fosters healthy innovation. In contrast, the scientific community can be perceived as more authoritarian than democratic, i.e., not even at the level of the capitalist economies. As a result, unorthodox ideas in science often face gatekeeping barriers to recognition and acceptance.

                The proposed system advocates the principles of democracy and diversity that can truly preserve the freshness and vitality of the driving force of innovation. Considering the rigorous nature of scientific research, there is good reason to believe that a properly implemented system could significantly enhance the self-regulating structure within the scientific research community, making it more robust.

                In scientific research, ideas embodied in preprints, publications, and proposals can be considered the "products". But are there more tangible products in science, comparable to the marketable goods produced in the economy? In the realm of basic science research, ideas are undoubtedly the most important output, and the proposed review system is particularly well-suited for evaluating such results. In contrast, when it comes to research focused on applications and technology, more tangible products emerge, some of which even lead to the creation of start-up companies in the capitalist economy. We argue that applied research progresses more effectively than basic research precisely because of the existence of such tangible products. The real challenges lie in basic science research, where progress seems increasingly stagnant and cases of meaningless work or fraud have become all too common.

                Simple rating systems like the one FQXi has used are flawed and easy to game. Fortunately, the proposed system has built-in measures to counteract such attempts. In particular, the credit-rewarding process is dynamic and can be continuously fine-tuned, rendering any gaming efforts ultimately ineffective. In addition, with the aid of advanced machine learning techniques, we can further enhance the system's performance and robustness as we accumulate a larger dataset of statistical information.
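                One anti-gaming measure of this dynamic kind can be sketched as follows. This is my own illustration, not the essay's actual mechanism: it iteratively down-weights raters whose scores deviate from the emerging community consensus, so a coordinated down-vote loses influence with each pass.

```python
def robust_consensus(ratings, rounds=5):
    """ratings: {rater: score in 0..10}. Returns (consensus, weights).

    Iteratively reweighted mean: raters far from the current consensus
    get lower weight on the next pass. Constants are illustrative.
    """
    weights = {r: 1.0 for r in ratings}
    consensus = sum(ratings.values()) / len(ratings)
    for _ in range(rounds):
        total = sum(weights.values())
        consensus = sum(weights[r] * s for r, s in ratings.items()) / total
        # Raters far from consensus lose influence (and, in the full
        # system, could also lose credit).
        weights = {r: 1.0 / (1.0 + abs(s - consensus))
                   for r, s in ratings.items()}
    return consensus, weights

# Three honest raters plus one coordinated down-voter.
ratings = {"a": 8, "b": 7.5, "c": 8.5, "gamer": 1}
consensus, weights = robust_consensus(ratings)
assert weights["gamer"] < weights["a"]  # the outlier is down-weighted
assert consensus > 6                    # consensus stays near honest scores
```

                Because the weights are recomputed from the data itself, this kind of rule can be continuously fine-tuned as the platform accumulates statistics, which is the spirit of the dynamic rewarding process described above.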

                Your response has addressed several items I pointed out - thank you.
                I am all for finding a better way - and it will likely take some stumbles in the process.
                Having been in a few 'develop a concept and implement it' situations, I have found that the 'Big Bang' approach, where everything is changed all at once, proves quite ineffective. So aiming initially at arXiv might not be the better approach. Might there be a small 'test' or proof-of-concept situation to try it out on?

                  Donald Palmer
                  Thanks again. I actually think that smaller new platforms could do better. Here are the relevant paragraphs of discussion quoted from my preprint (Sect. B2 on page 21):

                  Ideally, the most suitable place to implement the system would be on large preprint service platforms that are widely used within a given field. However, due to its entrenched dominance and inertia, the largest eprint server, arXiv.org, does not allow comments or reviews, let alone a quantitative review system. Although this path would have been the most efficient, it appears to be a long shot.

                  Conversely, emerging smaller preprint servers like bioRxiv and medRxiv are more willing to try new ideas and could play a more significant role in the adoption of the proposed system. In addition, newly established dedicated review platforms such as PREreview.org, ReviewCommons.org (non-profit), and ReviewerCredits.com (for-profit) could gain increased recognition and significantly expand their user base by implementing the new system. Interestingly, a for-profit company called ScienceOpen.com, which offers both preprint/publishing and peer review services, has already implemented most of the required structures except for the new credit/role system. It may soon demonstrate the desired effect through a relatively straightforward integration of the new quantitative system.

                  Any of the aforementioned platforms would be suitable for starting experiments with the new system. There is no need to first build a national or international community structure from scratch. Nor is it necessary to implement all aspects simultaneously. However, it is crucial to first establish the basic credit/role mechanism as proposed.

                  I've just got in contact with these platforms. Some immediately showed interest. Hopefully at least some of the proposed ideas will be tried out soon.