Dear Sabine,
Tommaso Bolognesi suggested we read your essay since he saw many similarities between yours and ours. In a standard cop-out that you will surely recognize, we only wish we had had the time to devote to our essay to make it as clear as yours, although even then we likely would have fallen short of your compelling and passionate (and at times very funny!) proposal.
We agree that there are clear parallels in our summaries of current problems, although our proposed solutions are quite different. You doubt human abilities to derive meaning from large data sets and to think about long-distance and long-term consequences of our actions, and you say "what little grasp we have is prone to cognitive biases and statistical errors." And then you crystallize these issues thus: "These cognitive shortcomings are not only obstacles to solving our problems, they are the problem." We couldn't agree more, or have stated the problem more clearly.
We also are in complete agreement with your analysis of education: "It is time to wake up. We've tried long enough to educate them. It doesn't work. The idea of the educated and well-informed citizen is a[n] utopia. It doesn't work because education doesn't please people. They don't like to think. It is too costly and it's not the information they want." But then your solution appears to be to provide training and feedback frameworks, along with education on how to implement them. You state very clearly that technology is not an integral piece of the solution: "We have a social problem, not a technological one."
The primary differences between your proposal and ours are as follows:
- We believe the solution to our problems is partly priority and partly technology, i.e. first deciding that we need better brains and other thinking machines, and then creating the technologies to implement the solutions. You appear to agree on the first part, but not the second.
- You believe that training will work this time if we just do it a certain way.
- To guide long-term decisions, your proposal places trust in current human priorities, and thus current levels of rationality. Substantial research by Kahneman and Tversky, Keith Stanovich, Tom Gilovich, Dan Ariely, and many others convinces us that people are neither rational nor good forecasters, even when they are focused on the future. You clearly understand these issues, but these doubts about bias and irrationality seem to be overwhelmed by the apparent practicality of your proposal.
After saying the problem is not technological, your essay appears to acknowledge that an engineering solution is preferred: "In the future, information about matches with personal priorities may be delivered wirelessly to brain implants, constituting an upgrade of humanity for global interactions." But given current limitations, you settle for a less reliable but practical solution: "With presently existing technology we have to settle for visualizing a match or mismatch rather than feeling it." Yet this is what education has been doing for centuries: translating data into an intuitive graph, or a problem into another kind of problem, using analogies and other abstractions.
Here is the key question for you: why not simply commit to fixing the intrinsic problems you identify, rather than playing tricks of substitution, swapping a long-term goal for a short-term one, or making someone believe they are eating ice cream when they are actually eating vegetables? Why not instead engineer the brain to be better at understanding data (or to like vegetables more than ice cream)? Without question these are challenging undertakings, but why continue to apply band-aids on top of dozens of previously applied band-aids?
We think our proposal is the only truly efficient (albeit long-term) approach. While our proposed solution is not as specific as yours, we want people to engage in a serious conversation about the issues we raise and about how to create better brains and other thinking machines, especially the best scientists and engineers, who typically have little motivation to consider these issues because they are comfortable with their own intellects. However, we think that comfort is another illusion of spacetime proximity (close), which is why our essay asks the reader to consider an imaginary being with god-like powers to do science and engineering (distant). The most rational and intelligent people feel satisfied with their present mental status only because human perceptions are selected to judge abilities relative to others, but the problems highlighted by both of our essays apply to everyone.
I hope your essay does well, since the parallels between our two essays are striking and it is important that these issues take center stage. I also hope that you read our essay and that we persuade you to some degree of the soundness of our proposal. Whatever the outcome, we're glad to have made your acquaintance through this competition.
All the best,
Preston Estep (and Alex Hoekstra)