Abstract
Reductionism in science, the search for the smallest possible entities and 'causes', has been discredited philosophically for being unable to explain effects such as 'intuition'. That assumption may now be argued to be false. Learning in artificial intelligence is advancing rapidly. Reading last year's finalist FQXi essays (at least the peer-scored top dozen or so), a number of credible schemas now exist to model human neural networks and their outcomes: https://fqxi.org/community/forum/category/31425?sort=community Even the imperfect subconscious process outcomes we label 'intuitive' can be causally and mechanistically recreated with feedback loops. Are we sure we are using reductionism enough, going deep enough? Not just to observational limits, but to a rationalization of findings that takes us beyond those limits. I argue that we probably have not yet found, or at least recognized, what is really fundamental in nature.
Richard Kingsley Nixey