Irreducible Complexity and Functional Intermediates
William Brookfield left a comment at Telic Thoughts, in a thread discussing irreducible complexity, that I think merits some attention. The quote:
It seems to me that if a (mouse) trap is functioning within a biological system not — as a "trap" — but as "a blunt instrument" then NS will be selecting for the optimum "blunt instrument" and RM will just be scrambling in no specific direction (randomly). The result is that the function of this "(Mouse) trap" has no causal antecedent and its appearance by RM&NS or any other such material agents must be taken on faith. It seems to me that NS optimizes for existing function not future function and that these are two divergent directions.
Attempts to refute intelligent design inferences drawn from Behe's irreducible complexity rest largely on homology arguments and the concept of cooption. Brookfield correctly points out that a functional discontinuity is associated with the selection perspective. Although he does not state it explicitly, that discontinuity is linked to cooption- and homology-based explanations, which lack the detail needed to establish a functional trail. Precursor function is evident when the coopted entity is identified, and the function of the relevant IC system is also clear. The intermediate functions, however, remain unclear. If the precursor and IC functions are disparate, this lack of clarity becomes problematic for theoretical applications. How predictive are theoretical models that lack identifiable functional intermediates?
Labels: Irreducible Complexity