notes-cog-ai-reasoning-modularParaconsistent

i guess i'll call this idea 'modular paraconsistent logic'. Dunno if that name is already being used for something else or not.

The idea is just that you maintain a graph that records the inference steps used to derive each proposition; then, when you hit a contradiction, you isolate the facts needed to infer each 'side' of the contradiction from each other by grouping them into modules.
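A minimal sketch of this provenance bookkeeping, in Python; all of the names here (Prop, KnowledgeBase, assert_fact, derive) are made up for illustration and not an existing library:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Prop:
    """A proposition; negation is a flag so that P and not-P are easy to pair up."""
    name: str
    negated: bool = False

    def negation(self) -> "Prop":
        return Prop(self.name, not self.negated)

@dataclass
class KnowledgeBase:
    # support[p] = the set of *base* facts used in deriving p; base facts support themselves.
    support: dict = field(default_factory=dict)

    def assert_fact(self, p: Prop) -> None:
        """Add a base fact; it is its own support."""
        self.support[p] = {p}

    def derive(self, conclusion: Prop, premises: list) -> None:
        """Record one inference step: the conclusion inherits the union of its premises' supports."""
        s = set()
        for q in premises:
            s |= self.support[q]
        self.support[conclusion] = s
```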

In other words, say you've found a contradiction, meaning you proved both P and not-P. Make a list of the facts which were needed in the proof of P but not in the proof of not-P, and call this list 'module 0'; then make a list of the facts which were needed in the proof of not-P but not in the proof of P, and call it 'module 1'. Note that many facts in your database will be in neither module, either because they were needed in both proofs or because they were used in neither. In future reasoning, reason with the facts in module 0 but not those in module 1, or with the facts in module 1 but not those in module 0, but never with both at once.
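Continuing the sketch above, under the same assumptions (these function names are also invented for illustration), the partitioning step might look like:

```python
def split_on_contradiction(kb: KnowledgeBase, p: Prop):
    """module0 = base facts needed for P but not for not-P; module1 = the reverse."""
    support_p = kb.support[p]
    support_not_p = kb.support[p.negation()]
    module0 = support_p - support_not_p
    module1 = support_not_p - support_p
    return module0, module1

def usable_base_facts(kb: KnowledgeBase, module0, module1, active_module: int):
    """Base facts available for further reasoning when one module is 'active':
    everything outside both modules, plus the facts of the active module itself."""
    excluded = module1 if active_module == 0 else module0
    base_facts = {f for f, s in kb.support.items() if s == {f}}
    return base_facts - excluded
```

The point of `usable_base_facts` is just that facts in neither module remain shared between the two lines of reasoning, while the two modules themselves never appear together.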

This probably isn't exactly right, and it leaves lots of implementation issues unsolved (e.g. how do you determine which facts were 'needed', given that a proof can contain steps that aren't actually used? what if the source of the contradiction is among the facts used in both proofs? in that case the separation between module 0 and module 1 is worthless and should be reconsidered later, perhaps after we find the true culprit fact(s)). But it's a start.

The intuition is that reality does not contain logical contradictions, so if a contradiction was derived, there must be one or more facts in the database which are incorrect. It would be nice to think carefully about everything, try to determine which facts are most likely to be wrong, and remove those from the database. But this thought process would be expensive (some sort of NP-hard thing). So instead of deciding which side we believe, we provisionally accept both sides, but isolate them from each other so that we can't derive contradictions from them in the future.

The intuition is that this is somewhat how the human mind works: we 'compartmentalize' our beliefs to allow ourselves to believe two contradictory things at the same time (the motivation being, again, to allow us to reason 'logically' while within one compartment or another, without incurring the computational cost of resolving the contradiction).

This is similar to annotating source facts as 'uncertain' because a contradiction was found using them -- but in this model all 'facts' are already assumed uncertain, because we are doing nonmonotonic reasoning.

Could we mitigate the NP-hardness of inference in Bayesian nets and interval Bayesian nets with this sort of modularization?

These 'modules' can also be thought of as scenarios/subsets of possible worlds, which is what i used in my old college "antirelaxation" stochastic 3sat solver project (which didn't work, or at least wasn't faster than the usual methods).