python-nnf
Manipulate NNF (Negation Normal Form) logical sentences
```python
import nnf

x = nnf.Var('x')
T = x | ~x
nnf.dsharp.compile(T.to_CNF(), smooth=True).model_count()
```
This should equal 2, but it equals 1. The reason is that `T.to_CNF()` simplifies to `true`, which no longer mentions `x`.
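To see why a sentence simplified to `true` loses a model, here is a brute-force counter (a sketch for illustration, not python-nnf's implementation): a sentence that mentions no variables has exactly one model, the empty assignment, regardless of how many variables the original formula had.

```python
from itertools import product

def model_count(clauses, variables):
    """Count assignments over `variables` satisfying all CNF `clauses`.

    A clause is an iterable of (name, polarity) literals; an empty
    clause list represents the constant `true`.
    """
    count = 0
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            count += 1
    return count

# `x | ~x` simplified to `true` drops x from the variable set:
print(model_count([], []))     # 1 model: only the empty assignment
print(model_count([], ['x']))  # 2 models: x free to be either value
```

Counting `true` over the explicit variable set `{x}` gives the expected 2; counting it over the empty set gives 1, which is exactly the discrepancy above.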
Similar to the `dsharp` wrapper, it would be useful to have `d4` wrapped as well; many instances are solved more readily with it than with `dsharp`.
If we're in a form that allows for quick counting (e.g., d-DNNF), it would be useful to be able to compute the marginals (the likelihood that each proposition is true/false) for all...
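What "marginals" means here can be sketched by brute force (a d-DNNF circuit would compute the same numbers in a single linear traversal, but this shows the definition being asked for; the helper name and formula encoding are illustrative, not python-nnf API):

```python
from itertools import product

def marginals(predicate, variables):
    """For each variable, the fraction of the theory's models in which
    that variable is true. `predicate` maps an assignment dict to bool.
    """
    models = []
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if predicate(assignment):
            models.append(assignment)
    return {v: sum(m[v] for m in models) / len(models) for v in variables}

# x | (y & z) has 5 models over {x, y, z}
print(marginals(lambda m: m['x'] or (m['y'] and m['z']), ['x', 'y', 'z']))
# x is true in 4 of 5 models, y and z in 3 of 5 each
```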
Dynamically decide, based on the provided NNF, whether to convert to CNF using the Tseitin transformation or the naive transformation.
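One way such a decision could work (a sketch under assumed heuristics, not a proposed python-nnf implementation): estimate the naive transformation's clause count, which multiplies under disjunction, against Tseitin's roughly linear cost, and pick whichever stays smaller. The toy tuple-based AST here stands in for real NNF nodes.

```python
def naive_clause_count(node):
    """Estimate clauses produced by naively distributing OR over AND,
    for an AST of ('and', ...), ('or', ...) tuples and string leaves."""
    if isinstance(node, str):          # a literal is one unit clause
        return 1
    op, *children = node
    counts = [naive_clause_count(c) for c in children]
    if op == 'and':                    # conjunction: clauses accumulate
        return sum(counts)
    prod = 1                           # disjunction: counts multiply
    for c in counts:
        prod *= c
    return prod

def node_count(node):
    if isinstance(node, str):
        return 1
    return 1 + sum(node_count(c) for c in node[1:])

def pick_transformation(node, factor=3):
    """Prefer naive CNF while its size stays within `factor` times a
    rough Tseitin estimate (about 3 clauses per node); else Tseitin."""
    tseitin_estimate = 3 * node_count(node)
    if naive_clause_count(node) <= factor * tseitin_estimate:
        return 'naive'
    return 'tseitin'

small = ('and', 'a', ('or', 'b', 'c'))
big = ('or', *[('and', 'x%d' % i, 'y%d' % i) for i in range(20)])
print(pick_transformation(small), pick_transformation(big))
# -> naive tseitin
```

The trade-off being weighed: naive CNF keeps the original variables but can blow up exponentially, while Tseitin stays linear at the cost of auxiliary variables.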
Not sure if this should be optional or the default, but a common pattern in building theories has raised an issue...
```python
T = true
T &= ...
T &= ...
...
```
Following [this discussion](https://github.com/QuMuLab/python-nnf/issues/7#issuecomment-670661445), [semantic emulation](http://www.semantic-web-journal.net/system/files/swj236_2.pdf) could be a very useful addition.
Sentences where many nodes have multiple parents can be slow to compare to each other. This happens when two sentences are identical or almost identical, but exist as separate objects....
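One standard remedy for this is hash-consing: intern structurally equal nodes so that identical (sub)sentences are literally the same object, turning deep structural comparisons into identity checks. A minimal sketch (the `Node` class is illustrative, not python-nnf's actual node type):

```python
class Node:
    """Hash-consed node: structurally equal nodes share one object, so
    comparing large sentences with shared subgraphs is an O(1) identity
    test instead of a repeated tree walk."""
    _cache = {}

    def __new__(cls, op, children):
        key = (op, children)
        node = cls._cache.get(key)
        if node is None:
            node = super().__new__(cls)
            node.op, node.children = op, children
            cls._cache[key] = node
        return node

a = Node('var', ('x',))
left = Node('and', (a, Node('var', ('y',))))
right = Node('and', (Node('var', ('x',)), Node('var', ('y',))))
print(left is right)  # True: identical structure -> the same object
```

With interning in place, "identical but separate" sentences cannot arise in the first place, which is exactly the slow case described above.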
These kinds of testing data are currently available:

- Sentences randomly generated by `hypothesis`
- CNF sentences from [SATLIB benchmark problems](https://www.cs.ubc.ca/~hoos/SATLIB/benchm.html)
- d-DNNF sentences compiled from the previous sentence with...
When a theory doesn't mention all of the variables, then counting, comparing, etc. all become incorrect. Ultimately, a fix will need to come in the form of an NNF potentially...
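For the counting case specifically, the arithmetic of the fix is simple, whatever form the API ends up taking: every variable in the intended universe that the theory never mentions is free, so each one doubles the model count. A hedged sketch (the function name is hypothetical):

```python
def count_over_universe(theory_count, theory_vars, universe):
    """Correct a model count for a theory that omits variables:
    each unmentioned variable in `universe` is unconstrained, so it
    doubles the number of models."""
    missing = set(universe) - set(theory_vars)
    return theory_count * (2 ** len(missing))

# the theory `x` has 1 model over {x}, but 4 over {x, y, z}
print(count_over_universe(1, {'x'}, {'x', 'y', 'z'}))  # -> 4
```

This also explains the `model_count()` bug above: `true` mentions no variables, so over the universe `{x}` its one model should have been doubled to 2.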