Peter Teichman

Results: 14 comments by Peter Teichman

And here's a goroutine dump while it's hanging. At least this time it was in reflect.DeepEqual, though that function itself runs fine on my two inputs. Maybe an infinite loop...

This may be a bug / pathological input in golcs instead.

Unfortunately not: by the time it hits the brain file, each input has been chopped up into trigrams and any additional context has been lost. It would be possible to write...
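For anyone curious what that chopping looks like, here's a rough sketch. It's not cobe's actual tokenizer (which also deals with punctuation and capitalization); it just shows a sentence reduced to overlapping trigrams:

```python
def trigrams(text):
    # Slide a 3-token window over the sentence; everything outside
    # each window (the wider context) is gone by this point.
    tokens = text.split()
    return [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]

print(trigrams("this is a simple example sentence"))
# [('this', 'is', 'a'), ('is', 'a', 'simple'),
#  ('a', 'simple', 'example'), ('simple', 'example', 'sentence')]
```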

You can pass `loop_ms=0` to reply() to get a single candidate. With the current code, that reply will be run through cobe's scorer, but it's a lot closer to what...
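Roughly like this (untested sketch, assuming cobe 2.x's `Brain` API; the brain filename is just an example):

```python
from cobe.brain import Brain

brain = Brain("example.brain")
brain.learn("cobe replies from the text it has learned")

# loop_ms=0 stops the reply search after the first candidate,
# which still gets run through cobe's scorer.
print(brain.reply("what will cobe say?", loop_ms=0))
```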

Ah, ok. There isn't a way to do that with the current API, but it wouldn't be difficult to add.

Without knowing anything about the rest of your training data, I think this is a combination of: 1) None of the ngrams in that line are in common with other...

Interesting, thanks for the report. What version of the `irc` library do you have installed?

@pizzamaker Aargh, I'm so sorry I've been unresponsive on this. The version of python-irc that cobe 2.1.2 has been tested with is 12.1.1, and 8.5.3 is about two years old...
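For anyone else hitting this, one quick way to check which version is installed (`pkg_resources` ships with setuptools, so it works on older Pythons too):

```python
import pkg_resources

# Prints the installed version of the irc package, e.g. "12.1.1".
print(pkg_resources.get_distribution("irc").version)
```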

Unfortunately the SQLite bindings aren't thread safe, and as a result Brain.learn and Brain.reply always need to be called from the same thread. This is a pain, for sure. Maybe...
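One possible workaround, sketched very roughly below: give a single worker thread ownership of the Brain and have everything else submit work through a queue. The queue protocol and names here are illustrative, not part of cobe's API, and this assumes Python 3's `queue`/`threading` modules:

```python
import queue
import threading

from cobe.brain import Brain


def brain_worker(work_queue):
    # The Brain (and its SQLite connection) is created and used
    # only on this thread.
    brain = Brain("example.brain")
    while True:
        kind, text, result = work_queue.get()
        if kind == "learn":
            brain.learn(text)
            result.put(None)
        elif kind == "reply":
            result.put(brain.reply(text))


work_queue = queue.Queue()
threading.Thread(target=brain_worker, args=(work_queue,), daemon=True).start()

# Any thread can now learn or ask for a reply without touching the Brain.
done = queue.Queue()
work_queue.put(("learn", "hello there cobe", done))
done.get()

result = queue.Queue()
work_queue.put(("reply", "hello there", result))
print(result.get())
```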

There's nothing currently in the API, though depending on the statistics you're after, this may be easy to add. I've wanted to add estimated probabilities for n-grams (P(token|token1,token2,token3)) and possibly some deeper...
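As a sketch of the kind of statistic I mean, and assuming nothing about how the counts would actually be pulled out of the brain's database, a maximum-likelihood estimate of P(token|token1,token2,token3) is just count(context + token) / count(context):

```python
from collections import Counter

ngram_counts = Counter()    # counts of (token1, token2, token3, token)
context_counts = Counter()  # counts of (token1, token2, token3)


def observe(token1, token2, token3, token):
    ngram_counts[(token1, token2, token3, token)] += 1
    context_counts[(token1, token2, token3)] += 1


def probability(token, context):
    """Estimate P(token | context) from raw counts (no smoothing)."""
    total = context_counts[context]
    if total == 0:
        return 0.0
    return ngram_counts[context + (token,)] / total


observe("the", "quick", "brown", "fox")
observe("the", "quick", "brown", "dog")
print(probability("fox", ("the", "quick", "brown")))  # 0.5
```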