JoshuaC3

Results: 13 issues from JoshuaC3

I have raised a Feature Request [here](https://github.com/conda/conda/issues/10539) with Conda/Anaconda to add this as an option on installs/creates etc. I haven't given it any thought, but maybe there is a wrapper...

I think it would be really helpful to allow a mixture of degrees for a model. This could be passed as a list or numpy array to the degree parameter....
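For illustration, a minimal sketch of how a `degree` parameter could accept either a single int or a per-feature sequence (the helper name and signature are hypothetical, not an existing API):

```
import numpy as np

def normalize_degrees(degree, n_features):
    """Broadcast a scalar degree to every feature, or validate a per-feature list.

    Hypothetical helper sketching the proposed behaviour of `degree`.
    """
    degrees = np.atleast_1d(np.asarray(degree, dtype=int))
    if degrees.size == 1:
        degrees = np.repeat(degrees, n_features)
    if degrees.size != n_features:
        raise ValueError(f"expected {n_features} degrees, got {degrees.size}")
    return degrees

# normalize_degrees(3, 4)          -> array([3, 3, 3, 3])
# normalize_degrees([1, 2, 3], 3)  -> array([1, 2, 3])
```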

When training an EBM, the `interactions=10` parameter helps improve the predictive power of the model. However, in certain scenarios this can cause problems; see [#184](https://github.com/interpretml/interpret/issues/184#issuecomment-822702385) as a single example. This...
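If I read the docs correctly, `interactions` also accepts an explicit list of feature pairs, which could be a workaround when the automatic selection causes problems. A minimal sketch (the feature indices are purely illustrative):

```
from interpret.glassbox import ExplainableBoostingClassifier

# Let the FAST heuristic pick the top 10 pairs automatically...
ebm_auto = ExplainableBoostingClassifier(interactions=10)

# ...or pin the interaction terms to an explicit list of feature pairs.
ebm_manual = ExplainableBoostingClassifier(interactions=[(0, 1), (2, 3)])
```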

Amazing stuff. From what I can tell, you use simple decision trees as your [base estimator](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingRegressor.html#sklearn.ensemble.BaggingRegressor). Is it possible to use linear models, polynomial regression models or even cubic splines?...
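For reference, this is how scikit-learn's bagging meta-estimator exposes that choice of base learner (recent versions use the `estimator` keyword; older releases call it `base_estimator`):

```
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import Ridge

# Swap the default decision tree for a linear base learner.
bagged_ridge = BaggingRegressor(estimator=Ridge(alpha=1.0), n_estimators=50)
```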

What loss functions are used when boosting the EBMs, for both regression and classification? I searched the repo and could only find [this](https://github.com/interpretml/interpret/blob/f298c734f3fedce522795e12da4e7a9788596f06/python/interpret-core/interpret/glassbox/ebm/ebm.py#L22), but wasn't sure how `_merged_pair_score_fn`...
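For context, these are the losses I would assume by default (my guess, not something I could confirm from the code):

```latex
% Assumed, typical choices -- not confirmed from the repo:
L_{\text{reg}}(y, \hat{y}) = \tfrac{1}{2}\,(y - \hat{y})^2
\qquad
L_{\text{clf}}(y, f) = \log\bigl(1 + e^{-y f}\bigr), \quad y \in \{-1, +1\}
```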

I am reading through the GAMI-Net paper referenced on the main page. This model trains on the "main effects", or 1st-order terms, first. Then, in a 2nd iteration, it trains...
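As a very loose analogue of that staged procedure (using EBMs as a stand-in purely so the sketch runs; GAMI-Net itself is a neural-network model and this is not its code):

```
import numpy as np
from interpret.glassbox import ExplainableBoostingRegressor

# Toy data, only to make the sketch self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] + np.sin(X[:, 1]) + X[:, 2] * X[:, 3] + rng.normal(scale=0.1, size=500)

# Stage 1: main effects only.
main_model = ExplainableBoostingRegressor(interactions=0)
main_model.fit(X, y)

# Stage 2: fit the residuals to pick up pairwise interaction terms.
residuals = y - main_model.predict(X)
pair_model = ExplainableBoostingRegressor(interactions=10)
pair_model.fit(X, residuals)
```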

From section 4 of the [paper](https://kaggle2.blob.core.windows.net/forum-message-attachments/225952/7441/high%20cardinality%20categoricals.pdf) cited in TargetEncoding:

> Instead of choosing the prior probability of the target as the _null hypothesis_, it is reasonable to replace it with...
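To make the idea concrete, a minimal sketch of the blending scheme with the prior passed in explicitly, so it can be swapped for something other than the global target mean as section 4 suggests (the function and parameter names are mine, not the paper's):

```
import pandas as pd

def smoothed_target_encode(df, cat_col, target_col, prior, k=20.0):
    """Blend each category's target mean with a supplied prior estimate."""
    stats = df.groupby(cat_col)[target_col].agg(["mean", "count"])
    lam = stats["count"] / (stats["count"] + k)            # shrinkage weight
    encoding = lam * stats["mean"] + (1.0 - lam) * prior   # blended estimate
    return df[cat_col].map(encoding)

# Default null hypothesis: the global target mean as the prior.
# df["city_enc"] = smoothed_target_encode(df, "city", "y", prior=df["y"].mean())
```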

Although this is very simple to implement in Pandas or similar, it would be very nice to have it here as a scikit-learn compatible transformer. It is a good...
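For illustration, the skeleton such a transformer would follow (the class name and the actual transformation are placeholders):

```
from sklearn.base import BaseEstimator, TransformerMixin

class PlaceholderTransformer(BaseEstimator, TransformerMixin):
    """Minimal scikit-learn compatible transformer skeleton."""

    def fit(self, X, y=None):
        # learn any state needed by transform() here
        return self

    def transform(self, X):
        # apply the proposed operation and return the result
        return X
```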

The [flake8 per-file-ignores](https://github.com/snoack/flake8-per-file-ignores) plugin works from the CLI but not in AtomLinter. My `tox.ini` config file looks like this:

```
[flake8]
per-file-ignores = /tests/test__checks.py: D103
```

Could be related to https://github.com/AtomLinter/linter-flake8/issues/622...

Really interesting stuff! Can this be used as a loss function for boosted learners? I'm thinking of GBMs/GBDTs. The requirement there is that the loss be twice continuously differentiable. Is this the case with DILATE?
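For context on the differentiability requirement: custom objectives in GBM frameworks such as XGBoost or LightGBM are supplied as the first and second derivatives of the loss with respect to the prediction, so a DILATE-style loss would need both. A minimal sketch with squared error as the stand-in loss:

```
import numpy as np

def squared_error_objective(y_true, y_pred):
    """Gradient and Hessian of 0.5 * (y_pred - y_true) ** 2 w.r.t. y_pred."""
    grad = y_pred - y_true           # first derivative
    hess = np.ones_like(y_pred)      # second derivative (constant)
    return grad, hess
```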