Deep-Autoencoders-For-Collaborative-Filtering

Using deep autoencoders to predict movie ratings.

5 issues

Hello! I've found a performance issue in dataset.py: `batch()` should be called before `map()`, which could make your program more efficient. Here is [the TensorFlow documentation](https://tensorflow.google.cn/guide/data_performance?hl=zh_cn#vectorized_mapping) to support it. Detailed...
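
A minimal sketch of the vectorized-mapping pattern the linked guide describes: batching before mapping so the parse function runs once per batch instead of once per record. The `parse` function, feature spec, and file name below are placeholders, not the repo's actual code.

```python
import tensorflow as tf

def parse(serialized_batch):
    # Parses a whole batch of serialized examples in one call; this feature
    # spec is a placeholder, not the repo's actual schema.
    features = {"rating": tf.io.FixedLenFeature([10], tf.float32)}
    return tf.io.parse_example(serialized_batch, features)["rating"]

dataset = (tf.data.TFRecordDataset("train.tfrecords")  # placeholder file name
           .batch(32)    # batch first ...
           .map(parse))  # ... so parse runs once per batch, not per element
```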

Hello, I found a performance issue in the definition of `_get_training_data` in artem-oppermann_Deep-Autoencoders-For-Collaborative-Filtering/src/data/dataset.py: [dataset = dataset.map(parse)](https://github.com/artem-oppermann/Deep-Autoencoders-For-Collaborative-Filtering/blob/5b8bec5a8c04fbf0e0302d23de8796ffa60b8ba9/src/data/dataset.py#L14) is called without **num_parallel_calls**. I think it will increase the efficiency of your program if...
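
A minimal sketch of the suggested change, assuming a recent TensorFlow where `tf.data.AUTOTUNE` is available; the `parse` function and file name are placeholders, not the repo's actual code.

```python
import tensorflow as tf

def parse(serialized):
    # Placeholder per-record parse function; the repo's real `parse` differs.
    features = {"rating": tf.io.FixedLenFeature([10], tf.float32)}
    return tf.io.parse_single_example(serialized, features)["rating"]

dataset = tf.data.TFRecordDataset("train.tfrecords")  # placeholder file name
# num_parallel_calls lets map() parse several records concurrently;
# tf.data.AUTOTUNE picks the degree of parallelism at runtime.
dataset = dataset.map(parse, num_parallel_calls=tf.data.AUTOTUNE)
```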

Hello: When I count the number of movies the users rated I get 3695, not 3952, and the number of training samples is 5954, not 5953. How should these be counted? Thank...
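
One likely explanation for the first discrepancy is that MovieLens-1M movie IDs run up to 3952 but not every ID actually receives a rating, so counting distinct movie IDs in the ratings file gives a smaller number. A hedged sketch of that count, assuming the standard `::`-separated MovieLens-1M layout; the file name and encoding are assumptions, not repo code.

```python
# Count distinct rated movies in MovieLens-1M.
rated_movies = set()
with open("ratings.dat", encoding="latin-1") as f:
    for line in f:
        user_id, movie_id, rating, timestamp = line.strip().split("::")
        rated_movies.add(movie_id)

# IDs that never occur in ratings.dat are missing from this set, so the
# count comes out smaller than the maximum movie ID (3952).
print(len(rated_movies))
```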

@LucRyan Thanks for your reply. I've tried approach #2 mentioned before but still get overfitting. [split.py](https://github.com/artem-oppermann/Restricted-Boltzmann-Machine/blob/master/data/train_test_split.py) Is it possible to share your train/test.dat? _Originally posted by @kalashnlkov in https://github.com/artem-oppermann/Deep-Autoencoders-For-Collaborative-Filtering/issues/5#issuecomment-449217066_

Modified the `l2_loss` term so that it does not count biases, only weights. Typically we do not want to regularize biases because they are: 1. additive, not multiplicative, and so are...
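
A minimal sketch of a weight-only L2 penalty of this kind, filtering trainable variables by name; the variable names, shapes, and the `"bias"` naming convention are illustrative assumptions, not the repo's actual code.

```python
import tensorflow as tf

# Illustrative stand-ins for one autoencoder layer's parameters.
W = tf.Variable(tf.random.normal([100, 50]), name="weights")
b = tf.Variable(tf.zeros([50]), name="bias")

trainable = [W, b]  # stand-in for the model's trainable-variable list
# Sum L2 penalties over weight matrices only, skipping bias vectors.
l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in trainable
                    if "bias" not in v.name])
```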