Shunsuke KITADA
> Residual Attention Network inserts attention modules so the network focuses on regions important for classification and stops the propagation of noise-induced errors. The attention takes the same residual form as ResNet, avoiding the vanishing-gradient problem. SoTA on CIFAR and ImageNet. https://twitter.com/hillbig/status/887113195202072578
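As a reference for the residual form mentioned above, here is a minimal sketch of the idea: a soft mask branch M(x) gates a trunk branch F(x), and the output is (1 + M(x)) * F(x), so gradients still flow when the mask is near zero. The convolution stacks below are placeholders, not the paper's actual trunk/mask architecture.

```python
import torch
import torch.nn as nn


class ResidualAttentionBlock(nn.Module):
    """Minimal sketch of residual attention: (1 + M(x)) * F(x).

    The trunk and mask branches here are toy stand-ins, not the
    architecture from the Residual Attention Network paper.
    """

    def __init__(self, channels: int) -> None:
        super().__init__()
        # Trunk branch: ordinary feature transformation.
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Mask branch: per-pixel, per-channel attention weights in [0, 1].
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.trunk(x)
        m = self.mask(x)
        # Residual attention: even if m ~ 0, the trunk output passes through.
        return (1.0 + m) * f


if __name__ == "__main__":
    block = ResidualAttentionBlock(channels=16)
    y = block(torch.randn(1, 16, 32, 32))
    print(y.shape)  # torch.Size([1, 16, 32, 32])
```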
> ニューラル機械翻訳は、連続値(のベクトル)で表現されることや、ニューラルネットが非線形であることから解釈が難しい。この研究ではLayer-wise Relevance Propagationを用いて(文脈の)単語の隠れ状態への貢献度を計算することを提案した。出力層や隠れ層のあるノードが、入力層の各ノードからどれぐらい影響を受けているかが色の濃さで可視化される。 > https://developers.cyberagent.co.jp/blog/archives/9908/
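To make the relevance redistribution concrete, here is a minimal sketch of the epsilon-rule LRP step for a single linear layer; it is a toy illustration of how relevance flows from upper-layer units back to inputs, not the paper's full NMT setup.

```python
import numpy as np


def lrp_linear(a: np.ndarray, W: np.ndarray, R_out: np.ndarray,
               eps: float = 1e-6) -> np.ndarray:
    """Epsilon-rule LRP for one linear layer z = a @ W.

    a      : (in_dim,)          activations of the lower layer
    W      : (in_dim, out_dim)  weights
    R_out  : (out_dim,)         relevance assigned to the upper-layer units
    returns: (in_dim,)          relevance redistributed to the lower-layer units
    """
    z = a @ W                          # pre-activations of the upper layer
    denom = z + eps * np.sign(z)       # stabilized denominator
    # Each input unit receives relevance proportional to its contribution a_i * w_ij.
    return (a[:, None] * W / denom[None, :]) @ R_out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(size=4)             # toy stand-in for 4 context-word activations
    W = rng.normal(size=(4, 3))        # toy hidden layer
    R_out = np.abs(rng.normal(size=3))
    R_in = lrp_linear(a, W, R_out)
    print(R_in)                        # per-input contribution scores
    print(R_in.sum(), R_out.sum())     # relevance is approximately conserved
```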
[A summary of Mikolov's three word2vec papers (2013)](http://hytae.hatenablog.com/entry/2015/05/15/Mikolov%E3%81%AEword2vec%E8%AB%96%E6%96%873%E6%9C%AC%E3%81%BE%E3%81%A8%E3%82%81)
Thank you very much for publishing your excellent research results. I am also interested in reproducing the layout-to-image model. Is there any reproduction code available? Thank you in...
Hi, I have built a pipeline with diffusers based on the reference implementation. Here is my implementation of the Structured Diffusion Guidance: https://github.com/shunk031/training-free-structured-diffusion-guidance. However, I am not confident in my...
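For context, a hedged sketch of how such a custom diffusers pipeline is typically wired up; the base checkpoint and the local `custom_pipeline` path below are illustrative assumptions, not necessarily how the linked repository is packaged.

```python
# Hedged sketch only: the base checkpoint and the local `custom_pipeline`
# path are assumptions for illustration; see the linked repository for the
# actual entry point of the Structured Diffusion Guidance pipeline.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="./pipeline",   # directory containing the custom pipeline.py (assumed)
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Structured Diffusion Guidance is training-free, so generation is a plain
# call with a compositional prompt.
image = pipe("a red car and a white sheep").images[0]
image.save("out.png")
```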
Hi, @VSAnimator I'd like to know about training textual inversion too. In particular, I'd appreciate details regarding the `FineTuneConcept` class shown in the notebook. Thanks!
How can I solve this problem? I have tried the approach described in the following link, but it did not resolve the error: - https://stackoverflow.com/questions/71940179/error-lib-x86-64-linux-gnu-libc-so-6-version-glibc-2-34-not-found
Thanks for the reply. I have found the original paper and its MATLAB code but am finding it a bit difficult to reproduce. I wish you good luck with...
Hi @helia95, thank you for the comment. Is the `Eq. (4, 7)` you are pointing out actually `Eq. (7)` in the paper? I'm writing about `Eq. (7)` in...
@elvisnava Thank you very much for pointing this out. To be honest, the code in this repository is based on an earlier version released by the author on [OpenReview](https://openreview.net/forum?id=PUIqjT4rzq7) as...