Results: 13 issues by jyp

under java 1.7. Any advice? Thanks!

Hi, Bgshih, which dataset was the pretrained model trained on? Is it http://www.robots.ox.ac.uk/~vgg/data/text/mjsynth.tar.gz? [this file is too big....] I'm confused about it. If you have time, I hope I can receive you...

Hi, have you tried sharing weights between the different pyramid levels (as the LapSRN paper describes)? The code you uploaded doesn't seem to do that.
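As a toy illustration of what the question is asking for (this is not the repo's code, and the names are made up): the essence of LapSRN-style weight sharing is that every pyramid level applies the *same* parameters, so the model size stays constant as levels are added. A minimal 1-D sketch:

```python
# One parameter reused at every level; in a real network this would be
# a shared convolutional filter bank, not a single scalar.
shared_weight = 0.5

def upsample2x(signal):
    """Nearest-neighbor 2x upsampling of a 1-D signal."""
    return [v for v in signal for _ in range(2)]

def level(signal):
    """One pyramid level: upsample, then apply the shared transform."""
    return [shared_weight * v for v in upsample2x(signal)]

def pyramid(signal, levels):
    """Run several levels; each call to level() reuses shared_weight."""
    outputs = []
    x = signal
    for _ in range(levels):
        x = level(x)  # same shared parameters at every level
        outputs.append(x)
    return outputs

outs = pyramid([1.0, 2.0], levels=2)  # 2 elements -> 4 -> 8
```

Without sharing, each level would own its own `shared_weight`; with sharing, the per-level transform is literally the same object, which is the variant the LapSRN authors report as parameter-efficient.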

I compiled with VS2013 and replaced msvcr100 with msvcr120. Why doesn't it work? The error is: $ make sanity 2>&1 | tee /cygdrive/f/openjdk/build.log jdk/make/common/shared/Defs-windows.gmk:324: WARNING: No VS2010 available. No VS 100COMNTOOLS found on system. No WINDOWSSDKDIR found on system. jdk/make/common/shared/Defs-windows.gmk:337: **\*...

Hi there, as the title says, what about the performance? Thanks.

Hi, Kayousterhout! I am new to Spark, and I am confused about Spark's performance metrics. I know you focus on the metrics work in Spark...

After following the readme step by step, when I run the update command in the sbt environment, the screen logs the following message: ``` [error] a module is not authorized to depend...

I read CoGroupedRDD's implementation, but I don't understand how a NarrowDependency versus a ShuffleDependency affects the partitions of the CoGroupedRDD... If I call a.cogroup(b), and a uses a RangePartitioner versus a HashPartitioner, will the intermediate CoGroupedRDD have as many partitions as RDD a? After all, the cogroup operator can't specify numPartitions. In the JobLogicalPlan chapter you split dependencies into 4 categories (or rather, two major categories), yet CoGroupedRDD's handling of dependencies doesn't seem that complicated; it completely ignores the so-called N:1 NarrowDependency.
> override def compute(s: Partition, context: TaskContext): Iterator[(K, Array[Iterable[_]])] = {
> val sparkConf = SparkEnv.get.conf
> val externalSorting = sparkConf.getBoolean("spark.shuffle.spill", true)...
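To make the question concrete, here is a plain-Python sketch of cogroup semantics (not Spark's actual implementation; the helper names are invented). The point it illustrates: the output partition count is fixed by whatever partitioner the cogroup is given, not by the inputs' own partition counts, so a hypothetical two-partition cogroup of two datasets yields exactly two output partitions regardless of how the inputs were partitioned.

```python
from collections import defaultdict

def hash_partition(pairs, num_partitions):
    """Split (key, value) pairs into num_partitions buckets by key hash."""
    parts = [[] for _ in range(num_partitions)]
    for k, v in pairs:
        parts[hash(k) % num_partitions].append((k, v))
    return parts

def cogroup(a, b, num_partitions):
    """Group values from both datasets by key, partition by partition.

    Both inputs are re-bucketed with the same partitioner, so matching
    keys always land in the same output partition, and the number of
    output partitions equals num_partitions.
    """
    out = []
    for pa, pb in zip(hash_partition(a, num_partitions),
                      hash_partition(b, num_partitions)):
        grouped = defaultdict(lambda: ([], []))
        for k, v in pa:
            grouped[k][0].append(v)   # values from the left dataset
        for k, v in pb:
            grouped[k][1].append(v)   # values from the right dataset
        out.append(dict(grouped))
    return out

result = cogroup([("x", 1), ("y", 2)], [("x", 10)], num_partitions=2)
```

In real Spark, if an input already has the same partitioner as the cogroup, its data needn't be shuffled (a narrow dependency); otherwise a shuffle dependency is created. This sketch always re-buckets, so it corresponds to the all-shuffle case.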