Dongjie Shi
> When running "bash work/start-scripts/start-spark-local-sql-sgx.sh", I also get this error:
> 21/09/01 22:24:33 INFO DAGScheduler: Job 13 failed: runJob at PythonRDD.scala:153, took 453.136422 s
> Traceback (most recent call last):
> ...
A similar problem occurs when running customloss.py.
Can we merge the nfs and orca-job YAML files into one deployment YAML?
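One possible approach, assuming both files are ordinary Kubernetes manifests (the file names `nfs.yaml` and `orca-job.yaml` below are stand-ins for the real ones): Kubernetes accepts several resources in a single file separated by `---` lines, so the two YAMLs can simply be concatenated into one deployment file.

```shell
# Sketch: combine two Kubernetes manifests into a single deployment file.
# The two printf lines create dummy stand-in manifests just so this runs
# self-contained; in practice the real nfs.yaml and orca-job.yaml are used.
printf 'kind: PersistentVolume\n' > nfs.yaml
printf 'kind: Job\n' > orca-job.yaml

# YAML document separator '---' lets one file hold multiple resources.
{ cat nfs.yaml; echo '---'; cat orca-job.yaml; } > deployment.yaml

cat deployment.yaml
# kubectl apply -f deployment.yaml   # would apply both resources in one step
```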
In the Spark docs, `/` is not an option; please try `hdfs://` or `file://`:

> application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally...
> > In the Spark docs, `/` is not an option; please try `hdfs://` or `file://`.
> >
> > application-jar: Path to a bundled jar including your application and all dependencies. The...
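As a quick guard against this mistake, a launcher script could reject bare local paths before calling spark-submit. A minimal sketch; the accepted scheme list here is an assumption (Spark also supports other globally-visible schemes, e.g. `s3a://`), and the paths are hypothetical:

```shell
# Sketch: check that the application-jar argument uses a globally-visible
# URL scheme, as required by spark-submit, instead of a bare local path.
is_global_url() {
  case "$1" in
    file://*|hdfs://*|http://*|https://*) return 0 ;;
    *) return 1 ;;
  esac
}

is_global_url "file:///opt/work/app.jar" && echo "ok: file URL"
is_global_url "/opt/work/app.jar"        || echo "rejected: bare path"
# prints:
#   ok: file URL
#   rejected: bare path
```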
We will consider this and have a discussion. Thanks.
> Also, what is the exact commit hash you've run this on?

The commit hash is fb71e4376a1fa797697832ca5cbd7731dc7f8793. The same issue also happens on 1.2-rc1, and on the previous commit 1b8848bdae09fa0e92e4644b7fbd4e8c6cd1718c, which...
> @llly How can we quickly reproduce it? What is the Python script you tried, and is there anything special you did in `python.manifest.template`?

Actually, we use bash to start...
> That commit is 5 months old...
> Please try the newest master from the new repository (at the time of writing: [gramineproject/gramine@ff5a2da](https://github.com/gramineproject/gramine/commit/ff5a2da0dd577f47204a546465aab6797bce5d83))

Sorry, I pasted the wrong commit; actually...
[test-orca-tf-text-sgx.log](https://github.com/gramineproject/graphene/files/7224508/test-orca-tf-text-sgx.log)

The trace log is attached. You can also run it with the docker image below:

```
export ENCLAVE_KEY=xxx/enclave-key.pem
export LOCAL_IP=x.x.x.x
docker run -itd \
    --privileged \
    --net=host \
    --cpuset-cpus="26-30" \
    --oom-kill-disable...
```