ihc童鞋@提不起劲

Results 78 comments of ihc童鞋@提不起劲

I'm wondering if the following 2 solutions may be acceptable? 1. Call madvise with io_uring: it saves a syscall, but I don't know whether it will work around the scaling bottleneck....
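To make the idea concrete, a rough sketch using liburing's `io_uring_prep_madvise` (the `IORING_OP_MADVISE` opcode, kernel 5.6+); whether routing the call through io_uring actually dodges the IPI cost is exactly what I'm unsure about:

```
/* Sketch: issue madvise(MADV_DONTNEED) through io_uring instead of a
 * direct syscall. Assumes liburing >= 0.7 (io_uring_prep_madvise) and
 * kernel >= 5.6 (IORING_OP_MADVISE). This saves the syscall itself, but
 * the kernel-side work (TLB-shootdown IPIs) may still happen. */
#include <liburing.h>
#include <sys/mman.h>

int madvise_via_uring(void *addr, size_t len) {
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    int res;

    if (io_uring_queue_init(8, &ring, 0) < 0)
        return -1;
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_madvise(sqe, addr, (off_t)len, MADV_DONTNEED);
    io_uring_submit(&ring);
    io_uring_wait_cqe(&ring, &cqe);
    res = cqe->res;                 /* 0 on success, -errno on failure */
    io_uring_cqe_seen(&ring, cqe);
    io_uring_queue_exit(&ring);
    return res;
}
```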

> > Call madvise with io_uring: It saves syscall but I don't know if it will work to avoid scaling bottleneck.
>
> As I understand it the IPI is...

> I might be misunderstanding, but isn't that what the pooling allocator does?

The pool inside wasmtime allocates a big chunk of memory one time, in a single call, and manages...
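Roughly the pattern I mean (illustrative only; the names and sizes here are made up, not wasmtime's actual internals):

```
#include <stddef.h>
#include <sys/mman.h>

#define SLOT_SIZE  ((size_t)8 << 20)  /* hypothetical 8 MiB per slot */
#define SLOT_COUNT 1024

static unsigned char *pool;

/* One big reservation, one syscall, at startup. */
int pool_init(void) {
    pool = mmap(NULL, SLOT_SIZE * SLOT_COUNT, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    return pool == MAP_FAILED ? -1 : 0;
}

/* The contended part: every slot reuse madvises its own small range,
 * so madvise calls scale with the instantiation rate. */
int pool_reset_slot(unsigned idx) {
    return madvise(pool + (size_t)idx * SLOT_SIZE, SLOT_SIZE,
                   MADV_DONTNEED);
}
```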

> We found that the cost of an madvise scaled with the size of the region being advised

Thank you for sharing the past experiments. But it still confuses me that...

> You can't merge the regions to madvise - they're not contiguous.

I mean, maybe we can make the indices as contiguous as possible, and then we can do madvise...
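Roughly what I have in mind (a hypothetical helper, not existing wasmtime code): sort the freed slot indices, coalesce adjacent ones, and issue one madvise per contiguous run instead of one per slot:

```
#include <stdlib.h>
#include <sys/mman.h>

static int cmp_uint(const void *a, const void *b) {
    unsigned x = *(const unsigned *)a, y = *(const unsigned *)b;
    return (x > y) - (x < y);
}

/* Batch-reset freed slots: one madvise per contiguous run of indices.
 * `base` and `slot_size` describe the pool reservation. */
int reset_slots_batched(unsigned char *base, size_t slot_size,
                        unsigned *idx, size_t n) {
    qsort(idx, n, sizeof *idx, cmp_uint);
    for (size_t i = 0; i < n;) {
        size_t j = i + 1;
        while (j < n && idx[j] == idx[j - 1] + 1)
            j++;                                  /* extend the run */
        if (madvise(base + (size_t)idx[i] * slot_size,
                    (j - i) * slot_size, MADV_DONTNEED) != 0)
            return -1;
        i = j;                                    /* next run */
    }
    return 0;
}
```

Of course, the win from this is limited if the kernel-side cost tracks the total bytes advised rather than the call count.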

I finished my experiment, and the result supports the same conclusion.

What I did: clear the stack pool in batch (https://github.com/ihciah/wasmtime/tree/main)

Result: running a `return 1` demo (on a KVM VM)
- Original 1 core:...

Thanks for the explanation. But I guess the IPI cost (or whatever the bottleneck is) is not linear in the number of syscalls but in the size of the memory ranges, since doing madvise on twice as much memory is only a little...

> Not calling madvise is effectively equivalent to reusing the Instance while resetting all tables and wasm globals (actual global variables are in the linear memory and as such not...

Just ignore lmdb. Replace the imread function of dataset.py with

```
def imread(self, path):
    # key = hashlib.md5(path.encode()).digest()
    # img_buffer = self.txn.get(key)
    # img_buffer = np.frombuffer(img_buffer, np.uint8)
    # img = cv2.imdecode(img_buffer, cv2.IMREAD_COLOR)
    # Read the image directly from disk instead of going through lmdb:
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    return img
```

Yeah, this bug does exist, because no ack/retransmission mechanism was designed in; it's a design flaw. It was also mentioned in a previous issue, and I think it needs to be added. But given how clumsy the current code abstraction is, I feel it needs a rewrite. A quicker fix would be to just change kcp to run over TCP. However, I don't have time to work on this at the moment.