No improvement in performance using @codon.jit
Here is my test code:
```python
import collections
import codon
import random
import time


class RollingMedianCodon:
    n: int
    data: collections.deque[float]

    def __init__(self, n: int = 10):
        self.n = n
        self.data = collections.deque(maxlen=n)

    def input(self, value: float):
        self.data.append(value)
        return self.get_median()

    @codon.jit
    def get_median(self):
        sorted_data = sorted(self.data)
        mid = len(sorted_data) // 2
        if len(sorted_data) % 2 == 0:
            return (sorted_data[mid - 1] + sorted_data[mid]) / 2.0
        else:
            return sorted_data[mid]


class RollingMedian:
    n: int
    data: collections.deque[float]

    def __init__(self, n: int = 10):
        self.n = n
        self.data = collections.deque(maxlen=n)

    def input(self, value: float):
        self.data.append(value)
        return self.get_median()

    def get_median(self):
        sorted_data = sorted(self.data)
        mid = len(sorted_data) // 2
        if len(sorted_data) % 2 == 0:
            return (sorted_data[mid - 1] + sorted_data[mid]) / 2.0
        else:
            return sorted_data[mid]


def test_performance():
    # create both instances
    rolling_median_jit = RollingMedianCodon(n=20)
    rolling_median_normal = RollingMedian(n=20)

    # generate test data
    test_data = [random.uniform(0, 1000) for _ in range(100000)]

    # time the JIT version
    start_time = time.time()
    for value in test_data:
        rolling_median_jit.input(value)
    jit_time = time.time() - start_time

    # time the plain-Python version
    start_time = time.time()
    for value in test_data:
        rolling_median_normal.input(value)
    normal_time = time.time() - start_time

    print(f"JIT version: {jit_time:.4f} s")
    print(f"Normal version: {normal_time:.4f} s")
    print(f"Performance improvement: {(normal_time/jit_time - 1)*100:.2f}%")


if __name__ == "__main__":
    test_performance()
```
The result is
```
JIT version: 0.4462 s
Normal version: 0.0741 s
Performance improvement: -83.38%
```
Why is it slower than the normal version?
You are converting from python to codon a list of 100000 elements (test_data) one by one. Better do the entire loop in codon
I just want to test whether @codon.jit would improve the performance of get_median(), if I do the entire loop in codon, it is not fair for the python code.
get_median will only improve if you are not converting data from Python to Codon inside the function. But you are not testing the function in isolation, you are testing the whole loop. Even if get_median did NOTHING in Codon's version, it would probably still be slower.
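To see why the bridging cost dominates here, it helps to time get_median in isolation in pure Python (no Codon involved at all). Sorting a 20-element window costs on the order of microseconds per call, so even a modest per-call Python↔Codon conversion cost can swamp any speedup. A quick sketch with `timeit` (the numbers are machine-dependent):

```python
import collections
import random
import timeit

# A 20-element rolling window, like the n=20 case in the benchmark.
data = collections.deque(
    (random.uniform(0, 1000) for _ in range(20)), maxlen=20
)

def get_median(d):
    # Same logic as in the benchmark, but free-standing for timing.
    sorted_data = sorted(d)
    mid = len(sorted_data) // 2
    if len(sorted_data) % 2 == 0:
        return (sorted_data[mid - 1] + sorted_data[mid]) / 2.0
    return sorted_data[mid]

calls = 100_000
per_call = timeit.timeit(lambda: get_median(data), number=calls) / calls
print(f"pure-Python get_median: {per_call * 1e6:.2f} us/call")
```

If each jitted call has to pay a comparable (or larger) conversion cost just to cross the boundary, there is nothing left for the JIT to win back on a function this small.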
After doing some more tests, even this will be slower, as it needs to start the JIT:

```python
@codon.jit
def do_nothing():
    return 1

# time the JIT version
start_time = time.time()
do_nothing()
jit_time = time.time() - start_time
```

So for a "fair" comparison you should call do_nothing() to warm up the JIT before running your test.
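A runnable sketch of that warm-up pattern is below. The try/except fallback is my own addition so the snippet also runs where Codon is not installed; with Codon present, the first call pays the compilation cost and only later calls reflect steady-state performance:

```python
import time

try:
    import codon  # assumes the codon JIT package is installed
    jit = codon.jit
except ImportError:
    def jit(fn):
        # fallback: identity decorator, so the sketch runs without Codon
        return fn

@jit
def do_nothing():
    return 1

# Warm up: the first call triggers JIT compilation and is not
# representative of per-call cost.
do_nothing()

# Only time calls made after the warm-up.
start = time.perf_counter()
do_nothing()
steady_state = time.perf_counter() - start
print(f"warmed call: {steady_state:.6f} s")
```

`time.perf_counter()` is used instead of `time.time()` because it is a monotonic high-resolution clock, which matters when timing single calls.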
So I think maybe the best way is to compile the Codon code into a Python extension and use it from Python code instead of the JIT. Could you please help me with the import error described in #605?
That will not help either, as this test does lots of allocations, and Codon is not as well optimized for that as Python is.
My tests show that you will get similar performance as with the non-warmed jitted full loop.
In #605 I have uploaded a test in a repo which is a lot faster than the Python code, you can check that. But I cannot import the class into Python, since the code is compiled from a Codon file.
Weird, I have run your code (removing the import codon and the @codon.jit) directly with codon and I do not see any improvements.
In this case, every call to/from Codon within a Python loop will be penalized (as each Python object is cast to Codon object and vice-versa). You either need to do the whole loop in Codon or make sure that the gain of Codon within a method is larger than the cost of bridging. For small calculations (e.g., get_median) this is probably not worth it (esp. since Python can nowadays optimize such cases).
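As an illustration of the "do the whole loop in Codon" suggestion, here is a hedged sketch: a single jitted function takes the entire list, so the 100000 values cross the Python/Codon boundary once instead of once per call. The function name, the list-based window (instead of `collections.deque`, whose support inside Codon I am not certain of), and the import fallback are all my own illustration, not code from the thread:

```python
try:
    import codon  # assumes the codon JIT package is installed
    jit = codon.jit
except ImportError:
    def jit(fn):
        # fallback: run the identical logic in pure Python
        return fn

@jit
def rolling_medians(values: list[float], n: int) -> list[float]:
    # The whole input list is converted at the boundary once; the
    # loop and all the sorting then run inside the jitted function.
    window: list[float] = []
    out: list[float] = []
    for v in values:
        window.append(v)
        if len(window) > n:
            window.pop(0)  # keep at most the last n values
        s = sorted(window)
        mid = len(s) // 2
        if len(s) % 2 == 0:
            out.append((s[mid - 1] + s[mid]) / 2.0)
        else:
            out.append(s[mid])
    return out

# usage: one boundary crossing for the entire run
print(rolling_medians([3.0, 1.0, 2.0, 5.0], 3))  # -> [3.0, 2.0, 2.0, 2.0]
```

With this shape, the per-element bridging penalty described above disappears, and any speedup (or lack of it) reflects Codon's own loop/sort performance rather than conversion overhead.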