
[Bug] How to migrate from te.create_schedule and auto_scheduler to TVM v0.20?

Open JieGH opened this issue 7 months ago • 2 comments

I am using the v0.20 release, migrating from v0.19. The following functions seem to have been removed. Can you point me to a migration guide, or to functions that replace them? Thanks


AttributeError: module 'tvm.te' has no attribute 'create_schedule'

s = te.create_schedule(C.op)

ImportError: cannot import name 'auto_scheduler' from 'tvm' (/home/doc/tvm/python/tvm/__init__.py). Did you mean: 'meta_schedule'?

from tvm import te, auto_scheduler

The simple TVM code that works with v0.19 is:

import tvm
from tvm import te

import numpy as np

# Define the computation
n = te.var("n")  # symbolic variable
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Schedule the computation
s = te.create_schedule(C.op)

# Build the function
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Prepare input data
n_val = 8
a_np = np.random.uniform(size=n_val).astype("float32")
b_np = np.random.uniform(size=n_val).astype("float32")
c_np = np.zeros(n_val, dtype="float32")

# Allocate TVM buffers
ctx = tvm.cpu()
a_tvm = tvm.nd.array(a_np, ctx)
b_tvm = tvm.nd.array(b_np, ctx)
c_tvm = tvm.nd.array(c_np, ctx)

# Run the function
fadd(a_tvm, b_tvm, c_tvm)

# Validate correctness
np.testing.assert_allclose(c_tvm.asnumpy(), a_np + b_np)
print("Success! Output:", c_tvm.asnumpy())

JieGH · May 02 '25 14:05

I am also a bit lost with the new API, but the following worked for me:

import tvm
from tvm import te
import numpy as np

# Define the computation
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Create a PrimFunc
fadd_pf = te.create_prim_func([A, B, C])
mod = tvm.IRModule({"main": fadd_pf})


# Create a TIR schedule
sch = tvm.tir.Schedule(mod)

# Optional: Apply transformations
block_c = sch.get_block("C")
i = sch.get_loops(block_c)[0]
i_outer, i_inner = sch.split(i, factors=[None, 32])
sch.parallel(i_outer)

# Build the function
# Build the scheduled module; target is passed as a keyword argument
fadd = tvm.tir.build(sch.mod, target="llvm")

# Prepare input data
n_val = 8
a_np = np.random.uniform(size=n_val).astype("float32")
b_np = np.random.uniform(size=n_val).astype("float32")
c_np = np.zeros(n_val, dtype="float32")

# Allocate TVM buffers
dev = tvm.cpu()
a_tvm = tvm.nd.array(a_np, dev)
b_tvm = tvm.nd.array(b_np, dev)
c_tvm = tvm.nd.array(c_np, dev)

# Run the function
fadd(a_tvm, b_tvm, c_tvm)

# Validate correctness
np.testing.assert_allclose(c_tvm.numpy(), a_np + b_np)
print("Success!")

Can you test it?

thiagotei · May 08 '25 16:05

What changed in TVM 0.20

te.create_schedule() (the old TE schedule object) has been removed.
The new workflow is as follows: the first step is to create an IRModule to be optimized and compiled, which contains a collection of functions that internally represent the model. That module is then built into a runtime module.

Minimal before / after

# BEFORE (≤0.19)
s = te.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# AFTER (0.20+)
# Create a primitive function
prim_func = te.create_prim_func([A, B, C]).with_attr("global_symbol", "vector_add")
# Put the primitive function in an IRModule
mod = tvm.IRModule({"vector_add": prim_func})
# Build the module
lib = tvm.build(mod, target="llvm") # returns tvm.runtime.Module
fadd = lib["vector_add"] # pull out the packed function
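
As for auto_scheduler, it is gone as well; the ImportError already hints at the replacement, meta_schedule. A rough tuning sketch, untested on 0.20 (ms.tune_tir and ms.tir_integration.compile_tir are the documented meta_schedule entry points, but treat the exact signatures as assumptions):

import tvm
from tvm import te
from tvm import meta_schedule as ms

# meta_schedule tunes concrete workloads, so use a static shape here
n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")
mod = tvm.IRModule({"main": te.create_prim_func([A, B, C])})

target = "llvm -num-cores 4"
# Search for good schedules; tuning records are written to work_dir
database = ms.tune_tir(mod=mod, target=target, work_dir="./tune_tmp", max_trials_global=64)
# Pull the best schedule found out of the database and build it
sch = ms.tir_integration.compile_tir(database, mod, target)
if sch is not None:
    lib = tvm.build(sch.mod, target=target)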

I think the code below is what you intended.

import tvm
from tvm import te
import numpy as np

# TE description
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Create a primitive function
prim_func = te.create_prim_func([A, B, C]).with_attr("global_symbol", "vector_add")
# Put the primitive function in an IRModule
mod = tvm.IRModule({"vector_add": prim_func})
# Build the module
lib = tvm.build(mod, target="llvm") # returns tvm.runtime.Module
fadd = lib["vector_add"] # pull out the packed function

# Prepare input data
n_val = 8
a_np = np.random.uniform(size=n_val).astype("float32")
b_np = np.random.uniform(size=n_val).astype("float32")
c_np = np.zeros(n_val, dtype="float32")

# Allocate TVM buffers
dev = tvm.cpu()
a_tvm = tvm.nd.array(a_np, dev)
b_tvm = tvm.nd.array(b_np, dev)
c_tvm = tvm.nd.array(c_np, dev)

# Run the function
fadd(a_tvm, b_tvm, c_tvm)

# Validate correctness
np.testing.assert_allclose(c_tvm.numpy(), a_np + b_np)
print("Success! Output:", c_tvm.numpy())

vacu9708 · May 09 '25 00:05