TVM Python Project Organization after Unified IR

We are in the process of refactoring the codebase to introduce a unified infrastructure. While most of the current refactoring concerns the C++ side without touching the frontends, it would be great for us to start thinking about code organization on the Python side.

This RFC discusses the following points:

Q1: Whether to Reflect the C++ folder structure

As we start to introduce more concepts into the project, it is no longer sustainable to continue putting things under the root scope.

One potential solution is to make the Python project roughly mirror the C++ folder structure in include/tvm. We will need to move some of the root-scope files into subfolders as we introduce these sub-folders in C++. Of course, we can still export specific names to the root scope (see below).

Example changes:

  • ir/module.py will contain the definition of IRModule.
  • /api.py:compute -> top/api.py:compute (assuming some of the compute goes into the top namespace)
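As a sketch of the re-export idea, a definition can live in a subfolder while the root `__init__.py` re-exports the commonly used name. The package name `mypkg` and file contents below are illustrative stand-ins, not tvm's actual layout:

```python
# Sketch: re-export a name from a subpackage into the root scope, mirroring
# how a root __init__.py could expose ir.IRModule at the top level.
# Package and class names are illustrative, not the actual tvm layout.
import importlib
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
pkg = root / "mypkg"
(pkg / "ir").mkdir(parents=True)
(pkg / "ir" / "__init__.py").write_text("from .module import IRModule\n")
(pkg / "ir" / "module.py").write_text(
    "class IRModule:\n"
    "    '''Container of IR functions (stand-in definition).'''\n"
)
# The root __init__.py re-exports the commonly used name.
(pkg / "__init__.py").write_text("from .ir import IRModule\n")

sys.path.insert(0, str(root))
mypkg = importlib.import_module("mypkg")

# Both the namespaced and the root-scope spellings refer to one class.
assert mypkg.IRModule is mypkg.ir.module.IRModule
print(mypkg.IRModule.__module__)  # mypkg.ir.module
```

The definition keeps a clear home (ir/module.py) while users can still write the short root-scope name.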

Q2: runtime namespace

We have a clear separation between runtime and compiler in the C++ codebase. All the runtime-related features are under the runtime folder.

Putting Python runtime-related code into an explicit subfolder will give us a clearly isolated sub-component, which might help future deployment.

Example changes:

  • We can rename _ffi to runtime (as most of that folder contains runtime features).
  • /ndarray.py -> runtime/ndarray.py

Note that we can also separate runtime/contrib from contrib, as on the C++ side.

Q3: What to export to the root scope and Name conflicts

Having a separate namespace does not prevent us from re-exporting things back into the root. One thing we need to resolve here is potential name conflicts.

For example, both runtime and IR have their own Module and Function. It is crucial to have a clear way to distinguish between the IR data structure and the runtime data structure.
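The conflict can be sketched with toy stand-ins (the classes and module layout below are illustrative, not tvm's actual definitions):

```python
# Sketch: two distinct classes can both be called "Module" when they live in
# separate namespaces; the qualified name tells them apart.
import types

runtime = types.ModuleType("runtime")
ir = types.ModuleType("ir")

# Stand-ins: the deployable runtime module vs. the compiler-side IR container.
runtime.Module = type("Module", (), {"__doc__": "compiled, deployable artifact"})
ir.Module = type("Module", (), {"__doc__": "container of IR functions"})

# Same short name, different classes, so exporting both to the root scope
# unqualified would clash; a prefix (IRModule) or the namespace avoids it.
assert runtime.Module.__name__ == ir.Module.__name__ == "Module"
assert runtime.Module is not ir.Module
IRModule = ir.Module  # prefix-style export avoids the root-scope conflict
```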

Q3.1: runtime.PackedFunc vs tvm.Function

At the moment, tvm.Function actually refers to the C++ PackedFunc. It might be better to rename it to runtime.PackedFunc to avoid confusion.

Q3.2: IRModule vs runtime.Module

Module is the trickiest one to name. Right now tvm/module.py corresponds to runtime.Module. We will need to introduce IRModule on the Python side.

Here are some possible naming choices to avoid conflicts:

  • Just use the namespace: ir.Module vs runtime.Module
    • This would mean that we prefer not to export them to the root scope.
  • Use a prefix, e.g. IRModule (RTModule)

At the moment we use the name IRModule to distinguish it from runtime::Module on the C++ side. The class name itself is not used as frequently as the global functions that load these modules. In particular, the following API

tvm.module.load('xyz.so')

might need to be changed to

tvm.runtime.load('xyz.so')

Please share your thoughts.

  1. For structural APIs, let’s just reflect the C++ structure (as in your example).
  2. For TOPI operators, I think it is better to follow what NumPy did: just put all compute into a single namespace (because there is no fundamental difference).
  3. Let’s import several commonly used classes/APIs into the root namespace, for example ndarray (as proposed in Q3).

I agree that most of _ffi should be runtime instead, but I am not sure if directly renaming the entire _ffi to runtime is a good idea. Probably at least those listed below should be kept in _ffi:

  1. _ctypes
  2. _cython
  3. libtvm.so library loading logic
  4. error handling logic

I am not sure if runtime is a good name. Proposal: _core.

Another issue is related to PackedFuncs: currently, most PackedFuncs registered in C++ are exposed to Python dynamically using _init_api_prefix, by manipulating sys.modules. This makes it hard for many editors to recognize the names. The recent editor plugin allows those compatible with LSP to understand them (iirc), but generally speaking, I think we should put all those packed functions under the _ffi folder. We actually made an auto-generation tool for doing this.
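The kind of dynamic injection described above can be sketched as follows. The registry and function names here are toy stand-ins, not tvm's actual FFI machinery:

```python
# Sketch of _init_api_prefix-style dynamic exposure: functions known only at
# runtime (here a toy dict standing in for the C++ PackedFunc registry) are
# attached to a module found via sys.modules. Because the names never appear
# in source, static editors/IDEs cannot see them.
import sys
import types

# Toy stand-in for the global PackedFunc registry on the C++ side.
_REGISTRY = {"relay.op._make.add": lambda x, y: x + y}

def _init_api_prefix(module_name, prefix):
    """Attach every registered function under `prefix` to `module_name`."""
    target = sys.modules[module_name]
    for full_name, func in _REGISTRY.items():
        if full_name.startswith(prefix + "."):
            setattr(target, full_name[len(prefix) + 1:], func)

# Create a module to play the role of a file like tvm/relay/op/_make.py.
mod = types.ModuleType("toy_make")
sys.modules["toy_make"] = mod
_init_api_prefix("toy_make", "relay.op._make")

assert mod.add(1, 2) == 3  # injected at runtime, invisible to static analysis
```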

A rule of thumb is that we should make the names as informative as possible. For example, tvm.Function is much less informative than tvm.runtime.PackedFunc.

Off topic, but I don’t understand why we are dedicated to two sets of FFIs (ctypes and Cython). Ideally we should consolidate efforts into a single FFI, right? Is there any case where ctypes works but Cython doesn’t?

The main reason to keep a ctypes path is to cover cases where Cython is relatively hard to install.

Thanks @junrushao1994

Runtime vs FFI

The name runtime is used to reflect the C++ namespace runtime, so in some sense it might be as accurate as _core.

There are a few ways to think about it:

  • T0: the _ffi feature is parallel to the runtime; in that case it seems a good idea to keep the _ffi namespace (or _core) and have a separate runtime namespace.
  • T1: the _ffi feature is essentially runtime, since most runtime data structures are part of the FFI exposure. In that case, simply renaming _ffi to runtime might be fine. Of course that means runtime will include additional wrappers on top of FFI-related functions.
  • T3: keep the _ffi folder, but move most of the data structures to runtime (note that some of these data structures will still call into ctypes and _ffi functions).

The advantage of T0 is a clear separation of any logic that relates to FFI calls (e.g. ctypes). On the other hand, the advantage of T1 is removing one level of indirection, which could be useful in our case. T3 strikes a balance between the two.
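For concreteness, a rough sketch of where the folders from the discussion (_ctypes, _cython, ndarray.py) would land under each option; file placement is illustrative only:

```text
tvm/
├── _ffi/              # T0 and T3: ctypes/cython glue stays here
│   ├── _ctypes/       #   (under T1 these fold into runtime/ instead)
│   └── _cython/
└── runtime/           # T0: wrappers only; T1: everything from _ffi;
    └── ndarray.py     # T3: runtime data structures move here
```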

Where to expose the functions

I agree that having a clear namespace for FFI functions would certainly be helpful. On the other hand, we are starting to want to keep the code close to its users. Here are a few examples:

  • C0: keep an _ffi file under the same namespace, initialize with prefix tvm.relay.op., use relative import
# file: tvm/relay/op/transform.py

from . import _ffi

def add():
  return _ffi.add()
  • C1: same as C0, but use absolute import
# file: tvm/relay/op/transform.py

import tvm.relay.op._ffi

def add():
  return tvm.relay.op._ffi.add()
  • C2: keep a parallel structure under _ffi folder, initialize with prefix tvm.relay.op., use absolute import
# file: tvm/relay/op/transform.py

import tvm._ffi.relay.op

def add():
  return tvm._ffi.relay.op.add()

As long as you have conda, I suppose installing that stuff should be much easier, right? Are there any specific cases you have in mind?

There are a few cases on Mac and Windows where a compiler is needed but a user may not have one. We also try to just use ctypes for the bottleneck cases. If we are not expanding the features rapidly, we should be fine for now, but maybe it makes sense to rethink it later.

I see. It happens often on Windows. Hmm, yep, let’s revisit this issue later.

Would be great to get everyone’s thoughts on these choices: T0 vs T1 vs T3, C0 vs C1 vs C2.

I would vote for T3. Even if most data structures are moved to the runtime namespace, we still need a folder to keep _ctypes and _cython.

I don’t have a preference on the C options.

I would prefer C2 because it makes a clear separation between FFI code and hand-written code.

Indeed all three C options have their pros and cons. On one hand, one could argue the code should be closer to its consumer (which favors C0 and C1). On the other hand, one could also argue for a separation.

There is also a question of code conciseness: C0 produces the most concise code in a deep nest, while the full path shows a clear namespace (although that namespace is already encapsulated in the function that exposes the FFI).

All three choices have ffi in the path somewhere, so that might be enough to serve as an indicator.

My take is that if we have clear separation like C2, and those packed functions are automatically generated via a simple bash/python script, then this would simplify the logic. For example, when doing code review, we can just safely ignore the changes under the _ffi folder :slight_smile:

I agree that for anything that is automatically generated, it makes sense to use a separate namespace. At the moment the _ffi namespace does not contain anything except the calls that initialize the raw packed calls.

Personally I still like C0 as it involves less indirection. But I agree that if we want to generate typed wrappers somewhere, they should be in the _ffi folder (or some special namespace).

I prefer C0 a little bit over C2, though I think both solutions look good. The reason I like C0 is that it’s easier to find the corresponding FFI API and then trace back to the C++ source code. It’s also more consistent with the current FFI APIs, which are defined in the same folder with an underscore in front of the file name.

I think both solutions work and don’t differ too much. I think C0 is fine as well :slight_smile:

Might be orthogonal to this problem, but my only concern is that (if possible) we should make sure all editors can recognize the APIs exposed by _init_api_prefix. (Currently _init_api_prefix does runtime manipulation of sys.modules, so not every editor/IDE can understand it.) This probably requires auto generation, right?

we should make sure that all editors can recognize those APIs exposed by _init_api_prefix

Would be great if we can clarify it further, I can see a few aspects

Code Navigation

Having the IDE understand FFI-related functions and being able to jump around. Support for these functions will still stop at the FFI Python layer; https://github.com/tqchen/ffi-navigator should be able to support the _init_api_prefix cases in this aspect.

FFI API Developer

An FFI API developer is someone who adds a PackedFunc on the C++ side and wants to add the corresponding API on the Python side. Having auto-completion certainly helps in this case, but not that much as long as these developers understand the project convention.

Python Developer/User

A Python developer is someone who uses the Python API but would never call a function in _ffi directly. Because these users only call into wrapped functions in non-FFI namespaces, most IDEs provide auto-completion and documentation.

Possible Approaches

As we can see, there are two concepts here: the FFI func (the function exposed by _init_api_prefix) and the API function (the function that wraps the FFI func).

The differences between the two are:

  • FFI funcs are packed functions that are type-erased in general and only accept positional args.
  • API functions are Python functions that have all the Python features (keyword args, docs, etc.).
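The distinction can be sketched as follows; the FFI func here is a toy stand-in, not an actual tvm PackedFunc:

```python
# Sketch: a type-erased, positional-only "FFI func" versus the hand-written
# pythonic API function that wraps it. The FFI side is a toy stand-in for a
# PackedFunc exposed from C++.

def _ffi_concat(data, axis):
    """FFI func: positional args only, no docs, no defaults."""
    return [x for seq in data for x in seq] if axis == 0 else data

def concat(data, axis=0):
    """API function: the hand-written pythonic wrapper.

    Parameters
    ----------
    data : list of list
        Sequences to concatenate.
    axis : int, optional
        Axis along which to concatenate (default 0).
    """
    return _ffi_concat(data, axis)

# The wrapper adds keyword args, defaults, and documentation on top of the
# raw positional call.
assert concat([[1], [2, 3]]) == [1, 2, 3]
assert concat([[1], [2]], axis=0) == [1, 2]
```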

D0: Automatically generate API functions

We can certainly go a step further and automatically generate API functions. However, that means we need to provide metadata for these functions elsewhere.

One potential problem of the API generation mechanism is the engineering complexity of the generator, and where to put the documentation and information such as type annotations.

D1: Manually Write API function by Wrapping FFI func

This approach is currently adopted in the codebase. Instead of having a mechanism to automatically generate APIs, we manually write them; in a way, we shift the development cost of writing meta info for generators directly into writing these APIs.

As we find that most of the effort actually goes into writing docs and examples, this approach is not as bad as it seems. Writing the wrappers in Python provides the most direct way of expressing everything we could possibly expose through an API generator.

Of course, it shifts a bit of the burden onto the FFI API developers. One could argue that it is a necessary burden to make sure the project itself is pythonic and usable by most Python developers.

Of course, for certain cases it may make sense to auto-generate APIs, assuming we can ensure the quality of the documentation; in those cases, we could use a separate namespace.


I would prefer D1 to D0. Auto-generating documentation is somewhat ad hoc and prone to over-engineering.

Thanks for everyone’s participation; we seem to converge on T3, C0, and D1.