TVM runtime completely freezes system causing reboot (RPi 4)

For reference, I’m using the official 64-bit Ubuntu 18.04 for the RPi 4 (4 GB), overclocked to 2 GHz, and I’m running my scripts on Python 3.6.9. I cloned the latest TVM from source and modified the main function of the deploy/tx2_run_tvm.py script so it would run properly (attached below).

The first model I used (https://github.com/dwofk/fast-depth) causes a complete crash on the first image read. I thought this might be because the network was originally tuned for a Jetson TX2, but that still seemed strange since both devices are ARM64. I then tried my own tuned network, which fares slightly better but still leads to a crash and reboot after ~10-20 images. I can’t debug with pdb because the crash blows straight through it as well. Could anyone try running the network on their own ARM64 machine? If not, are there any pointers I could use for debugging TVM in this situation?

import os

import numpy as np

import tvm
from tvm.contrib import graph_runtime

def run_model(model_dir, input_fp, output_fp, warmup_trials, run_trials, cuda, try_randin):
    print("=> [TVM on TX2] using model files in {}".format(model_dir))
    assert(os.path.isdir(model_dir))

    print("=> [TVM on TX2] loading model lib and ptx")
    loaded_lib = tvm.module.load(os.path.join(model_dir, "deploy_lib.o"))

    print("=> [TVM on TX2] loading model graph and params")
    with open(os.path.join(model_dir, "deploy_graph.json")) as f:
        loaded_graph = f.read()
    with open(os.path.join(model_dir, "deploy_param.params"), "rb") as f:
        loaded_params = bytearray(f.read())

    print("=> [TVM on TX2] creating TVM runtime module")
    ctx = tvm.cpu(0)
    module = graph_runtime.create(loaded_graph, loaded_lib, ctx)

    print("=> [TVM on TX2] feeding inputs and params into TVM module")
    rgb_np = np.load(input_fp) # HWC
    x = np.zeros([1,3,224,224]) # NCHW
    x[0,:,:,:] = np.transpose(rgb_np, (2,0,1))
    module.set_input(0, tvm.nd.array(x.astype('float32')))
    module.load_params(loaded_params)

    print("=> [TVM on TX2] running TVM module, saving output")
    module.run()
    out = module.get_output(0).asnumpy()

    print("=> [TVM on TX2] benchmarking: {} warmup, {} run trials".format(warmup_trials, run_trials))
    for i in range(warmup_trials):
        module.run()
        ctx.sync()

    ftimer = module.time_evaluator("run", ctx, number=1, repeat=run_trials)
    profile_result = ftimer()
    profiled_runtime = profile_result.mean  # mean over the repeated runs, in seconds

    print("=> [TVM on TX2] profiled runtime (in ms): {:.5f}".format(1000*profiled_runtime))
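In case anyone wants to try reproducing without my .npy image dumps: a random tensor of the same 1x3x224x224 NCHW shape should exercise the same inference path as the script above. A minimal sketch (the shape and the set_input call mirror my script; the helper name is just for illustration):

```python
import numpy as np

def make_random_input(batch=1, channels=3, height=224, width=224):
    # Random NCHW float32 tensor in place of a real preprocessed image.
    x = np.random.uniform(0.0, 1.0, size=(batch, channels, height, width))
    return x.astype("float32")

x = make_random_input()
# module.set_input(0, tvm.nd.array(x))  # same call as in the script above
print(x.shape, x.dtype)  # → (1, 3, 224, 224) float32
```

If the crash still happens with random inputs, that would at least rule out the image-loading step.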