Autotuning warnings/errors on RK3399 Mali using RPC

Hi,

I ran into a lot of warning/error messages while I was autotuning my model. I enclosed the Gist below, which includes:

  1. autotune_rpc_mali_error.py

    An autotuning script with a downscaling model, built from Keras, that has only one Conv2D layer (a rough sketch follows this list).

  2. device_clinfo_mali.log

    The clinfo report from the targeted RK3399 Mali (MP4) device.

  3. host_error.log

    A small portion of the warning/error messages seen on the host during tuning.
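
Roughly, the model in that script looks like the following. This is only a simplified sketch rather than the gist itself; the input shape, stride, and channel count are illustrative assumptions, and it assumes a Keras 2.x-style setup.

```python
# Rough sketch only: a one-Conv2D "downscale" (strided) Keras model imported
# into Relay. Shapes and layer parameters are illustrative assumptions.
import keras
from tvm import relay

inp = keras.Input(shape=(224, 224, 3), name="data")
out = keras.layers.Conv2D(16, (3, 3), strides=(2, 2), padding="same")(inp)
model = keras.Model(inp, out)

# from_keras expects the input shape in NCHW layout by default.
shape_dict = {"data": (1, 3, 224, 224)}
mod, params = relay.frontend.from_keras(model, shape_dict)
```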

The autotuning still finished in the end and produced a tuning log. However, when I used the tuning log to cross-compile the model and run it on the RK3399 Mali directly, I found the inference time was slower than on the ARM CPU. I am not sure if this is caused by the warnings/errors.
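
The apply-and-compile step looks roughly like the following. It is a simplified sketch written against the current-style Target API rather than my exact script; the log file name, the cross-compiler prefix, and the `mod`/`params` coming from the frontend import are placeholders.

```python
# Simplified sketch of applying the tuning log and cross-compiling for Mali.
# "tune.log" and the aarch64 toolchain prefix are placeholders.
import tvm
from tvm import autotvm, relay
from tvm.contrib import cc

target = tvm.target.Target("opencl -device=mali",
                           host="llvm -mtriple=aarch64-linux-gnu")

# mod/params come from the frontend import (e.g. relay.frontend.from_keras).
with autotvm.apply_history_best("tune.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

# Cross-compile the host-side code so the library can be loaded directly
# on the RK3399 board.
lib.export_library("net.so", cc.cross_compiler("aarch64-linux-gnu-g++"))
```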

Your error log includes debug info for 4 configs. Two of them hit a runtime error, one hit a schedule error, and the last one ran out of host memory. While I cannot tell much about the runtime error from the message alone, the schedule error looks like a bug, and the out-of-host-memory error could well be expected.

@merrymercy, would you take a look at this case as well? Thanks.


Hi @merrymercy,

Just wanted to follow up to see if you have any suggestions on this.

Thanks, Joey

Your log shows that all tuning trials failed, so the tuning actually did not have any effect. The error messages indicate something is wrong on the device, but since you said you can run an OpenCL model on your device, your runtime should be OK.

I suspect the timeout for measurement is too small. Could you try a larger value for the timeout in L112 of your script?
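
For reference, the timeout lives in the measure option passed to the tuner, something like the sketch below. This is a generic sketch rather than your script; the device key, tracker host/port, and the exact numbers are placeholders.

```python
# Generic sketch: the RPCRunner timeout controls how long one on-device
# measurement may take before it is counted as failed.
from tvm import autotvm

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(build_func="default", timeout=10),
    runner=autotvm.RPCRunner(
        "rk3399",            # device key registered with the RPC tracker
        host="127.0.0.1",    # tracker host
        port=9190,           # tracker port
        number=10,
        timeout=50,          # try a larger value here
    ),
)
```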

For @comaniac: the schedule error is not a bug. The overly large unroll factor is detected, and that config is dropped as expected.

Hi @merrymercy,

Thanks for your suggestion! However, I’ve changed the timeout to 20/50/100/1000 and still got the same messages.

Joey

You’re right. I missed the error message in that config.