I followed this page (https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_cuda.html#scale-up-measurement-by-using-multiple-devices) to register a Jetson Xavier NX with an RPC tracker, and the tracker query output is:
Tracker address 127.0.0.1:9190
Server List
----------------------------
server-address key
----------------------------
192.168.140.26:32786 server:nx
----------------------------
Queue Status
---------------------------
key total free pending
---------------------------
nx 1 1 0
---------------------------
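As a side note, the queue summary above is plain text, so it can be checked programmatically before tuning. Below is a minimal sketch (a hypothetical helper using pure string parsing, not a TVM API) that extracts the free-device count for a given key from such a summary:

```python
def free_devices(summary, key):
    """Return the number of free devices registered under `key`,
    parsed from the 'Queue Status' table of a tracker text summary."""
    lines = iter(summary.splitlines())
    # Skip ahead to the 'Queue Status' section.
    for line in lines:
        if line.strip().startswith("Queue Status"):
            break
    # Data rows have four columns: key, total, free, pending.
    for line in lines:
        parts = line.split()
        if len(parts) == 4 and parts[0] == key:
            return int(parts[2])
    return 0

summary = """Queue Status
---------------------------
key   total  free  pending
---------------------------
nx    1      1     0
---------------------------"""
print(free_devices(summary, "nx"))  # -> 1
```

A free count of at least 1 for the key means the tracker believes a device is available, which matches the table above.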
However, when I ran the following code:
import tvm
from tvm.autotvm.measure.measure_methods import check_remote

check_remote(target=tvm.target.create('cuda -model=tx2'),
             device_key='nx', host='0.0.0.0', port=9190)
this error was raised:
Exception in thread Thread-316:
Traceback (most recent call last):
File "/home/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/autotvm/measure/measure_methods.py", line 580, in _check
while not ctx.exist: # wait until we get an available device
File "/home/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/_ffi/runtime_ctypes.py", line 186, in exist
self.device_type, self.device_id, 0) != 0
File "/home/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/_ffi/runtime_ctypes.py", line 180, in _GetDeviceAttr
device_type, device_id, attr_id)
File "tvm/_ffi/_cython/./packed_func.pxi", line 308, in tvm._ffi._cy3.core.PackedFuncBase.__call__
File "tvm/_ffi/_cython/./packed_func.pxi", line 243, in tvm._ffi._cy3.core.FuncCall
File "tvm/_ffi/_cython/./packed_func.pxi", line 232, in tvm._ffi._cy3.core.FuncCall3
File "tvm/_ffi/_cython/./base.pxi", line 159, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (7) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(TVMFuncCall+0x61) [0x7fadcd8dff61]
[bt] (6) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0xc501c8) [0x7fadcd8de1c8]
[bt] (5) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::runtime::RPCDeviceAPI::GetAttr(DLContext, tvm::runtime::DeviceAttrKind, tvm::runtime::TVMRetValue*)+0x224) [0x7fadcd935df4]
[bt] (4) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0xcba730) [0x7fadcd948730]
[bt] (3) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::runtime::RPCSession::HandleUntilReturnEvent(tvm::runtime::TVMRetValue*, bool, tvm::runtime::PackedFunc const*)+0x13f) [0x7fadcd9485bf]
[bt] (2) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0xcc7bfc) [0x7fadcd955bfc]
[bt] (1) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::support::Socket::Error(char const*)+0x90) [0x7fadcd94a640]
[bt] (0) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7fadcd06c5c2]
File "/home/data/git/tvm/src/runtime/rpc/../../support/socket.h", line 362
TVMError: Socket SockChannel::Recv Error:Connection reset by peer
and the error message on the NX side is:
INFO:RPCServer:connection from ('192.168.159.31', 48598)
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Could you tell me why this error is raised, how to get a more detailed error message on the NX, and how to fix it? Looking forward to your reply.