It seems XGBoost supports GPU acceleration via CUDA (9?) through the 'gpu_hist' tree method in xgb_params.
In xgboost_cost_model.py I added 'tree_method': 'gpu_hist' and ran a few tests (16 cores, GTX 1080).
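For context, here is a minimal sketch of the kind of change I mean. It is not the actual TVM code: the data, objective, and parameter values are placeholders, and it assumes a GPU-enabled XGBoost build.

```python
# Minimal sketch, assuming a GPU-enabled XGBoost build -- not the actual
# TVM diff. Data and parameter values below are placeholders.
import numpy as np
import xgboost as xgb

# Stand-in for the cost model's feature matrix and throughput labels.
X = np.random.rand(1000, 16).astype(np.float32)
y = np.random.rand(1000).astype(np.float32)
dtrain = xgb.DMatrix(X, label=y)

xgb_params = {
    "objective": "reg:squarederror",  # illustrative objective only
    "max_depth": 3,
    "eta": 0.3,
    # The one-line change: build histograms on the GPU instead of the CPU.
    # (Newer XGBoost releases spell this tree_method="hist", device="cuda".)
    "tree_method": "gpu_hist",
}
bst = xgb.train(xgb_params, dtrain, num_boost_round=100)
```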
WITH 'gpu_hist':
First run:
[Task 1/42] Current/Best: 178.70/2169.96 GFLOPS | Progress: (256/256) | 901.07 s Done.
Second run:
[Task 1/42] Current/Best: 1669.95/1804.79 GFLOPS | Progress: (256/256) | 904.57 s Done.
WITHOUT 'gpu_hist':
First run:
[Task 1/42] Current/Best: 48.44/1714.60 GFLOPS | Progress: (256/256) | 980.04 s Done.
Second Run:
[Task 1/42] Current/Best: 113.77/1672.49 GFLOPS | Progress: (256/256) | 1038.44 s Done.
Even though I only ran each test twice, you can see that 'gpu_hist' does complete a bit faster.
I also saw about 2-4% CUDA utilization on my GPU while the XGBoost cost model was running.
Is this something that should be exposed in the public API, or was there a reason it was excluded?
Can someone else verify the benefit?