A CPU memory leak is observed when running inference on the GPU, even when NativeOps is not used (verified by removing libllm_sharp_ops.so).
CPU memory continues to grow during inference. Diagnostics from torch.Tensor.TotalCount and torch.Tensor.PeakCount are stable across multiple turns of chat. GPU memory is also stable, and no GPU memory leak is observed.
The program was profiled with valgrind massif and memcheck, but the logs give no obvious clues about the source of the leak.
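Since the managed tensor counters stay flat while process memory grows, the leak can be tracked from outside the runtime by sampling resident set size between chat turns. Below is a minimal, hedged sketch of that diagnostic (Python, Linux /proc only; `run_turn` is a hypothetical placeholder for one chat turn, not part of llm-sharp):

```python
import os


def rss_kb() -> int:
    """Return this process's resident set size in kB, read from /proc (Linux)."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # line looks like: "VmRSS:     123456 kB"
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found in /proc/self/status")


def sample_rss(run_turn, turns: int = 10) -> list[int]:
    """Run `run_turn` repeatedly and record RSS after each turn.

    A steadily increasing series suggests a native-side leak even when
    managed-side tensor counts (e.g. torch.Tensor.TotalCount) are stable.
    """
    samples = []
    for i in range(turns):
        run_turn()
        samples.append(rss_kb())
        print(f"turn {i}: RSS = {samples[-1]} kB")
    return samples
```

A roughly linear growth in the printed samples, with flat tensor counts, points at allocations outside the managed tensor lifecycle (e.g. native buffers not being freed).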
massif.out.gz
vgdump.gz