16th October 2019, 21:36   #8
brucethemoose
Quote:
Originally Posted by poisondeathray
I think so... I can only use 1x_Desharpen on smaller dimensions. You can look at GPU Caps Viewer or similar utilities, and it looks like it's using all

I wonder if there is a way to share system memory with CUDA memory? e.g. although it's slower, some 3D/CG renderers can offload graphics card memory to system memory when doing calculations (like a shared pool), enabling you to complete the render if the scene is too large.
I believe graphics drivers automatically do this in 3D programs, but that might not be the case with CUDA.

PyTorch does have an "empty cache" function (torch.cuda.empty_cache()). Maybe you could call it every few frames with FrameEval.

https://pytorch.org/docs/stable/cuda...ory-management
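
Something like this might work, completely untested, assuming the upscaler (e.g. VSGAN) runs PyTorch inside the same process as VapourSynth; free_cache_every and interval are just placeholder names I made up:

Code:
import torch
import vapoursynth as vs

core = vs.core

def free_cache_every(clip, interval=10):
    # FrameEval invokes the selector for every output frame request,
    # so we can piggyback a cache flush on it every 'interval' frames.
    def selector(n):
        if n % interval == 0:
            # Releases cached, unused CUDA blocks back to the driver;
            # memory still held by live tensors is untouched.
            torch.cuda.empty_cache()
        return clip
    return core.std.FrameEval(clip, eval=selector)

# e.g. after the ESRGAN/VSGAN call:
# upscaled = free_cache_every(upscaled, interval=10)
Don't call it too often though, since PyTorch then has to re-request memory from the driver for later allocations, which will slow things down.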