I have a CUDA-compatible GPU (NVIDIA GeForce GTX 1060) in my system. While analyzing a larger dataset, I often have to use the pair plot function of the seaborn library, and it consumes a lot of time. Is there any way to run it on the GPU instead?
Yes, you totally can! But just not with seaborn.
You can use the RAPIDS library and ecosystem: cudf plus the GPU-accelerated visualization library cuxfilter, with its connections to HoloViews, Datashader, and the Plotly Dash API. Here is a great quick-start guide to cuxfilter: https://docs.rapids.ai/api/cuxfilter/stable/10%20minutes%20to%20cuxfilter.html
Here is a blog post about cuxfilter with the Dash API: https://medium.com/rapids-ai/plotly-census-viz-dashboard-powered-by-rapids-1503b3506652
We're also doing a tutorial at JupyterCon this week, if you have time to catch it: https://cfp.jupytercon.com/2020/schedule/presentation/242/using-rapids-and-jupyter-to-accelerate-visualization-workflows/
I'm not sure whether your GPU is supported, but there are now (Q3 2020) options for doing data manipulation on the GPU using libraries such as cudf or cupy.
I am just starting down this path, and from the little I've seen, you will have to do some "extra" work to transfer results into a format that seaborn can handle. The calculations themselves, though, are much faster with cudf (and, I assume, cupy): I've seen 5x to 25x improvements so far, and have read of even more extreme cases.
You will have to write a little more code though...
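A minimal sketch of that hand-off: run the heavy computation on the GPU DataFrame, then copy only the (much smaller) result back to the host for seaborn. Plain pandas stands in for cudf below, since the two deliberately share most of their DataFrame API; with RAPIDS installed you would build `df` with `cudf.read_csv(...)` instead, and the commented `to_pandas()` call is the "extra" transfer step. The column names and data here are made up for illustration.

```python
import pandas as pd

# With RAPIDS this would be:  import cudf; df = cudf.read_csv("data.csv")
df = pd.DataFrame({"group": ["x", "x", "y"],
                   "value": [1.0, 3.0, 10.0]})

# The expensive part -- this is what cudf accelerates on the GPU.
summary = df.groupby("group", as_index=False).mean()

# With cudf you would add:  summary = summary.to_pandas()
# i.e. the "extra work" of copying the result from GPU memory back to
# the host, after which seaborn.pairplot(summary) works as usual.
print(summary)
```

The key design point is to keep the full dataset on the GPU and only move the reduced result across the device-to-host boundary, since that copy is itself a cost.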
Is there a way I can run my entire notebook on the GPU? I mean, apart from seaborn, I want to run all of my code on the GPU. Is that possible?
In a word, no, there is not. There is no way to run generic Python code or libraries on the GPU.
I am aware that tensorflow and keras can be run on a GPU.
Neither tensorflow nor keras can be "run on" a GPU. They can accelerate parts of their computations with GPUs, but that process doesn't involve running a single line of Python on the GPU.