For a pandas DataFrame, the info() function reports memory usage. Is there any equivalent in PySpark? Thanks
How about the approach below? Cache a 1% sample and trigger it with count(); the Storage tab in the Spark UI then shows the size of the cached sample, so multiplying that figure by 100 gives an estimate of the full DataFrame's size.
df.sample(fraction=0.01).cache().count()
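
If you want the number on the driver instead of reading it off the Spark UI, here is a rough, self-contained sketch of the same idea. It assumes a toy DataFrame (replace it with your own), and it reads the cached size through spark.sparkContext._jsc and getRDDStorageInfo, which are internal/developer APIs that may change between Spark versions:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("size-estimate").getOrCreate()

# Toy DataFrame standing in for the real one -- replace with your own df.
df = spark.range(0, 1_000_000).selectExpr("id", "id * 2 AS doubled")

fraction = 0.01                              # cache only ~1% of the rows
sample = df.sample(fraction=fraction).cache()
sample.count()                               # materialize the cached sample

# Sum the memory/disk size of everything currently cached.
# NOTE: this assumes the sample is the only cached RDD, and it relies on
# non-public APIs (_jsc, getRDDStorageInfo), so treat it as an estimate.
cached_bytes = sum(
    info.memSize() + info.diskSize()
    for info in spark.sparkContext._jsc.sc().getRDDStorageInfo()
)
estimated_bytes = cached_bytes / fraction    # scale the 1% sample back to 100%
print(f"Estimated DataFrame size: {estimated_bytes / 1024 ** 2:.1f} MB")

sample.unpersist()                           # free the cached sample

Keep in mind this is only an approximation: sampling assumes rows are roughly uniform in size, and the cached (deserialized) size can differ from the on-disk or serialized size.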