I have an array that looks like:
k = numpy.array([(1.,0.001), (1.1, 0.002), (None, None),
(1.2, 0.003), (0.99, 0.004)])
I'd like to plot k[:,0] + k[:,1], but I can't add None.
You can use numpy.nan instead of None.
import matplotlib.pyplot as pyplot
import numpy
x = range(5)
k = numpy.array([(1.,0.001), (1.1, 0.002), (numpy.nan, numpy.nan),
(1.2, 0.003), (0.99, 0.004)])
fig, ax = pyplot.subplots()
# This plots a gap---as desired
ax.plot(x, k[:,0], 'k-')
# The sum also plots with a gap, because nan + nan is still nan
ax.plot(x, k[:,0] + k[:,1], 'k--')
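The reason this works: unlike None, nan supports arithmetic, and the result of any operation involving nan is simply nan again, so the summed column keeps its gap. A quick check (not part of the original code):
import numpy
print(numpy.nan + 0.001)   # nan, so the gap survives the addition
# 1.0 + None would raise a TypeError instead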
Or you could mask the x values as well, so that the indices stay consistent between x and y:
import matplotlib.pyplot as pyplot
import numpy
x = numpy.arange(5)
y = numpy.array([(1.,0.001), (1.1, 0.002), (numpy.nan, numpy.nan),
(1.2, 0.003), (0.99, 0.004)])
fig, ax = pyplot.subplots()
# Apply the same mask to x and y, so only the rows without nan are plotted
mask = ~numpy.isnan(y).any(axis=1)
ax.plot(x[mask], y[mask,0] + y[mask,1], 'k--')
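For reference, and not part of the original answers, numpy's masked arrays behave the same way as nan here: arithmetic propagates the mask and matplotlib skips masked points, so the gap is kept in the sum as well. A minimal sketch:
import matplotlib.pyplot as pyplot
import numpy

x = numpy.arange(5)
y = numpy.array([(1., 0.001), (1.1, 0.002), (numpy.nan, numpy.nan),
                 (1.2, 0.003), (0.99, 0.004)])

# Mask the nan entries; the mask propagates through the addition,
# and matplotlib leaves a gap at the masked index
ym = numpy.ma.masked_invalid(y)
fig, ax = pyplot.subplots()
ax.plot(x, ym[:, 0] + ym[:, 1], 'k--')
pyplot.show()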
Keeping None in the array, you can instead build a boolean mask and plot only the valid rows:
import matplotlib.pyplot as pyplot
import numpy
x = range(5)
k = numpy.array([(1.,0.001), (1.1, 0.002), (None, None),
(1.2, 0.003), (0.99, 0.004)])
fig, ax = pyplot.subplots()
# This plots a gap---as desired
ax.plot(x, k[:,0], 'k-')
# I'd like to plot
# k[:,0] + k[:,1]
# but I can't add None
arr_none = numpy.array([None])
# mask is True where either column is None
mask = (k[:,0] == arr_none) | (k[:,1] == arr_none)
# Plot only the rows without None; cast to float because k has object dtype
ax.plot(numpy.arange(len(k))[~mask], (k[~mask,0] + k[~mask,1]).astype(float), 'k--')
You can filter your array by doing:
test = numpy.array([None])
k = k[k != test].reshape(-1, 2).astype(float)
Then sum the columns and make the plot. The problem with your approach is that you did not convert None
to a numpy array, which prevented the mask from being created properly.
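A minimal sketch of that full sequence, assuming the same array as in the question; it uses a row mask instead of the one-liner above so that the matching x positions can be kept (the names row_ok, k_valid and x_valid are mine):
import matplotlib.pyplot as pyplot
import numpy

k = numpy.array([(1., 0.001), (1.1, 0.002), (None, None),
                 (1.2, 0.003), (0.99, 0.004)])

# Compare against a one-element object array so the comparison is elementwise,
# then keep only the rows that contain no None
test = numpy.array([None])
row_ok = (k != test).all(axis=1)

k_valid = k[row_ok].astype(float)
x_valid = numpy.arange(len(k))[row_ok]

fig, ax = pyplot.subplots()
# Sum the two columns of the filtered array and plot them
ax.plot(x_valid, k_valid[:, 0] + k_valid[:, 1], 'k--')
pyplot.show()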