Korakot's answer is not correct.
You can indeed render OpenAI Gym in Colaboratory, albeit kind of slowly, using none other than matplotlib.
Here's how:
install xvfb & other dependencies:
!apt-get install -y xvfb python-opengl > /dev/null 2>&1
& install pyvirtualdisplay:
!pip install gym pyvirtualdisplay > /dev/null 2>&1
then import all your libraries, including matplotlib & ipythondisplay:
import gym
import numpy as np
import matplotlib.pyplot as plt
from IPython import display as ipythondisplay
then you want to import Display from pyvirtualdisplay & initialise your screen size, in this example 400x300:
from pyvirtualdisplay import Display
display = Display(visible=0, size=(400, 300))
display.start()
last but not least, using gym's "rgb_array" render functionality, render to a "screen" variable, then plot the screen variable with matplotlib (displayed indirectly via IPython display):
env = gym.make("CartPole-v0")
env.reset()
prev_screen = env.render(mode='rgb_array')
plt.imshow(prev_screen)

for i in range(50):
    # sample a random action and step the environment
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)

    # grab the current frame as an RGB array and redraw the plot in place
    screen = env.render(mode='rgb_array')
    plt.imshow(screen)
    ipythondisplay.clear_output(wait=True)
    ipythondisplay.display(plt.gcf())

    if done:
        break

ipythondisplay.clear_output(wait=True)
env.close()
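If the per-step imshow redraw feels too slow, one variation (not part of the notebook above, just a sketch using the same classic gym API) is to collect the rgb_array frames first and animate them afterwards with matplotlib.animation:

import matplotlib.animation as animation
from IPython.display import HTML

env = gym.make("CartPole-v0")
env.reset()
frames = []
for i in range(50):
    # store the current frame, then take a random action
    frames.append(env.render(mode='rgb_array'))
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        break
env.close()

fig, ax = plt.subplots()
im = ax.imshow(frames[0])
ax.axis('off')

def update(frame):
    # swap in the image data for each stored frame
    im.set_data(frame)
    return [im]

anim = animation.FuncAnimation(fig, update, frames=frames, interval=50)
HTML(anim.to_jshtml())  # shows an interactive player in the output cell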
Link to my working Colaboratory notebook demoing cartpole:
https://colab.research.google.com/drive/16gZuQlwxmxR5ZWYLZvBeq3bTdFfb1r_6
Note: not all Gym environments support the "rgb_array" render mode, but most of the basic ones do.
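If you're not sure whether a given environment supports it, you can check its metadata before rendering (a quick sketch assuming the classic gym API used above; newer gym/gymnasium versions spell the key 'render_modes' instead of 'render.modes'):

env = gym.make("CartPole-v0")
# supported render modes are advertised in the environment's metadata
modes = env.metadata.get('render.modes', env.metadata.get('render_modes', []))
print(modes)  # e.g. ['human', 'rgb_array']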