I would like to know why Perlin noise is still so popular today after Simplex came out. Simplex noise was created by Ken Perlin himself, and it was supposed to supersede his older algorithm.
"If it ain't broke, don't fix it."
See if you can find anyone telling you why Simplex is better. "It's faster and extends to multiple dimensions" and "simplex noise attempts to reduce the complexity of higher-dimensional noise functions" were what I found. The complexity advantage really only kicks in at higher dimensions: classic Perlin interpolates 2^n lattice corners per sample, while simplex sums contributions from just n+1 vertices, so in 2D that's 4 versus 3 and the gap barely matters. Most of us work in 2 or 3 dimensions, maybe 4 if we're lucky enough to be doing something with time.
I think it's fair to say there are few enough real-time uses where Perlin is too slow to handle that, for most purposes, standard Perlin noise is sufficient. In pre-rendered work (such as in the movie industry) speed isn't really critical, since renders are slow anyway; and in real-time simulations we have enough ways to reduce the scope of ongoing processing that it's unlikely you're going to be generating massive noise maps every few milliseconds -- that's just basic real-time optimisation.
Simplex noise looks worse IMHO, and lots of people think it looks increasingly bad in higher dimensions. I'd still recommend it over Perlin for most applications, though: most people won't be using raw single-octave noise but several octaves of it, which looks roughly the same whether the base is Perlin or simplex, and a faster base function pays off when you're summing octaves (see the sketch below).
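To make the octave point concrete, here is a minimal fractal-sum (fBm) sketch. The base noise is passed in as a callable, so the same summation works over a Perlin or a simplex implementation; the cost per sample is the base cost multiplied by the octave count, which is why a cheaper base function matters once you stack octaves. The names `fbm`, `lacunarity`, and `gain` and the stand-in `fake_noise2` are my own illustrative choices, not anything from the posts above.

```python
import math

def fbm(noise2, x, y, octaves=5, lacunarity=2.0, gain=0.5):
    """Sum `octaves` layers of a 2-D noise function.

    Each successive octave raises the frequency (by `lacunarity`) and
    lowers the amplitude (by `gain`), so fine detail contributes less
    than the broad shapes from the first octave.
    """
    total = 0.0
    amplitude = 1.0
    frequency = 1.0
    max_value = 0.0  # running sum of amplitudes, used to normalise back to [-1, 1]
    for _ in range(octaves):
        total += amplitude * noise2(x * frequency, y * frequency)
        max_value += amplitude
        amplitude *= gain
        frequency *= lacunarity
    return total / max_value

# Stand-in "noise" for demonstration only -- it is just a sine wave, not
# real gradient noise. Swap in an actual Perlin or simplex function to
# compare the two under identical octave settings.
def fake_noise2(x, y):
    return math.sin(x * 12.9898 + y * 78.233)

if __name__ == "__main__":
    print(fbm(fake_noise2, 0.3, 0.7))
```

Because `noise2` is called once per octave per sample, five octaves over a 1024x1024 map means about five million base-noise evaluations, which is where "simplex is faster" actually starts to show up in practice.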