The question has already been answered for Lisp, so I'll just comment on Prolog.
Prolog was designed for two things: natural language processing and logical reasoning. In the GOFAI ("good old-fashioned AI") paradigm of the early 1970s, when Prolog was invented, this meant:
- writing symbolic grammars for natural language that build logical representations of sentences/utterances;
- using these representations and logical axioms (not necessarily those of classical logic) to infer new facts;
- using similar grammars to translate logical representations back into language (a sketch of all three steps follows below).
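To make this concrete, here is a minimal sketch in Prolog's definite clause grammar (DCG) notation. The grammar, vocabulary, and predicate names (`proper_noun/2`, `transitive/4`, `influenced/2`) are my own toy illustration, not taken from any real system:

```prolog
:- dynamic taught/2.

% A toy grammar mapping a sentence to a logical term.
sentence(Sem)       --> noun_phrase(X), verb_phrase(X, Sem).
noun_phrase(X)      --> [Name], { proper_noun(Name, X) }.
verb_phrase(X, Sem) --> [Verb], noun_phrase(Y), { transitive(Verb, X, Y, Sem) }.

proper_noun(socrates, socrates).
proper_noun(plato,    plato).
transitive(taught, X, Y, taught(X, Y)).

% A non-linguistic axiom for inferring new facts from parsed ones.
influenced(X, Y) :- taught(X, Y).

% ?- phrase(sentence(Sem), [socrates, taught, plato]), assertz(Sem).
% Sem = taught(socrates, plato).
% ?- influenced(socrates, Who).
% Who = plato.
% Because DCGs are relational, the same grammar runs in reverse,
% from logic back to words:
% ?- phrase(sentence(taught(socrates, plato)), Words).
% Words = [socrates, taught, plato].
```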
Prolog is very good at this and is used aboard the International Space Station for exactly such a task. The approach got discredited, though, because:
- "all grammars leak": no grammar can catch all the rules and exceptions in a language;
- the more detailed the grammar, the higher the complexity (both big O and practical) of parsing;
- logical reasoning is both inadequate and unnecessary for many practical tasks;
- statistical approaches to NLP, i.e. "word counting", have proven much more robust, and the rise of the Internet has made available the large datasets NLP developers need for their statistics. At the same time, memory and disk costs have declined while processing power remains relatively expensive.
Only recently have NLP researchers developed somewhat practical combined symbolic-statistical approaches, sometimes using Prolog. The rest of the world uses Java, C++ or Python, for which you can more easily find libraries, tools and non-PhD programmers. The fact that I/O and arithmetic are unwieldy in Prolog doesn't help its acceptance.
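For a taste of that last complaint: `=` in Prolog is pure unification, so even trivial arithmetic requires the extra-logical `is/2`, and I/O goes through side-effecting predicates such as `format/2`:

```prolog
?- X = 1 + 2.       % X = 1+2: an unevaluated term, not the number 3
?- X is 1 + 2.      % X = 3: only is/2 evaluates the expression
?- Y is X + 1.      % instantiation error: is/2 cannot run "backwards"
?- format("~w plus ~w is ~w~n", [1, 2, 3]).   % prints: 1 plus 2 is 3
```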
Prolog is now mostly confined to domain-specific applications involving NLP and constraint reasoning, where it does seem to fare quite well. Still, few software companies will advertise that their products are "built on Prolog technology", since the language got a bad name for not living up to the promise of "making AI easy."
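On the constraint side, libraries such as SWI-Prolog's `library(clpfd)` let classic puzzles be stated almost verbatim. A sketch (the predicate name `puzzle/1` is mine):

```prolog
:- use_module(library(clpfd)).

% SEND + MORE = MONEY: assign distinct digits to the letters.
puzzle([S,E,N,D] + [M,O,R,E] = [M,O,N,E,Y]) :-
    Vars = [S,E,N,D,M,O,R,Y],
    Vars ins 0..9,
    all_different(Vars),
    S #\= 0, M #\= 0,
                  1000*S + 100*E + 10*N + D
    +             1000*M + 100*O + 10*R + E
    #= 10000*M + 1000*O + 100*N + 10*E + Y,
    label(Vars).

% ?- puzzle(P).
% P = [9,5,6,7]+[1,0,8,5]=[1,0,6,5,2].
```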
(I'd like to add that I'm a great fan of Prolog, but even I only use it for prototyping.)