I would just like to add that good or bad is relative to the corpus you are working on and the scores for the other clusters.
In the article Sara linked, 33 topics came out as optimal with a coherence score of ~0.33, but as the author mentions, there may be repeated terms within those clusters. In that case you would have to compare terms/snippets from the optimal cluster decomposition against one with a lower coherence score to see whether the results are more or less interpretable.
Of course you should adjust the parameters of your model, but the score is context-dependent, and I don't think you can say a specific coherence score means your data was clustered optimally without first understanding what the data looks like. That said, as Sara mentioned, scores of ~1 or ~0 are probably wrong.
You could also compare your model against a benchmark dataset; if yours achieves a higher coherence there, you have a better gauge of how well it is working.
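To make the "relative to the corpus" point concrete: coherence measures are just aggregates of word co-occurrence statistics over your documents, so the same topic can score very differently on different corpora. Here is a minimal pure-Python sketch of the UMass coherence measure for a single topic (this is an illustration, not the exact implementation behind the ~0.33 figure, which is likely a different measure such as c_v; the toy corpus and top words are made up):

```python
import math

def umass_coherence(top_words, documents):
    """UMass coherence for one topic: average over ordered top-word pairs of
    log((D(w_i, w_j) + 1) / D(w_j)), where D(...) counts documents containing
    the given word(s). Assumes every top word appears in at least one document."""
    doc_sets = [set(doc) for doc in documents]

    def df(*words):
        # Document frequency: number of documents containing all the words
        return sum(1 for d in doc_sets if all(w in d for w in words))

    score, pairs = 0.0, 0
    for i in range(1, len(top_words)):
        for j in range(i):
            score += math.log((df(top_words[i], top_words[j]) + 1) / df(top_words[j]))
            pairs += 1
    return score / pairs

# Toy tokenized corpus: "cat" and "dog" co-occur often, "cat" and "fish" rarely
docs = [["cat", "dog"], ["cat", "dog"], ["cat", "fish"]]
print(umass_coherence(["cat", "dog"], docs))   # co-occurring words score higher...
print(umass_coherence(["cat", "fish"], docs))  # ...than words that rarely co-occur
```

Notice that the raw numbers have no absolute meaning; they shift with the corpus's co-occurrence statistics, which is why a score that is "good" on one dataset tells you little about another.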
This paper was helpful to me: https://rb.gy/kejxkz