Is there a library in Python that can convert words (mainly names) to ARPAbet phonetic transcription? For example:
BARBELS -> B AA1 R B AH0 L Z
BARBEQUE -> B AA1 R B IH0 K Y UW2
What you want is variously called a "letter-to-sound" or "grapheme-to-phoneme" engine. There are a few around, including one in every text-to-speech system.
I usually deal with non-US accents, for which I use espeak. It doesn't output ARPAbet directly (which is restricted to US sounds anyway), but you can coax it into attempting an American accent and convert from IPA to ARPAbet later.
>>> from subprocess import check_output
>>> print(check_output(["espeak", "-q", "--ipa",
...                     '-v', 'en-us',
...                     'hello world']).decode('utf-8'))
həlˈoʊ wˈɜːld
You can use -x rather than --ipa for espeak's own phone representation (it's ASCII):
>>> print(check_output(["espeak", "-q", "-x", '-v', 'en-us', 'hello world']).decode('utf-8'))
h@l'oU w'3:ld
Converting to ARPAbet isn't quite as simple as a character look-up, though; for example, "tʃ" should be converted to "CH", not the "T SH" that a greedy character-by-character conversion would give you (except, that is, in odd cases like "swˈɛtʃɑːp" for "sweatshop").
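To make this concrete, here is a minimal sketch of a longest-match converter, assuming Python 3; the mapping table is a tiny illustrative subset covering the "hello world" output above, not a complete IPA-to-ARPAbet table, and it will cheerfully make the "sweatshop" mistake just described:

# A sketch of longest-match IPA-to-ARPAbet conversion.
# IPA_TO_ARPABET is a tiny illustrative subset, not a complete table.
IPA_TO_ARPABET = {
    'tʃ': 'CH', 'oʊ': 'OW', 'ɜː': 'ER',
    't': 'T', 'ʃ': 'SH', 'h': 'HH', 'ə': 'AH',
    'l': 'L', 'w': 'W', 'd': 'D',
}

def ipa_to_arpabet(ipa):
    # Greedily match the longest known IPA symbol at each position.
    phones = []
    i = 0
    while i < len(ipa):
        for length in (2, 1):  # try two-character symbols first
            symbol = ipa[i:i + length]
            if symbol in IPA_TO_ARPABET:
                phones.append(IPA_TO_ARPABET[symbol])
                i += length
                break
        else:
            i += 1  # skip stress marks, spaces and other unmapped characters
    return phones

print(ipa_to_arpabet('həlˈoʊ wˈɜːld'))
# ['HH', 'AH', 'L', 'OW', 'W', 'ER', 'L', 'D']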
Get the CMU Pronouncing Dictionary, and then you can use NLTK to get the ARPAbet phonetic transcription for any word in that dictionary, like this:
>>> import nltk
>>> entries = nltk.corpus.cmudict.entries()
>>> len(entries)
127012
>>> for entry in entries[39943:39951]:
...     print(entry)
...
('fir', ['F', 'ER1'])
('fire', ['F', 'AY1', 'ER0'])
('fire', ['F', 'AY1', 'R'])
('firearm', ['F', 'AY1', 'ER0', 'AA2', 'R', 'M'])
('firearm', ['F', 'AY1', 'R', 'AA2', 'R', 'M'])
('firearms', ['F', 'AY1', 'ER0', 'AA2', 'R', 'M', 'Z'])
('firearms', ['F', 'AY1', 'R', 'AA2', 'R', 'M', 'Z'])
('fireball', ['F', 'AY1', 'ER0', 'B', 'AO2', 'L'])
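If you want lookup by word rather than by position, you can fold those entries into a mapping yourself; a quick sketch (this is essentially what nltk.corpus.cmudict.dict(), used in a later answer, gives you ready-made):

>>> from collections import defaultdict
>>> pronunciations = defaultdict(list)
>>> for word, phones in entries:
...     pronunciations[word].append(phones)
...
>>> pronunciations['fire']
[['F', 'AY1', 'ER0'], ['F', 'AY1', 'R']]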
You can use a tiny utility from my listener project to do this. It uses espeak under the covers (to generate IPA), then uses a mapping extracted from the CMU dictionary to produce the set of ARPAbet transcriptions that could match the generated IPA. For instance:
$ listener-arpa
we are testing
we
W IY
are
ER
AA
testing
T EH S T IH NG
That produces exact matches on the CMU dictionary about 45% of the time (I got around 36% using the documented CMU/Wikipedia correspondence), while producing ~3 candidate matches per word on average. That said, we see a "close match" about 99% of the time; that is, while we might not precisely match the hand-marked-up word every time, we are generally off by only a few phonemes.
$ sudo apt-get install espeak
$ pip install -e git+https://github.com/mcfletch/listener.git#egg=listener
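"Off by a few phonemes" is just phoneme-level edit distance; if you want to score a guess against the CMU entry yourself, a plain Levenshtein distance over phoneme lists does the job (a sketch, not listener's actual evaluation code):

def phoneme_distance(a, b):
    # Levenshtein edit distance over two lists of phonemes.
    previous = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        current = [i]
        for j, pb in enumerate(b, 1):
            cost = 0 if pa == pb else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

# The guess for "testing" above vs. CMU's hand-marked entry:
print(phoneme_distance(['T', 'EH', 'S', 'T', 'IH', 'NG'],
                       ['T', 'EH1', 'S', 'T', 'IH0', 'NG']))  # 2 (stress digits)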
Using nltk with the cmudict corpus installed:
import nltk

arpabet = nltk.corpus.cmudict.dict()
for word in ('barbels', 'barbeque', 'barbequed', 'barbequeing', 'barbeques'):
    print(arpabet[word])
yields
[['B', 'AA1', 'R', 'B', 'AH0', 'L', 'Z']]
[['B', 'AA1', 'R', 'B', 'IH0', 'K', 'Y', 'UW2']]
[['B', 'AA1', 'R', 'B', 'IH0', 'K', 'Y', 'UW2', 'D']]
[['B', 'AA1', 'R', 'B', 'IH0', 'K', 'Y', 'UW2', 'IH0', 'NG']]
[['B', 'AA1', 'R', 'B', 'IH0', 'K', 'Y', 'UW2', 'Z']]
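Since the question is mainly about names, note that a bare arpabet[word] lookup raises KeyError for anything not in the dictionary; a small sketch of a guarded lookup (the fallback suggestion in the comment is mine, not part of nltk):

import nltk

arpabet = nltk.corpus.cmudict.dict()

def transcribe(word):
    # cmudict keys are lower-case; returns the list of pronunciations,
    # or None when the word is out of vocabulary (common for names).
    return arpabet.get(word.lower())

print(transcribe('Barbels'))  # [['B', 'AA1', 'R', 'B', 'AH0', 'L', 'Z']]
print(transcribe('zxqw'))     # None -- fall back to a G2P engine such as espeak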
To install the cmudict corpus, type the following in the Python interpreter:
>>> import nltk
>>> nltk.download()
Then use the GUI to install corpora > cmudict.
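Alternatively, skip the GUI and fetch just that corpus directly:

>>> import nltk
>>> nltk.download('cmudict')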