I'm using Python 3.5.2
I have two lists
Perhaps Python is not the right tool here. Here is one approach using the Unix toolchain:
sed G file |                          # double-space the file
tr ' ' '\n' |                         # split into one word per line
grep -vf blacklist |                  # delete the blacklisted words
awk -v RS= -v OFS=' ' '{$1=$1}1'      # merge each paragraph back into one line
assuming your blacklist file is preprocessed with word boundaries added. The steps are: double-space the file, split each sentence into one word per line, mass-delete the blacklisted words, and merge the lines back together.
This should run at least an order of magnitude faster.
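As an illustration, with a file holding two sentences and a blacklist already containing \bthe\b and \ba\b (purely example contents, and assuming GNU grep, which recognizes \b as a word boundary), a run would look like this:
$ cat file
the quick brown fox jumps over a lazy dog
a stitch in time saves nine
$ sed G file | tr ' ' '\n' | grep -vf blacklist | awk -v RS= -v OFS=' ' '{$1=$1}1'
quick brown fox jumps over lazy dog
stitch in time saves nine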
To preprocess the blacklist file from words (one word per line):
sed 's/.*/\\b&\\b/' words > blacklist
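For example, if words contains the lines the and a (again, just an illustration), blacklist comes out as
\bthe\b
\ba\b
so grep only removes whole-word matches, not substrings inside longer words.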