Using gensim's Word2Vec model (gensim.models.Word2Vec), you can provide a trained model and a word for which you want to find the list of most similar words:
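A minimal sketch of that lookup (the corpus and variable names here are just illustrative; in practice you'd train on a real dataset):

```python
from gensim.models import Word2Vec

# Illustrative toy corpus -- replace with your own sentences.
sentences = [["the", "house", "is", "big"],
             ["the", "garden", "is", "green"],
             ["a", "big", "house", "with", "a", "garden"]]

model = Word2Vec(sentences, vector_size=100, min_count=1)

# Top-10 words most similar to "house" according to this model.
print(model.wv.most_similar("house", topn=10))
```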
I don't think what you're trying to achieve could ever give an accurate answer, simply because the two models are trained separately. Although both the English and the German model may produce similar distances between their respective word vectors, there's no guarantee that the word vector for 'House' points in the same direction as the word vector for 'Haus'.
In simple terms: if you trained both models with vector_size=3 and 'House' has the vector [0.5, 0.2, 0.9], there's no guarantee that 'Haus' will have the vector [0.5, 0.2, 0.9], or even anything close to it.
To work around this, you could first translate the English word to German and then use the German model's own vector for that word to look up similar words in the German model.
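A rough sketch of that idea, assuming you have pre-trained German vectors saved somewhere; the file path and the tiny translation dictionary are purely hypothetical stand-ins:

```python
from gensim.models import KeyedVectors

# Hypothetical path -- adjust to wherever your German vectors live.
german_vectors = KeyedVectors.load("german_vectors.kv")

# Tiny stand-in for a real English->German dictionary or translation API.
en_to_de = {"house": "Haus", "garden": "Garten"}

def similar_in_german(english_word, topn=10):
    """Translate the English word, then query the German model directly."""
    german_word = en_to_de[english_word]
    return german_vectors.most_similar(german_word, topn=topn)

print(similar_in_german("house"))
```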
TL;DR: You can't just plug vectors from one language's model into another and expect accurate results.
If you do want to query a model with a raw vector, the method similar_by_vector returns the top-N most similar words for that vector:
similar_by_vector(vector, topn=10, restrict_vocab=None)
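A short usage sketch, reusing the hypothetical german_vectors model from above:

```python
# Query the German model with a raw vector -- here, the German model's own
# vector for "Haus" (any numpy array of the right dimensionality works).
query_vector = german_vectors["Haus"]
print(german_vectors.similar_by_vector(query_vector, topn=10))
```

Note that this only gives sensible results when the query vector comes from (or is aligned with) the same vector space as the model you're querying, which is exactly why feeding English vectors into the German model won't work.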