Before using R, I used quite a bit of Perl. In Perl I often used hashes, and hash lookups are generally regarded as fast in Perl.
If you are trying to hash 10,000,000+ things in R using the hash package, building the hash takes a very long time. In my case it crashed R, even though the data occupied less than a third of my memory.
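For reference, this is roughly the approach that scaled poorly for me. The key and value vectors here are made-up placeholders, and the exact `hash()` call is a sketch of the package's constructor, not a benchmark script:

```r
library(hash)

# Hypothetical 10M-element key/value vectors for illustration
keys   <- sprintf("id%07d", seq_len(1e7))
values <- rnorm(1e7)

# Building the hash is the slow part at this scale;
# lookups themselves are fast once it exists
h <- hash(keys, values)
h[["id0000042"]]
```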
I had much better performance with the data.table package using setkey. If you are not familiar with data.table and setkey, the keyed fast-subset vignette is a good starting point: https://cran.r-project.org/web/packages/data.table/vignettes/datatable-keys-fast-subset.html
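A minimal sketch of the data.table approach, using the same made-up key/value data as above. `setkey()` sorts the table by the key column once, after which lookups use a binary search instead of a full vector scan:

```r
library(data.table)

# Hypothetical 10M-row table for illustration
dt <- data.table(key   = sprintf("id%07d", seq_len(1e7)),
                 value = rnorm(1e7))

# Sort by 'key' once; subsequent subsets on the key use binary search
setkey(dt, key)

dt["id0000042"]            # returns the matching row
dt[.("id0000042"), value]  # just the value column
```

Building the keyed table was dramatically faster for me than building an equivalent hash, and lookups were still quick.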
I realize the original question referred to 10,000 things, but Google directed me here a couple of days ago. I tried to use the hash package and had a really hard time. Then I found this blog post, which suggests that building the hash can take hours for 10M+ things, and that aligns with my experience:
https://appsilon.com/fast-data-lookups-in-r-dplyr-vs-data-table/