How can I adaptively add and use face images collected during authentication to improve the performance of face authentication?

Asked by 说谎 on 2020-12-17 06:32

My current project is to build a face authentication system. The constraint I have is that during enrollment, the user provides a single image for training. However, I can add and use face images collected during later authentication sessions to improve performance.

2 Answers
  • 2020-12-17 07:14

    I would recommend that you take a close look at SOMs (self-organizing maps). I think they address all the problems and constraints you have mentioned.

    You can apply it to the single-image-per-person problem. Using the multiple-SOM-face strategy, you can also adapt it for cases where additional images become available for training. What's neat about the whole concept is that when a new face is encountered, only the new face, rather than the whole original database, needs to be re-learned.

    A few links which you might find helpful along the way:

    http://en.wikipedia.org/wiki/Self-organizing_map (wiki)

    http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/tnn05.pdf (a research paper demonstrating the technique described above)

    Good Luck
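
    To make the idea concrete, here is a minimal sketch of the SOM-face approach: the enrollment face is split into small sub-blocks, a SOM is trained on those block vectors, and the identity is represented by the best-matching units (BMUs) its blocks activate. Everything here (grid size, block size, the random stand-in image) is an illustrative assumption, not code from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class SOM:
        """A tiny self-organizing map; illustrative, not a library API."""
        def __init__(self, rows, cols, dim, lr=0.5, sigma=1.5):
            self.w = rng.random((rows * cols, dim))
            # grid coordinate of each neuron, used by the neighborhood kernel
            self.coords = np.array([(r, c) for r in range(rows)
                                    for c in range(cols)], float)
            self.lr, self.sigma = lr, sigma

        def bmu(self, x):
            # index of the best-matching unit for input vector x
            return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

        def train(self, data, epochs=20):
            for t in range(epochs):
                lr = self.lr * (1 - t / epochs)            # decaying rate
                sigma = self.sigma * (1 - t / epochs) + 0.1
                for x in data:
                    b = self.bmu(x)
                    d = np.linalg.norm(self.coords - self.coords[b], axis=1)
                    h = np.exp(-(d ** 2) / (2 * sigma ** 2))
                    self.w += lr * h[:, None] * (x - self.w)

    def face_blocks(img, block=4):
        # split a (H, W) face image into flattened block-by-block sub-blocks
        h, w = img.shape
        return np.array([img[r:r + block, c:c + block].ravel()
                         for r in range(0, h - block + 1, block)
                         for c in range(0, w - block + 1, block)])

    # enroll from a single 16x16 "face" (random stand-in for a real image)
    enrolled = rng.random((16, 16))
    som = SOM(6, 6, 16)
    som.train(face_blocks(enrolled))

    # identity signature = BMU sequence of the enrolled face's blocks
    signature = [som.bmu(b) for b in face_blocks(enrolled)]

    # a later probe is matched by how many blocks hit the same BMUs; new
    # images collected at authentication time can be folded in by calling
    # som.train() on their blocks alone, without retraining from scratch
    probe = enrolled + rng.normal(0, 0.05, enrolled.shape)
    probe_sig = [som.bmu(b) for b in face_blocks(probe)]
    match = np.mean([a == b for a, b in zip(signature, probe_sig)])
    print(f"block agreement: {match:.2f}")
    ```

    The point of the last comment is the property the answer highlights: the map can be updated incrementally with just the newly collected blocks, which is exactly what a system that accumulates images during authentication sessions needs.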

  • 2020-12-17 07:19

    To make your classifier robust, you need to use condition-independent features. For example, you cannot rely on face color, since it depends on the lighting conditions and on the person's current state. You can, however, use the distance between the eyes, which is largely invariant to such changes.

    I would suggest building a model from such independent features and retraining the classifier each time a person starts an authentication session. The best model I can think of is the Active Appearance Model.
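
    A minimal sketch of this idea: keep a per-user template of roughly condition-independent geometric ratios and update it online with each image collected at a successful authentication. The landmark coordinates, feature choices, and tolerance below are illustrative assumptions; in practice the landmarks would come from a fitted model such as an AAM:

    ```python
    import numpy as np

    def geometric_features(landmarks):
        """landmarks: dict of (x, y) points for one face image."""
        le, re = np.array(landmarks["left_eye"]), np.array(landmarks["right_eye"])
        nose, chin = np.array(landmarks["nose"]), np.array(landmarks["chin"])
        eye_dist = np.linalg.norm(le - re)
        face_len = np.linalg.norm((le + re) / 2 - chin)
        # ratios are scale-invariant, so they tolerate camera-distance changes
        return np.array([eye_dist / face_len,
                         np.linalg.norm(nose - chin) / face_len])

    class UserTemplate:
        def __init__(self, feats):
            self.mean = feats.copy()   # running mean of the feature vector
            self.n = 1

        def matches(self, feats, tol=0.08):
            return bool(np.all(np.abs(feats - self.mean) < tol))

        def update(self, feats):
            # incremental mean: absorbs each authenticated image without
            # storing or re-processing the whole image history
            self.n += 1
            self.mean += (feats - self.mean) / self.n

    # enrollment from a single image (hypothetical landmark coordinates)
    enroll = {"left_eye": (30, 40), "right_eye": (70, 40),
              "nose": (50, 60), "chin": (50, 95)}
    tpl = UserTemplate(geometric_features(enroll))

    # a later session: slightly shifted landmarks from a new capture
    session = {"left_eye": (31, 41), "right_eye": (71, 40),
               "nose": (50, 61), "chin": (51, 96)}
    feats = geometric_features(session)
    if tpl.matches(feats):
        tpl.update(feats)   # fold the new sample into the template
    print(tpl.n, tpl.mean.round(3))
    ```

    Updating only on successful matches keeps impostor images out of the template, while the running mean gradually adapts to slow changes in the user's appearance.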
