Article type: Full research paper

Authors

1 Ph.D. Student, Faculty of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic)

2 Associate Professor, Faculty of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic)

3 Associate Professor, Faculty of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic)

DOI: 10.22041/ijbme.2012.13117

Abstract

In this paper, a method based on prior knowledge of a new subject is proposed with the aim of increasing the generalization ability of a facial emotional expression recognition system. For adequate recognition, a combination of geometric features and facial texture descriptors was used. These features were compared with holistic features (a kernel principal component analysis of the face image, and the raw face image itself). To analyze the proposed features, the sensitivity of their recognition rate to noise and to interpersonal variation was examined. The results showed that by making the system dependent on the expressing subject, the recognition rate can be raised to 96%, a result obtained with the holistic features. Moreover, the holistic kernel-PCA method was more sensitive to interpersonal variation than the other features. Based on the limited knowledge of the new subject, virtual samples were generated and used to strengthen the learning of the recognition system. The person-independent recognition result of this method improved significantly (P<0.05) over the baseline method, reaching an accuracy of 91.39%.
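The holistic representation mentioned above (kernel PCA applied to whole face images) can be illustrated with a minimal sketch. This is not the authors' implementation: the image size, number of samples, kernel, and `gamma` value below are placeholder assumptions, and random arrays stand in for aligned grayscale face crops.

```python
# Illustrative sketch (not the paper's code): mapping flattened face images
# into a holistic feature space with kernel PCA, one of the holistic
# representations compared against geometric/texture features.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
faces = rng.random((40, 32 * 32))   # stand-in for 40 aligned 32x32 face crops, flattened

# Hypothetical parameters; the paper's kernel and dimensionality may differ.
kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-3)
holistic_features = kpca.fit_transform(faces)  # one holistic descriptor per face
print(holistic_features.shape)  # (40, 20)
```

Each row of `holistic_features` would then be fed to the classifier in place of the geometric/texture feature vector.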


Article title [English]

Person-independent facial expression recognition based on prior knowledge from the new subject

Authors [English]

  • Amin Mohammadian 1
  • Hasan Aghaeinia 2
  • Farzad Towhidkhah 3

1 Ph.D. Student, Bioelectric Department, Faculty of Biomedical Engineering, Amirkabir University of Technology

2 Associate Professor, Department of Electrical Engineering, Amirkabir University of Technology

3 Associate Professor, Bioelectric Department, Faculty of Biomedical Engineering, Amirkabir University of Technology

Abstract [English]

In this paper, a method based on prior knowledge from a new subject is proposed to improve the performance of person-independent facial expression recognition. First, to obtain a baseline system, a combination of geometric features and texture descriptors is compared with holistic features (i.e., face images mapped with kernel PCA, and raw face images). The comparison was carried out under noisy conditions and evaluated with both person-dependent and person-independent cross-validation. The baseline system was evaluated by leave-one-subject-out cross-validation. Since the same subjects never appear in both the training and test phases, the baseline recognition system is person-independent, and its performance is substantially lower than in the person-dependent case. To improve the baseline system, a method is proposed in which virtual samples are generated from the prior knowledge of the new subject and used in the learning process. The results show that the recognition rate reaches 96% for the person-dependent baseline system, that the kernel-PCA method is more sensitive than the other features to interpersonal variability, and that the recognition rate of the person-independent system improves significantly (P<0.05), reaching 91.39%.
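The leave-one-subject-out protocol described in the abstract can be sketched as follows. Every fold holds out all samples of one subject, so the classifier is never tested on a person it saw during training. The features, labels, and linear SVM below are placeholder assumptions, not the paper's actual features or model.

```python
# Hedged sketch of leave-one-subject-out (person-independent) evaluation.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((60, 10))                 # stand-in: 60 samples, 10-dim feature vectors
y = rng.integers(0, 6, size=60)          # stand-in labels: 6 basic expression classes
subjects = np.repeat(np.arange(10), 6)   # 10 subjects, 6 samples each

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    # The held-out subject contributes no training samples in this fold.
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(len(scores))  # one accuracy per held-out subject: 10
```

Averaging `scores` gives the person-independent recognition rate; re-running with random train/test splits that mix subjects would instead approximate the (easier) person-dependent setting.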

کلیدواژه‌ها [English]

  • Facial expression recognition
  • Person-independent system
  • Prior knowledge
  • Virtual samples