DETECTION OF HUMAN FACIAL EXPRESSIONS USING A CONVOLUTIONAL NEURAL NETWORK ON ONLINE LEARNING IMAGES


Evan Tanuwijaya
Timotius
David Christian Kartamihardja
Timotius Leonardo Lianoto

Abstract

During the pandemic, many activities that were previously carried out face-to-face were moved online to reduce the spread of the virus. Video-conferencing applications are widely used for meetings and for activities such as learning. In online learning, teachers and lecturers sometimes find it difficult to observe whether participants understand the material. This study uses the KDEF (Karolinska Directed Emotional Faces) dataset, which has seven expression classes. To identify participants' expressions, a model was built using a Convolutional Neural Network (CNN) that can detect human facial expressions. The contribution of this work is a pipeline in which a YOLO-face detector is combined with a CNN classifier based on the AlexNet architecture and a modified AlexNet. The model works as follows: an image is first processed by YOLO-face, and the detected face regions are then passed to the classification CNN, a modified AlexNet, which assigns each face to one of the expression classes. During training, the model achieved an accuracy of 0.94, a precision of 0.92, and a recall of 0.96. In this study, faces were successfully detected and their expressions classified. For further development, the model needs to be optimized and its accuracy improved so that it can classify facial expressions more reliably.
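The sketch below illustrates the detect-then-classify pipeline described in the abstract, assuming PyTorch/torchvision. The YOLO-face detector is represented only by a placeholder function (`detect_faces`), since its weights and implementation are not part of this page, and the class-label order is an assumption based on the seven KDEF expression categories; the classifier is torchvision's AlexNet with its final layer replaced, not the authors' exact modified AlexNet.

```python
# Minimal sketch of a YOLO-face -> CNN (AlexNet-style) expression pipeline.
# Assumptions: PyTorch + torchvision are available; detect_faces() is a hypothetical
# stand-in for the YOLO-face detector; the KDEF label order below is illustrative.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

KDEF_CLASSES = ["afraid", "angry", "disgusted", "happy", "neutral", "sad", "surprised"]

def build_expression_classifier(num_classes: int = 7) -> nn.Module:
    """AlexNet backbone with the last fully connected layer swapped for 7 expression classes."""
    model = models.alexnet(weights=None)               # architecture only; train on KDEF separately
    model.classifier[6] = nn.Linear(4096, num_classes)
    return model

# Preprocessing expected by an AlexNet-style network (224x224 RGB input).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def detect_faces(frame: Image.Image) -> list[tuple[int, int, int, int]]:
    """Placeholder for the YOLO-face detector: should return (left, upper, right, lower) boxes."""
    raise NotImplementedError("plug in a YOLO-face (or other) face detector here")

@torch.no_grad()
def classify_expressions(frame: Image.Image, model: nn.Module) -> list[str]:
    """Crop each detected face and classify it into one of the seven expression classes."""
    model.eval()
    labels = []
    for box in detect_faces(frame):
        face = preprocess(frame.crop(box)).unsqueeze(0)  # shape: 1 x 3 x 224 x 224
        pred = model(face).argmax(dim=1).item()
        labels.append(KDEF_CLASSES[pred])
    return labels
```

In practice, each frame captured from the video-conference session would be passed to `classify_expressions`, yielding one predicted expression label per detected participant face.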

Article Details

How to Cite
Tanuwijaya, E., Timotius, Kartamihardja, D. C., & Lianoto, T. L. (2021). DETEKSI EKSPRESI WAJAH MANUSIA MENGGUNAKAN CONVOLUTION NEURAL NETWORK PADA CITRA PEMBELAJARAN DARING. JURNAL ILMIAH BETRIK : Besemah Teknologi Informasi Dan Komputer, 12(3), 224-230. https://doi.org/10.36050/betrik.v12i3.357
Section
Articles

References

Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv.
Chen, W., Huang, H., Peng, S., Zhou, C., & Zhang, C. (2020). YOLO-face: a real-time face detector. Visual Computer. https://doi.org/10.1007/s00371-020-01831-7
Goeleven, E., De Raedt, R., Leyman, L., & Verschuere, B. (2008). The Karolinska directed emotional faces: A validation study. Cognition and Emotion, 22(6), 1094–1118. https://doi.org/10.1080/02699930701626582
Jamaluddin, D., Ratnasih, T., Gunawan, H., & Paujiah, E. (2020). Pembelajaran Daring Masa Pandemik Covid-19 Pada Calon Guru: Hambatan, Solusi dan Proyeksi. Karya Tulis Ilmiah UIN Sunan Gunung Djati Bandung, 1–10. http://digilib.uinsgd.ac.id/30518/
Jayaraman, U., Gupta, P., Gupta, S., Arora, G., & Tiwari, K. (2020). Recent development in face recognition. Neurocomputing, 408, 231–245. https://doi.org/10.1016/j.neucom.2019.08.110
Li, B., & Lima, D. (2021). Facial expression recognition via ResNet-50. International Journal of Cognitive Computing in Engineering, 2(February), 57–64. https://doi.org/10.1016/j.ijcce.2021.02.002
Prasetyawan, D., & ’Uyun, S. (2020). Penentuan Emosi Pada Video Dengan Convolutional Neural Network. JISKA (Jurnal Informatika Sunan Kalijaga), 5(1), 23. https://doi.org/10.14421/jiska.2020.51-04
Qi, C., Li, M., Wang, Q., Zhang, H., Xing, J., Gao, Z., & Zhang, H. (2018). Facial Expressions Recognition Based on Cognition and Mapped Binary Patterns. IEEE Access, 6, 18795–18803. https://doi.org/10.1109/ACCESS.2018.2816044
Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You Only Look Once: Unified, Real-Time Object Detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Retrieved May 18, 2021, from http://pjreddie.com/yolo/
Shustanov, A., & Yakimov, P. (2017). CNN Design for Real-Time Traffic Sign Recognition. Procedia Engineering, 201, 718–725. https://doi.org/10.1016/j.proeng.2017.09.594
Wati, V., Kusrini, & Fatta, H. Al. (2019). Real time face expression classification using convolutional neural network algorithm. 2019 International Conference on Information and Communications Technology (ICOIACT), 497–501. https://doi.org/10.1109/ICOIACT46704.2019.8938521
Zhang, J., He, L., Karkee, M., Zhang, Q., Zhang, X., & Gao, Z. (2018). Branch detection for apple trees trained in fruiting wall architecture using depth features and Regions-Convolutional Neural Network (R-CNN). Computers and Electronics in Agriculture, 155(August), 386–393. https://doi.org/10.1016/j.compag.2018.10.029