Minimum variance method to obtain the best shot in video for face recognition

Kazuo Ohzeki, Ryota Aoyama, Yutaka Hirakawa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper describes a face recognition algorithm that uses feature points of face parts and is therefore classified as a feature-based method. Because recognition performance depends on the combination of adopted feature points, we utilize all reliable feature points effectively. From the video input, well-conditioned face images, frontal and without facial expression, are extracted. To select such well-conditioned images, an iterative variance-minimization method is applied to the varying input face images. The iteration converges rapidly to the minimum variance of 1 for a quarter to an eighth of all data, which corresponds to an average frequency of 3.75-7.5 Hz. In the worst case, the maximum interval between two values with minimum deviation is about 0.8 seconds for the tested feature-point sample.
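
The abstract does not spell out the selection procedure in detail, so the following Python sketch only illustrates the general idea of iterative variance minimization and should not be read as the authors' exact algorithm. It assumes each frame is reduced to a single feature-point measurement (for example, an inter-landmark distance) and that the most deviant frames are discarded greedily until the variance of the retained frames reaches the target of 1 quoted above; the names select_best_frames, target_variance, and min_frames are hypothetical.

import numpy as np

def select_best_frames(measurements, target_variance=1.0, min_frames=2):
    # Greedily drop the frame farthest from the current mean until the
    # variance of the retained measurements falls to target_variance.
    values = np.asarray(measurements, dtype=float)
    kept = list(range(len(values)))           # indices of retained frames
    while len(kept) > min_frames:
        subset = values[kept]
        if subset.var() <= target_variance:   # converged: variance small enough
            break
        worst = int(np.argmax(np.abs(subset - subset.mean())))
        kept.pop(worst)                       # discard the most deviant frame
    return kept

# Toy usage: noisy measurements around a stable frontal pose.
rng = np.random.default_rng(0)
measurements = 60 + rng.normal(0.0, 3.0, size=240)    # e.g. 8 s of 30 fps video
best = select_best_frames(measurements)
print(f"kept {len(best)} of {len(measurements)} frames")

If the input video runs at 30 frames per second (an assumption consistent with the figures above), retaining a quarter to an eighth of all frames corresponds to 30/4 = 7.5 Hz down to 30/8 = 3.75 Hz, matching the 3.75-7.5 Hz range quoted in the abstract.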

Original language: English
Title of host publication: Proceedings of the 2015 Federated Conference on Computer Science and Information Systems, FedCSIS 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 869-874
Number of pages: 6
ISBN (Print): 9788360810651
DOI: 10.15439/2015F398
Publication status: Published - 2015
Event: Federated Conference on Computer Science and Information Systems, FedCSIS 2015 - Lodz, Poland
Duration: 2015 Sep 13 - 2015 Sep 16

Other

Other: Federated Conference on Computer Science and Information Systems, FedCSIS 2015
Country: Poland
City: Lodz
Period: 2015 Sep 13 - 2015 Sep 16

Fingerprint

Face recognition

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Ohzeki, K., Aoyama, R., & Hirakawa, Y. (2015). Minimum variance method to obtain the best shot in video for face recognition. In Proceedings of the 2015 Federated Conference on Computer Science and Information Systems, FedCSIS 2015 (pp. 869-874). [2015F398] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.15439/2015F398
