Frequently Asked Questions

1. Is it necessary to use both the audio and video channels?
The challenge data contains audio, video, and meta-data. The meta-data comprises actor identity, age, and gender. Participants are welcome to use any combination of modalities.

2. Can scene information other than face information be used?
Context analysis in FER is an active research topic. Participants can use scene, background, body pose, and similar contextual information along with the face information.

3. Which face and fiducial point detectors have you used?
For face detection, we found Zhu and Ramanan's mixture-of-parts based detector useful in our experiments. The authors have made an implementation of their method publicly available at: LINK. Tracking was performed using the Intraface tracker [LINK].
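For illustration only, the following is a minimal face-detection sketch in Python using OpenCV's Haar cascade as a stand-in detector; the challenge experiments used Zhu and Ramanan's mixture-of-parts detector (implementation linked above), and the frame file name here is a hypothetical placeholder.

    import cv2

    # Load the frontal-face Haar cascade bundled with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    frame = cv2.imread("frame.png")                 # hypothetical video frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # detector expects grayscale
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Crop each detected face for downstream expression analysis.
    crops = [frame[y:y + h, x:x + w] for (x, y, w, h) in faces]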
4. Can I use both the Train and Val data for learning my model?
When evaluating a method on the Test set, data from both the Train and Val partitions can be used for learning the model.
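As a minimal sketch of pooling the two labelled partitions, assuming a hypothetical directory layout with one sub-folder per emotion class and .avi clips (not necessarily the official AFEW package structure):

    from pathlib import Path

    def list_clips(partition_dir):
        """Collect (clip_path, emotion_label) pairs, one sub-folder per label."""
        pairs = []
        for label_dir in Path(partition_dir).iterdir():
            if label_dir.is_dir():
                pairs += [(clip, label_dir.name) for clip in label_dir.glob("*.avi")]
        return pairs

    # Pool both labelled partitions when training the model evaluated on Test.
    training_set = list_clips("AFEW/Train") + list_clips("AFEW/Val")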
5. Is the use of a commercial face detector such as Google Picasa OK?
Any face detector, whether commercial or academic, can be used to participate in the challenge. The paper accompanying the challenge result submission should clearly describe the detectors/libraries used.

6. Can I learn my model on both the labelled Train data and the unlabelled Test data?
No. The data partitions are subject independent, and the Test data is to be used for testing purposes only.

7. Can I use external data for training along with the data provided?
Participants are free to use external data for training along with the AFEW Train and Val partitions. However, this should be clearly discussed in the accompanying paper.
8. Will the review process be anonymous?
The review process is double-blind.