Similar Documents

  • Selective Transfer Machine for Personalized Facial Action Unit Detection (Article)
  • Joint Patch and Multi-label Learning for Facial Action Unit Detection (Article)
  • Facial Action Unit Event Detection by Cascade of Tasks (Article)
  • Multi-view dynamic facial action unit detection (Article)
  • Facial Action Unit Detection on ICU Data for Pain Assessment (Article)
  • Upper, Middle and Lower Region Learning for Facial Action Unit Detection (Article)
  • Facial Action Unit Detection using 3D Facial Landmarks (Article)

Selective Transfer Machine for Personalized Facial Action Unit Detection

Content Provider PubMed Central
Author Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.
Abstract Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers, neglecting individual differences among target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. These individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to as Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods on three major databases: CK+ [20], GEMEP-FERA [32] and RU-FACS [2]. STM outperformed generic classifiers in all.
Related Links http://dx.doi.org/10.1109/cvpr.2013.451
Starting Page 3515
File Format PDF
ISSN 1063-6919
Journal Proceedings / CVPR, IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume Number 2013
Language English
Publisher Date 2013-06-01
Access Restriction Open
Subject Keyword Research in Higher Education
Content Type Text
Resource Type Article
Subject Computer Vision and Pattern Recognition Software
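The abstract's core idea — re-weighting training samples by their relevance to an unseen test subject, then training a classifier on the weighted data — can be sketched roughly as follows. This is a simplified illustration (kernel-similarity weights plus weighted logistic regression), not the paper's actual STM formulation, which jointly optimizes an SVM and kernel-mean-matching weights; all function names here are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def relevance_weights(X_train, X_test, gamma=1.0):
    """Weight each training sample by its mean kernel similarity to the
    (unlabeled) test samples, normalized so the weights average to 1."""
    w = rbf_kernel(X_train, X_test, gamma).mean(axis=1)
    return w * len(w) / w.sum()

def weighted_logreg(X, y, w, lr=0.1, steps=500):
    """Gradient-descent logistic regression with per-sample weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))   # predicted probabilities
        grad = Xb.T @ (w * (p - y)) / len(y)    # weighted gradient
        theta -= lr * grad
    return theta

# Toy demo: training pool drawn from two "subjects"; the test subject
# resembles the second group, so its samples should be up-weighted.
rng = np.random.default_rng(0)
X_a = rng.normal(0.0, 1.0, (50, 2))
X_b = rng.normal(3.0, 1.0, (50, 2))
X_train = np.vstack([X_a, X_b])
y_train = np.concatenate([np.zeros(50), np.ones(50)])
X_test = rng.normal(3.0, 1.0, (20, 2))

w = relevance_weights(X_train, X_test, gamma=0.5)
theta = weighted_logreg(X_train, y_train, w)
```

In this sketch the second group of training samples, whose distribution matches the test subject, receives larger weights and therefore dominates the fitted classifier — the same intuition behind STM's attenuation of person-specific biases, realized here with a much cruder weighting scheme.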