Sign Language Tutor (İşaret Dili Eğitmeni)

In this project, we have conducted research on the analysis of Turkish Sign Language and developed educational tools for teaching it, with the support of the Scientific and Technological Research Council of Turkey. Turkish Sign Language (TID) is a visual language, used by the hearing impaired, which consists of hand gestures and facial expressions. Sign language is currently an active research field for both computer vision researchers and linguists. Beyond its research interest, the development of educational tools for teaching sign language benefits the wider community. The aim of this project is to develop educational tools for sign language while conducting research on sign analysis and recognition from videos. To this end, we have worked in five areas:

  • Development of Turkish Sign Language databases for research and education
  • Development of a website and a stand-alone application for a TID sign dictionary, based on one of the collected databases
  • Research on the analysis of sign language from videos, and the development of a prototype interactive sign tutor
  • Development of Signiary, a sign dictionary that uses the Turkish Radio and Television Corporation’s news broadcasts for the hearing impaired
  • Research on distributed sign language recognition in a multi-agent environment

The project’s outputs include theses, reports, and journal and conference papers, as well as databases and demonstrator programs.

Sample figures and screenshots from the developed software

Publications related to the project

Theses

  • Oya Aran, “Vision Based Sign Language Recognition: Modeling and Recognizing Isolated Signs With Manual and Non-manual Components”, PhD Thesis, Bogazici University, 2008.
  • İsmail Arı, “Facial Feature Tracking and Expression Recognition for Sign Language”, MS Thesis, Bogazici University, 2008.
  • Pınar Santemiz, “Alignment And Multimodal Analysis In Signed Speech”, MS Thesis, Bogazici University, 2009.
  • İlker Yıldırım, “Cooperative Sign Language Tutoring: A Multiagent Approach”, MS Thesis, Bogazici University, 2009.

Journal Articles

  • Oya Aran, Lale Akarun, “A Multi-class Classification Strategy for Fisher Scores: Application to Signer Independent Sign Language Recognition”, Pattern Recognition, vol. 43, no. 5, pp. 1717-1992, May 2010.
  • Cem Keskin, Lale Akarun, “Input-output HMM based 3D hand gesture recognition and spotting for generic applications”, Pattern Recognition Letters, vol. 30, no. 12, pp. 1086-1095, September 2009.
  • Oya Aran, Thomas Burger, Alice Caplier, Lale Akarun, “A Belief-Based Sequential Fusion Approach for Fusing Manual and Non-Manual Signs”, Pattern Recognition, vol. 42, no. 5, pp. 812-822, May 2009.
  • Oya Aran, Ismail Ari, Alexandre Benoit, Pavel Campr, Ana Huerta Carrillo, François-Xavier Fanard, Lale Akarun, Alice Caplier, Michele Rombaut, Bulent Sankur, “SignTutor: An Interactive System for Sign Language Tutoring”, IEEE MultiMedia, vol. 16, no. 1, pp. 81-93, Jan.-March 2009.
  • Ebru Arisoy, Dogan Can, Siddika Parlak, Hasim Sak, Murat Saraclar, “Turkish Broadcast News Transcription and Retrieval”, IEEE Transactions on Audio, Speech, and Language Processing, special issue on morphologically rich languages, 2009.
  • Ilker Yildirim, Pinar Yolum, “Hybrid Models for Achieving and Maintaining Cooperative Symbiotic Groups”, Mind & Society, 2008.
  • Oya Aran, Ismail Ari, Pavel Campr, Erinc Dikici, Marek Hruz, Siddika Parlak, Lale Akarun, Murat Saraclar, “Speech and Sliding Text Aided Sign Retrieval from Hearing Impaired Sign News Videos”, Journal on Multimodal User Interfaces, vol. 2, no. 1, Springer, 2008.
  • Alice Caplier, Sébastien Stillittano, Oya Aran, Lale Akarun, Gérard Bailly, Denis Beautemps, Nouredine Aboutabit, Thomas Burger, “Image and video for hearing impaired people”, EURASIP Journal on Image and Video Processing, special issue on Image and Video Processing for Disability, 2007.

Book Chapters

  • Oya Aran, Thomas Burger, Lale Akarun, Alice Caplier, “Gestural Interfaces for Hearing-Impaired Communication”, in Multimodal User Interfaces: From Signals to Interaction, Dimitrios Tzovaras (Ed.), Springer, 2008.

Proceedings in International Conferences

  • Oya Aran, Thomas Burger, Alice Caplier, Lale Akarun, “Sequential belief-based fusion of manual and non-manual signs”, Gesture Workshop, Lisbon, April 2007.
  • Thomas Burger, Alexandra Urankar, Oya Aran, Lale Akarun, Alice Caplier, “Cued Speech Hand Shape Recognition”, 2nd International Conference on Computer Vision Theory and Applications (VISAPP’07), Spain, 2007.
  • Oya Aran, Ismail Ari, Pavel Campr, Erinç Dikici, Marek Hruz, Deniz Kahramaner, Siddika Parlak, Lale Akarun, Murat Saraçlar, “Speech and Sliding Text Aided Sign Retrieval from Hearing Impaired Sign News Videos”, eNTERFACE’07 Summer Workshop on Multimodal Interfaces, Istanbul, Turkey, 2007.
  • Siddika Parlak, Murat Saraçlar, “Spoken Term Detection for Turkish Broadcast News”, ICASSP, Las Vegas, Nevada, USA, 2008.
  • Oya Aran, Ismail Ari, Erinc Dikici, Siddika Parlak, Lale Akarun, Murat Saraclar, “Bogazici University Turkish Sign Language Dictionary”, ICASSP Show and Tell, Las Vegas, Nevada, USA, 2008.
  • Oya Aran, Lale Akarun, “Multi-class Classification Strategies for Fisher Scores of Gesture and Sign Sequences”, International Conference on Pattern Recognition (ICPR 2008), Florida, 2008.
  • Koray Balci, Lale Akarun, “Clustering Poses of Motion Capture Data Using Limb Centroids”, ISCIS 2008, Istanbul, Oct. 2008.
  • Koray Balci, Lale Akarun, “Generating Motion Graphs From Clusters of Individual Poses”, ISCIS 2009, Istanbul, Sep. 2009.
  • Ismail Ari, Asli Uyar, Lale Akarun, “Facial Feature Tracking and Expression Recognition for Sign Language”, ISCIS 2008, Istanbul, Oct. 2008.
  • İlker Yıldırım, Pınar Yolum, “Hybrid Models for Achieving and Maintaining Collaborative Symbiotic Groups”, in Proceedings of the 5th European Social Simulation Association Conference, 2008.
  • Pınar Santemiz, Oya Aran, Murat Saraçlar, Lale Akarun, “Extraction of Isolated Signs from Sign Language Videos via Multiple Sequence Alignment”, in Proceedings of the 13th International Conference on Speech and Computer (SPECOM’09), St. Petersburg, Russia, 2009.
  • T. Som, D. Can, M. Saraclar, “HMM-based Sliding Video Text Recognition for Turkish Broadcast News”, ISCIS 2009, METU Northern Cyprus, 2009.
  • Pavel Campr, Marek Hruz, Alexey Karpov, Pinar Santemiz, Milos Zelezny, Oya Aran, “Sign-language-enabled information kiosk”, in Proceedings of the 4th International Summer Workshop on Multimodal Interfaces (eNTERFACE’08), pp. 24-33, Paris, France, 2008.
  • Pavel Campr, Marek Hruz, Alexey Karpov, Pinar Santemiz, Milos Zelezny, Oya Aran, “Input and output modalities used in a sign-language-enabled information kiosk”, in Proceedings of the 13th International Conference on Speech and Computer (SPECOM’09), St. Petersburg, Russia, 2009.
  • İlker Yıldırım, “Cooperative Sign Language Tutoring: A Multiagent Approach”, 10th International Workshop on Engineering Societies in the Agents’ World (ESAW), 2009.
  • Pinar Santemiz, Oya Aran, Murat Saraclar, Lale Akarun, “Automatic Sign Segmentation from Continuous Signing via Multiple Sequence Alignment”, in Proc. IEEE Int. Workshop on Human-Computer Interaction, Kyoto, Japan, Oct. 4, 2009.

Proceedings in Local Conferences

  • Oya Aran, İsmail Arı, Amaç Güvensan, Hakan Haberdar, Zeyneb Kurt, İrem Türkmen, Aslı Uyar, Lale Akarun, “Türk İşaret Dili Yüz İfadesi ve Baş Hareketi Veritabanı” [Turkish Sign Language Facial Expression and Head Movement Database], Signal Processing and Communications Applications Conference (SIU 2007), Eskişehir, June 2007.
  • Oya Aran, Lale Akarun, “İşaret Dili İşleme ve Etkileşimli İşaret Dili Eğitim Araçları” [Sign Language Processing and Interactive Sign Language Education Tools], Signal Processing and Communications Applications Conference (SIU 2007), Eskişehir, June 2007.
  • Hamdi Dibeklioğlu, Erinç Dikici, Pınar Santemiz, Koray Balcı, Lale Akarun, “İşaret Dili Hareketlerinin İzlenmesi ve İki Boyutlu Özniteliklerden İşaret Dili Hareketi Sentezlenmesi” [Tracking Sign Language Gestures and Synthesizing Sign Language Gestures from Two-Dimensional Features], Signal Processing and Communications Applications Conference (SIU 2007), Eskişehir, June 2007.
  • Oya Aran, Lale Akarun, “Etkileşimli Parçacık Süzgeci Yöntemi ile Kapatmaya Dayanıklı Yüz ve El Takibi” [Occlusion-Resistant Face and Hand Tracking with an Interacting Particle Filter Method], Signal Processing and Communications Applications Conference (SIU 2008), Didim, June 2008.
  • Pınar Santemiz, Oya Aran, Murat Saraçlar, Lale Akarun, “İşaret Dili Videolarından Hizalama ile Ayrık İşaret Çıkarımı” [Isolated Sign Extraction from Sign Language Videos via Alignment], Signal Processing and Communications Applications Conference (SIU 2009), Antalya, April 2009.
  • İsmail Arı, Lale Akarun, “Yüz Özniteliklerinin Takibi ve İşaret Dili için İfade Tanıma” [Facial Feature Tracking and Expression Recognition for Sign Language], Signal Processing and Communications Applications Conference (SIU 2009), Antalya, April 2009.
  • E. Dikici, M. Saraçlar, “Sliding Text Recognition in Broadcast News”, IEEE 16th Signal Processing and Communications Applications Conference (SIU 2008), Didim, Turkey, 2008.

Software

Database

  • BUHMAP-DB: A video database of non-manual signs that includes both facial expressions and global head motion.