
Publications of year 2021
Articles in journal, book chapters
  1. Ergun Biçici. Parallel Feature Weight Decay Algorithms for Fast Development of Machine Translation Models. Machine Translation, 35:239–263, 2021. ISSN: 0922-6567. [doi:10.1007/s10590-021-09275-z] Keyword(s): Machine Translation, Natural Language Processing.
    Abstract:
    Parallel feature weight decay algorithms, parfwd, are engineered for language- and task-adaptive instance selection to build distinct machine translation (MT) models and enable the fast development of accurate MT using fewer data and less computation. parfwd decay the weights of both source and target features to increase their average coverage. In a conference on MT (WMT), parfwd achieved the lowest translation error rate from French to English in 2015, and a rate 11.7% less than the top phrase-based statistical MT (PBSMT) in 2017. parfwd also achieved a rate 5.8% less than the top in TweetMT and the top from Catalan to English. BLEU upper bounds identify the translation directions that offer the largest room for relative improvement and MT models that use additional data. The performance trends angle shows the power of MT models to convert unit data into unit translation results, or more BLEU for an increase in coverage. The source coverage angle of parfwd in the 2013–2019 WMT reached +6° better than the top with 35° for translation into English, and it was +1.4° better than the top with 22° overall.
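    The feature-decay idea summarized in the abstract can be illustrated with a small greedy selection loop. The sketch below is a minimal illustration under assumed defaults (word unigrams as features, initial weight 1.0, multiplicative decay 0.5), not the authors' parfwd implementation: each candidate sentence is scored by the current weights of its features, the best-scoring sentence is selected, and the weights of the features it covers are decayed so that later selections favor not-yet-covered features.

```python
def fwd_select(candidates, num_select, decay=0.5):
    """Greedy instance selection with feature weight decay.

    Minimal sketch of the feature-decay idea: features (here, word
    unigrams) start with weight 1.0; after a sentence is selected, the
    weights of its features are multiplied by `decay`, steering later
    picks toward uncovered features. Names and defaults are
    illustrative, not taken from the paper.
    """
    weights = {}                      # feature -> current weight (default 1.0)
    pool = [s.split() for s in candidates]
    chosen = [False] * len(pool)
    selected = []
    for _ in range(min(num_select, len(pool))):
        # Score each remaining sentence by its average feature weight.
        best_i, best_score = -1, float("-inf")
        for i, feats in enumerate(pool):
            if chosen[i] or not feats:
                continue
            score = sum(weights.get(f, 1.0) for f in feats) / len(feats)
            if score > best_score:
                best_i, best_score = i, score
        if best_i < 0:
            break
        chosen[best_i] = True
        selected.append(candidates[best_i])
        # Decay the weights of the features just covered.
        for f in pool[best_i]:
            weights[f] = weights.get(f, 1.0) * decay
    return selected
```

    For example, selecting two sentences from ["a b", "a b", "c d"] yields ["a b", "c d"]: after the first pick, the weights of "a" and "b" are halved, so the sentence covering the new features "c" and "d" outscores the duplicate.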

    @article{Bicici:parfwd:MTJ2021,
    author = {Ergun Bi\c{c}ici},
    title = {Parallel Feature Weight Decay Algorithms for Fast Development of Machine Translation Models},
    journal = {Machine Translation},
    year = {2021},
    volume = {35},
    pages = {239--263},
    issn = {0922-6567},
    doi = {10.1007/s10590-021-09275-z},
    keywords = {Machine Translation, Natural Language Processing},
    abstract = {Parallel feature weight decay algorithms, parfwd, are engineered for language- and task-adaptive instance selection to build distinct machine translation (MT) models and enable the fast development of accurate MT using fewer data and less computation. parfwd decay the weights of both source and target features to increase their average coverage. In a conference on MT (WMT), parfwd achieved the lowest translation error rate from French to English in 2015, and a rate $11.7\%$ less than the top phrase-based statistical MT (PBSMT) in 2017. parfwd also achieved a rate $5.8\%$ less than the top in TweetMT and the top from Catalan to English. BLEU upper bounds identify the translation directions that offer the largest room for relative improvement and MT models that use additional data. The performance trends angle shows the power of MT models to convert unit data into unit translation results, or more BLEU for an increase in coverage. The source coverage angle of parfwd in the 2013--2019 WMT reached +6\textdegree\, better than the top with $35$\textdegree\, for translation into English, and it was +1.4\textdegree\, better than the top with $22$\textdegree\, overall.},
    
    }
    


Conference articles
  1. Ergun Biçici. RTM Super Learner Results at Quality Estimation Task. In Proc. of the Sixth Conf. on Machine Translation (WMT21), Online, November 2021. Keyword(s): Machine Translation, Machine Learning, Performance Prediction.
    Abstract:
    We obtain new results using referential translation machines (RTMs) with predictions mixed and stacked to obtain a better mixture of experts prediction. Our super learner results improve over the individual predictions and provide a robust combination model.
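    The stacked combination described in the abstract can be sketched as follows. This is a minimal illustration of stacking two base predictors with an ordinary-least-squares meta-learner (solved in closed form via the 2x2 normal equations), under assumed names and without an intercept; it is not the RTM super learner implementation:

```python
def stack_two(p1, p2, y):
    """Fit meta-weights w1, w2 so that y is approximated by w1*p1 + w2*p2.

    p1, p2 are the base models' predictions on held-out data and y the
    true targets; solves the 2x2 normal equations of ordinary least
    squares (no intercept). Illustrative sketch, not the RTM method.
    """
    a11 = sum(a * a for a in p1)
    a12 = sum(a * b for a, b in zip(p1, p2))
    a22 = sum(b * b for b in p2)
    b1 = sum(a * t for a, t in zip(p1, y))
    b2 = sum(b * t for b, t in zip(p2, y))
    det = a11 * a22 - a12 * a12          # assumes predictors are not collinear
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return w1, w2
```

    The learned weights are then applied to the base predictions on new data; when one expert tracks the target closely, the meta-learner assigns it most of the weight, which is what makes the stacked combination more robust than any fixed mixture.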

    @InProceedings{Bicici:RTM:WMT2021,
    author = {Ergun Bi\c{c}ici},
    title = {{RTM} Super Learner Results at Quality Estimation Task},
    booktitle = {Proc. of the {S}ixth {C}onf. on {M}achine {T}ranslation ({WMT21})},
    month = nov,
    year = {2021},
    address = {Online},
    keywords = {Machine Translation, Machine Learning, Performance Prediction},
    abstract = {We obtain new results using referential translation machines (RTMs) with predictions mixed and stacked to obtain a better mixture of experts prediction. Our super learner results improve over the individual predictions and provide a robust combination model.},
    
    }
    







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

The documents in these directories are made available by the contributing authors to ensure the timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are retained by the authors and by the copyright holders, notwithstanding that they present their works here in electronic form. Persons copying this information must adhere to the terms and constraints covered by each author's copyright. These works may not be made available elsewhere without the explicit permission of the copyright holder.




Last modified: Sun Feb 5 17:37:19 2023
Author: ebicici.


This document was translated from BibTeX by bibtex2html