BACK TO INDEX

Publications of year 2010
Conference articles
  1. Ergun Biçici and S. Serdar Kozat. Adaptive Model Weighting and Transductive Regression for Predicting Best System Combinations. In ACL 2010 Joint Fifth Workshop on Statistical Machine Translation and Metrics MATR, Uppsala, Sweden, July 2010. [PDF] [PDF] [POSTSCRIPT] Keyword(s): Machine Translation, Machine Learning.
    Abstract:
    We analyze adaptive model weighting techniques for reranking using instance scores obtained by L1 regularized transductive regression. Competitive statistical machine translation is an on-line learning technique for sequential translation tasks where we try to select the best among competing statistical machine translators. The competitive predictor assigns a probability per model weighted by the sequential performance. We define additive, multiplicative, and loss-based weight updates with exponential loss functions for competitive statistical machine translation. Without any prior knowledge of the performance of the translation models, we succeed in achieving the performance of the best model in all systems and surpass their performance in most of the language pairs we considered.

    @InProceedings{BiciciKozat2010:WMT,
    author = {Ergun Bi\c{c}ici and S. Serdar Kozat},
    title = {Adaptive Model Weighting and Transductive Regression for Predicting Best System Combinations},
    booktitle = {ACL 2010 Joint {F}ifth {W}orkshop on {S}tatistical {M}achine {T}ranslation and {M}etrics {MATR}},
    month = jul,
    year = {2010},
    address = {Uppsala, Sweden},
    url = {http://www.aclweb.org/anthology/W/W10/W10-1740.pdf},
    keywords = {Machine Translation, Machine Learning},
    pdf = {bicici.github.io/publications/2010/WMT2010_TRBMT_CSMT.pdf},
    ps = {bicici.github.io/publications/2010/WMT2010_TRBMT_CSMT.ps},
    abstract = {We analyze adaptive model weighting techniques for reranking using instance scores obtained by L1 regularized transductive regression. Competitive statistical machine translation is an on-line learning technique for sequential translation tasks where we try to select the best among competing statistical machine translators. The competitive predictor assigns a probability per model weighted by the sequential performance. We define additive, multiplicative, and loss-based weight updates with exponential loss functions for competitive statistical machine translation. Without any prior knowledge of the performance of the translation models, we succeed in achieving the performance of the best model in all systems and surpass their performance in most of the language pairs we considered.},
    
    }
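
    As a rough illustration of the weight-update idea in the abstract above, the sketch below implements a multiplicative (exponentiated) update that reweights competing translation systems by their observed per-sentence losses. It is a minimal sketch with invented losses and learning rate, not the paper's implementation; the additive and loss-based variants the paper also defines are not shown.

    import math

    def multiplicative_update(weights, losses, eta=0.5):
        """One round of multiplicative (exponentiated) weight updates.

        weights: current probability assigned to each competing system
        losses : per-system loss observed on the current sentence
                 (e.g. 1 - a sentence-level quality score); lower is better
        eta    : learning rate controlling how aggressively weights shift
        """
        # Multiply each weight by an exponential penalty of its loss.
        updated = [w * math.exp(-eta * loss) for w, loss in zip(weights, losses)]
        # Renormalize so the weights remain a probability distribution.
        total = sum(updated)
        return [w / total for w in updated]

    # Hypothetical run: three competing systems, uniform prior,
    # two sentences' worth of invented losses.
    weights = [1.0 / 3] * 3
    for losses in ([0.4, 0.7, 0.5], [0.3, 0.8, 0.6]):
        weights = multiplicative_update(weights, losses)
    print(weights)  # the lowest-loss system accumulates the most probability

    In this sequential setting the mixture gradually concentrates on the best-performing system without any advance knowledge of system quality, which is the behaviour the abstract describes.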
    


  2. Ergun Biçici and Deniz Yuret. L1 Regularization for Learning Word Alignments in Sparse Feature Matrices. In Computer Science Student Workshop, CSW'10, Koc University, Istinye Campus, Istanbul, Turkey, February 2010. [WWW] [PDF] Keyword(s): Machine Translation, Machine Learning.
    Abstract:
    Sparse feature representations can be used in various domains. We compare the effectiveness of L1 regularization techniques for regression to learn mappings between features given in a sparse feature matrix. We apply these techniques to learning word alignments commonly used for machine translation. The performance of the learned mappings is measured using the phrase table generated on a larger corpus by a state-of-the-art word aligner. The results show the effectiveness of using L1 regularization versus the L2 regularization used in ridge regression.

    @InProceedings{BiciciYuret2010:CSW,
    author = {Ergun Bi\c{c}ici and Deniz Yuret},
    title = {L1 Regularization for Learning Word Alignments in Sparse Feature Matrices},
    booktitle = {Computer Science Student Workshop, CSW'10},
    month = feb,
    year = {2010},
    address = {Koc University, Istinye Campus, Istanbul, Turkey},
    url = {http://myweb.sabanciuniv.edu/csw/},
    keywords = {Machine Translation, Machine Learning},
    pdf = {http://bicici.github.io/publications/2010/L1forWordAlignment.pdf},
    abstract = {Sparse feature representations can be used in various domains. We compare the effectiveness of L1 regularization techniques for regression to learn mappings between features given in a sparse feature matrix. We apply these techniques to learning word alignments commonly used for machine translation. The performance of the learned mappings is measured using the phrase table generated on a larger corpus by a state-of-the-art word aligner. The results show the effectiveness of using L1 regularization versus the L2 regularization used in ridge regression.},
    
    }
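
    A rough sketch of the L1-versus-L2 comparison described above, using scikit-learn on a synthetic sparse feature matrix. The data, dimensions, and regularization strengths are invented for illustration and are not the paper's experimental setup; the sketch only shows why L1 yields sparse mappings while L2 (ridge) does not.

    import numpy as np
    from scipy.sparse import random as sparse_random
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.RandomState(0)

    # Synthetic sparse source-feature matrix (rows: instances, columns: features).
    X = sparse_random(200, 500, density=0.02, random_state=rng, format="csr")

    # Target built from only a handful of features, mimicking a sparse
    # source-to-target mapping of the kind learned for word alignment.
    true_coef = np.zeros(500)
    true_coef[:10] = rng.randn(10)
    y = X @ true_coef + 0.01 * rng.randn(200)

    lasso = Lasso(alpha=0.001, max_iter=10000).fit(X, y)   # L1 regularization
    ridge = Ridge(alpha=1.0).fit(X, y)                     # L2 regularization

    # L1 drives most coefficients exactly to zero, recovering a sparse mapping;
    # L2 only shrinks them, keeping nearly all of them non-zero.
    print("non-zero lasso coefficients:", int(np.sum(lasso.coef_ != 0)))
    print("non-zero ridge coefficients:", int(np.sum(np.abs(ridge.coef_) > 1e-8)))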
    


  3. Ergun Biçici and Deniz Yuret. L1 Regularized Regression for Reranking and System Combination in Machine Translation. In ACL 2010 Joint Fifth Workshop on Statistical Machine Translation and Metrics MATR, Uppsala, Sweden, July 2010. [PDF] [PDF] [POSTSCRIPT] Keyword(s): Machine Translation, Machine Learning.
    Abstract:
    We use L1 regularized transductive regression to learn mappings between source and target features of the training sets derived for each test sentence and use these mappings to rerank translation outputs. We compare the effectiveness of L1 regularization techniques for regression to learn mappings between features given in a sparse feature matrix. The results show the effectiveness of using L1 regularization versus the L2 regularization used in ridge regression. We show that regression mapping is effective in reranking translation outputs and in selecting the best system combinations, with encouraging results on different language pairs.

    @InProceedings{BiciciYuret2010:WMT,
    author = {Ergun Bi\c{c}ici and Deniz Yuret},
    title = {L1 Regularized Regression for Reranking and System Combination in Machine Translation},
    booktitle = {ACL 2010 Joint {F}ifth {W}orkshop on {S}tatistical {M}achine {T}ranslation and {M}etrics {MATR}},
    month = jul,
    year = {2010},
    address = {Uppsala, Sweden},
    url = {http://www.aclweb.org/anthology/W/W10/W10-1741.pdf},
    keywords = {Machine Translation, Machine Learning},
    pdf = {bicici.github.io/publications/2010/WMT2010_L1forSMTReranking.pdf},
    ps = {bicici.github.io/publications/2010/WMT2010_L1forSMTReranking.ps},
    abstract = {We use L1 regularized transductive regression to learn mappings between source and target features of the training sets derived for each test sentence and use these mappings to rerank translation outputs. We compare the effectiveness of L1 regularization techniques for regression to learn mappings between features given in a sparse feature matrix. The results show the effectiveness of using L1 regularization versus the L2 regularization used in ridge regression. We show that regression mapping is effective in reranking translation outputs and in selecting the best system combinations, with encouraging results on different language pairs.},
    
    }
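
    The reranking step described in the abstract above can be pictured as: learn a regression mapping from source-side features to target-side features on a training set, predict the target features of the test sentence, and keep the candidate translation whose features are closest to that prediction. The sketch below is a hypothetical, simplified version of this idea: the feature vectors are random placeholders, the score is plain Euclidean distance, and scikit-learn's Lasso stands in for the paper's L1 regularized transductive regression.

    import numpy as np
    from sklearn.linear_model import Lasso

    def rerank(train_src, train_tgt, test_src, candidate_tgt, alpha=0.01):
        """Select the candidate whose target features best match the
        features predicted from the test sentence's source features.

        train_src     : (n_train, d_src) source feature vectors
        train_tgt     : (n_train, d_tgt) target feature vectors
        test_src      : (d_src,)         features of the test source sentence
        candidate_tgt : (n_cand, d_tgt)  features of each candidate translation
        """
        # L1 regularized regression from source features to target features.
        model = Lasso(alpha=alpha, max_iter=5000).fit(train_src, train_tgt)
        predicted = model.predict(test_src.reshape(1, -1))[0]
        # Score candidates by distance to the predicted target feature vector.
        distances = np.linalg.norm(candidate_tgt - predicted, axis=1)
        return int(np.argmin(distances))

    # Hypothetical toy run with random feature vectors.
    rng = np.random.RandomState(0)
    best = rerank(rng.rand(50, 20), rng.rand(50, 30), rng.rand(20), rng.rand(5, 30))
    print("selected candidate:", best)

    The same distance score can be computed for outputs coming from different systems, so that the closest output is selected regardless of which system produced it, which corresponds to the kind of system selection the abstract mentions.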
    







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

The documents in these directories are made available by the contributing authors to ensure the timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and other rights are retained by the authors and by the copyright holders, even though they present their work here in electronic form. Persons copying this information must adhere to the terms and constraints covered by each author's copyright. These works may not be made available elsewhere without the explicit permission of the copyright holder.




Last modified: Sun Feb 5 17:37:19 2023
Author: ebicici.


This document was translated from BibTeX by bibtex2html