
Publications of year 2015
Articles in journal, book chapters
  1. Ergun Biçici. Domain Adaptation for Machine Translation with Instance Selection. The Prague Bulletin of Mathematical Linguistics, 103:5-20, 2015. ISSN: 1804-0462. [doi:10.1515/pralin-2015-0001] Keyword(s): Machine Translation, Machine Learning, Domain Adaptation.
    Abstract:
    Domain adaptation for machine translation (MT) can be achieved by selecting training instances close to the test set from a larger set of instances. We consider $7$ different domain adaptation strategies and answer $7$ research questions, which give us a recipe for domain adaptation in MT. We perform English to German statistical MT (SMT) experiments in a setting where test and training sentences can come from different corpora and one of our goals is to learn the parameters of the sampling process. Domain adaptation with training instance selection can obtain a $22\%$ increase in target $2$-gram recall and can gain up to $3.55$ BLEU points compared with random selection. Domain adaptation with the feature decay algorithm (FDA) not only achieves the highest target $2$-gram recall and BLEU performance but also perfectly learns the test sample distribution parameter with correlation $0.99$. Moses SMT systems built with 10K training sentences selected by FDA are able to obtain $F_1$ results as good as the baselines that use up to 2M sentences. Moses SMT systems built with 50K training sentences selected by FDA are able to obtain $F_1$ results 1 point better than the baselines.

    @article{Bicici:DomainFDA:PBML2015,
    author = {Ergun Bi\c{c}ici},
    title = {Domain Adaptation for Machine Translation with Instance Selection},
    journal = {The Prague Bulletin of Mathematical Linguistics},
    year = {2015},
    volume = {103},
    pages = {5--20},
    issn = {1804-0462},
    doi = {10.1515/pralin-2015-0001},
    keywords = {Machine Translation, Machine Learning, Domain Adaptation},
    abstract = {Domain adaptation for machine translation (MT) can be achieved by selecting training instances close to the test set from a larger set of instances. We consider $7$ different domain adaptation strategies and answer $7$ research questions, which give us a recipe for domain adaptation in MT. We perform English to German statistical MT (SMT) experiments in a setting where test and training sentences can come from different corpora and one of our goals is to learn the parameters of the sampling process. Domain adaptation with training instance selection can obtain a $22\%$ increase in target $2$-gram recall and can gain up to $3.55$ BLEU points compared with random selection. Domain adaptation with the feature decay algorithm (FDA) not only achieves the highest target $2$-gram recall and BLEU performance but also perfectly learns the test sample distribution parameter with correlation $0.99$. Moses SMT systems built with 10K training sentences selected by FDA are able to obtain $F_1$ results as good as the baselines that use up to 2M sentences. Moses SMT systems built with 50K training sentences selected by FDA are able to obtain $F_1$ results 1 point better than the baselines.},
    
    }
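
    A minimal Python sketch of the greedy feature decay selection loop described in the abstract above. The unit feature initialization, the fixed decay factor of 0.5, and the simple length normalization are illustrative simplifications, not the optimized settings from the paper:

    def ngrams(tokens, n_max=2):
        """All 1..n_max grams of a token list, as tuples."""
        return [tuple(tokens[i:i + n]) for n in range(1, n_max + 1)
                for i in range(len(tokens) - n + 1)]

    def fda_select(test_sentences, train_sentences, k, n_max=2, decay=0.5):
        """Greedy feature decay selection: pick k training sentences whose
        test-set n-grams are still valuable, devaluing a feature each time
        it is covered so that later picks favour unseen n-grams."""
        # Initial feature values: 1.0 for every n-gram occurring in the test set.
        value = {f: 1.0 for s in test_sentences for f in ngrams(s.split(), n_max)}
        pool = [s.split() for s in train_sentences]
        selected = []
        for _ in range(min(k, len(pool))):
            # Re-score the remaining pool each round (a real implementation
            # keeps a priority queue with lazy updates instead).
            def score(toks):
                return sum(value.get(f, 0.0) for f in ngrams(toks, n_max)) / max(len(toks), 1)
            best = max(range(len(pool)), key=lambda i: score(pool[i]))
            toks = pool.pop(best)
            selected.append(" ".join(toks))
            for f in set(ngrams(toks, n_max)):
                if f in value:
                    value[f] *= decay  # devalue the features just covered

        return selected

    A call such as fda_select(test, train, 10000) mirrors the 10K-sentence selection setting reported in the abstract.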
    


  2. Ergun Biçici and Lucia Specia. QuEst for High Quality Machine Translation. The Prague Bulletin of Mathematical Linguistics, 103:43-64, 2015. ISSN: 1804-0462. [doi:10.1515/pralin-2015-0003] Keyword(s): Machine Translation, Machine Learning, Performance Prediction.
    Abstract:
    In this paper we describe the use of QuEst, a framework that aims to obtain predictions on the quality of translations, to improve the performance of machine translation (MT) systems without changing their internal functioning. We apply QuEst to experiments with: i. multiple system translation ranking, where translations produced by different MT systems are ranked according to their estimated quality, leading to gains of up to $2.72$ BLEU, $3.66$ BLEUs, and $2.17$ $F_1$ points; ii. n-best list re-ranking, where n-best list translations produced by an MT system are re-ranked based on predicted quality scores so that the best translation is ranked at the top, which leads to improvements of $0.41$ points in sentence NIST score; iii. n-best list combination, where segments from an n-best list are combined using a lattice-based re-scoring approach that minimizes word error, obtaining gains of $0.28$ BLEU points; and iv. the ITERPE strategy, which attempts to identify translation errors regardless of prediction errors (ITERPE) and build sentence-specific SMT systems (SSSS) on the ITERPE sorted instances identified as having more potential for improvement, achieving gains of up to $1.43$ BLEU, $0.54$ $F_1$, $2.9$ NIST, $0.64$ sentence BLEU, and $4.7$ sentence NIST points in English to German over the top $100$ ITERPE sorted instances.

    @article{Bicici:Quest:PBML2015,
    author = {Ergun Bi\c{c}ici and Lucia Specia},
    title = {QuEst for High Quality Machine Translation},
    journal = {The Prague Bulletin of Mathematical Linguistics},
    year = {2015},
    volume = {103},
    pages = {43--64},
    issn = {1804-0462},
    doi = {10.1515/pralin-2015-0003},
    keywords = {Machine Translation, Machine Learning, Performance Prediction},
    abstract = {In this paper we describe the use of \textsc{QuEst}, a framework that aims to obtain predictions on the quality of translations, to improve the performance of machine translation (MT) systems without changing their internal functioning. We apply \textsc{QuEst} to experiments with: \begin{enumerate}[label=\roman*.] 
    
    \item multiple system translation ranking, where translations produced by different MT systems are ranked according to their estimated quality, leading to gains of up to $2.72$ BLEU, $3.66$ BLEUs, and $2.17$ $F_1$ points; 
    
    \item n-best list re-ranking, where n-best list translations produced by an MT system are re-ranked based on predicted quality scores so that the best translation is ranked at the top, which leads to improvements of $0.41$ points in sentence NIST score; 
    
    \item n-best list combination, where segments from an n-best list are combined using a lattice-based re-scoring approach that minimizes word error, obtaining gains of $0.28$ BLEU points; and 
    
    \item the ITERPE strategy, which attempts to identify translation errors regardless of prediction errors (ITERPE) and build sentence-specific SMT systems (SSSS) on the ITERPE sorted instances identified as having more potential for improvement, achieving gains of up to $1.43$ BLEU, $0.54$ $F_1$, $2.9$ NIST, $0.64$ sentence BLEU, and $4.7$ sentence NIST points in English to German over the top $100$ ITERPE sorted instances. \end{enumerate}},
    
    }
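
    The n-best list re-ranking of item ii. amounts to scoring each hypothesis with a learned quality predictor and sorting. A minimal Python sketch with scikit-learn; the four features below are toy stand-ins for QuEst's much richer feature set, and the choice of a random forest regressor is illustrative:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def qe_features(source, hypothesis):
        """Toy quality estimation features for a (source, hypothesis) pair."""
        src, hyp = source.split(), hypothesis.split()
        return [len(src), len(hyp),
                len(hyp) / max(len(src), 1),       # length ratio
                len(set(hyp)) / max(len(hyp), 1)]  # type/token ratio

    def train_qe(pairs, scores):
        """Fit a regressor from (source, hypothesis) features to quality scores."""
        X = np.array([qe_features(s, h) for s, h in pairs])
        return RandomForestRegressor(n_estimators=100, random_state=0).fit(X, np.array(scores))

    def rerank(model, source, nbest):
        """Order an n-best list by predicted quality, best hypothesis first."""
        preds = model.predict(np.array([qe_features(source, h) for h in nbest]))
        return [h for _, h in sorted(zip(preds, nbest), key=lambda t: -t[0])]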
    


  3. Ergun Biçici and Andy Way. Referential translation machines for predicting semantic similarity. Language Resources and Evaluation, pp. 1-27, 2015. ISSN: 1574-020X. [doi:10.1007/s10579-015-9322-7] Keyword(s): Referential translation machine, RTM, Semantic similarity, Machine translation, Performance prediction, Machine translation performance prediction.
    Abstract:
    Referential translation machines (RTMs) are a computational model effective at judging monolingual and bilingual similarity while identifying translation acts between any two data sets with respect to interpretants. RTMs pioneer a language-independent approach to all similarity tasks and remove the need to access any task or domain-specific information or resource. We use RTMs for predicting the semantic similarity of text and present state-of-the-art results showing that RTMs can achieve better results on the test set than on the training set. RTMs judge the quality or the semantic similarity of texts by using relevant retrieved training data as interpretants for reaching shared semantics. Interpretants are used to derive features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of the acts of translation, which may ubiquitously be observed in communication. RTMs achieve top performance at SemEval in various semantic similarity prediction tasks as well as similarity prediction tasks in bilingual settings. We define MAER, mean absolute error relative to the magnitude of the target, and MRAER, mean absolute error relative to the absolute error of a predictor always predicting the target mean, assuming that the target mean is known. RTM test performance on various tasks sorted according to MRAER can help identify which tasks and subtasks require more work by design.

    @article{Bicici:RTM_SEMEVAL,
    author = {Ergun Bi\c{c}ici and Andy Way},
    title = {Referential translation machines for predicting semantic similarity},
    year = {2015},
    pages = {1--27},
    issn = {1574-020X},
    journal = {Language Resources and Evaluation},
    doi = {10.1007/s10579-015-9322-7},
    keywords = {Referential translation machine; RTM; Semantic similarity; Machine translation; Performance prediction; Machine translation performance prediction},
    abstract = {Referential translation machines (RTMs) are a computational model effective at judging monolingual and bilingual similarity while identifying translation acts between any two data sets with respect to interpretants. RTMs pioneer a language-independent approach to all similarity tasks and remove the need to access any task or domain-specific information or resource. We use RTMs for predicting the semantic similarity of text and present state-of-the-art results showing that RTMs can achieve better results on the test set than on the training set. RTMs judge the quality or the semantic similarity of texts by using relevant retrieved training data as interpretants for reaching shared semantics. Interpretants are used to derive features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of the acts of translation, which may ubiquitously be observed in communication. RTMs achieve top performance at SemEval in various semantic similarity prediction tasks as well as similarity prediction tasks in bilingual settings. We define MAER, mean absolute error relative to the magnitude of the target, and MRAER, mean absolute error relative to the absolute error of a predictor always predicting the target mean, assuming that the target mean is known. RTM test performance on various tasks sorted according to MRAER can help identify which tasks and subtasks require more work by design.},
    
    }
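
    The abstract's verbal definitions of MAER and MRAER translate directly into code. A sketch in Python with NumPy; a small epsilon floor is used on the denominators where the paper applies a more careful floor to avoid division by near-zero values:

    import numpy as np

    def maer(y_true, y_pred, eps=1e-9):
        """Mean absolute error relative to the magnitude of each target."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        return float(np.mean(np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), eps)))

    def mraer(y_true, y_pred, eps=1e-9):
        """Mean absolute error relative to the error of a predictor that
        always predicts the target mean; values below 1 indicate the model
        beats that mean predictor."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        baseline = np.abs(y_true - y_true.mean())
        return float(np.mean(np.abs(y_pred - y_true) / np.maximum(baseline, eps)))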
    


  4. Ergun Biçici and Deniz Yuret. Optimizing Instance Selection for Statistical Machine Translation with Feature Decay Algorithms. IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 23(2):339-350, 2015. [doi:10.1109/TASLP.2014.2381882] Keyword(s): Machine Translation, Machine Learning, Artificial Intelligence, Natural Language Processing.
    Abstract:
    We introduce FDA5 for efficient parameterization, optimization, and implementation of feature decay algorithms (FDA), a class of instance selection algorithms that use feature decay. FDA increase the diversity of the selected training set by devaluing features (i.e., n-grams) that have already been included. FDA5 decides which instances to select based on three functions used for initializing and decaying feature values and scaling sentence scores controlled with $5$ parameters. We present optimization techniques that allow FDA5 to adapt these functions to in-domain and out-of-domain translation tasks for different language pairs. In a transductive learning setting, selection of training instances relevant to the test set can improve the final translation quality. In machine translation experiments performed on the $2$ million sentence English-German section of the Europarl corpus, we show that a subset of the training set selected by FDA5 can gain up to $3.22$ BLEU points compared to a randomly selected subset of the same size, can gain up to $0.41$ BLEU points compared to using all of the available training data while using only $15\%$ of it, and can reach within $0.5$ BLEU points of the full training set result by using only $2.7\%$ of the full training data. FDA5 peaks at around 8M words or $15\%$ of the full training set. In an active learning setting, FDA5 minimizes the human effort by identifying the most informative sentences for translation, and FDA gains up to $0.45$ BLEU points using $3/5$ of the available training data compared to using all of it and $1.12$ BLEU points compared to a random training set. In translation tasks involving English and Turkish, a morphologically rich language, FDA5 can gain up to $11.52$ BLEU points compared to a randomly selected subset of the same size, can achieve the same BLEU score using as little as $4\%$ of the data compared to random instance selection, and can exceed the full dataset result by $0.78$ BLEU points. FDA5 is able to reduce the time to build a statistical machine translation system to about half with 1M words, using only $3\%$ of the space for the phrase table and $8\%$ of the overall space when compared with a baseline system using all of the training data available, yet still obtains results within $0.58$ BLEU points of the baseline system in out-of-domain translation.

    @article{BiciciYuret:FDA5:TASLP,
    author = {Ergun Bi\c{c}ici and Deniz Yuret},
    title = {Optimizing Instance Selection for Statistical Machine Translation with Feature Decay Algorithms},
    journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)},
    volume = {23},
    issue = {2},
    pages = {339--350},
    year = {2015},
    doi = {10.1109/TASLP.2014.2381882},
    keywords = "Machine Translation, Machine Learning, Artificial Intelligence, Natural Language Processing",
    abstract = {We introduce FDA5 for efficient parameterization, optimization, and implementation of feature decay algorithms (FDA), a class of instance selection algorithms that use feature decay. FDA increase the diversity of the selected training set by devaluing features (i.e., n-grams) that have already been included. FDA5 decides which instances to select based on three functions used for initializing and decaying feature values and scaling sentence scores controlled with $5$ parameters. We present optimization techniques that allow FDA5 to adapt these functions to in-domain and out-of-domain translation tasks for different language pairs. In a transductive learning setting, selection of training instances relevant to the test set can improve the final translation quality. In machine translation experiments performed on the $2$ million sentence English-German section of the Europarl corpus, we show that a subset of the training set selected by FDA5 can gain up to $3.22$ BLEU points compared to a randomly selected subset of the same size, can gain up to $0.41$ BLEU points compared to using all of the available training data while using only $15\%$ of it, and can reach within $0.5$ BLEU points of the full training set result by using only $2.7\%$ of the full training data. FDA5 peaks at around 8M words or $15\%$ of the full training set. In an active learning setting, FDA5 minimizes the human effort by identifying the most informative sentences for translation, and FDA gains up to $0.45$ BLEU points using $3/5$ of the available training data compared to using all of it and $1.12$ BLEU points compared to a random training set. In translation tasks involving English and Turkish, a morphologically rich language, FDA5 can gain up to $11.52$ BLEU points compared to a randomly selected subset of the same size, can achieve the same BLEU score using as little as $4\%$ of the data compared to random instance selection, and can exceed the full dataset result by $0.78$ BLEU points. FDA5 is able to reduce the time to build a statistical machine translation system to about half with 1M words, using only $3\%$ of the space for the phrase table and $8\%$ of the overall space when compared with a baseline system using all of the training data available, yet still obtains results within $0.58$ BLEU points of the baseline system in out-of-domain translation.},
    
    }
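
    The abstract mentions three functions (feature value initialization, feature decay, and sentence score scaling) controlled by $5$ parameters. The Python sketch below shows one plausible shape for these functions: idf-like initialization with an exponent, combined exponential and polynomial decay, and a sentence-length exponent. The parameter names and default values are illustrative, not the paper's optimized settings:

    import math

    # The five FDA5 parameters (placeholder values, not optimized settings):
    N = 2      # maximum n-gram length of the features
    I = 1.0    # initialization exponent
    D = 0.5    # exponential decay factor
    C = 0.0    # polynomial decay power
    S = 1.0    # sentence-length scaling exponent

    def init_value(corpus_count, corpus_size):
        """Initial feature value, idf-like with exponent I."""
        return math.log(corpus_size / max(corpus_count, 1)) ** I

    def decayed_value(init, times_covered):
        """Feature value after being covered `times_covered` times in the
        selected set: exponential decay via D, polynomial decay via C."""
        return init * (D ** times_covered) / (1 + times_covered) ** C

    def sentence_score(feature_values, sentence_length):
        """Sum of the current feature values, scaled by length ** S."""
        return sum(feature_values) / max(sentence_length, 1) ** S

    Optimizing (N, I, D, C, S) per task and language pair is what lets the same selection machinery adapt to in-domain and out-of-domain translation, as the abstract describes.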
    


Conference articles
  1. Ergun Biçici. RTM-DCU: Predicting Semantic Similarity with Referential Translation Machines. In SemEval-2015: Semantic Evaluation Exercises - International Workshop on Semantic Evaluation, Denver, CO, USA, pages 56-63, June 2015. [WWW] Keyword(s): Machine Translation, Machine Learning, Performance Prediction, Semantic Similarity.
    Abstract:
    We use referential translation machines (RTMs) for predicting the semantic similarity of text. RTMs are a computational model effectively judging monolingual and bilingual similarity while identifying translation acts between any two data sets with respect to interpretants. RTMs pioneer a language independent approach to all similarity tasks and remove the need to access any task or domain specific information or resource. RTMs became the 2nd system out of 13 systems participating in Paraphrase and Semantic Similarity in Twitter, 6th out of 16 submissions in Semantic Textual Similarity Spanish, and 50th out of 73 submissions in Semantic Textual Similarity English.

    @InProceedings{Bicici:RTM:SEMEVAL2015,
    author = {Ergun Bi\c{c}ici},
    title = {{RTM-DCU}: Predicting Semantic Similarity with Referential Translation Machines},
    booktitle = {{SemEval-2015}: Semantic Evaluation Exercises - International Workshop on Semantic Evaluation},
    day = {4-5},
    month = {6},
    year = {2015},
    pages = {56--63},
    address = {Denver, CO, USA},
    url = {https://aclanthology.info/papers/S15-2010/s15-2010},
    abstract = {We use referential translation machines (RTMs) for predicting the semantic similarity of text. RTMs are a computational model effectively judging monolingual and bilingual similarity while identifying translation acts between any two data sets with respect to interpretants. RTMs pioneer a language independent approach to all similarity tasks and remove the need to access any task or domain specific information or resource. RTMs became the 2nd system out of 13 systems participating in Paraphrase and Semantic Similarity in Twitter, 6th out of 16 submissions in Semantic Textual Similarity Spanish, and 50th out of 73 submissions in Semantic Textual Similarity English.},
    keywords = "Machine Translation, Machine Learning, Performance Prediction, Semantic Similarity",
    
    }
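
    The RTM recipe these abstracts describe (retrieve interpretants from the training data, derive closeness features from them, and train a supervised predictor) can be illustrated as below in Python with scikit-learn. The three overlap features are drastic simplifications of RTM's real feature set, and the interpretants would in practice be selected with an instance selection method such as FDA rather than given directly:

    import numpy as np
    from sklearn.svm import SVR

    def ngram_set(text, n=2):
        """Unigrams plus n-grams of a text, as a set."""
        toks = text.split()
        return set(toks) | {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    def rtm_features(pair, interpretants):
        """Closeness of each side of a text pair to the interpretants, plus
        the pair's own overlap -- stand-ins for RTM's richer features."""
        a, b = ngram_set(pair[0]), ngram_set(pair[1])
        ref = set().union(*(ngram_set(s) for s in interpretants))
        def cov(x):  # fraction of x's n-grams seen in the interpretants
            return len(x & ref) / max(len(x), 1)
        jaccard = len(a & b) / max(len(a | b), 1)
        return [cov(a), cov(b), jaccard, abs(len(a) - len(b))]

    def train_rtm(pairs, scores, interpretants):
        """Fit a support vector regressor from pair features to similarity scores."""
        X = np.array([rtm_features(p, interpretants) for p in pairs])
        return SVR(kernel="rbf").fit(X, np.array(scores))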
    


  2. Ergun Biçici, Qun Liu, and Andy Way. ParFDA for Fast Deployment of Accurate Statistical Machine Translation Systems, Benchmarks, and Statistics. In Tenth Workshop on Statistical Machine Translation, Lisbon, Portugal, pages 74-78, September 2015. [WWW] Keyword(s): Machine Translation, Machine Learning, Language Modeling.
    Abstract:
    We build parallel FDA5 (ParFDA) Moses statistical machine translation (SMT) systems for all language pairs in the workshop on statistical machine translation (WMT15) translation task and obtain results close to the top, with an average difference of $3.176$ BLEU points, while using significantly fewer resources to build the SMT systems. ParFDA is a parallel implementation of feature decay algorithms (FDA) developed for fast deployment of accurate SMT systems. The ParFDA Moses SMT system we built is able to obtain the top TER performance in French to English translation. We make the data for building ParFDA Moses SMT systems for WMT15 available: github.com/bicici/ParFDAWMT15.

    @InProceedings{Bicici:ParFDA:WMT2015,
    author = {Ergun Bi\c{c}ici and Qun Liu and Andy Way},
    title = {{ParFDA} for Fast Deployment of Accurate Statistical Machine Translation Systems, Benchmarks, and Statistics},
    booktitle = {{T}enth {W}orkshop on {S}tatistical {M}achine {T}ranslation},
    month = {9},
    year = {2015},
    address = {Lisbon, Portugal},
    pages = {74--78},
    url = {https://aclanthology.info/papers/W15-3005/w15-3005},
    keywords = "Machine Translation, Machine Learning, Language Modeling",
    abstract = {We build parallel FDA5 (ParFDA) Moses statistical machine translation (SMT) systems for all language pairs in the workshop on statistical machine translation~\cite{WMT2015} (WMT15) translation task and obtain results close to the top, with an average difference of $3.176$ BLEU points, while using significantly fewer resources to build the SMT systems. ParFDA is a parallel implementation of feature decay algorithms (FDA) developed for fast deployment of accurate SMT systems. The ParFDA Moses SMT system we built is able to obtain the top TER performance in French to English translation. We make the data for building ParFDA Moses SMT systems for WMT15 available: \url{github.com/bicici/ParFDAWMT15}.},
    
    }
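
    The abstract describes ParFDA only as a parallel implementation of FDA. The shard-and-reselect scheme below is one plausible way to parallelize the fda_select sketch from the first journal entry, not the paper's actual design; it assumes fda_select is defined at module level in the same file:

    from multiprocessing import Pool

    def parfda_select(test_sentences, train_sentences, k, workers=4):
        """Run FDA selection on disjoint shards of the training data in
        parallel, then run a final FDA pass over the union of the shard
        selections. Guard the call with `if __name__ == "__main__"` on
        platforms that spawn worker processes."""
        shards = [train_sentences[i::workers] for i in range(workers)]
        per_shard = max(k // workers, 1)
        with Pool(workers) as pool:
            # Over-select per shard so the final pass has slack to choose from.
            parts = pool.starmap(
                fda_select,
                [(test_sentences, shard, 2 * per_shard) for shard in shards])
        merged = [s for part in parts for s in part]
        return fda_select(test_sentences, merged, k)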
    


  3. Ergun Biçici, Qun Liu, and Andy Way. Referential Translation Machines for Predicting Translation Quality and Related Statistics. In Tenth Workshop on Statistical Machine Translation, Lisbon, Portugal, pages 304-308, September 2015. [WWW] Keyword(s): Machine Translation, Machine Learning, Performance Prediction.
    Abstract:
    We use referential translation machines (RTMs) for predicting translation performance. RTMs pioneer a language independent approach to all similarity tasks and remove the need to access any task or domain specific information or resource. We improve our RTM models with the ParFDA instance selection model, with additional features for predicting the translation performance, and with improved learning models. We develop RTM models for each WMT15 QET (QET15) subtask and obtain improvements over QET14 results. RTMs achieve top performance in QET15, ranking $1$st in the document- and sentence-level prediction tasks and $2$nd in the word-level prediction task.

    @InProceedings{Bicici:RTM:WMT2015,
    author = {Ergun Bi\c{c}ici and Qun Liu and Andy Way},
    title = {Referential Translation Machines for Predicting Translation Quality and Related Statistics},
    booktitle = {{T}enth {W}orkshop on {S}tatistical {M}achine {T}ranslation},
    month = {9},
    year = {2015},
    address = {Lisbon, Portugal},
    pages = {304--308},
    url = {https://aclanthology.info/papers/W15-3035/w15-3035},
    keywords = "Machine Translation, Machine Learning, Performance Prediction",
    abstract = {We use referential translation machines (RTMs) for predicting translation performance. RTMs pioneer a language independent approach to all similarity tasks and remove the need to access any task or domain specific information or resource. We improve our RTM models with the ParFDA instance selection model~\cite{Bicici:FDA54FDA:WMT15}, with additional features for predicting the translation performance, and with improved learning models. We develop RTM models for each WMT15 QET (QET15) subtask and obtain improvements over QET14 results. RTMs achieve top performance in QET15, ranking $1$st in the document- and sentence-level prediction tasks and $2$nd in the word-level prediction task.},
    
    }
    


  4. Antonio Toral, Xiaofeng Wu, Tommi Pirinen, Zhengwei Qiu, Ergun Bicici, and Jinhua Du. Dublin City University at the TweetMT 2015 Shared Task. In La Sociedad Española para el Procesamiento del Lenguaje Natural (SEPLN), Alicante, Spain, pages 33-39, 2015.
    @inproceedings{Bicici2015:TweetMT,
    title = {{D}ublin {C}ity {U}niversity at the {TweetMT} 2015 Shared Task},
    author = {Toral, Antonio and Wu, Xiaofeng and Pirinen, Tommi and Qiu, Zhengwei and Bicici, Ergun and Du, Jinhua},
    booktitle = {La Sociedad Española para el Procesamiento del Lenguaje Natural ({SEPLN})},
    year = {2015},
    pages = {33--39},
    address = {Alicante, Spain},
    
    }
    







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.





Last modified: Sun Feb 5 17:37:19 2023
Author: ebicici.


This document was translated from BibTeX by bibtex2html