Papers and publications
Company Information
With more than 50 years of experience in translation technologies, SYSTRAN has pioneered the greatest innovations in the field, including the first web-based translation portals and the first neural translation engines combining artificial intelligence and neural networks for businesses and public organizations.
SYSTRAN provides business users with advanced and secure automated translation solutions in areas such as global collaboration, multilingual content production, customer support, electronic investigation, Big Data analysis, and e-commerce. SYSTRAN offers a tailor-made solution with an open and scalable architecture that enables seamless integration into existing third-party applications and IT infrastructures.
Enhanced Transformer Model for Data-to-Text Generation
Neural models have recently shown significant progress on data-to-text generation tasks, in which descriptive texts are generated conditioned on database records. In this work, we present a new Transformer-based data-to-text generation model which learns content selection and summary generation in an end-to-end fashion. We introduce two extensions to the baseline Transformer model: first, we modify the latent representation of the input, which helps to significantly improve the content correctness of the output summary; second, we include an additional learning objective that accounts for content selection modelling. In addition, we propose two data augmentation methods that further improve the performance of the resulting generation models. Evaluation experiments show that our …
Li Gong, Josep Crego, Jean Senellart
Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 148–156, Association for Computational Linguistics, November 2019, Hong Kong, China
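A minimal PyTorch sketch of the two extensions the abstract describes, under assumptions that are not in the paper: records are (entity, type, value) triples, the "modified latent representation" is a summed per-field embedding, and the content-selection objective is a binary head trained jointly with generation. The actual SYSTRAN architecture may differ.

```python
import torch
import torch.nn as nn

class RecordEncoder(nn.Module):
    """Encodes (entity, type, value) records with summed field embeddings
    and exposes a binary content-selection head as an auxiliary objective."""
    def __init__(self, n_ent, n_type, n_val, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.ent = nn.Embedding(n_ent, d_model)
        self.typ = nn.Embedding(n_type, d_model)
        self.val = nn.Embedding(n_val, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.select = nn.Linear(d_model, 1)  # content-selection head

    def forward(self, ent, typ, val):
        # Summed embeddings form the modified input representation.
        h = self.encoder(self.ent(ent) + self.typ(typ) + self.val(val))
        return h, self.select(h).squeeze(-1)  # decoder memory, selection logits

# Joint training: generation cross-entropy (not shown) plus this auxiliary
# selection loss, where gold labels mark records mentioned in the summary.
enc = RecordEncoder(n_ent=100, n_type=40, n_val=1000)
ent = torch.randint(0, 100, (8, 30))
typ = torch.randint(0, 40, (8, 30))
val = torch.randint(0, 1000, (8, 30))
memory, sel_logits = enc(ent, typ, val)
sel_gold = torch.randint(0, 2, (8, 30)).float()
aux_loss = nn.functional.binary_cross_entropy_with_logits(sel_logits, sel_gold)
```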
SYSTRAN @ WAT 2019: Russian-Japanese News Commentary task
This paper describes SYSTRAN's submissions to the WAT 2019 Russian-Japanese News Commentary task, a challenging translation task due to the extremely low resources available and the distance of the language pair. We used the neural Transformer architecture learned over the provided resources and carried out synthetic data generation experiments aimed at alleviating the data scarcity problem. Results indicate the suitability of the data augmentation experiments, enabling our systems to rank first according to automatic evaluations.
Jitao Xu, TuAnh Nguyen, MinhQuang Pham, Josep Crego, Jean Senellart
Proceedings of the 6th Workshop on Asian Translation, pages 189–194, Association for Computational Linguistics, November 2019, Hong Kong, China
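The abstract names synthetic data generation without fixing a recipe; the sketch below shows back-translation, the standard technique for this setting, as one plausible instance. The `translate` argument is a hypothetical stand-in for any trained target-to-source NMT model; the paper's exact experiments may differ.

```python
from typing import Callable, Iterable

def back_translate(mono_tgt: Iterable[str],
                   translate: Callable[[str], str]) -> list[tuple[str, str]]:
    """Pair each monolingual target sentence with a machine-generated
    source side, yielding synthetic (source, target) training examples."""
    return [(translate(tgt), tgt) for tgt in mono_tgt]

# Synthetic pairs are concatenated with the genuine bitext before training
# the source-to-target system on the combined data.
real_bitext = [("ru sentence", "ja sentence")]
synthetic = back_translate(["ja mono sentence"], translate=lambda s: "<bt> " + s)
train_data = real_bitext + synthetic
```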
SYSTRAN @ WNGT 2019: DGT Task
This paper describes SYSTRAN's participation in the Document-level Generation and Translation (DGT) shared task of the 3rd Workshop on Neural Generation and Translation (WNGT 2019). We participate for the first time, using a Transformer network enhanced with modified input embeddings and optimising an additional objective function that accounts for content selection. The network takes structured data from basketball games as input and outputs a summary of the game in natural language.
Li Gong, Josep Crego, Jean Senellart
Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 262–267, Association for Computational Linguistics, November 2019, Hong Kong, China
SYSTRAN Participation to the WMT2018 Shared Task on Parallel Corpus Filtering
This paper describes the participation of SYSTRAN in the shared task on parallel corpus filtering at the Third Conference on Machine Translation (WMT 2018). We participate for the first time, using a neural sentence similarity classifier which aims at predicting the relatedness of sentence pairs in a multilingual context. The paper describes the main characteristics of our approach and discusses the results obtained on the data sets published for the shared task.
Minh Quang Pham, Josep Crego, Jean Senellart
Third Conference on Machine Translation (WMT18), October 31 – November 1, 2018, Brussels, Belgium
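The paper trains a dedicated neural similarity classifier; as a simpler stand-in, the sketch below filters sentence pairs by cosine similarity over precomputed cross-lingual sentence embeddings. The embeddings here are random placeholder tensors, so the encoder itself is assumed, not shown.

```python
import torch
import torch.nn.functional as F

def filter_bitext(emb_src: torch.Tensor, emb_tgt: torch.Tensor,
                  threshold: float = 0.5) -> torch.Tensor:
    """Return indices of sentence pairs whose cross-lingual cosine
    similarity clears the threshold."""
    sims = F.cosine_similarity(emb_src, emb_tgt, dim=-1)
    return (sims >= threshold).nonzero(as_tuple=True)[0]

emb_src = torch.randn(1000, 512)  # source-side sentence embeddings
emb_tgt = torch.randn(1000, 512)  # target-side sentence embeddings
kept = filter_bitext(emb_src, emb_tgt, threshold=0.5)
```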
Fixing Translation Divergences in Parallel Corpora for Neural MT
Corpus-based approaches to machine translation rely on the availability of clean parallel corpora. Such resources are scarce, and because of the automatic processes involved in their preparation, they are often noisy. This paper describes an unsupervised method for detecting translation divergences in parallel sentences. We rely on a neural network that computes cross-lingual sentence similarity scores, which are then used to effectively filter out divergent translations. Furthermore, similarity scores predicted by the network are used to identify and fix some partial divergences, yielding additional parallel segments. We evaluate these methods for English-French and English-German machine translation …
Minh Quang Pham, Josep Crego, Jean Senellart, François Yvon
2018 Conference on Empirical Methods in Natural Language Processing, October 31 – November 4, 2018, Brussels, Belgium
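A hedged sketch of the "fix partial divergences" idea: split both sides of a divergent pair into sub-segments, score segment combinations with the same similarity function, and keep the best matches as recovered parallel segments. The split heuristic and the `embed` placeholder encoder are assumptions; the paper's actual procedure may differ.

```python
import re
import torch
import torch.nn.functional as F

def embed(text: str) -> torch.Tensor:
    """Placeholder encoder: deterministic random vector per string."""
    torch.manual_seed(hash(text) % (2 ** 31))
    return torch.randn(512)

def fix_divergence(src: str, tgt: str, threshold: float = 0.6):
    """Yield (src_segment, tgt_segment) pairs judged parallel."""
    src_segs = [s for s in re.split(r"[.;:]", src) if s.strip()]
    tgt_segs = [t for t in re.split(r"[.;:]", tgt) if t.strip()]
    for s in src_segs:
        scores = [(F.cosine_similarity(embed(s), embed(t), dim=0).item(), t)
                  for t in tgt_segs]
        score, best = max(scores)
        if score >= threshold:
            yield s.strip(), best.strip()

pairs = list(fix_divergence("A first part. And a second part.",
                            "Une première partie. Et une seconde partie."))
```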
Analyzing Knowledge Distillation in Neural Machine Translation
Knowledge distillation has recently been successfully applied to neural machine translation. It allows for building shrunk networks while the resulting systems retain most of the quality of the original model. Although many authors report on the benefits of knowledge distillation, few works discuss the actual reasons why it works, especially in the context of neural MT. In this paper, we conduct several experiments aimed at understanding why and how distillation impacts accuracy on an English-German translation task. We show that translation complexity is actually reduced when building a distilled/synthesized bi-text compared to the reference bi-text. We further remove noisy data from synthesized translations and merge filtered synthesized …
Dakun Zhang, Josep Crego and Jean Senellart
15th International Workshop on Spoken Language Translation, October 29–30, 2018, Bruges, Belgium
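A sketch of sequence-level distillation as analyzed above: the teacher re-translates the training sources and the student trains on the resulting synthesized bitext, optionally after filtering noisy pairs, as in the paper's experiments. `teacher_translate` and the `score` function are hypothetical stand-ins for real NMT and scoring code.

```python
from typing import Callable

def distill_bitext(sources: list[str],
                   teacher_translate: Callable[[str], str]) -> list[tuple[str, str]]:
    """Replace reference targets with teacher outputs (e.g. beam 1-best)."""
    return [(src, teacher_translate(src)) for src in sources]

def filter_synthetic(bitext, score, min_score=0.0):
    """Drop synthetic pairs whose quality score falls below a floor."""
    return [pair for pair in bitext if score(*pair) >= min_score]

synthetic = distill_bitext(["ein Satz"], teacher_translate=lambda s: "a sentence")
student_data = filter_synthetic(synthetic, score=lambda s, t: 1.0)
```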
OpenNMT System Description for WNMT 2018: 800 words/sec on a single-core CPU
We present a system description of the OpenNMT Neural Machine Translation entry for the WNMT 2018 evaluation. In this work, we developed a heavily optimized NMT inference model targeting a high-performance CPU system. The final system uses a combination of four techniques, all of which lead to significant speed-ups in combination: (a) sequence distillation, (b) architecture modifications, (c) pre-computation, particularly of vocabulary, and (d) CPU-targeted quantization. This work achieved the fastest performance in the shared task and led to the development of new features that have been integrated into OpenNMT and made available to the community.
Jean Senellart, Dakun Zhang, Bo Wang, Guillaume Klein, J.P. Ramatchandirin, Josep Crego, Alexander M. Rush
Published in "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", pages 122-–128, Association for Computational Linguistics, July 20 2018, Melbourne, AustraliaNeural Network Architectures for Arabic Dialect Identification
Neural Network Architectures for Arabic Dialect Identification
SYSTRAN competes this year for the first time in the DSL shared task, in the Arabic Dialect Identification subtask. We participate by training several neural network models, showing that we can obtain competitive results despite the limited amount of training data available for learning. We report our experiments and detail the network architecture and parameters of our three runs: our best performing system consists of a Multi-Input CNN that learns separate embeddings for lexical, phonetic and acoustic input features (F1: 0.5289); we also built a CNN-biLSTM network aimed at capturing both spatial and sequential features directly from speech spectrograms (F1: 0.3894 at submission time, F1: 0.4235 with parameters found later); …
Elise Michon, Minh Quang Pham, Josep Crego, Jean Senellart
Published in "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects", Association for Computational Linguistics, pages 128-–136, August 20 2018, New Mexico, USABoosting Neural Machine Translation [PDF]
Boosting Neural Machine Translation [PDF]
Training efficiency is one of the main problems for Neural Machine Translation (NMT). Deep networks need very large data as well as many training iterations to achieve state-of-the-art performance. This results in very high computation cost, slowing down research and industrialisation. In this paper, we propose to alleviate this problem with several training methods based on data boosting and bootstrapping, with no modifications to the neural network. This imitates the learning process of humans, who typically spend more time when learning “difficult” concepts than easier ones. We experiment on an English-French translation task, showing accuracy improvements of up to 1.63 BLEU while saving 20% of training time.
Dakun Zhang, Jungi Kim, Josep Crego, Jean Senellart
Published in "Proceedings of the Eighth International Joint Conference on Natural Language Processing" (Volume 2: Short Papers), Asian Federation of Natural Language Processing, 2017, Taipei, TaiwanOpenNMT: Open-Source Toolkit for Neural Machine Translation [PDF]
OpenNMT: Open-Source Toolkit for Neural Machine Translation [PDF]
We describe an open-source toolkit for neural machine translation (NMT). The toolkit prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about the underlying techniques.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, Alexander Rush
Published in "Proceedings of ACL 2017, System Demonstrations", pages 67--72, Association for Computational Linguistics, 2017, Vancouver, Canada