News and Publications
About SYSTRAN
With more than 50 years of experience in translation technologies, SYSTRAN has pioneered the greatest innovations in the field, including the first web-based translation portals and the first neural translation engines combining artificial intelligence and neural networks for businesses and public organizations.
SYSTRAN provides business users with advanced and secure automated translation solutions in areas such as global collaboration, multilingual content production, customer support, eDiscovery, Big Data analysis, e-commerce and more. SYSTRAN offers tailor-made solutions built on an open and scalable architecture that integrates seamlessly with existing third-party applications and IT infrastructures.
Enhanced Transformer Model for Data-to-Text Generation
Neural models have recently shown significant progress on data-to-text generation tasks in which descriptive texts are generated conditioned on database records. In this work, we present a new Transformer-based data-to-text generation model which learns content selection and summary generation in an end-to-end fashion. We introduce two extensions to the baseline transformer model: First, we modify … (continued)
Li Gong, Josep Crego, Jean Senellart
Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 148--156, Association for Computational Linguistics, November 2019, Hong Kong, China
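For readers unfamiliar with data-to-text generation, the sketch below illustrates the general idea of linearising database records into a flat token sequence before feeding them to a sequence-to-sequence model such as a Transformer. It is a minimal, hypothetical example, not the model described in the paper; the field names and data are invented.

```python
# Minimal sketch: linearise {entity, attribute, value} records into one
# token sequence that a seq2seq model could consume. Hypothetical data.

def linearize_records(records):
    """Turn a list of record dicts into a single flat token sequence."""
    tokens = []
    for rec in records:
        tokens += ["<record>", rec["entity"], rec["attribute"], str(rec["value"])]
    tokens.append("<eos>")
    return tokens

records = [
    {"entity": "Raptors", "attribute": "points", "value": 122},
    {"entity": "Raptors", "attribute": "wins", "value": 4},
]
print(" ".join(linearize_records(records)))
# <record> Raptors points 122 <record> Raptors wins 4 <eos>
```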
SYSTRAN @ WAT 2019: Russian-Japanese News Commentary task
This paper describes Systran's submissions to the WAT 2019 Russian-Japanese News Commentary task, a challenging translation task due to the extremely low resources available and the distance of the language pair. We used the neural Transformer architecture learned over the provided resources and carried out synthetic data generation experiments which aim at alleviating the … (continued)
Jitao Xu, TuAnh Nguyen, MinhQuang Pham, Josep Crego, Jean Senellart
Proceedings of the 6th Workshop on Asian Translation, pages 189--194, Association for Computational Linguistics, November 2019, Hong Kong, China
SYSTRAN @ WNGT 2019: DGT Task
This paper describes SYSTRAN's participation in the Document-level Generation and Translation (DGT) Shared Task of the 3rd Workshop on Neural Generation and Translation (WNGT 2019). We participate for the first time using a Transformer network enhanced with modified input embeddings and optimising an additional objective function that considers content selection. The network takes in structured … (continued)
Li Gong, Josep Crego, Jean Senellart
Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 262--267, Association for Computational Linguistics, November 2019, Hong Kong, China
SYSTRAN Participation to the WMT2018 Shared Task on Parallel Corpus Filtering
This paper describes the participation of SYSTRAN in the shared task on parallel corpus filtering at the Third Conference on Machine Translation (WMT 2018). We participate for the first time using a neural sentence similarity classifier which aims at predicting the relatedness of sentence pairs in a multilingual context. The paper describes the main characteristics … (continued)
Minh Quang Pham, Josep Crego, Jean Senellart
Third Conference on Machine Translation (WMT18), October 31 - November 1 2018, Brussels, Belgium
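As background, parallel corpus filtering with a sentence similarity model generally amounts to scoring each sentence pair and keeping only pairs above a threshold. The minimal sketch below assumes a pre-existing multilingual sentence encoder (the `embed` function is a placeholder); it illustrates the general approach, not SYSTRAN's actual classifier.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def filter_corpus(pairs, embed, threshold=0.5):
    """Keep (src, tgt) pairs whose sentence embeddings are similar enough.

    `embed` is assumed to map a sentence to a fixed-size numpy vector in a
    shared multilingual space; any multilingual sentence encoder could be used.
    """
    return [(src, tgt) for src, tgt in pairs
            if cosine(embed(src), embed(tgt)) >= threshold]
```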
Fixing Translation Divergences in Parallel Corpora for Neural MT
Corpus-based approaches to machine translation rely on the availability of clean parallel corpora. Such resources are scarce, and because of the automatic processes involved in their preparation, they are often noisy. This paper describes an unsupervised method for detecting translation divergences … (continued)
Minh Quang Pham, Josep Crego, Jean Senellart, François Yvon
2018 Conference on Empirical Methods in Natural Language Processing, October 31 – November 4 2018, Brussels, Belgium
Analyzing Knowledge Distillation in Neural Machine Translation
Knowledge distillation has recently been successfully applied to neural machine translation. It basically allows for building shrunk networks while the resulting systems retain most of the quality of the original model. Although many authors report on the benefits of knowledge distillation, few works discuss the actual reasons why it works, especially in the context … (continued)
Dakun Zhang, Josep Crego, Jean Senellart
15th International Workshop on Spoken Language Translation, October 29-30 2018, Bruges, Belgium
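For context, word-level knowledge distillation is commonly implemented as an interpolation between a cross-entropy loss on the reference tokens and a KL-divergence term that pulls the student's output distribution toward the teacher's. The PyTorch sketch below shows this generic formulation; it is illustrative only, not the exact setup analysed in the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, gold, T=1.0, alpha=0.5):
    """Generic word-level distillation loss.

    Assumed shapes: `student_logits` and `teacher_logits` are
    (num_tokens, vocab_size); `gold` is (num_tokens,) with reference token ids.
    """
    # Soft targets: KL divergence between teacher and student distributions,
    # both softened with temperature T (scaled by T^2 as is conventional).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: usual cross-entropy against the reference tokens.
    hard = F.cross_entropy(student_logits, gold)
    return alpha * soft + (1.0 - alpha) * hard
```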
OpenNMT System Description for WNMT 2018: 800 words/sec on a single-core CPU
We present a system description of the OpenNMT Neural Machine Translation entry for the WNMT 2018 evaluation. In this work, we developed a heavily optimized NMT inference model targeting a high-performance CPU system. The final system uses a combination of four techniques, all of them leading to significant speed-ups in combination: (a) sequence distillation, (b) … (continued)
Jean Senellart, Dakun Zhang, Bo Wang, Guillaume Klein, J.P. Ramatchandirin, Josep Crego, Alexander M. Rush
Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 122--128, Association for Computational Linguistics, July 20 2018, Melbourne, Australia
Neural Network Architectures for Arabic Dialect Identification
SYSTRAN competes this year for the first time in the DSL shared task, in the Arabic Dialect Identification subtask. We participate by training several Neural Network models, showing that we can obtain competitive results despite the limited amount of training data available for learning. We report our experiments and detail the network architecture and parameters … (continued)
Elise Michon, Minh Quang Pham, Josep Crego, Jean Senellart
Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 128--136, Association for Computational Linguistics, August 20 2018, New Mexico, USA
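As a point of reference, a simple dialect-identification baseline can be built from character n-gram features and a small feed-forward classifier. The sketch below uses scikit-learn for brevity; it is a toy baseline, not the network architectures reported in the paper, and the training data variables are placeholders.

```python
# Toy dialect-identification baseline: character n-gram TF-IDF features
# fed into a small feed-forward network. Not the paper's architecture.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MLPClassifier(hidden_layer_sizes=(128,), max_iter=200),
)
# Placeholder usage (train_sentences, train_labels, test_sentences not defined here):
# clf.fit(train_sentences, train_labels)
# predictions = clf.predict(test_sentences)
```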
Boosting Neural Machine Translation [PDF]
Training efficiency is one of the main problems for Neural Machine Translation (NMT). Deep networks need very large amounts of data as well as many training iterations to achieve state-of-the-art performance. This results in very high computation cost, slowing down research and industrialisation. In this paper, we propose to alleviate this problem with several training methods … (continued)
Dakun Zhang, Jungi Kim, Josep Crego, Jean Senellart
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Asian Federation of Natural Language Processing, 2017, Taipei, Taiwan
OpenNMT: Open-Source Toolkit for Neural Machine Translation [PDF]
We describe an open-source toolkit for neural machine translation (NMT). The toolkit prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about … (continued)
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, Alexander Rush
Proceedings of ACL 2017, System Demonstrations, pages 67--72, Association for Computational Linguistics, 2017, Vancouver, Canada