News and Publications
About SYSTRAN
With more than 50 years of experience in translation technologies, SYSTRAN has pioneered the field's major innovations, including the first web-based translation portals and the first neural translation engines combining artificial intelligence and neural networks for businesses and public organizations.
SYSTRAN provides business users with advanced and secure automated translation solutions in areas such as global collaboration, multilingual content production, customer support, eDiscovery, Big Data analysis, and e-commerce. With an open and scalable architecture, SYSTRAN delivers tailor-made solutions that integrate seamlessly into existing third-party applications and IT infrastructures.
Rosetta-LSF: an Aligned Corpus of French Sign Language and French for Text-to-Sign Translation
Elise Bertin-Lemée, Annelies Braffort, Camille Challant, Claire Danet, Boris Dauriac, Michael Filhol, Emmanuella Martinod, Jérémie Segouat.
13th Conference on Language Resources and Evaluation (LREC 2022), June 2022, Marseille, France
Joint Generation of Captions and Subtitles with Dual Decoding
As the amount of audio-visual content increases, the need to develop automatic captioning and subtitling solutions to match the expectations of a growing international audience appears as the only viable way to boost throughput and lower the related post-production costs. Automatic captioning and subtitling often need to be tightly intertwined to achieve an appropriate level … (continued)
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022), May 2022, Dublin, Ireland
SYSTRAN @ WMT 2021: Terminology Task
This paper describes SYSTRAN submissions to the WMT 2021 terminology shared task. We participate in the English-to-French translation direction with a standard Transformer neural machine translation network that we enhance with the ability to dynamically include terminology constraints, a very common industrial practice. Two state-of-the-art terminology insertion methods are evaluated based (i) on the use … (continued)
MinhQuang Pham, Antoine Senellart, Dan Berrebbi, Josep Crego, Jean Senellart
Proceedings of the Sixth Conference on Machine Translation (WMT), Online, November 10-11, 2021
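
The abstract above mentions dynamically including terminology constraints at translation time. As a rough illustration only (not the paper's code; the paper evaluates two insertion methods that may differ from this), one common way to inject a constraint is to annotate the source inline with the required target term, delimited by special tokens the model was trained to expect:

    # Illustrative sketch: append the required target term after each matched
    # source term. The <t>...</t> markers and the terminology entry are assumptions.

    TERMS = {"appeal": "pourvoi"}  # hypothetical EN->FR terminology entry

    def annotate_source(tokens, terms):
        """Return source tokens with inline target-side constraints appended."""
        out = []
        for tok in tokens:
            out.append(tok)
            target = terms.get(tok.lower())
            if target is not None:
                out.extend(["<t>", target, "</t>"])
        return out

    print(" ".join(annotate_source("The appeal was dismissed .".split(), TERMS)))
    # -> The appeal <t> pourvoi </t> was dismissed .
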
Revisiting Multi-Domain Machine Translation
When building machine translation systems, one often needs to make the best out of heterogeneous sets of parallel data in training, and to robustly handle inputs from unexpected domains in testing. This multi-domain scenario has attracted a lot of recent work that falls under the general umbrella of transfer learning. In this study, we revisit … (continued)
MinhQuang Pham, Josep Maria Crego, François Yvon
Transactions of the Association for Computational Linguistics 9: 17–35, February 1st, 2021
Integrating Domain Terminology into Neural Machine Translation
This paper extends existing work on terminology integration into Neural Machine Translation, a common industrial practice to dynamically adapt translation to a specific domain. Our method, based on the use of placeholders complemented with morphosyntactic annotation, efficiently taps into the ability of the neural network to deal with symbolic knowledge to surpass the surface generalization … (continued)
Elise Michon, Josep Maria Crego, Jean Senellart
Proceedings of the 28th International Conference on Computational Linguistics, December 2020
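
The abstract describes placeholders complemented with morphosyntactic annotation. A minimal sketch of the general idea follows; the tag scheme, placeholder format, and restore step are assumptions for illustration, not the paper's exact pipeline:

    # Illustrative only: replace matched source terms with tagged placeholders,
    # translate, then substitute the target terms back into the output.
    import re

    def to_placeholders(sentence, terminology):
        """Replace source terms by numbered placeholders carrying a coarse tag."""
        mapping = {}
        for i, (src_term, tgt_term, tag) in enumerate(terminology):
            ph = f"<ph_{tag}_{i}>"  # e.g. <ph_NOUN_0>
            if re.search(rf"\b{re.escape(src_term)}\b", sentence):
                sentence = re.sub(rf"\b{re.escape(src_term)}\b", ph, sentence)
                mapping[ph] = tgt_term
        return sentence, mapping

    def restore(translation, mapping):
        """Substitute target terms back into the translated output."""
        for ph, tgt_term in mapping.items():
            translation = translation.replace(ph, tgt_term)
        return translation

    src, mapping = to_placeholders("The turbine blade cracked.",
                                   [("turbine blade", "aube de turbine", "NOUN")])
    # src is fed to the NMT model; a hypothetical output keeps the placeholder:
    hyp = "L' <ph_NOUN_0> s'est fissurée."
    print(restore(hyp, mapping))
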
A Study of Residual Adapters for Multi-Domain Neural Machine Translation
Domain adaptation is an old and vexing problem for machine translation systems. The most common and successful approach to supervised adaptation is to fine-tune a baseline system with in-domain parallel data. Standard fine-tuning however modifies all the network parameters, which makes this approach computationally costly and prone to overfitting. A recent, lightweight approach instead augments … (continued)
MinhQuang Pham, Josep Maria Crego, François Yvon, Jean Senellart
Proceedings of the Fifth Conference on Machine Translation, November 2020
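
A residual adapter is a small bottleneck layer with a skip connection inserted into an otherwise frozen network, so that only a few parameters are trained per domain. A minimal PyTorch sketch follows; the framework choice, sizes, and placement are assumptions, not the paper's exact configuration:

    # Minimal residual adapter block (bottleneck + skip connection).
    import torch
    import torch.nn as nn

    class ResidualAdapter(nn.Module):
        def __init__(self, d_model: int, bottleneck: int = 64):
            super().__init__()
            self.norm = nn.LayerNorm(d_model)
            self.down = nn.Linear(d_model, bottleneck)
            self.up = nn.Linear(bottleneck, d_model)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Only these small adapter weights would be trained per domain;
            # x is the output of a frozen Transformer layer.
            return x + self.up(torch.relu(self.down(self.norm(x))))

    h = torch.randn(2, 10, 512)           # (batch, length, d_model)
    print(ResidualAdapter(512)(h).shape)  # torch.Size([2, 10, 512])
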
Priming Neural Machine Translation
Priming is a well-known and studied psychology phenomenon based on the prior presentation of one stimulus (cue) to influence the processing of a response. In this paper, we propose a framework to mimic the process of priming in the context of neural machine translation (NMT). We evaluate the effect of using similar translations as … (continued)
MinhQuang Pham, Jitao Xu, Josep Maria Crego, François Yvon, Jean Senellart
Proceedings of the Fifth Conference on Machine Translation, November 2020
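
The excerpt does not give the paper's exact input format, but one simple, assumed way to realize priming with similar translations is to prepend the cue translation to the source, separated by a reserved token, so the decoder can pick up terminology and style from it:

    # Sketch under assumptions: the <prime> separator and format are made up.

    def prime(source_tokens, cue_target_tokens, sep="<prime>"):
        """Concatenate the cue translation, a separator, and the source sentence."""
        return cue_target_tokens + [sep] + source_tokens

    cue = "Le contrat est résilié de plein droit .".split()
    src = "The contract is terminated automatically .".split()
    print(" ".join(prime(src, cue)))
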
Efficient and High-Quality Neural Machine Translation with OpenNMT
This paper describes the OpenNMT submissions to the WNGT 2020 efficiency shared task. We explore training and acceleration of Transformer models with various sizes that are trained in a teacher-student setup. We also present a custom and optimized C++ inference engine that enables fast CPU and GPU decoding with few dependencies. By combining additional optimizations … (continued)
Guillaume Klein, Dakun Zhang, Clément Chouteau, Josep Crego, Jean Senellart
Proceedings of the Fourth Workshop on Neural Generation and Translation, pages 211–217, Association for Computational Linguistics, July 2020
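
The custom C++ inference engine mentioned above presumably refers to CTranslate2, OpenNMT's open-source runtime for fast CPU/GPU decoding. A hedged usage sketch of its Python binding follows; the model path is hypothetical and a converted model is required beforehand:

    # Hedged example of batch decoding with CTranslate2 (hypothetical model path).
    import ctranslate2

    translator = ctranslate2.Translator(
        "ende_ctranslate2/",      # hypothetical path to a converted model
        device="cpu",
        compute_type="int8",      # quantized weights for faster CPU decoding
    )
    results = translator.translate_batch([["▁Hello", "▁world", "▁!"]], beam_size=2)
    print(results[0].hypotheses[0])   # best hypothesis as a list of subword tokens
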
Boosting Neural Machine Translation with Similar Translations
This presentation demonstrates data augmentation methods for Neural Machine Translation that make use of similar translations, in a comparable way to how a human translator employs fuzzy matches. We show how we simply feed the neural model with information on both source and target sides of the fuzzy matches, and we also extend the similarity to include … (continued)
Jitao Xu, Josep Crego, Jean Senellart
Proceedings of the Sixth Conference on Machine Translation (WMT), Online, November 10-11, 2021
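
As a toy illustration of the fuzzy-match idea described above (not the paper's retrieval or input scheme), one can look up the closest translation-memory entry with a simple character-level similarity and append its target side to the source as extra context; the similarity measure, threshold, and separator token are assumptions:

    # Illustrative fuzzy-match augmentation using only the standard library.
    import difflib

    TM = [  # hypothetical translation memory: (source, target)
        ("The invoice must be paid within 30 days.", "La facture doit être payée sous 30 jours."),
        ("The contract may be terminated at any time.", "Le contrat peut être résilié à tout moment."),
    ]

    def best_fuzzy_match(source, tm, threshold=0.6):
        scored = [(difflib.SequenceMatcher(None, source, s).ratio(), s, t) for s, t in tm]
        score, _, target = max(scored)
        return target if score >= threshold else None

    src = "The invoice must be paid within 60 days."
    match = best_fuzzy_match(src, TM)
    augmented = f"{src} <fuzzy> {match}" if match else src   # <fuzzy> is an assumed separator
    print(augmented)
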
Generic and Specialized Word Embeddings for Multi-Domain Machine Translation
Supervised machine translation works well when the train and test data are sampled from the same distribution. When this is not the case, adaptation techniques help ensure that the knowledge learned from out-of-domain texts generalises to in-domain sentences. We study here a related setting, multi-domain adaptation, where the number of domains is potentially large and … (continued)
Minh Quang Pham, Josep Crego, François Yvon, Jean Senellart
Book: "International Workshop on Spoken Language Translation", "Proceedings of the 16th International Workshop on Spoken Language Translation (IWSLT)", November 2019, Hong-Kong, China