[…] two new papers show that neural networks can learn to translate with no parallel texts—a surprising advance that could make documents in many languages more accessible.
[…] “Imagine that you give one person lots of Chinese books and lots of Arabic books—none of them overlapping—and the person has to learn to translate Chinese to Arabic. That seems impossible, right?” says the first author of one study, Mikel Artetxe, a computer scientist at the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a computer can do that.”
[…] “This is in infancy,” Artetxe’s co-author Eneko Agirre cautions. “We just opened a new research avenue, so we don’t know where it’s heading.”
[…] Artetxe says it is surprising that his method and Lample’s—uploaded to arXiv within a day of each other—are so similar. “But at the same time, it’s great. It means the approach is really in the right direction.”
Congratulations Mikel, Eneko, Gorka and Kyunghyun!