LOREM: Language-consistent Open Relation Extraction from Unstructured Text

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

We introduce a Language-consistent multi-lingual Open Relation Extraction Model (LOREM) for finding relation tuples of any type between entities in unstructured texts. LOREM does not rely on language-specific knowledge or external NLP tools such as translators or PoS-taggers; instead, it exploits information and structures that are consistent across different languages. This allows our model to be extended to new languages with only limited training effort, while also boosting performance for any single language. An extensive evaluation on 5 languages shows that LOREM outperforms state-of-the-art mono-lingual and cross-lingual open relation extractors. Moreover, experiments on languages with no or only little training data indicate that LOREM generalizes to languages other than those it was trained on.
Original language: English
Title of host publication: Proceedings of The Web Conference (WWW)
Place of Publication: Taipei, Taiwan
Pages: 1830-1838
Number of pages: 9
ISBN (Electronic): 978-1-4503-7023-3
DOIs
Publication status: Published - 20 Apr 2020
Event: IW3C2: The Web Conference 2020 - Taipei, Taiwan
Duration: 20 Apr 2020 - 24 Apr 2020
https://www.iw3c2.org/

Conference

Conference: IW3C2: The Web Conference 2020
Country: Taiwan
City: Taipei
Period: 20/04/20 - 24/04/20
Internet address

Keywords

  • open domain relation extraction
  • multi-lingual relation extraction
  • text mining

