Our research & corresponding publications

 

Evaluating the Robustness of Biomedical Concept Normalization

BERT is vulnerable to adversarial attacks and input transformations, which can cause invalid inputs to be linked to concepts in an ontology. We study the robustness of several BERT-based normalization models against 13 different input transformations and propose novel adversarial attacks, finding a significant drop in performance across models. We also explore existing mitigation strategies.
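To give a flavor of the kind of character-level perturbation such a robustness study might apply, here is a minimal, purely illustrative Python sketch. The specific 13 transformations are described in the paper; the adjacent-character swap and the example mention below are hypothetical stand-ins, not the paper's actual transformations.

```python
import random

def swap_adjacent_chars(mention: str, seed: int = 0) -> str:
    """Swap one random pair of adjacent characters in a mention,
    a simple character-level perturbation (illustrative only)."""
    random.seed(seed)
    if len(mention) < 2:
        return mention
    i = random.randrange(len(mention) - 1)
    chars = list(mention)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

# A robust normalization model should map both surface forms to the
# same ontology concept (e.g., the same UMLS or MeSH identifier).
original = "myocardial infarction"
perturbed = swap_adjacent_chars(original)
print(original, "->", perturbed)
```

A robustness evaluation along these lines compares the concept predicted for the clean mention with the concept predicted for the perturbed one and reports the resulting drop in accuracy.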

 

ERLKG

We introduce a generic, human-out-of-the-loop pipeline for Entity Representation Learning and Knowledge Graph-based rapid association analysis of COVID-19 through the mining of unstructured biomedical corpora. Given the lack of benchmark datasets for COVID-19, we also propose two intrinsic evaluation datasets for use by future researchers.
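As a loose illustration of the graph-construction idea behind such a pipeline, the sketch below builds a small weighted co-occurrence graph over extracted entities and ranks associations with COVID-19. The entity names, toy sentences, and co-occurrence weighting are hypothetical examples only; ERLKG's actual pipeline (entity extraction, representation learning, and scoring) is described in the paper.

```python
from itertools import combinations
from collections import Counter
import networkx as nx

# Toy sentences standing in for an unstructured biomedical corpus;
# in a real pipeline the entities would come from a NER step.
sentences = [
    ["remdesivir", "COVID-19", "ACE2"],
    ["COVID-19", "ACE2", "hydroxychloroquine"],
    ["remdesivir", "COVID-19"],
]

# Count sentence-level co-occurrences between entity pairs.
edge_counts = Counter()
for ents in sentences:
    for a, b in combinations(sorted(set(ents)), 2):
        edge_counts[(a, b)] += 1

# Build a weighted, undirected knowledge graph from the counts.
graph = nx.Graph()
for (a, b), weight in edge_counts.items():
    graph.add_edge(a, b, weight=weight)

# Rank entities most strongly associated with COVID-19 by edge weight.
neighbors = sorted(
    graph["COVID-19"].items(),
    key=lambda kv: kv[1]["weight"],
    reverse=True,
)
print(neighbors)
```

In the full pipeline, learned entity representations and link scores would replace these raw co-occurrence counts as the basis for association analysis.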