BioBERT Embeddings (PubMed PMC)

Description

This model contains the pre-trained weights of BioBERT, a language representation model for the biomedical domain, designed for biomedical text mining tasks such as named entity recognition, relation extraction, and question answering. The details are described in the paper "BioBERT: a pre-trained biomedical language representation model for biomedical text mining".


How to use

Python

...
embeddings = BertEmbeddings.pretrained("biobert_pubmed_pmc_base_cased", "en") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")
nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])
pipeline_model = nlp_pipeline.fit(spark.createDataFrame([[""]]).toDF("text"))
result = pipeline_model.transform(spark.createDataFrame([["I hate cancer"]], ["text"]))
...
Scala

val embeddings = BertEmbeddings.pretrained("biobert_pubmed_pmc_base_cased", "en")
    .setInputCols("sentence", "token")
    .setOutputCol("embeddings")
val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, embeddings))
val data = Seq("I hate cancer").toDF("text")
val result = pipeline.fit(data).transform(data)
NLU

import nlu

text = ["I hate cancer"]
embeddings_df = nlu.load('en.embed.biobert.pubmed_pmc_base_cased').predict(text, output_level='token')
embeddings_df

Results

token	en_embed_biobert_pubmed_pmc_base_cased_embeddings
I	[-0.012962102890014648, 0.27699071168899536, 0...
hate	[0.1688309609889984, 0.5337603688240051, 0.148...
cancer	[0.1850549429655075, 0.05875205248594284, -0.5...
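Each row above pairs a token with its 768-dimensional embedding vector. A common downstream step is measuring similarity between such vectors; below is a minimal cosine-similarity sketch in plain Python. The short vectors are dummy stand-ins for illustration, not real BioBERT output:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Dummy low-dimensional stand-ins for the 768-dimensional token vectors.
vec_hate = [0.1688, 0.5337, 0.1480]
vec_cancer = [0.1850, 0.0587, -0.5000]

similarity = cosine_similarity(vec_hate, vec_cancer)
```

With the real output, the same function would be applied to the full 768-dimensional vectors from the `embeddings` column.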

Model Information

Model Name: biobert_pubmed_pmc_base_cased
Type: embeddings
Compatibility: Spark NLP 2.6.2
License: Open Source
Edition: Official
Input Labels: [sentence, token]
Output Labels: [word_embeddings]
Language: [en]
Dimension: 768
Case sensitive: true
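Because the model emits one 768-dimensional vector per token (output label word_embeddings), a sentence-level vector is often derived by mean pooling the token vectors. A minimal sketch in plain Python, using toy low-dimensional vectors in place of real output:

```python
def mean_pool(token_vectors):
    # Average per-token embeddings into a single sentence-level vector.
    dim = len(token_vectors[0])
    n = len(token_vectors)
    return [sum(vec[i] for vec in token_vectors) / n for i in range(dim)]

# Toy 3-dimensional stand-ins for the 768-dimensional token embeddings
# of "I", "hate", and "cancer".
tokens = [
    [-0.0129, 0.2769, 0.1000],
    [0.1688, 0.5337, 0.1480],
    [0.1850, 0.0587, -0.5000],
]
sentence_vector = mean_pool(tokens)
```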

Data Source

The model is imported from https://github.com/dmis-lab/biobert