sparknlp.upload_to_hub#

Module Contents#

Classes#

PushToHub

class PushToHub[source]#
list_of_tasks = ['Named Entity Recognition', 'Text Classification', 'Text Generation', 'Sentiment Analysis', ...][source]#
zip_directory(folder_path: str, zip_path: str)[source]#

Zips a folder for pushing to the Hub.

Keyword Arguments:
    folder_path: Path to the folder to zip.
    zip_path: Path of the zip file to create.
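As a rough illustration, a helper like this can be re-created with the standard library's `zipfile` module. This is a minimal sketch of the behavior described above, not Spark NLP's actual implementation:

```python
import os
import zipfile


def zip_directory(folder_path: str, zip_path: str) -> None:
    """Zip every file under folder_path into an archive at zip_path (sketch)."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(folder_path):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the folder so the archive
                # unpacks cleanly anywhere.
                zf.write(full, os.path.relpath(full, folder_path))
```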

unzip_directory()[source]#

Unzips the model to check for the files required for upload.

Keyword Arguments:
    zip_path: Path of the zip file to unzip.
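A hedged sketch of what such a check might look like, again using `zipfile`. The required-file list here is purely illustrative, not Spark NLP's actual list:

```python
import os
import zipfile


def unzip_directory(zip_path: str, required_files=("metadata",)) -> list:
    """Extract zip_path next to itself and return any missing required entries (sketch)."""
    dest = os.path.splitext(zip_path)[0]
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
    # Report which expected files did not appear after extraction.
    return [f for f in required_files
            if not os.path.exists(os.path.join(dest, f))]
```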

check_for_required_info()[source]#

Checks that the required fields exist in the given dictionary and fills in any remaining fields.

Keyword Arguments:
    model_data: The model data to check.
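The validate-and-fill pattern described above could be sketched as follows; the field names are assumptions for illustration, not Spark NLP's exact schema:

```python
# Illustrative required-field list; the real schema may differ.
REQUIRED_FIELDS = ("name", "language", "task")


def check_for_required_info(model_data: dict) -> dict:
    """Raise if required fields are absent, then fill optional defaults (sketch)."""
    missing = [f for f in REQUIRED_FIELDS if f not in model_data]
    if missing:
        raise ValueError(f"Missing required fields: {missing}")
    # Fill remaining optional fields with neutral defaults.
    model_data.setdefault("tags", [])
    model_data.setdefault("description", "")
    return model_data
```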

push_to_hub(language: str, model_zip_path: str, task: str, pythonCode: str, GIT_TOKEN: str, title: str = None, tags: List[str] = None, dependencies: str = None, description: str = None, predictedEntities: str = None, sparknlpVersion: str = None, howToUse: str = None, liveDemo: str = None, runInColab: str = None, scalaCode: str = None, nluCode: str = None, results: str = None, dataSource: str = None, includedModels: str = None, benchmarking: str = None) → str[source]#

Pushes the model to the Hub.

Keyword Arguments:
    model_data: Dictionary containing info about the model, such as name and language.
    GIT_TOKEN: Token required for pushing to the Hub.
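One way to picture how the many keyword arguments above might be collected before upload is a small payload builder. This is a hypothetical sketch; the function name, payload keys, and the idea of dropping `None` fields are assumptions, not Spark NLP's actual upload logic:

```python
def build_upload_payload(language: str, model_zip_path: str, task: str,
                         pythonCode: str, **optional) -> dict:
    """Collect required and optional metadata into one dict (hypothetical sketch)."""
    payload = {
        "language": language,
        "model_zip_path": model_zip_path,
        "task": task,
        "pythonCode": pythonCode,
    }
    # Optional keyword arguments default to None in the signature;
    # drop those so the payload only carries fields the user set.
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload
```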

create_docs() → dict[source]#

Adds fields to the dictionary for pushing to the Hub.

Keyword Arguments:
    dictionary_for_upload: The dictionary to add keys to.
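A minimal sketch of that add-missing-keys step; the documentation field names are illustrative assumptions only:

```python
def create_docs(dictionary_for_upload: dict) -> dict:
    """Return a copy with documentation fields added if absent (sketch)."""
    docs = dict(dictionary_for_upload)
    # Assumed documentation keys; the real upload schema may use others.
    for key in ("howToUse", "liveDemo", "results", "dataSource"):
        docs.setdefault(key, "")
    return docs
```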