stanford-nlp

NoneType error when calling .lower() method on annotated text

NoneType error when calling .lower() method on annotated text Question: I have annotated articles in a list (len=488), and I want to apply the .lower() method on the lemmas. I get the following error message: AttributeError: 'NoneType' object has no attribute 'lower'. Here's the code: file = open("Guardian_Syria_text.csv", mode="r", encoding='utf-8-sig') data = list(csv.reader(file, delimiter=",")) file.close …

Total answers: 1
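
The error usually means some annotated words come back with a lemma of None, so .lower() ends up being called on None rather than a string. A minimal sketch of one way to guard against that with a Stanza pipeline (the sample sentence and pipeline settings are assumptions, not the poster's data):

```python
import stanza

# Assumes the English models were already fetched with stanza.download("en").
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma", verbose=False)

doc = nlp("Aid agencies warned that the conflict in Syria is worsening.")

# Some words carry lemma=None, which is what makes a bare word.lemma.lower()
# raise AttributeError. Filter those out before lowercasing.
lemmas = [
    word.lemma.lower()
    for sentence in doc.sentences
    for word in sentence.words
    if word.lemma is not None
]
print(lemmas)
```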

How to visualize Stanford Stanza NER results with BRAT

How to visualize Stanford Stanza NER results with BRAT Question: I am using Stanza Biomedical i2b2 processor to identify PROBLEM, TREATMENT and TEST entities in drugs data. Python code is as follows: import stanza stanza.download( "en", package="mimc", processors={"ner": ["i2b2"]}, verbose=False, ) nlp = stanza.Pipeline( "en", package="mimc", processors={"ner": ["i2b2"]}, verbose=False, ) parsed_row = nlp("Prevention of phototoxicity …

Total answers: 1
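
BRAT renders a plain-text file next to a standoff .ann annotation file, so one way to visualize the Stanza results is to write the detected entities out in that standoff format and open the pair in BRAT. A rough sketch along those lines, loosely following the question's pipeline setup (package spelled mimic here; file names and the sample sentence are illustrative):

```python
import stanza

# Biomedical i2b2 NER model, roughly as in the question.
nlp = stanza.Pipeline(
    "en", package="mimic", processors={"ner": ["i2b2"]}, verbose=False
)

text = "Prevention of phototoxicity is recommended during treatment."
doc = nlp(text)

# BRAT standoff format: one "T<n>\t<LABEL> <start> <end>\t<surface text>" line
# per entity, alongside the raw text in a matching .txt file.
with open("doc.txt", "w", encoding="utf-8") as txt_file, \
        open("doc.ann", "w", encoding="utf-8") as ann_file:
    txt_file.write(text)
    for i, ent in enumerate(doc.ents, start=1):
        ann_file.write(
            f"T{i}\t{ent.type} {ent.start_char} {ent.end_char}\t{ent.text}\n"
        )
```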

Converting word to vector using GloVe

Converting word to vector using GloVe Question: I loaded my glove package as follows: import gensim.downloader as api model = api.load("glove-wiki-gigaword-100") and would want to create a function where I pass in a word and the GloVe model, and it will return the corresponding vector, for instance, def convert_word_to_vec(word, model): and when I pass in …

Total answers: 1
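
With a gensim-loaded GloVe model, the lookup itself is dictionary-style indexing into the KeyedVectors object; a small sketch of such a helper, with an explicit check for out-of-vocabulary words (the example word is arbitrary):

```python
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")

def convert_word_to_vec(word, model):
    # KeyedVectors supports dict-style lookup; unknown words would raise KeyError.
    if word in model:
        return model[word]
    return None

vector = convert_word_to_vec("king", model)
print(vector.shape if vector is not None else "out of vocabulary")
```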

Error while loading vector from Glove in Spacy

Error while loading vector from Glove in Spacy Question: I am facing the following attribute error when loading the GloVe model: Code used to load model: nlp = spacy.load('en_core_web_sm') tokenizer = spacy.load('en_core_web_sm', disable=['tagger', 'parser', 'ner', 'textcat']) nlp.vocab.vectors.from_glove('../models/GloVe') Getting the following attribute error when trying to load the GloVe model: AttributeError: 'spacy.vectors.Vectors' object has no attribute 'from_glove' Have …

Total answers: 2
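
Vectors.from_glove existed in older spaCy 2.x releases and is gone in spaCy 3, which is what the AttributeError suggests. Under that assumption, one workaround is to read a GloVe text file yourself and register each vector on the vocab (the GloVe file path below is hypothetical; the spacy init vectors CLI is another route):

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical path: a GloVe text file with "word v1 v2 ... vN" per line.
glove_path = "../models/GloVe/glove.6B.100d.txt"

with open(glove_path, encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        word, vector = parts[0], np.asarray(parts[1:], dtype="float32")
        # set_vector adds (and resizes) entries in the vocab's vector table.
        nlp.vocab.set_vector(word, vector)

print(nlp.vocab.get_vector("king")[:5])
```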

Extract Noun Phrases with Stanza and CoreNLPClient

Extract Noun Phrases with Stanza and CoreNLPClient Question: I am trying to extract noun phrases from sentences using Stanza (with Stanford CoreNLP). This can only be done with the CoreNLPClient module in Stanza. # Import client module from stanza.server import CoreNLPClient # Construct a CoreNLPClient with some basic annotators, a memory allocation of 4GB, and port …

Total answers: 2
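
One documented route for this is CoreNLPClient's Tregex support: run a constituency parse and match the pattern NP. A sketch under the assumption that CORENLP_HOME points at a local CoreNLP installation (the sample sentence is illustrative):

```python
from stanza.server import CoreNLPClient

text = "Albert Einstein was a German-born theoretical physicist."

with CoreNLPClient(
    annotators=["tokenize", "ssplit", "pos", "parse"],
    memory="4G",
    be_quiet=True,
) as client:
    # The Tregex pattern "NP" matches every noun-phrase node in the parse tree.
    matches = client.tregex(text, "NP")
    noun_phrases = [
        sentence[match_id]["spanString"]
        for sentence in matches["sentences"]
        for match_id in sentence
    ]
    print(noun_phrases)
```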

How to extract name from string using nltk

How to extract name from string using nltk Question: I am trying to extract names (Indian) from an unstructured string. Here is my code: text = "Balaji Chandrasekaran Bangalore | Senior Business Analyst/ Lead Business Analyst An accomplished Senior Business Analyst with a track record of handling complex projects in a given period of time, exceeding above the …

Total answers: 2
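
NLTK's own pipeline can pull PERSON chunks out of free text via POS tagging plus ne_chunk, though the bundled model is trained on English newswire and often misses Indian names, so a gazetteer or custom model may still be needed. A minimal sketch (the resume snippet is shortened from the question):

```python
import nltk

# One-time downloads: punkt, averaged_perceptron_tagger, maxent_ne_chunker, words.
text = "Balaji Chandrasekaran Bangalore | Senior Business Analyst / Lead Business Analyst"

tokens = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokens)
tree = nltk.ne_chunk(tagged)

# Keep subtrees labelled PERSON and join their leaves back into full names.
names = [
    " ".join(token for token, pos in subtree.leaves())
    for subtree in tree.subtrees()
    if subtree.label() == "PERSON"
]
print(names)
```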

Only Get Tokenized Sentences as Output from Stanford Core NLP

Only Get Tokenized Sentences as Output from Stanford Core NLP Question: I need to split sentences. I'm using the pycorenlp wrapper for python3. I've started the server from my jar directory using: java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000 I've run the following commands: from pycorenlp import StanfordCoreNLP nlp = StanfordCoreNLP('http://localhost:9000') text = …

Total answers: 2
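
With the server already running as in the question, tokenize and ssplit are the only annotators needed; the JSON output can then be reassembled into one string per sentence. A small sketch under that assumption:

```python
from pycorenlp import StanfordCoreNLP

nlp = StanfordCoreNLP("http://localhost:9000")

text = "The quick brown fox jumped over the lazy dog. It barely noticed."

output = nlp.annotate(text, properties={
    "annotators": "tokenize,ssplit",
    "outputFormat": "json",
})

# Each sentence's tokens carry their original text plus trailing whitespace,
# so joining them reproduces the sentence as it appeared in the input.
sentences = [
    "".join(tok["originalText"] + tok["after"] for tok in sent["tokens"]).strip()
    for sent in output["sentences"]
]
print(sentences)
```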

Stanford nlp for python

Stanford nlp for python Question: All I want to do is find the sentiment (positive/negative/neutral) of any given string. On researching I came across Stanford NLP. But sadly it's in Java. Any ideas on how I can make it work for Python? Asked By: 90abyss || Source Answers: TextBlob is a great package for sentimental …

Total answers: 10
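
If running the Java-based CoreNLP server is overkill, TextBlob (mentioned in the answers) gives a pure-Python polarity score that can be bucketed into positive/negative/neutral; the thresholds below are arbitrary choices, not part of the library:

```python
from textblob import TextBlob

def sentiment_label(text):
    # polarity ranges from -1.0 (most negative) to 1.0 (most positive).
    polarity = TextBlob(text).sentiment.polarity
    if polarity > 0.1:
        return "positive"
    if polarity < -0.1:
        return "negative"
    return "neutral"

print(sentiment_label("I really enjoyed this movie."))   # positive
print(sentiment_label("The service was terrible."))      # negative
```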

Extract list of Persons and Organizations using Stanford NER Tagger in NLTK

Extract list of Persons and Organizations using Stanford NER Tagger in NLTK Question: I am trying to extract list of persons and organizations using Stanford Named Entity Recognizer (NER) in Python NLTK. When I run: from nltk.tag.stanford import NERTagger st = NERTagger('/usr/share/stanford-ner/classifiers/all.3class.distsim.crf.ser.gz', '/usr/share/stanford-ner/stanford-ner.jar') r=st.tag('Rami Eid is studying at Stony Brook University in NY'.split()) print(r) the …

Total answers: 6
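
The tagger returns flat (token, tag) pairs, so the usual follow-up is to merge consecutive tokens that share a tag; newer NLTK versions also expose the class as StanfordNERTagger rather than the deprecated NERTagger. A sketch under those assumptions, reusing the paths from the question (Java and the Stanford NER jar must be available locally):

```python
from itertools import groupby
from nltk.tag import StanfordNERTagger

st = StanfordNERTagger(
    "/usr/share/stanford-ner/classifiers/all.3class.distsim.crf.ser.gz",
    "/usr/share/stanford-ner/stanford-ner.jar",
)

tokens = "Rami Eid is studying at Stony Brook University in NY".split()
tagged = st.tag(tokens)  # e.g. [('Rami', 'PERSON'), ('Eid', 'PERSON'), ...]

# Merge consecutive tokens with the same tag, then keep the entity types of interest.
entities = [
    (tag, " ".join(token for token, _ in group))
    for tag, group in groupby(tagged, key=lambda pair: pair[1])
    if tag in ("PERSON", "ORGANIZATION")
]
print(entities)
```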