Here's an example using Hugging Face Transformers:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')

Here's an example using scikit-learn:

from sklearn.feature_extraction.text import TfidfVectorizer

text = "hiwebxseriescom hot"
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform([text])