To push container images to ghcr (GitHub Container Registry), you need a personal access token (PAT) - see how to create a PAT below.
- Get a PAT (personal access token):
  Personal Settings > Developer settings > Personal access tokens
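
Once the token exists, a typical workflow is to log in to ghcr.io with the PAT and push a tagged image. This is a minimal sketch; the username, image name, tag, and the `GHCR_PAT` environment variable are placeholders, not values from this document.

```bash
# Log in to GitHub Container Registry using the PAT (stored here in $GHCR_PAT)
echo "$GHCR_PAT" | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin

# Tag a local image for ghcr.io and push it
docker tag my-image:latest ghcr.io/YOUR_GITHUB_USERNAME/my-image:latest
docker push ghcr.io/YOUR_GITHUB_USERNAME/my-image:latest
```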
```js
var gulp = require('gulp');
var rimraf = require('rimraf');
var clean = require('gulp-clean');
var ts = require('gulp-typescript');
var merge = require('merge-stream');
var sourcemaps = require('gulp-sourcemaps');
var tslint = require('gulp-tslint');

var paths = {
    webroot: './wwwroot',
```
Our program, who art in memory,
called by thy name;
thy operating system run;
thy function be done at runtime
as it was on development.
Give us this day our daily output.
And forgive us our code duplication,
as we forgive those who
duplicate code against us.
And lead us not into frustration;
```python
from textblob.classifiers import NaiveBayesClassifier
from textblob import TextBlob

train = [
    ('I love this sandwich.', 'pos'),
    ('This is an amazing place!', 'pos'),
    ('I feel very good about these beers.', 'pos'),
    ('This is my best work.', 'pos'),
    ("What an awesome view", 'pos'),
    ('I do not like this restaurant', 'neg'),
```
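
The excerpt above is cut off before the classifier is built. As a self-contained sketch of how TextBlob's `NaiveBayesClassifier` is typically used, the example below trains on a tiny illustrative list (not the full list from the snippet) and classifies a new sentence.

```python
from textblob.classifiers import NaiveBayesClassifier

# Minimal illustrative training data, not the full list from the excerpt above
train = [
    ('I love this sandwich.', 'pos'),
    ('This is an amazing place!', 'pos'),
    ('I do not like this restaurant', 'neg'),
    ('I am tired of this stuff.', 'neg'),
]

cl = NaiveBayesClassifier(train)

# Classify a new piece of text
print(cl.classify('This is an amazing library!'))  # expected: 'pos'

# Show the most informative features learned from the training data
cl.show_informative_features(5)
```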
```python
# coding=UTF-8
import nltk
from nltk.corpus import brown

# This is a fast and simple noun phrase extractor (based on NLTK)
# Feel free to use it, just keep a link back to this post
# http://thetokenizer.com/2013/05/09/efficient-way-to-extract-the-main-topics-of-a-sentence/
# Created by Shlomi Babluki
# May, 2013
```
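
The original extractor's body is not reproduced here. As a rough illustration of the same idea, the sketch below chunks noun phrases with an NLTK regexp grammar; the grammar, the helper function, and the example output are assumptions, not the author's original code.

```python
import nltk

# Requires the 'punkt' and 'averaged_perceptron_tagger' NLTK data packages.
# Illustrative NP grammar: optional determiner, any adjectives, one or more nouns.
chunker = nltk.RegexpParser('NP: {<DT>?<JJ>*<NN.*>+}')

def extract_noun_phrases(sentence):
    """Return the noun phrases found in one sentence (hypothetical helper)."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    tree = chunker.parse(tagged)
    return [' '.join(word for word, tag in subtree.leaves())
            for subtree in tree.subtrees(lambda t: t.label() == 'NP')]

print(extract_noun_phrases('The quick brown fox jumps over the lazy dog.'))
# e.g. ['The quick brown fox', 'the lazy dog']
```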
```python
# -*- coding: utf-8 -*-
'''
The following is a naive, unsupervised text summarizer.
It extracts N of the text's most salient sentences.
Salience is defined as the average of the tf-idf weights of the words in a sentence.
'''
from nltk import sent_tokenize, word_tokenize
from collections import Counter
from math import log10
```
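
The body of the summarizer is not shown. A minimal sketch of what the docstring describes, built only on the imports above, could look like the following; the function name and the exact idf definition (document frequency counted over sentences) are assumptions.

```python
from nltk import sent_tokenize, word_tokenize  # requires the 'punkt' NLTK data package
from collections import Counter
from math import log10

def summarize(text, n=3):
    """Return the n most salient sentences of `text`, in original order (illustrative sketch)."""
    sentences = sent_tokenize(text)
    tokenized = [[w.lower() for w in word_tokenize(s) if w.isalnum()] for s in sentences]
    num_sents = len(sentences)

    # Document frequency: in how many sentences does each word appear?
    df = Counter(w for words in tokenized for w in set(words))

    def salience(words):
        if not words:
            return 0.0
        tf = Counter(words)
        # Average tf-idf weight of the words in the sentence
        return sum(tf[w] * log10(num_sents / df[w]) for w in words) / len(words)

    scores = [salience(words) for words in tokenized]
    top = sorted(range(num_sents), key=lambda i: scores[i], reverse=True)[:n]
    return ' '.join(sentences[i] for i in sorted(top))
```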