https://github.com/anibali/docker-pytorch/blob/master/cuda-10.0/Dockerfile
#!/bin/bash
docker login docker.pkg.github.com -u talosinsight -p TOKEN
docker pull docker.pkg.github.com/talosinsight/insight-translator/translator:latest
// Imports the Google Cloud client library
const language = require('@google-cloud/language');

// Creates a client
const client = new language.LanguageServiceClient();

const categorizeText = async (text: string) => {
  // Prepares a document representing the provided text
  // (the parameter is renamed to `text` so it no longer shadows `document`)
  const document = {
    content: text,
    type: 'PLAIN_TEXT',
  };
resource "google_cloud_run_service" "default" { | |
name = "tftest-cloudrun" | |
location = "us-central1" | |
provider = "google-beta" | |
metadata { | |
namespace = "my-project-name" | |
} | |
spec { |
Run terraform apply -var="_GIT_USERNAME=test" -var="_GIT_PASSWORD=abc" ...
locally. This will create a Cloud Build trigger on the defined branch
of the repository. Afterwards you can push to the branch, and Cloud Build will be triggered and run the steps defined in the cloudbuild.yaml.
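The trigger created by the apply step could be sketched in Terraform roughly like this. Note this is a sketch, not the project's actual config: the resource name, repo name, branch, and variable names are assumptions; only the _GIT_USERNAME/_GIT_PASSWORD substitutions and the cloudbuild.yaml filename come from the notes above.

```hcl
# Sketch of a Cloud Build trigger wired to a repo branch.
# Repo name, branch, and variable names are assumptions.
resource "google_cloudbuild_trigger" "build_trigger" {
  provider = "google-beta"

  trigger_template {
    branch_name = "master"
    repo_name   = "insight-translator"
  }

  # Credentials passed in via `terraform apply -var=...`
  substitutions = {
    _GIT_USERNAME = var.git_username
    _GIT_PASSWORD = var.git_password
  }

  # Build steps are read from the repository itself
  filename = "cloudbuild.yaml"
}
```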
def get_string_from_dict(input_dict=None):
    try:
        result_string = ''
        # loops through the dict
        for key, value in input_dict.items():
            # checks if the value is a string or an int, then translates it
            if isinstance(value, (str, int)):
import fetch from 'node-fetch';
import { transformEntities } from './transformEntities';

const knowYourGraph = (apiKey: string, queryString: string, limit = 50, lng = 'en') => {
  return new Promise(async (resolve, reject) => {
    try {
      // building the URL for the Knowledge Graph Search API
      // (the query is URL-encoded so multi-word queries don't break the request)
      const url: string = `https://kgsearch.googleapis.com/v1/entities:search?indent=true&languages=${lng}&limit=${limit}&query=${encodeURIComponent(queryString)}&key=${apiKey}`;
      // request the API
      const response = await fetch(url, {
        headers: {
def entity_parser(entity_array):
    res_ent = []
    for ent in entity_array:
        if ent['salience'] > 0 and ent['type'] != 'OTHER':
            res_ent.append({"name": ent['name'], "type": ent['type'].capitalize(),
                            "relation": f"is{ent['type'].capitalize()}", "salience": ent['salience']})
    return res_ent

def create_article_script(input_article=None):
def add_entity_to_words(sentence='', entities=()):
    print(sentence)
    res_sen_tpl = []
    sentence = sentence.lower()
    words_in_sentence = tokenize_to_word(sentence)
    for wrd_idx, word in enumerate(words_in_sentence):
        if len(word) > 1:
            r_word = f"{word}"
            word = word.lower()
            for ent in entities:
- Talos job gets created.
- The patent crawler gets triggered once (normally it runs once every day).
- The patent crawler queries the "company-patent-table" (not defined yet). This should be a table where we match companies to patent assignees; for example, Siemens Healthineers doesn't hold any patents, but Siemens Healthcare does (not required at the beginning).
- The patent crawler requests the URL with the correct company (the exact name is not yet defined; it is filtered in the CSV), for example: Siemens.
  a. URL: see below
  b. The result is a CSV. The patent crawler parses the results and compares them against the patent-link-table (only add the last 5-10 years).
- "New patents" which are not yet in the "patent-link-table" will be added, and the patent crawler crawls them.