var sheetId = "Your_sheet_id";
var sheetName = "Form_tab_name";
var schema = {
  timeStamp: 0,
  title: 3,
  authors: 2,
  isRead: 1,
  sourceShort: 4,
  year: 5,

from optuna.integration import AllenNLPExecutor
import optuna


def objective(trial: optuna.Trial) -> float:
    # Define the search space on this trial.
    trial.suggest_float("embedding_dropout", 0.0, 0.5)
    # Train an AllenNLP model from config.jsonnet with the sampled values
    # and return the metric reported by the executor.
    executor = AllenNLPExecutor(trial, "./config.jsonnet", "result", include_package="allennlp_models")
    return executor.run()
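
A minimal usage sketch (not part of the gist) of how this objective could be driven by an Optuna study; the optimization direction and trial count are assumptions and depend on the metric the executor reports:

# Sketch only: direction and n_trials are illustrative assumptions.
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_trial.params)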

{
  "dataset_reader": {
    "type": "sst_tokens",
    "use_subtrees": true,
    "granularity": "5-class"
  },
  "validation_dataset_reader": {
    "type": "sst_tokens",
    "use_subtrees": false,
    "granularity": "5-class"

{
  "dataset_reader": {
    "lazy": false,
    "token_indexers": {
      "tokens": {
        "lowercase_tokens": true,
        "type": "single_id"
      }
    },
    "tokenizer": {

Sending build context to Docker daemon 81.92kB
Step 1/7 : FROM ubuntu:20.04
 ---> adafef2e596e
Step 2/7 : RUN apt update -y && apt install -y python3 python3-dev python3-venv python3-pip
 ---> Using cache
 ---> 5ed03d347c2e
Step 3/7 : RUN pip3 install poetry
 ---> Using cache
 ---> ea3716a0aa47

# -*- coding: utf-8 -*-
from setuptools import setup

packages = \
['konoha',
 'konoha.api',
 'konoha.data',
 'konoha.integrations',
 'konoha.word_tokenizers']

[tool.poetry]
name = "konoha"
version = "4.4.0"
description = "A tiny sentence/word tokenizer for Japanese text written in Python"
authors = ["himkt <[email protected]>"]

[tool.poetry.dependencies]
python = "^3.6.1"
janome = {version = "^0.3.10", optional = true}
natto-py = {version = "^0.9.0", optional = true}

> python allennlp_simple.py (feature/allennlp-pruner| ● 3)
3000it [00:11, 255.15it/s]
3000it [00:01, 1724.34it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3000/3000 [00:00<00:00, 5099.38it/s]
400000it [00:02, 156772.51it/s]
/home/ubuntu/work/github.com/himkt/optuna/optuna/_experimental.py:84: ExperimentalWarning

// Use dev.jsonl for training to reduce computation time.
local TRAIN_PATH = 'https://s3-us-west-2.amazonaws.com/allennlp/datasets/imdb/dev.jsonl';
local VALIDATION_PATH = 'https://s3-us-west-2.amazonaws.com/allennlp/datasets/imdb/test.jsonl';

local DROPOUT = std.extVar('DROPOUT');
local EMBEDDING_DIM = std.extVar('EMBEDDING_DIM');

local CNN_FIELDS(max_filter_size, embedding_dim, hidden_size, num_filters) = {
  type: 'cnn',
  ngram_filter_sizes: std.range(1, max_filter_size),
  num_filters: num_filters,
  embedding_dim: embedding_dim,
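
Because DROPOUT and EMBEDDING_DIM are read with std.extVar, the Optuna objective is expected to suggest parameters under the same names so that AllenNLPExecutor can inject them into the config. A hedged sketch, where the ranges and paths are illustrative assumptions rather than values from the gist:

import optuna
from optuna.integration import AllenNLPExecutor


def objective(trial: optuna.Trial) -> float:
    # Parameter names mirror the extVar names above; the ranges are assumptions.
    trial.suggest_float("DROPOUT", 0.0, 0.5)
    trial.suggest_int("EMBEDDING_DIM", 32, 256)
    # "./imdb.jsonnet" and "result" are hypothetical paths for this sketch.
    executor = AllenNLPExecutor(trial, "./imdb.jsonnet", "result")
    return executor.run()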

============================= test session starts ==============================
platform linux -- Python 3.7.7, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
rootdir: /workspaces/optuna
plugins: nbval-0.9.5
collected 10 items

tests/integration_tests/test_fastai.py . [ 10%]
tests/integration_tests/allennlp_tests/test_allennlp.py ....FFFF. [100%]

=================================== FAILURES ===================================