/*
urldecoder.js
This simple service reveals the real URL behind a shortened one.
Usage: fire it up with:
node urldecoder.js
Then access as follows:
Verifying my Blockstack ID is secured with the address 1Q1Wi8Nu4tjnM8VgCj7NHwwq4rBNgtmwTf https://explorer.blockstack.org/address/1Q1Wi8Nu4tjnM8VgCj7NHwwq4rBNgtmwTf
Global seed set to 23
Running on GPUs 0,
Loading model from model.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 64, 64) = 16384 dimensions.
making attention of type 'vanilla' with 512 in_channels
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.8.self_attn.out_proj.bias', 'vision_model.encoder.layers.13.mlp.fc2.bias', 'vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_model.encoder.layers.14.mlp.fc1.weight', 'vision_model.encoder.layers.9.layer_norm1.bias', 'vision_model.encoder.layers.16.self_attn.q_proj.weight', 'vision_model.encoder.layers.4.self_attn.q_proj.bias', 'vision_model.encoder.layers.23.mlp.fc1.bias', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.19.self_attn.k_proj.bias', 'vision_model.encoder.