
Seelan scarecrow1123

@scarecrow1123
scarecrow1123 / Important Links.md
Last active June 1, 2017 05:28
NLP + Deep Learning

General Information

  • Conversations and lectures
  • Based on actual speech used at universities
  • Accents: North American, U.K., and Australian
  • 2 to 3 conversations and 4 to 6 lectures
  • Each recording can be heard only once
  • Conversations: about 3 minutes long, with 5 MCQs each
  • Questions can't be seen while listening to the conversations
  • Lectures: about 5 minutes long, with 6 MCQs each
  • 60 to 90 minutes to complete the listening section
@scarecrow1123
scarecrow1123 / GitHub-Forking.md
Created April 17, 2019 01:41 — forked from Chaser324/GitHub-Forking.md
GitHub Standard Fork & Pull Request Workflow

Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork and generate pull requests is essential. Unfortunately, it's quite easy to make mistakes, or to simply not know what to do, when you're first learning the process. I certainly had considerable trouble with it initially, and I found much of the information on GitHub and around the internet to be rather piecemeal and incomplete: part of the process described here, another part there, common hangups in a different place, and so on.

In an attempt to collate this information for myself and others, this short tutorial covers what I've found to be fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.

Creating a Fork

Just head over to the GitHub page and click the "Fork" button. It's just that simple. Once you've done that, you can use your favorite git client to clone your repo.
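The clone-branch-push loop described above can be sketched with plain git commands. The paths below are placeholders: a local bare repository stands in for the fork hosted on GitHub, so the sketch runs without network access.

```shell
set -e
# A local bare repo stands in for your fork on GitHub (placeholder path).
tmp=$(mktemp -d)
git init -q --bare "$tmp/fork.git"

# Clone the fork, as you would with: git clone git@github.com:USERNAME/PROJECT.git
git clone -q "$tmp/fork.git" "$tmp/work"
cd "$tmp/work"
git config user.email "you@example.com"
git config user.name "Your Name"

# Do your work on a feature branch, not on the default branch.
git checkout -q -b my-feature
echo "change" > file.txt
git add file.txt
git commit -q -m "Describe the change"

# Push the branch to your fork, then open the pull request on GitHub.
git push -q origin my-feature
```

After the push, the feature branch exists on the fork and GitHub will offer to open a pull request against the original project.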

@scarecrow1123
scarecrow1123 / experiment.jsonnet
Last active May 8, 2020 18:06
A custom (read: dirty) AllenNLP trainer subclass for fp16 training with `apex.amp`
{
  // ....
  "trainer": {
    "type": "fp16-trainer",
    "mixed_precision": true,
    // other options
  },
  // ....
}
@scarecrow1123
scarecrow1123 / dist_log.py
Created September 26, 2019 05:29
Example for handling multiprocess logging when using `torch.distributed`
import argparse
import logging
from logging import Filter
from logging.handlers import QueueHandler, QueueListener
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.multiprocessing import Queue
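The gist preview cuts off after the imports. Below is a minimal stdlib-only sketch of the same `QueueHandler`/`QueueListener` pattern the imports suggest; the rank tagging and handler wiring here are illustrative assumptions, and in the real script each `torch.distributed` worker process would install its own `QueueHandler` pointing at a shared queue.

```python
import logging
from logging.handlers import QueueHandler, QueueListener
from multiprocessing import Queue

records = []

class ListHandler(logging.Handler):
    """Collects records on the listener side; a real script would use a StreamHandler or FileHandler."""
    def emit(self, record):
        records.append((record.rank, record.getMessage()))

def worker_log_setup(queue, rank):
    """What each worker would call: route its log records into the shared queue."""
    logger = logging.getLogger(f"worker{rank}")
    logger.setLevel(logging.INFO)
    handler = QueueHandler(queue)
    # Tag every record with the worker's rank so the listener can tell workers apart
    # (setattr returns None, so the `or True` keeps the filter passing every record).
    handler.addFilter(lambda record: setattr(record, "rank", rank) or True)
    logger.addHandler(handler)
    return logger

queue = Queue()
# The listener runs in the main process and drains records put on the queue by workers.
listener = QueueListener(queue, ListHandler())
listener.start()

logger = worker_log_setup(queue, rank=0)
logger.info("hello from rank 0")

listener.stop()  # flushes remaining records before returning
print(records)
```

With `torch.multiprocessing`, the queue and each worker's rank would be passed to `mp.spawn`'s target function, which calls the equivalent of `worker_log_setup` before training begins.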