- Deep Learning for NLP: book by Yoav Goldberg; a Primer version is also available (without the NLP bits and without some of the advanced bits).
- Manning and Schutze, Foundations of Statistical Natural Language Processing. A classic book, a bit outdated by now, but some chapters are still worth reading today.
- Jurafsky and Martin, Speech and Language Processing (3rd Edition).
| count  | word |
|-------:|------|
| 175504 | אז   |
| 182482 | יום  |
| 198859 | טוב  |
| 202605 | את   |
| 204134 | כמה  |
| 220754 | בוקר |
| 230920 | לילה |
| 244401 | רק   |
| 245847 | עוד  |
| 261671 | מי   |
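A frequency table like the one above can be produced with a few lines of Python. This is an illustrative sketch only: the actual corpus and tokenization behind these counts are unknown, and the toy corpus below is made up.

```python
from collections import Counter

# Toy stand-in corpus; the actual corpus behind the counts above is unknown.
corpus = "בוקר טוב בוקר טוב רק עוד יום"

# Plain whitespace tokenization -- the original table may use a different scheme.
counts = Counter(corpus.split())

# Print words from least to most frequent, like the table above.
for word, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(n, word)
```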
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# The unreasonable effectiveness of Character-level Language Models\n",
    "## (and why RNNs are still cool)\n",
    "\n",
    "### [Yoav Goldberg](http://www.cs.biu.ac.il/~yogo)\n",
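The notebook's core trick can be sketched in a few lines of plain Python: train a character-level n-gram model by counting, then sample from it one character at a time. This is a minimal sketch under those assumptions, not the notebook's actual code, and the function names are illustrative.

```python
import random
from collections import Counter, defaultdict

def train_char_lm(text, order=4):
    """For each history of `order` characters, count which character follows."""
    lm = defaultdict(Counter)
    padded = "~" * order + text  # '~' pads the start so every position has a history
    for i in range(len(text)):
        lm[padded[i:i + order]][padded[i + order]] += 1
    return lm

def generate_text(lm, order=4, n=100):
    """Sample `n` characters, each conditioned on the previous `order` ones."""
    history, out = "~" * order, []
    for _ in range(n):
        counts = lm[history]
        if not counts:  # unseen history: stop (toy-model simplification)
            break
        chars, weights = zip(*counts.items())
        c = random.choices(chars, weights=weights)[0]
        history = (history + c)[-order:]
        out.append(c)
    return "".join(out)
```

Trained on enough text, even this purely count-based model produces surprisingly fluent output, which is the "unreasonable effectiveness" the title refers to.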
{
	"translatorID": "f4a5876a-3e53-40e2-9032-d99a30d7a6fc",
	"label": "ACL",
	"creator": "Nathan Schneider, Yoav Goldberg",
	"target": "^https?://(www[.])?aclweb\\.org/anthology/[^#]+",
	"minVersion": "1.0.8",
	"maxVersion": "",
	"priority": 100,
	"inRepository": true,
	"translatorType": 4,
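The `target` pattern above is the regular expression Zotero uses to decide whether this translator applies to a page. Its behavior can be checked with a quick sketch (the example URLs are illustrative; Zotero itself applies the pattern, not this code):

```python
import re

# The translator's target pattern from above, with the JSON escaping removed.
target = re.compile(r"^https?://(www[.])?aclweb\.org/anthology/[^#]+")

# Matches ACL Anthology paper URLs, over http or https, with or without "www.".
assert target.match("https://aclweb.org/anthology/P17-1001")
assert target.match("http://www.aclweb.org/anthology/D19-1234")

# Other hosts do not match, and neither does a URL with nothing after the path.
assert target.match("https://example.org/anthology/P17-1001") is None
assert target.match("https://aclweb.org/anthology/") is None
```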
import sys
import random

import numpy as np

# DyNet reads its configuration from the command line, so these options must be
# appended to sys.argv before dynet is imported.
sys.argv += ["--dynet-mem", "1000", "--dynet-seed", "10", "--dynet-gpu-ids", "1"]
from dynet import *

# Seed the Python and NumPy RNGs as well, for reproducibility.
random.seed(10)
np.random.seed(20)
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 4gram language models share secrets too...\n",
    "_Yoav Goldberg, 28 Feb, 2018._\n",
    "\n",
    "In [a recent research paper](https://arxiv.org/pdf/1802.08232.pdf) titled \"The Secret Sharer:\n",
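The notebook's punchline can be demonstrated with a toy sketch: plant a single rare "secret" in an otherwise repetitive corpus, train a count-based character 4-gram model on it, and watch the model complete the secret verbatim. The corpus, the `complete` helper, and the pin below are all made up for illustration; this is not the notebook's code.

```python
from collections import Counter, defaultdict

def train_char_lm(text, order=4):
    """For each `order`-character history, count which character follows it."""
    lm = defaultdict(Counter)
    padded = "~" * order + text
    for i in range(len(text)):
        lm[padded[i:i + order]][padded[i + order]] += 1
    return lm

def complete(lm, prefix, n, order=4):
    """Greedily extend `prefix` by `n` characters using the model."""
    out = prefix
    for _ in range(n):
        counts = lm[out[-order:]]
        if not counts:
            break
        out += counts.most_common(1)[0][0]
    return out

# The secret appears exactly once among 100 copies of an innocuous sentence...
corpus = "the cat sat on the mat. " * 100 + "my pin is 4261. "
lm = train_char_lm(corpus)

# ...yet the model reproduces it perfectly: one occurrence was enough to memorize.
print(complete(lm, "my pin is ", 4))  # prints "my pin is 4261"
```

This is the same memorization phenomenon the Secret Sharer paper studies in neural language models, reproduced here by nothing more than 4-gram counts.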
Yoav Goldberg, Jan 23, 2021.
The FAccT paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Bender, Gebru, McMillan-Major and Shmitchell has been at the center of a controversy recently. The final version is now out and, owing a lot to this controversy, will undoubtedly become very widely read. I read an earlier draft of the paper, and I think the new, updated final version is much improved in many ways: kudos to the authors for this upgrade. I also agree with and endorse most of the content. This is important stuff; you should read it.
However, I do find some aspects of the paper (and the resulting discourse around it and around the technology) to be problematic. These weren't clear to me when I first read the draft several months ago, but they are very clear to me now. These points are for the most part
Yoav Goldberg, Jan 30, 2021
This new paper from the Google Research Ethics Team (by Sambasivan, Arnesen, Hutchinson, Doshi, and Prabhakaran) touches on a very important topic: research (and presumably also applied) work on algorithmic fairness (and, more broadly, AI ethics) is US-centric[*], reflecting US subgroups, values, and methods. But AI is also applied elsewhere (for example, in India). Do the methods and results developed for and in the US transfer? The answer is, of course, no, and the paper does a good job of showing it. If you are the kind of person who is impressed by the number of citations, this one has 220, a much higher number than another paper (not) from Google Research that became popular recently and which boasts many citations. I think this current paper (let's call it "the India Paper") is substantially more important, given that it raises a very serious issue that