This is cause for celebration: it is now possible to block comments here, so that is what I have done. Even so, it seems there are still other ways on the internet to view the comments that were posted before.
- The characters … (U+2026), ⋯ (U+22EF), and ⋮ (U+22EE) are all treated as equivalent to the period . (U+002E) written three times in a row.
- "Hangul syllable characters" means the Unicode characters from 가 (U+AC00) through 힣 (U+D7A3), inclusive.
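As a minimal sketch of these two rules in Python (the helper names `normalize_ellipses` and `is_hangul_syllable` are mine, not from the original):

```python
# Ellipsis characters that the rules above treat as "..." (three U+002E periods).
ELLIPSIS_CHARS = {"\u2026", "\u22EF", "\u22EE"}  # …, ⋯, ⋮

def normalize_ellipses(text: str) -> str:
    """Replace each ellipsis character with three consecutive periods."""
    for ch in ELLIPSIS_CHARS:
        text = text.replace(ch, "...")
    return text

def is_hangul_syllable(ch: str) -> bool:
    """True if ch is a Hangul syllable: 가 (U+AC00) <= ch <= 힣 (U+D7A3)."""
    return "\uAC00" <= ch <= "\uD7A3"

if __name__ == "__main__":
    assert normalize_ellipses("끝…") == "끝..."
    assert is_hangul_syllable("가") and is_hangul_syllable("힣")
    assert not is_hangul_syllable("A")
```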
Have you ever argued for or against teaching language X as the first language in a university computer science curriculum? If so, I hope that your arguments:
Audience: I assume you have heard of ChatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective on my thoughts about this (and similar) models, and where we stand with respect to language understanding.
Around 2014-2017, right at the rise of neural-network-based methods for NLP, I was giving a semi-academic, semi-popsci lecture built around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked in an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs?", to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". We