{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 0. Prerequisites\n",
"\n",
"1. <a href='https://gist.github.com/harpiechoise/1ce67cc6589549932bf847c32933e35c'> Texts in depth </a>\n",
"\n",
"2. <a href='https://gist.github.com/harpiechoise/c05f6cfb22cc44cb270769ba7f40a098'> Basic NLP </a>\n",
"\n",
"## 1. Introduction\n",
"\n",
"In this document I want to make the topic of Part of Speech (POS) clear. It is one of the fundamental parts of the NLP process because it helps us understand meaning and gives us a clearer view of the concepts behind the idea of a text. If I want to extract the central topic of a text, this skill is essential, since it helps us understand the structure of sentences based on how they are built and on the relationships between the words themselves. So let's start with the most basic part: the tags.\n",
"\n",
"Many words are rare or infrequent in a language and look completely different while sometimes meaning exactly the same thing; conversely, the same words in a different order can mean something totally different. So, to know what a combination of words refers to, we must look at the part of the discourse where they appear in order to discern a concept, rather than looking at the words in isolation.\n",
"\n",
"Even splitting words apart through tokenization can be quite difficult depending on the language. While it is possible to solve certain problems from raw text alone, it is usually better to use linguistic knowledge to add useful information, and POS tags are essential for taking natural language processing to the next level.\n",
"\n",
"This is exactly what SpaCy is built for: you give it raw text and it returns a Doc object with a variety of annotations.\n",
"\n",
"In this document we will look more closely at POS tags. SpaCy provides two kinds: simple ones (noun, adjective, verb) and more detailed ones (plural noun, superlative adjective, and so on), which give us more information about our text.\n",
"\n",
"We have to keep in mind that Spanish and English are incredibly complex languages, with an immense number of rules and many exceptions to those rules.\n",
"\n",
"We won't teach grammar rules here, but you can consult references online to learn more about Spanish or English grammar. To understand what the POS tags refer to, I strongly recommend looking up concepts such as **superlative adjective** so you can get the most out of this material and sharpen your skills when applying natural language processing. Now let's explore the POS tags in more detail with SpaCy.\n",
"\n",
"Before we start, keep this table in mind; if you need to know what role each tag plays, it will serve you well:\n",
"\n",
"<a href='https://spacy.io/api/annotation#pos-tagging'> POS tags SpaCy </a>\n",
"\n",
"First we'll import the SpaCy library"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import spacy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll load the Spanish core model"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"nlp = spacy.load('es_core_news_sm')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll create a Doc object"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'El zorro pequeño saltó por encima del perro sin tocarlo y lo hizo una vez más de vuelta.'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"doc = nlp(u\"El zorro pequeño saltó por encima del perro sin tocarlo y lo hizo una vez más de vuelta.\")\n",
"doc.text"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a token from our Doc object; we already know how to do that, so I'll skip the explanation of this part"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"saltó"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"doc[3]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From this token I can get the text, or the POS tag"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"saltó\n"
]
},
{
"data": {
"text/plain": [
"'VERB'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"print(doc[3].text)\n",
"doc[3].pos_"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Or I can get the more detailed tag in the following way"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"saltó\n"
]
},
{
"data": {
"text/plain": [
"'VERB__Mood=Ind|Number=Sing|Person=3|Tense=Past|VerbForm=Fin'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"print(doc[3].text)\n",
"doc[3].tag_"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What this tag is telling us:\n",
"\n",
"**Number = Sing**: the verb is singular\n",
"\n",
"**Person = 3**: it is in the third person\n",
"\n",
"**Tense = Past**: it is conjugated in the past tense\n",
"\n",
"That gives us a lot of information about the word saltó and its role within the text. And if we don't have the table at hand, we can ask SpaCy to explain a tag to us.\n",
"\n",
"Now let's go over each of the tags we have"
]
},
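{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick aside (a minimal sketch): spacy.explain takes a label string and returns a short human-readable description, or None if it doesn't know the label"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# spacy.explain maps a tag or label string to a short description\n",
"print(spacy.explain('VERB'))\n",
"print(spacy.explain('ADP'))"
]
},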
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"El DET determiner DET__Definite=Def|Gender=Masc|Number=Sing|PronType=Art\n",
"zorro NOUN noun NOUN__Gender=Masc|Number=Sing\n",
"pequeño ADJ adjective ADJ__Gender=Masc|Number=Sing\n",
"saltó VERB verb VERB__Mood=Ind|Number=Sing|Person=3|Tense=Past|VerbForm=Fin\n",
"por ADP adposition ADP__AdpType=Prep\n",
"encima ADV adverb ADV__AdpType=Preppron|Gender=Masc|Number=Sing\n",
"del ADP adposition ADP__AdpType=Preppron|Gender=Masc|Number=Sing\n",
"perro NOUN noun NOUN__Gender=Masc|Number=Sing\n",
"sin ADP adposition ADP__AdpType=Prep\n",
"tocarlo NOUN noun NOUN__Gender=Masc|Number=Sing\n",
"y CONJ conjunction CCONJ___\n",
"lo PRON pronoun PRON__Case=Acc|Gender=Masc|Number=Sing|Person=3|PronType=Prs\n",
"hizo VERB verb VERB__Mood=Ind|Number=Sing|Person=3|Tense=Past|VerbForm=Fin\n",
"una DET determiner DET__Definite=Ind|Gender=Fem|Number=Sing|PronType=Art\n",
"vez NOUN noun NOUN__Gender=Fem|Number=Sing\n",
"más ADV adverb ADV___\n",
"de ADP adposition ADP__AdpType=Prep\n",
"vuelta NOUN noun NOUN__Gender=Fem|Number=Sing\n",
". PUNCT punctuation PUNCT__PunctType=Peri\n"
]
}
],
"source": [
"for token in doc:\n",
"    print(f'{token.text:{10}} {token.pos_:{20}} {spacy.explain(token.pos_):{20}} {token.tag_}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This gives us a good idea of the structure of the words in our text.\n",
"\n",
"Another thing we can do is count the frequency of the tags in our text, in the following way"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"POS_count = doc.count_by(spacy.attrs.POS)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{97: 1, 100: 2, 84: 1, 85: 4, 86: 2, 88: 1, 90: 2, 92: 5, 95: 1}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"POS_count"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we see that it returns a kind of counter keyed by numbers. Those numbers are useful in themselves, because they are the codes of the POS tags.\n",
"\n",
"The **pos_** attribute gives us the text of a specific tag, so how do we get that text from a POS code? We can do it with the vocab attribute of the doc object. Let's see it"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'CONJ'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"doc.vocab[88].text"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, code 88 corresponds to the CONJ (conjunction) label.\n",
"\n",
"Now let's look at all the tags"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" 84. ADJ , 1\n",
" 85. ADP , 4\n",
" 86. ADV , 2\n",
" 88. CONJ , 1\n",
" 90. DET , 2\n",
" 92. NOUN , 5\n",
" 95. PRON , 1\n",
" 97. PUNCT, 1\n",
" 100. VERB , 2\n"
]
}
],
"source": [
"for k, v in sorted(POS_count.items()):\n",
"    print(f\"{k:{5}}. {doc.vocab[k].text:{5}}, {v:{5}}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also do the same with the fine-grained tags. Let's see it"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"19706760756614928 DET__Definite=Def|Gender=Masc|Number=Sing|PronType=Art, 1\n",
"1182704070608105034 ADJ__Gender=Masc|Number=Sing, 1\n",
"1674070363570382157 PUNCT__PunctType=Peri, 1\n",
"1966231466788530083 VERB__Mood=Ind|Number=Sing|Person=3|Tense=Past|VerbForm=Fin, 2\n",
"2576739957777124069 PRON__Case=Acc|Gender=Masc|Number=Sing|Person=3|PronType=Prs, 1\n",
"6390969617640569046 ADV__AdpType=Preppron|Gender=Masc|Number=Sing, 1\n",
"8673025306212932083 CCONJ___, 1\n",
"9320222956842691890 NOUN__Gender=Fem|Number=Sing, 2\n",
"9744929252790920313 NOUN__Gender=Masc|Number=Sing, 3\n",
"10786662521817145411 ADP__AdpType=Preppron|Gender=Masc|Number=Sing, 1\n",
"11966412190124123628 DET__Definite=Ind|Gender=Fem|Number=Sing|PronType=Art, 1\n",
"14916762203302091850 ADV___, 1\n",
"16249466257285059019 ADP__AdpType=Prep, 3\n"
]
}
],
"source": [
"TAG_count = doc.count_by(spacy.attrs.TAG)\n",
"\n",
"for k, v in sorted(TAG_count.items()):\n",
"    print(f\"{k} {doc.vocab[k].text}, {v}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Something to note: the coarse POS labels are built-in SpaCy symbols, so they get small integer IDs, while labels that are not in the built-in symbol table, such as most of the detailed tags, are stored as large 64-bit hash values. That's why the POS tags have much smaller numbers than the detailed tags.\n",
"\n",
"We can also do the same with the DEP tags (syntactic dependencies)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"400 advmod, 1\n",
"402 amod, 1\n",
"407 cc, 1\n",
"410 conj, 1\n",
"415 det, 2\n",
"426 nmod, 2\n",
"429 nsubj, 1\n",
"434 obj, 1\n",
"435 obl, 2\n",
"445 punct, 1\n",
"341650274070182569 fixed, 2\n",
"8110129090154140942 case, 3\n",
"8206900633647566924 ROOT, 1\n"
]
}
],
"source": [
"DEP_count = doc.count_by(spacy.attrs.DEP)\n",
"for k, v in sorted(DEP_count.items()):\n",
"    print(f\"{k} {doc.vocab[k].text}, {v}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, here too some numbers are small and others large, for the reason just described: built-in labels get small IDs and the rest get hashes. To learn all the syntactic dependency tags, here is the table\n",
"\n",
"<a href=\"https://spacy.io/api/annotation#dependency-parsing\">SpaCy DEP Tags</a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So, as we saw, we can query the tags with the underscore (\\_); if we forget the underscore we just get a number back, and with spacy.explain we can look up what the tag we are querying means\n"
]
},
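{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of that difference, using the doc from above: the attribute without the underscore is the integer code, and the one with the underscore is the readable string"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# token.pos is the integer ID; token.pos_ is the human-readable label\n",
"print(doc[3].pos, '->', doc[3].pos_)\n",
"print(spacy.explain(doc[3].pos_))"
]
},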
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Visualizing POS tags\n",
"\n",
"We just learned how to extract POS tags from tokens and get information out of them.\n",
"\n",
"Now let's see how to visualize the structure behind our tags.\n",
"\n",
"We'll import the displacy module and render these tags graphically as follows.\n",
"\n",
"**Note**: the 'dep' style means we want to visualize the dependency structure; later we'll see another visualization style"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [ | |
"<svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xml:lang=\"es\" id=\"c0f0b82e23624c58821875925c2d70f5-0\" class=\"displacy\" width=\"3200\" height=\"574.5\" direction=\"ltr\" style=\"max-width: none; height: 574.5px; color: #000000; background: #ffffff; font-family: Arial; direction: ltr\">\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"50\">El</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"50\">DET</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"225\">zorro</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"225\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"400\">pequeño</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"400\">ADJ</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"575\">saltó</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"575\">VERB</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"750\">por</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"750\">ADP</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"925\">encima</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"925\">ADV</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1100\">del</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1100\">ADP</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1275\">perro</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1275\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1450\">sin</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1450\">ADP</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1625\">tocarlo</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1625\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1800\">y</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1800\">CONJ</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1975\">lo</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1975\">PRON</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2150\">hizo</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2150\">VERB</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2325\">una</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2325\">DET</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2500\">vez</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2500\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2675\">más</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2675\">ADV</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2850\">de</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2850\">ADP</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"3025\">vuelta.</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"3025\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-0\" stroke-width=\"2px\" d=\"M70,439.5 C70,352.0 205.0,352.0 205.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-0\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">det</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M70,441.5 L62,429.5 78,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-1\" stroke-width=\"2px\" d=\"M245,439.5 C245,264.5 560.0,264.5 560.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-1\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">nsubj</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M245,441.5 L237,429.5 253,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-2\" stroke-width=\"2px\" d=\"M245,439.5 C245,352.0 380.0,352.0 380.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-2\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">amod</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M380.0,441.5 L388.0,429.5 372.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-3\" stroke-width=\"2px\" d=\"M770,439.5 C770,177.0 1265.0,177.0 1265.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-3\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">case</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M770,441.5 L762,429.5 778,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-4\" stroke-width=\"2px\" d=\"M770,439.5 C770,352.0 905.0,352.0 905.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-4\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">fixed</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M905.0,441.5 L913.0,429.5 897.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-5\" stroke-width=\"2px\" d=\"M770,439.5 C770,264.5 1085.0,264.5 1085.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-5\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">fixed</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1085.0,441.5 L1093.0,429.5 1077.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-6\" stroke-width=\"2px\" d=\"M595,439.5 C595,89.5 1270.0,89.5 1270.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-6\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">obl</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1270.0,441.5 L1278.0,429.5 1262.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-7\" stroke-width=\"2px\" d=\"M1470,439.5 C1470,352.0 1605.0,352.0 1605.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-7\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">case</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1470,441.5 L1462,429.5 1478,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-8\" stroke-width=\"2px\" d=\"M1295,439.5 C1295,264.5 1610.0,264.5 1610.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-8\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">nmod</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1610.0,441.5 L1618.0,429.5 1602.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-9\" stroke-width=\"2px\" d=\"M1820,439.5 C1820,264.5 2135.0,264.5 2135.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-9\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">cc</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1820,441.5 L1812,429.5 1828,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-10\" stroke-width=\"2px\" d=\"M1995,439.5 C1995,352.0 2130.0,352.0 2130.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-10\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">obj</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1995,441.5 L1987,429.5 2003,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-11\" stroke-width=\"2px\" d=\"M595,439.5 C595,2.0 2150.0,2.0 2150.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-11\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">conj</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2150.0,441.5 L2158.0,429.5 2142.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-12\" stroke-width=\"2px\" d=\"M2345,439.5 C2345,352.0 2480.0,352.0 2480.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-12\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">det</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2345,441.5 L2337,429.5 2353,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-13\" stroke-width=\"2px\" d=\"M2170,439.5 C2170,264.5 2485.0,264.5 2485.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-13\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">obl</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2485.0,441.5 L2493.0,429.5 2477.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-14\" stroke-width=\"2px\" d=\"M2520,439.5 C2520,352.0 2655.0,352.0 2655.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-14\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">advmod</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2655.0,441.5 L2663.0,429.5 2647.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-15\" stroke-width=\"2px\" d=\"M2870,439.5 C2870,352.0 3005.0,352.0 3005.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-15\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">case</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2870,441.5 L2862,429.5 2878,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-c0f0b82e23624c58821875925c2d70f5-0-16\" stroke-width=\"2px\" d=\"M2520,439.5 C2520,177.0 3015.0,177.0 3015.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-c0f0b82e23624c58821875925c2d70f5-0-16\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">nmod</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M3015.0,441.5 L3023.0,429.5 3007.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"</svg>" | |
], | |
"text/plain": [ | |
"<IPython.core.display.HTML object>" | |
] | |
}, | |
"metadata": {}, | |
"output_type": "display_data" | |
} | |
], | |
"source": [ | |
"from spacy import displacy\n", | |
"\n", | |
"displacy.render(doc, style='dep', jupyter=True)" | |
] | |
}, | |
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we can see that El is the determiner of zorro, pequeño is an adjective modifying it, zorro is the subject of the verb saltó, and perro is connected to the verb through por encima del, and so on.\n",
"\n",
"There isn't much more to explain here. We can also serve this visualization on a port of our machine and open it from the browser"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/harpie/.conda/envs/nlp/lib/python3.7/runpy.py:193: UserWarning: [W011] It looks like you're calling displacy.serve from within a Jupyter notebook or a similar environment. This likely means you're already running a local web server, so there's no need to make displaCy start another one. Instead, you should be able to replace displacy.serve with displacy.render to show the visualization.\n",
" \"__main__\", mod_spec)\n"
]
},
{
"data": {
"text/html": [ | |
"<!DOCTYPE html>\n", | |
"<html lang=\"es\">\n", | |
" <head>\n", | |
" <title>displaCy</title>\n", | |
" </head>\n", | |
"\n", | |
" <body style=\"font-size: 16px; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol'; padding: 4rem 2rem; direction: ltr\">\n", | |
"<figure style=\"margin-bottom: 6rem\">\n", | |
"<svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xml:lang=\"es\" id=\"417578a0e7a74fd881dfd119d9120fab-0\" class=\"displacy\" width=\"3200\" height=\"574.5\" direction=\"ltr\" style=\"max-width: none; height: 574.5px; color: #000000; background: #ffffff; font-family: Arial; direction: ltr\">\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"50\">El</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"50\">DET</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"225\">zorro</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"225\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"400\">pequeño</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"400\">ADJ</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"575\">saltó</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"575\">VERB</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"750\">por</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"750\">ADP</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"925\">encima</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"925\">ADV</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1100\">del</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1100\">ADP</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1275\">perro</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1275\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1450\">sin</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1450\">ADP</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1625\">tocarlo</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1625\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1800\">y</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1800\">CONJ</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1975\">lo</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1975\">PRON</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2150\">hizo</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2150\">VERB</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2325\">una</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2325\">DET</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2500\">vez</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2500\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2675\">más</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2675\">ADV</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"2850\">de</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"2850\">ADP</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"484.5\">\n", | |
" <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"3025\">vuelta.</tspan>\n", | |
" <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"3025\">NOUN</tspan>\n", | |
"</text>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-0\" stroke-width=\"2px\" d=\"M70,439.5 C70,352.0 205.0,352.0 205.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-0\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">det</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M70,441.5 L62,429.5 78,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-1\" stroke-width=\"2px\" d=\"M245,439.5 C245,264.5 560.0,264.5 560.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-1\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">nsubj</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M245,441.5 L237,429.5 253,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-2\" stroke-width=\"2px\" d=\"M245,439.5 C245,352.0 380.0,352.0 380.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-2\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">amod</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M380.0,441.5 L388.0,429.5 372.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-3\" stroke-width=\"2px\" d=\"M770,439.5 C770,177.0 1265.0,177.0 1265.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-3\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">case</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M770,441.5 L762,429.5 778,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-4\" stroke-width=\"2px\" d=\"M770,439.5 C770,352.0 905.0,352.0 905.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-4\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">fixed</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M905.0,441.5 L913.0,429.5 897.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-5\" stroke-width=\"2px\" d=\"M770,439.5 C770,264.5 1085.0,264.5 1085.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-5\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">fixed</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1085.0,441.5 L1093.0,429.5 1077.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-6\" stroke-width=\"2px\" d=\"M595,439.5 C595,89.5 1270.0,89.5 1270.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-6\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">obl</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1270.0,441.5 L1278.0,429.5 1262.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-7\" stroke-width=\"2px\" d=\"M1470,439.5 C1470,352.0 1605.0,352.0 1605.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-7\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">case</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1470,441.5 L1462,429.5 1478,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-8\" stroke-width=\"2px\" d=\"M1295,439.5 C1295,264.5 1610.0,264.5 1610.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-8\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">nmod</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1610.0,441.5 L1618.0,429.5 1602.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-9\" stroke-width=\"2px\" d=\"M1820,439.5 C1820,264.5 2135.0,264.5 2135.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-9\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">cc</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1820,441.5 L1812,429.5 1828,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-10\" stroke-width=\"2px\" d=\"M1995,439.5 C1995,352.0 2130.0,352.0 2130.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-10\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">obj</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M1995,441.5 L1987,429.5 2003,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-11\" stroke-width=\"2px\" d=\"M595,439.5 C595,2.0 2150.0,2.0 2150.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-11\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">conj</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2150.0,441.5 L2158.0,429.5 2142.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-12\" stroke-width=\"2px\" d=\"M2345,439.5 C2345,352.0 2480.0,352.0 2480.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-12\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">det</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2345,441.5 L2337,429.5 2353,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-13\" stroke-width=\"2px\" d=\"M2170,439.5 C2170,264.5 2485.0,264.5 2485.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-13\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">obl</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2485.0,441.5 L2493.0,429.5 2477.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-14\" stroke-width=\"2px\" d=\"M2520,439.5 C2520,352.0 2655.0,352.0 2655.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-14\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">advmod</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2655.0,441.5 L2663.0,429.5 2647.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-15\" stroke-width=\"2px\" d=\"M2870,439.5 C2870,352.0 3005.0,352.0 3005.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-15\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">case</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M2870,441.5 L2862,429.5 2878,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"\n", | |
"<g class=\"displacy-arrow\">\n", | |
" <path class=\"displacy-arc\" id=\"arrow-417578a0e7a74fd881dfd119d9120fab-0-16\" stroke-width=\"2px\" d=\"M2520,439.5 C2520,177.0 3015.0,177.0 3015.0,439.5\" fill=\"none\" stroke=\"currentColor\"/>\n", | |
" <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n", | |
" <textPath xlink:href=\"#arrow-417578a0e7a74fd881dfd119d9120fab-0-16\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">nmod</textPath>\n", | |
" </text>\n", | |
" <path class=\"displacy-arrowhead\" d=\"M3015.0,441.5 L3023.0,429.5 3007.0,429.5\" fill=\"currentColor\"/>\n", | |
"</g>\n", | |
"</svg>\n", | |
"</figure>\n", | |
"</body>\n", | |
"</html>" | |
], | |
"text/plain": [ | |
"<IPython.core.display.HTML object>" | |
] | |
}, | |
"metadata": {}, | |
"output_type": "display_data" | |
}, | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"\n", | |
"Using the 'dep' visualizer\n", | |
"Serving on http://0.0.0.0:5000 ...\n", | |
"\n", | |
"Shutting down server on port 5000.\n" | |
] | |
} | |
], | |
"source": [ | |
"displacy.serve(doc, style='dep')" | |
] | |
}, | |
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we open http://127.0.0.1:5000 in the browser, we can see this visualization"
]
},
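{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a side note (a sketch, assuming the documented displaCy options): render also accepts an options dict to tweak the layout, for example a compact mode and a custom distance between tokens"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 'compact' and 'distance' are displaCy layout options\n",
"displacy.render(doc, style='dep', jupyter=True, options={'compact': True, 'distance': 110})"
]
},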
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## NER (Named Entity Recognition)\n",
"\n",
"NER refers to locating mentions of named entities in unstructured text and classifying them into predefined categories, such as person names, organizations, locations, quantities, etc.\n",
"\n",
"SpaCy does this automatically for us. Now we'll explore how Named Entity Recognition works and also how to add our own entities.\n",
"\n",
"First, let's create a function to display these entities"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"def show_ents(doc):\n",
"    if doc.ents:\n",
"        for ent in doc.ents:\n",
"            print(ent.text, '-', ent.label_, '-', str(spacy.explain(ent.label_)))\n",
"    else:\n",
"        print('No entities found.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's create a document that contains entities."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"doc = nlp(u'Jaime quiere comprar, 25 acciones de Wallmart Supermarket por $200.000, en Chile')"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Jaime - PER - Named person or family.\n",
"Wallmart Supermarket - MISC - Miscellaneous entities, e.g. events, nationalities, products or works of art\n",
"Chile - LOC - Non-GPE locations, mountain ranges, bodies of water\n"
]
}
],
"source": [
"show_ents(doc)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we see that it recognizes Jaime as a person's name, Wallmart Supermarket as MISC, and Chile as a location.\n",
"\n",
"But it does not recognize the monetary entity \\$200.000, so I'm going to add it. To do that I need to get the label's code and attach it to a span"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"394"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from spacy.tokens import Span\n",
"\n",
"MONEY = doc.vocab.strings[u'MONEY']  # Get the code for the MONEY label\n",
"MONEY"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll locate our entity, in this case $200.000"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"$200.000"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"doc[10:12]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we'll create the new entity"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"new_ent = Span(doc, 10, 12, label=MONEY)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"doc.ents = list(doc.ents) + [new_ent]"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Jaime - PER - Named person or family.\n",
"Wallmart Supermarket - MISC - Miscellaneous entities, e.g. events, nationalities, products or works of art\n",
"$200.000 - MONEY - Monetary values, including unit\n",
"Chile - LOC - Non-GPE locations, mountain ranges, bodies of water\n"
]
}
],
"source": [
"show_ents(doc)"
]
},
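{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is also a good place for the other visualization style mentioned earlier (a minimal sketch): style='ent' highlights the named entities inline instead of drawing dependency arcs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# style='ent' renders the document with its entities highlighted inline\n",
"displacy.render(doc, style='ent', jupyter=True)"
]
},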
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll look at another way of adding NER tags: pattern matching"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"doc = nlp(u'Mi compañia creo una nueva aspiradora limpiadora. esta es la mejor aspiradora-limpiadora del mercado')"
]
},
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 25, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"No se encontraron entidades.\n" | |
] | |
} | |
], | |
"source": [ | |
"show_ents(doc)" | |
] | |
}, | |
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, no entities are found, even though there is one: a product, the **aspiradora limpiadora**. To fix this, I can use pattern matching.\n",
"\n",
"For this I'll use the PhraseMatcher, which I haven't explained before: it searches for literal strings within the text, in this case aspiradora limpiadora and aspiradora-limpiadora. Let's go"
]
},
{ | |
"cell_type": "code", | |
"execution_count": 26, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"from spacy.matcher import PhraseMatcher\n", | |
"\n", | |
"phrase_list = ['aspiradora limpiadora', 'aspiradora-limpiadora ']\n", | |
"matcher = PhraseMatcher(nlp.vocab)\n", | |
"phrase_patterns = [nlp(text) for text in phrase_list]" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 27, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"[aspiradora limpiadora, aspiradora-limpiadora ]" | |
] | |
}, | |
"execution_count": 27, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"phrase_patterns" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Lo que hacemos en el list comprehension es procesar nuestras frases" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 28, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"matcher.add('producto', None, *phrase_patterns)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"El **\"\\*\"** es lo que se conoce como spread operator, lo que hace es sacar los elementos de una lista" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 29, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"found_matches = matcher(doc)" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 30, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"[(5511689005071304127, 5, 7), (5511689005071304127, 12, 15)]" | |
] | |
}, | |
"execution_count": 30, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"found_matches" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Ahora buscaremos la etiqueta PRODUCT para productos de la misma manera que antes" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 31, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"PROD = doc.vocab.strings[u'PRODUCT']" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Y con esto ya podemos generar nuestros Spans" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 32, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"spans = [Span(doc, match[1], match[2], label=PROD) for match in found_matches]" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Y agregamos nuestras entidades al documento" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 33, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"doc.ents = list(doc.ents) + spans" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 34, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"aspiradora limpiadora - PRODUCT - Objects, vehicles, foods, etc. (not services)\n", | |
"aspiradora-limpiadora - PRODUCT - Objects, vehicles, foods, etc. (not services)\n" | |
] | |
} | |
], | |
"source": [ | |
"show_ents(doc)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"## 4. Segmentación de oraciones\n", | |
"\n", | |
"En el documento anterior, vi muy por encima que los objetos tipo **Doc**, también estaban divididos en oraciones, ahora vamos a aprender como funciona la segmentación de oraciones, y como hacer nuestras propias reglas de segmentación, para separar los objetos **Doc** en oraciones.\n", | |
"\n", | |
"Como siempre primero vamos a crear un objeto **Doc**" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 35, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"doc = nlp('Primera oracion. Segunda oracion. Tercera oracion')" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Para mostrar las oraciones debemos entrar a la propiedad **sents** de nuestro objeto **doc**" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 36, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"Primera oracion.\n", | |
"Segunda oracion.\n", | |
"Tercera oracion\n" | |
] | |
} | |
], | |
"source": [ | |
"for sent in doc.sents:\n", | |
" print(sent)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"De esta forma iteramos nuestras oraciones, cabe notar que la propiedad **sents** es un generador, por lo que no podremos consultarlo en forma de lista" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 37, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"ename": "TypeError", | |
"evalue": "'generator' object is not subscriptable", | |
"output_type": "error", | |
"traceback": [ | |
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", | |
"\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)", | |
"\u001b[0;32m<ipython-input-37-c1472b4bcc1a>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mdoc\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msents\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", | |
"\u001b[0;31mTypeError\u001b[0m: 'generator' object is not subscriptable" | |
] | |
} | |
], | |
"source": [ | |
"doc.sents[0]" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Pero en caso de necesitar hacerlo podemos pasarlo a una lista con la funcion list de python" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 38, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"Primera oracion." | |
] | |
}, | |
"execution_count": 38, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"list(doc.sents)[0]" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Lo mismo es aplicable a todos las propiedades de tipo generador en python.\n", | |
"\n", | |
"También si queremos aplicar nuestras propias reglas de segmentación, podemos hacerlo, veamos un ejemplo" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 39, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"doc = nlp(u'\"No te creas todo lo que leas en internet; no porque haya una foto al lado es real\" -Abraham Lincon')" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Entonces veamos que pasa si extraemos las oraciones de este texto" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 40, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"\"No te creas todo lo que leas en internet; no porque haya una foto al lado es real\"\n", | |
"-Abraham Lincon\n" | |
] | |
} | |
], | |
"source": [ | |
"for sent in doc.sents:\n", | |
" print(sent)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"No es que esté del todo mal, pero en algunos casos necesitamos que nos separe en el punto y coma, entonces para hacer eso, lo que haremos es definir una función con reglas personalizadas para hacer esa segmentación" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 41, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"def nueva_regla_de_segmentacion(doc):\n", | |
" for token in doc[:-1]: # Acceder a todos los tokens menos al ultimo\n", | |
" if token.text == ';':\n", | |
" doc[token.i + 1].is_sent_start = True\n", | |
" return doc" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"La idea es acceder a todos los tokens menos al ultimo por eso seleccionamos doc\\[-1], y escoger el token que esta después del token ___;___ que es un espacio, si no lo hacemos nos llevamos el token en si, entonces ahora agregamos la regla como pipeline del objeto NLP" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 42, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"nlp.add_pipe(nueva_regla_de_segmentacion, before='parser')" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 43, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"['tagger', 'nueva_regla_de_segmentacion', 'parser', 'ner']" | |
] | |
}, | |
"execution_count": 43, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"nlp.pipe_names" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Ahora volvamos a probar nuestra segmentación de oraciones" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 44, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"doc2 = nlp(u'\"No te creas todo lo que leas en internet ; no porque haya una foto al lado es real\" -Abraham Lincon')" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 45, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"\"No te creas todo lo que leas en internet ;\n", | |
"no porque haya una foto al lado es real\" -Abraham\n", | |
"Lincon\n" | |
] | |
} | |
], | |
"source": [ | |
"for sent in doc2.sents:\n", | |
" print(sent)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Y es así como agregamos nuevas reglas de segmentación." | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"## 5.0 Machine Learning (Pincelada Teórica)\n", | |
"\n", | |
"El objetivo para esta parte será entender conceptos básicos de machine learning para clasificación de textos, entender las métricas y como evaluar modelos de clasificación de texto, por lo que antes de empezar e irnos directo al código vamos a repasar la teoría.\n", | |
"\n", | |
"Antes de entrar a la clasificación de texto, quiero que entiendan a nivel general el proceso de **machine learning** que vamos a usar, que se conoce como **Supervised Machine Learning** o **Aprendizaje de maquina supervisado**.\n", | |
"\n", | |
"Igual para el machine learning hay que tener una base de matemáticas, pero para dejar esto un poco más \"light\" y no entrar tanto en ese tipo de detalles, vamos a dejar de lado la matemática. Igualmente si estas interesado en adquirir esos conocimientos te recomiendo el libro. **Introduction to Statical Learning** de Garet James, no es gratis, pero es fácil de encontrar en google.\n", | |
"\n", | |
"Ahora debemos responder la pregunta, ¿Que es machine learning?. El Machine Learning (Aprendizaje de maquina) es un método de análisis de datos que automatiza la construcción de modelos analíticos.\n", | |
"\n", | |
"Para aprender usa algoritmos iterativos, en la mayoría de los casos recursivos, para reconocer patrones que no están estrictamente definidos en los datos, hoy en día el machine learning se usa para muchas cosas, por decirlo así el NLP es solo la punta del Iceberg. Algunas de las aplicaciones son\n", | |
"\n", | |
"- Detección de fraude\n", | |
"- Resultados de búsquedas por internet\n", | |
"- Traducción\n", | |
"- Reconocimiento de patrones\n", | |
"- Filtro de Spam en email\n", | |
"- Reconocimiento y clasificación de imágenes\n", | |
"\n", | |
"## 5.1 Aprendizaje Supervisado\n", | |
"\n", | |
"El **Aprendizaje Supervisado** es una técnica donde se usan ejemplos etiquetados, es decir, le paso la pregunta y la respuesta, y cuando le pase una información nunca antes vista por la maquina me devolverá una respuesta que puede ser correcta o incorrecta, pero eso se juzga con las métricas.\n", | |
"\n", | |
"**NO HAY MODELOS 100% EFECTIVOS**. Hay que sacarse de la cabeza que mi modelo es infalible pero si muy preciso, pero nunca llegara al 100% de efectividad, en el caso de filtro de spam tendría que pasarle datos como\n", | |
"\n", | |
"\"Hola tenemos una oferta para ti\", \"Spam\"\n", | |
"\"Hola, que tal estas\", \"No Spam\"\n", | |
"\n", | |
"Entonces nuestro algoritmo recibirá una serie de datos de entada con su correspondiente respuesta correcta, y el algoritmo comparara la respuesta que da con la respuesta real para encontrar errores y autoajustarse en fin de minimizar ese error, estos parámetros se modifican solos.\n", | |
"\n", | |
"Los algoritmos de aprendizaje supervisado porque a través de datos históricos, pueden predecir sucesos futuros, para que lo entendamos, si yo hago un filtro de spam y taggeo toda mi bandeja de entrada como spam y no spam y lo paso por el algoritmo la próxima vez que me llegue un correo de spam el algoritmo reconocerá solo que es spam y lo clasificara en mi bandeja de spam.\n", | |
"\n", | |
"Ahora veremos los pasos que debemos seguir para implementar un modelo de aprendizaje supervisado\n", | |
"\n", | |
"1. Adquirir los datos: Los datos se pueden adquirir de varios sitios, pero, la mayoría se obtienen a través de encuestas, clientes o datos públicos en internet\n", | |
"\n", | |
"2. Limpiar los datos: Los datos en estado crudos deben pasar por un proceso de limpieza, donde eliminamos valores redundantes o cualquier error de los datos para que no afecten en el proceso de aprendizaje, también debemos convertir nuestros datos a vectores numéricos que nuestro algoritmo pueda entender.\n", | |
"\n", | |
"3. Separar los datos en dos conjuntos, Entrenamiento y Prueba: Hacemos esto para tener un respaldo a la hora de probar nuestro modelo, por lo que una parte de los datos la vamos a reservar y no pasarla por el algoritmo para ver si funciona o debemos modificar nuestros parámetros o limpiar mejor los datos.\n", | |
"\n", | |
"4. Entrenar el modelo: Entonces una vez hayamos hecho todos estos pasos, debemos entrenar nuestro modelo pasándole los datos de entrenamiento\n", | |
"\n", | |
"5. Probar el modelo: Para esto corremos una iteración y comprobamos resultados con métricas, la matriz de confusión, es que nos da toda la información de los aciertos y desaciertos del modelo, ya que agrupa varias métricas, en caso de tener muchos desaciertos debemos repetir desde el paso 2 al 5.\n", | |
"\n", | |
"6. Desplegar el modelo, o llevarlo a producción, dependiendo de lo que estemos haciendo\n", | |
"\n", | |
"## 5.2 Métricas de clasificación\n", | |
"\n", | |
"Una vez que el proceso de aprendizaje automático está completo, debemos evaluar el desempeño de este modelo, para eso usaremos el set de prueba.\n", | |
"\n", | |
"Para entender las métricas de clasificación, debemos entender los siguientes conceptos clave\n", | |
"\n", | |
"1. Accuracy (Precisión)\n", | |
"2. Recall (Recordar)\n", | |
"3. Presicion (Presicion)\n", | |
"4. F1-Score (Puntaje F1) este es una combinación entre la precision y el recall\n", | |
"\n", | |
"Pero antes de entender estas métricas, debemos entender el razonamiento detrás de estas métricas y como funcionan en el mundo real.\n", | |
"\n", | |
"Típicamente en cualquier clasificación, nuestro modelo podrá solo alcanzar dos resultado:\n", | |
"\n", | |
"- La predicción a partir de los es correcta\n", | |
"- La predicción a partir de los datos es incorrecta\n", | |
"\n", | |
"Afortunadamente la predicción **correcto vs incorrecto** soporta una cantidad de \"features\" multiples.\n", | |
"\n", | |
"Pero para explicar las métricas, imaginemos que tenemos una situación de **Clasificación Binaria**, donde solo hay dos salidas posibles Si o No.\n", | |
"\n", | |
"En nuestro ejemplo es el clásico de que si un Email es \"Spam\" o \"Ham\" (Ham significa que el contenido es seguro, y no es spam). Este al ser un proceso de aprendizaje supervisado, tenemos los sets de entrenamiento y prueba, entonces una vez que pasemos todos los valores de prueba, tendremos la predicción de el modelo y nuestros resultados reales.\n", | |
"\n", | |
"Cabe recordar que al trabajar con texto tendremos que procesar los datos para que nuestro modelo los pueda entender, es como hacer un a traducción de texto crudo a algo que pueda entender mi modelo para trabajar con él.\n", | |
"\n", | |
"Si quieres saber como se ve un texto vectorizado, es más o menos así.\n", | |
"\n", | |
"<img src=\"https://freecontent.manning.com/wp-content/uploads/Chollet_DLfT_01.png\">\n", | |
"\n", | |
"Pero esto lo veremos más adelante, asique volvamos a lo que teníamos\n", | |
"\n", | |
"Tenemos un modelo entrenado y este modelo solo se entreno con los datos de entrenamiento, es decir, que no conoce los datos de prueba, entonces ahora tomamos un dato de prueba y lo pasamos por el modelo y este nos va a predecir un valor, entonces ese valor predicho por el modelo lo comparamos con el valor real que igual lo tenemos\n", | |
"\n", | |
"Predicho = Ham\n", | |
"Valor Real = Spam\n", | |
"\n", | |
"Ham == Spam ! esto es lo que se conoce como error\n", | |
"\n", | |
"Entonces ahora lo que haremos es repetir este proceso para todos los valores de nuestro set de prueba y al final tendremos una cantidad de coincidencias que serán correctas o incorrectas (Error o No Error). Aun así no podemos fiarnos de estas métricas, puede que aquí nuestro modelo tenga un desempeño distinto a cuando lo pongamos en producción\n", | |
"\n", | |
"Ahora vamos a traer de vuelta los cuatro valores de los que hablamos y veremos como se calculan\n", | |
"\n", | |
"1. Accuracy (Precisión): La precisión es la cantidad de respuestas correctas divididas por el numero total de predicciones, por eso se llama precisión, porque no esta calculando las respuestas incorrectas, solo está teniendo en cuenta los casos en donde la predicción fue igual a la de nuestros datos reales, por ejemplo si tengo 80 respuestas correctas de 100 el calculo seria el siguiente 80/100 = 0.8 o el 80% de precisión, esta métrica es muy útil cuando nuestras clases están bien equilibradas, por ejemplo tener el mismo numero de respuestas Si o No dentro del dataset.\n", | |
"\n", | |
"Los conceptos de Recall, Precisión y F1-Score, serán abordados en la siguiente sección que es la de matriz de confusión pero aquí daré una definición formal de lo que es\n", | |
"\n", | |
"2. Recall: Es la habilidad del modelo de encontrar **Todos** los casos relevantes dentro del dataset. La definición precisa es: \"El numero de **positivos verdaderos** dividido por el numero de **positivos verdaderos** más el numero de **falsos negativos**. Cuando lo veamos dentro de la matriz de confusión esta definición tendrá mucho más sentido\n", | |
"\n", | |
"3. Presicion: Es la habilidad del modelo de identificar **solo** los puntos relevantes de nuestros datos, la definición precisa es: \"El numero de **positivos verdaderos** dividido por el **numero de positivos verdaderos** más el numero de **falsos positivos**.\n", | |
"\n", | |
"Recall y presicion: Aveces solemos asociar los dos al mismo concepto, pero mientras el recall expresa la habilidad de encontrar instancias relevantes en el dataset, La posición expresa la proporción de los puntos relevantes que considera nuestro modelo\n", | |
"\n", | |
"4. F1-Score: Hay casos donde queremos encontrar la mezcla optima de la presicion y el recall, para ello podemos convinar ambos en algo que se llama F1-Score. El F1-Score es el promedio armónico de la presicion la ecuación seria la siguiente:\n", | |
"<img src=\"https://cdn-images-1.medium.com/max/1600/1*UJxVqLnbSj42eRhasKeLOA.png\">\n", | |
"\n", | |
"## 5.3 Matriz de confusión\n", | |
"\n", | |
"Anteriormente habíamos dicho que la \"matriz de confusión\" reunía varias métricas, y nos daba más información del desempeño de nuestro modelo, ahora pasaremos a entender como leer estas metricas y a interpretar la matriz de confusión, así vamos a entender mejor el recall y la presicion.\n", | |
"\n", | |
"En un problema de clasificación, durante la fase de \"testing\", tenemos dos categorías\n", | |
"\n", | |
"- Condición Verdadera: Un mensaje de texto que es Spam\n", | |
"- Condición Predecida: La predicción del modelo\n", | |
"\n", | |
"Esto significa que tenemos dos tipos de clases distintas, las clasificadas correctamente, y las clasificadas incorrectamente, entonces basándonos en esto, vamos a agrupar estas condiciones en una tabla\n", | |
"\n", | |
"| Populacion Total | Predicción positiva | Predicción negativa |\n", | |
"|------|------|------|\n", | |
"| Condición Real Positiva | Verdadero Positivo | Falso Negativo\n", | |
"| Condición Real negativa | Falso Positivo | Verdadero Negativo\n", | |
"\n", | |
"Y esto es a lo que llamamos Matriz de confusión, los \"Verdaderos\" es cuando la predicción es correcta los \"Falsos\" es cuando no calza la predicción con la etiqueta o dato real.\n", | |
"\n", | |
"En esta imagen se pueden ver las métricas aplicables a la matriz de confusión aquí las tenemos todas\n", | |
"\n", | |
"<img src=\"https://i.ibb.co/R0x3kkj/Captura-de-pantalla-de-2019-06-05-23-23-51.jpg\"></img>\n", | |
"**Fuente: <a href=\"https://en.wikipedia.org/wiki/Confusion_matrix\">Wikipedia</a>**\n", | |
"\n", | |
"Pero, ¿Que métricas aplico?, eso depende del problema, las principales ya las dijimos, recall precision y f1-score, pero en algunos casos se pueden aplicar otras. La idea es leer sobre este tema para saber que métricas se ajustan a mi problema.\n", | |
"\n", | |
"## 5.4 Revisión no practica de como funciona scikit learn\n", | |
"\n", | |
"**Scikit Learn**, es la librería de machine learning más popular de python porque tiene todos los algoritmos de Machine Learning que se usan, incluso los menos populares, y ademas tiene una basta y detallada documentación del uso de esos algoritmos, si quieres llevar tus conocimientos a otro nivel, te recomiendo pasarte por la documentación de esta librería\n", | |
"\n", | |
"<a href=\"https://scikit-learn.org/stable/documentation.html\"> Documentación de Scikit Learn</a>\n", | |
"\n", | |
"Si nunca has utilizado Scikit Learn su instalación es muy fácil se puede hacer a través de pip y conda\n", | |
"\n", | |
"Pip: pip install scikit-learn\n", | |
"Conda: conda install scikit-learn\n", | |
"\n", | |
"Ahora vamos a hablar de la estructura básica de Sckit Learn, si recordamos el proceso de Machine Learning tenemos que:\n", | |
"\n", | |
"1. Adquirir los datos\n", | |
"\n", | |
"2. Limpiar los datos\n", | |
"\n", | |
"3. Separar los datos en dos conjuntos, Entrenamiento y Prueba\n", | |
"\n", | |
"4. Entrenar el modelo:\n", | |
"\n", | |
"5. Probar el modelo:\n", | |
"\n", | |
"6. Desplegar el modelo, o llevarlo a producción, dependiendo de lo que estemos haciendo\n", | |
"\n", | |
"Ahora lo que veremos de forma muy rápida como llevar a cabo este proceso con Scikit Learn, no te preocupes por memorizar nada, porque vamos a hacerlo en código más adelante.\n", | |
"\n", | |
"Todos los algoritmos disponibles en Scikit Learn son expuestos vía un \"Estimator\".\n", | |
"\n", | |
"Primero para importar el modelo que quieres usar se usa esta sintaxis general:\n", | |
"\n", | |
"~~~\n", | |
"from sklearn.familia import Modelo\n", | |
"\n", | |
"Ejemplo:\n", | |
"from sklearn.linear_model import LinearRegression\n", | |
"~~~\n", | |
"\n", | |
"Para importar cualquier modelo se tiene que hacer así, tenemos que importar desde sklearn la familia de modelos que vamos a escoger, y el modelo.\n", | |
"\n", | |
"En este caso linear_model es la familia que contiene todos los modelos de estimación lineal, e importamos el modelo de regresión lineal\n", | |
"\n", | |
"Hay que tener en cuenta que todos los estimadores tienen opciones, que pueden ser seteados al instanciar el modelo, y también tienen valores por defecto, estos valores se pueden consultar en la documentación, o en jupyter notebook con shift+tab para revisar los parámetros posibles.\n", | |
"\n", | |
"Una vez configurado el modelo, podemos separar los datos en set de prueba y entrenamiento scikit learn nos trae una función para eso y se usa de la siguiente manera\n", | |
"\n", | |
"~~~\n", | |
"from sklearn.model_selection import train_test_split\n", | |
"\n", | |
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\n", | |
"~~~\n", | |
"\n", | |
"Para entender esta función, se le pasan los vectores X e y el tamaño del set de prueba siendo 1 el 100% aquí apartando el 30% para el test\n", | |
"\n", | |
"para entrenar el modelo tenemos que usar la funcion fit\n", | |
"\n", | |
"~~~\n", | |
"model.fit(X_test, y_test)\n", | |
"~~~\n", | |
"\n", | |
"Que aquí se efectuaran las funciones iterativas de las que hablamos antes.\n", | |
"\n", | |
"Y para predecir usaremos la función predict.\n", | |
"\n", | |
"~~~\n", | |
"model.predict(Algun valor)\n", | |
"~~~\n", | |
"\n", | |
"Así básicamente es como funciona, una vez dicho esto, manos a la obra" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"## 6. Manos a la obra\n", | |
"\n", | |
"El dataset con el que estaremos trabajando es el de Spam que lo puedes encontrar aquí\n", | |
"\n", | |
"<a href=\"https://www.kaggle.com/uciml/sms-spam-collection-dataset/downloads/sms-spam-collection-dataset.zip/1\">Dataset Kaggle</a>\n", | |
"\n", | |
"lo primero es instalar las siguientes librerías\n", | |
"\n", | |
"Numpy\n", | |
"Pandas\n", | |
"Scikit Lern\n", | |
"comandos:\n", | |
"pip install numpy\n", | |
"pip install pandas\n", | |
"pip install scikit-learn\n", | |
"\n", | |
"lo primero que haremos es abrir nuestro archivo csv, e importar numpy para trabajar con el" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 46, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"Index(['v1', 'v2', 'Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4'], dtype='object')" | |
] | |
}, | |
"execution_count": 46, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"import numpy as np\n", | |
"import pandas as pd\n", | |
"\n", | |
"df = pd.read_csv('spam.csv',encoding = \"ISO-8859-1\")\n", | |
"df.columns" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Esta linea de codigo nos devolvera un objeto de tipo DataFrame que contendra nuestro CSV, ahora revisaremos la estructura de nuestros datos" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 47, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/html": [ | |
"<div>\n", | |
"<style scoped>\n", | |
" .dataframe tbody tr th:only-of-type {\n", | |
" vertical-align: middle;\n", | |
" }\n", | |
"\n", | |
" .dataframe tbody tr th {\n", | |
" vertical-align: top;\n", | |
" }\n", | |
"\n", | |
" .dataframe thead th {\n", | |
" text-align: right;\n", | |
" }\n", | |
"</style>\n", | |
"<table border=\"1\" class=\"dataframe\">\n", | |
" <thead>\n", | |
" <tr style=\"text-align: right;\">\n", | |
" <th></th>\n", | |
" <th>v1</th>\n", | |
" <th>v2</th>\n", | |
" </tr>\n", | |
" </thead>\n", | |
" <tbody>\n", | |
" <tr>\n", | |
" <th>0</th>\n", | |
" <td>ham</td>\n", | |
" <td>Go until jurong point, crazy.. Available only ...</td>\n", | |
" </tr>\n", | |
" <tr>\n", | |
" <th>1</th>\n", | |
" <td>ham</td>\n", | |
" <td>Ok lar... Joking wif u oni...</td>\n", | |
" </tr>\n", | |
" <tr>\n", | |
" <th>2</th>\n", | |
" <td>spam</td>\n", | |
" <td>Free entry in 2 a wkly comp to win FA Cup fina...</td>\n", | |
" </tr>\n", | |
" <tr>\n", | |
" <th>3</th>\n", | |
" <td>ham</td>\n", | |
" <td>U dun say so early hor... U c already then say...</td>\n", | |
" </tr>\n", | |
" <tr>\n", | |
" <th>4</th>\n", | |
" <td>ham</td>\n", | |
" <td>Nah I don't think he goes to usf, he lives aro...</td>\n", | |
" </tr>\n", | |
" </tbody>\n", | |
"</table>\n", | |
"</div>" | |
], | |
"text/plain": [ | |
" v1 v2\n", | |
"0 ham Go until jurong point, crazy.. Available only ...\n", | |
"1 ham Ok lar... Joking wif u oni...\n", | |
"2 spam Free entry in 2 a wkly comp to win FA Cup fina...\n", | |
"3 ham U dun say so early hor... U c already then say...\n", | |
"4 ham Nah I don't think he goes to usf, he lives aro..." | |
] | |
}, | |
"execution_count": 47, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"\n", | |
"df = df[['v1', 'v2']] #Sacamos las columnas que no nos interesan\n", | |
"df.head()" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Para saber la cantidad de datos, que tenemos usaremos la función describe de pandas" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 48, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/html": [ | |
"<div>\n", | |
"<style scoped>\n", | |
" .dataframe tbody tr th:only-of-type {\n", | |
" vertical-align: middle;\n", | |
" }\n", | |
"\n", | |
" .dataframe tbody tr th {\n", | |
" vertical-align: top;\n", | |
" }\n", | |
"\n", | |
" .dataframe thead th {\n", | |
" text-align: right;\n", | |
" }\n", | |
"</style>\n", | |
"<table border=\"1\" class=\"dataframe\">\n", | |
" <thead>\n", | |
" <tr style=\"text-align: right;\">\n", | |
" <th></th>\n", | |
" <th>v1</th>\n", | |
" <th>v2</th>\n", | |
" </tr>\n", | |
" </thead>\n", | |
" <tbody>\n", | |
" <tr>\n", | |
" <th>count</th>\n", | |
" <td>5572</td>\n", | |
" <td>5572</td>\n", | |
" </tr>\n", | |
" <tr>\n", | |
" <th>unique</th>\n", | |
" <td>2</td>\n", | |
" <td>5169</td>\n", | |
" </tr>\n", | |
" <tr>\n", | |
" <th>top</th>\n", | |
" <td>ham</td>\n", | |
" <td>Sorry, I'll call later</td>\n", | |
" </tr>\n", | |
" <tr>\n", | |
" <th>freq</th>\n", | |
" <td>4825</td>\n", | |
" <td>30</td>\n", | |
" </tr>\n", | |
" </tbody>\n", | |
"</table>\n", | |
"</div>" | |
], | |
"text/plain": [ | |
" v1 v2\n", | |
"count 5572 5572\n", | |
"unique 2 5169\n", | |
"top ham Sorry, I'll call later\n", | |
"freq 4825 30" | |
] | |
}, | |
"execution_count": 48, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"df.describe()" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"En este caso tenemos 5572 valores, ahora vamos a entrar al siguiente tema" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"## 6.1. Teoría del Text Feature Extraction (Extracción de características del texto)\n", | |
"\n", | |
"La mayoría de los algoritmos de machine learning, no pueden procesar texto crudo, entonces, necesitamos hacer un proceso que se llama Feature Extraction, en el que tomaremos el texto, haremos este proceso con el fin de pasar de texto a un vector numérico (Feature Vector) para que nuestro algoritmo sea capaz de trabajar con él.\n", | |
"\n", | |
"Ahora vamos a discutir el primer punto que es:\n", | |
"\n", | |
"- Count Vectorization:\n", | |
"\n", | |
"Para entender este método imagina que tienes una lista de mensajes\n", | |
"\n", | |
"~~~\n", | |
"mensages = ['Hola, vamos a ir a ver un concierto hoy',\n", | |
"'Llama a tu hermana',\n", | |
"'¿Saldras a pasear a los perros?]\n", | |
"~~~\n", | |
"\n", | |
"Esto es parecido al dataset que estamos tratando, lo que vamos a hacer es usar Scikit Learn para hacer el proceso de Count Vectorization\n", | |
"\n", | |
"~~~\n", | |
"from sklearn.feature_extraction.text import CountVectorizer\n", | |
"vec = CountVectorizer()\n", | |
"~~~\n", | |
"\n", | |
"Cabe resaltar que la sintaxis, es igual a cuando tratamos de localizar un modelo de machine learning.\n", | |
"\n", | |
"Entonces lo que haremos es pasar el texto por el algoritmo y a tratar cada termino como un numero, por lo que nos quedaría algo así.\n", | |
"\n", | |
"<img src=\"https://www.darrinbishop.com/wp-content/uploads/2017/10/Document-Term-Matrix-with-Title-1.png\" />\n", | |
"\n", | |
"Entonces tenemos solo un numero por cada palabra de un documento, por lo que, las palabras repetidas no tendrán valores repetidos.\n", | |
"\n", | |
"- Tfidfvectorizer:\n", | |
"\n", | |
"Una alternativa al Count vectorizer es el Tfidfvectorizer, que también crea un vector numérico de nuestro texto, Aunque en vez de en vez de convertir una palabra a numero solamente contando tokens, este algoritmo calcula algo que se llama **Term Frequency-Inverse Document Frecuency (TF-IDF)**.\n", | |
"\n", | |
"Esto consiste en contar la frecuencia con la que un termino o palabra aparece dentro de nuestros datos, básicamente se dice, cuantas veces este termino aparece dentro del texto.\n", | |
"\n", | |
"Aun así, la frecuencia de los términos no es suficiente, por lo que después de aplicar el count vectorizer se debe hacer un análisis y sacar características del texto, por la razón de que hay muchas \"stop words\" que son palabras que se repiten pero no le agregan significado a un texto, como lo vimos anteriormente, entonces el algoritmo tiene a enfatizar esas palabras, y les da un valor más alto del que deberían por su frecuencia, en vez de su significado dentro del texto.\n", | |
"\n", | |
"Entonces, la frecuencia inversa, es un factor incorporado, que evita la importancia de términos muy frecuentes e incrementa la importancia de los documentos que están presentes con menos frecuencia, para ocuparlo con sklearn se importa de la misma manera que el CountVectorizer\n", | |
"\n", | |
"~~~\n", | |
"from sklearn.feature_extraction.text import TfidfVectorizer\n", | |
"vec = TfidVectorizer()\n", | |
"dtm = vect.fit_transform(mensajes)\n", | |
"~~~\n", | |
"\n", | |
"Ahora veremos como se hace esto en la practica\n", | |
"\n", | |
"## 6.2. Text Feature Extraction Practico\n", | |
"\n", | |
"primero veremos si tenemos valores faltantes dentro del dataset, para eso usare la función is null, que devuelve verdadero o falso, y sumare esos valores booleanos, lo que pasara es que True = 1 y False = 0, por lo que si hay valores faltantes voy a tener un numero > a 0" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 49, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"v1 0\n", | |
"v2 0\n", | |
"dtype: int64" | |
] | |
}, | |
"execution_count": 49, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"df.isnull().sum()" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"En este caso no me faltan valores, ahora vamos a ver nuestros valores de label, con la función, value_counts" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 50, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"ham 4825\n", | |
"spam 747\n", | |
"Name: v1, dtype: int64" | |
] | |
}, | |
"execution_count": 50, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"df['v1'].value_counts()" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"En este caso tengo 4825 valores de spam, y 747 valores de ham, ahora procederé a hacer el set de entrenamiento y prueba." | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 51, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"from sklearn.model_selection import train_test_split\n", | |
"X = df['v2']#mensajes\n", | |
"y = df['v1']#target" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"La X va en mayúscula porque representa el vector más grande e y es solo un vector unidimensional." | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 52, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.33, random_state=42)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Aquí defino un random state para asegurarme que los resultados del lector sean iguales a los míos.\n", | |
"\n", | |
"Vamos a la vectorizacion" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 53, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"from sklearn.feature_extraction.text import CountVectorizer" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 54, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"count_vect = CountVectorizer()" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Ahora si revisamos el vector X aún es texto crudo" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 55, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"0 Go until jurong point, crazy.. Available only ...\n", | |
"1 Ok lar... Joking wif u oni...\n", | |
"2 Free entry in 2 a wkly comp to win FA Cup fina...\n", | |
"3 U dun say so early hor... U c already then say...\n", | |
"4 Nah I don't think he goes to usf, he lives aro...\n", | |
"Name: v2, dtype: object" | |
] | |
}, | |
"execution_count": 55, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"X[:5]" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Pero como dijimos, lo que quiero es pasarlo a números.\n", | |
"\n", | |
"Ahora vamos a usar la funcion fit, de nuestro CountVectorizer, que básicamente, arma el vocabulario, cuenta el numero de palabras, entre otras cosas.\n", | |
"\n", | |
"Y transformaremos nuestros mensajes de texto a vectores procesables por un algoritmo de machine learning" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 56, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"count_vect.fit(X_train) # Primer Paso \n", | |
"X_train_counts = count_vect.transform(X_train) #Transformacion a vector" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Hay otra forma de hacerlo, que es más rápida y hace lo mismo que yo efectúe en estas dos líneas de código pero en solo una, y se llama fit_transform" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 57, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"X_train_counts = count_vect.fit_transform(X_train)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Aquí tenemos una matriz, que parece gigante, pero Scikit Learn la comprimió para nosotros, por lo que en vez de ser una matriz gigante, suprimió todos los valores que no eran importantes y nos arrojo una matriz de 2 dimensiones" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 58, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"(3733,)\n" | |
] | |
}, | |
{ | |
"data": { | |
"text/plain": [ | |
"(3733, 7057)" | |
] | |
}, | |
"execution_count": 58, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"print(X_train.shape)\n", | |
"X_train_counts.shape" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Entonces después de esta vectorizacion, necesitamos hacer el proceso de TFIDF para calibrar los pesos de estos valores, porque recordemos que el count vectorizer solo generaba valores de frecuencia, y dijimos que el TFIDF suprimía las frecuencias más frecuentes y realzaba las más raras por lo que vamos allá" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 59, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"from sklearn.feature_extraction.text import TfidfVectorizer\n", | |
"vect = TfidfVectorizer()\n", | |
"X_train_tfidf = vect.fit_transform(X_train)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Y con esto estamos listos para entrenar el modelo" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 60, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,\n", | |
" intercept_scaling=1, loss='squared_hinge', max_iter=1000,\n", | |
" multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,\n", | |
" verbose=0)" | |
] | |
}, | |
"execution_count": 60, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"from sklearn.svm import LinearSVC\n", | |
"clf = LinearSVC()\n", | |
"clf.fit(X_train_tfidf, y_train)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Antes de cerrar esto, quiero aclarar unas cosas, también podemos hacer nuestro vectorizado TFIDF a partir del count vector con el TFIDFTransformer como lo haré aquí" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 61, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"from sklearn.feature_extraction.text import TfidfTransformer\n", | |
"vect = TfidfTransformer()\n", | |
"\n", | |
"X_train_tfidf = vect.fit_transform(X_train_counts)" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 62, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,\n", | |
" intercept_scaling=1, loss='squared_hinge', max_iter=1000,\n", | |
" multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,\n", | |
" verbose=0)" | |
] | |
}, | |
"execution_count": 62, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"from sklearn.svm import LinearSVC\n", | |
"clf = LinearSVC()\n", | |
"clf.fit(X_train_tfidf, y_train)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"El resultado seguiría siendo igual.\n", | |
"\n", | |
"Y también puedo agregar el vectorizado de forma automática como un pipeline de la siguiente manera" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 63, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"from sklearn.pipeline import Pipeline\n", | |
"text_clf = Pipeline([('tfidf',TfidfVectorizer()), ('clf', LinearSVC())])" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"De este modo pasaría el texto crudo y se haría el proceso automáticamente" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 64, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"Pipeline(memory=None,\n", | |
" steps=[('tfidf',\n", | |
" TfidfVectorizer(analyzer='word', binary=False,\n", | |
" decode_error='strict',\n", | |
" dtype=<class 'numpy.float64'>,\n", | |
" encoding='utf-8', input='content',\n", | |
" lowercase=True, max_df=1.0, max_features=None,\n", | |
" min_df=1, ngram_range=(1, 1), norm='l2',\n", | |
" preprocessor=None, smooth_idf=True,\n", | |
" stop_words=None, strip_accents=None,\n", | |
" sublinear_tf=False,\n", | |
" token_pattern='(?u)\\\\b\\\\w\\\\w+\\\\b',\n", | |
" tokenizer=None, use_idf=True,\n", | |
" vocabulary=None)),\n", | |
" ('clf',\n", | |
" LinearSVC(C=1.0, class_weight=None, dual=True,\n", | |
" fit_intercept=True, intercept_scaling=1,\n", | |
" loss='squared_hinge', max_iter=1000,\n", | |
" multi_class='ovr', penalty='l2', random_state=None,\n", | |
" tol=0.0001, verbose=0))],\n", | |
" verbose=False)" | |
] | |
}, | |
"execution_count": 64, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"text_clf.fit(X_train, y_train)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Y esto es extremadamente más eficiente que hacer los procesos por separado" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"## 6.3 Metricas Aplicadas\n", | |
"\n", | |
"Ahora vamos a medir un poco el comportamiento del modelo con la matriz de confusión, sin más que explicar vamos allá" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 65, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"from sklearn.metrics import confusion_matrix, classification_report" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Primero voy a hacer que mi pipeline me clasifique todo el texto y me haga sus predicciones que pasaremos a evaluar" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 66, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"predictions = text_clf.predict(X_test)" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 67, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"[[1581 6]\n", | |
" [ 27 225]]\n" | |
] | |
} | |
], | |
"source": [ | |
"print(confusion_matrix(y_test, predictions))" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 68, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
" precision recall f1-score support\n", | |
"\n", | |
" ham 0.98 1.00 0.99 1587\n", | |
" spam 0.97 0.89 0.93 252\n", | |
"\n", | |
" accuracy 0.98 1839\n", | |
" macro avg 0.98 0.94 0.96 1839\n", | |
"weighted avg 0.98 0.98 0.98 1839\n", | |
"\n" | |
] | |
} | |
], | |
"source": [ | |
"print(classification_report(y_test, predictions))" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Tenemos un modelo que tiene una precisión bastante alta, ahora antes de cerrar vamos a lo que vinimos a hacer unas predicciones." | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 69, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"array(['ham'], dtype=object)" | |
] | |
}, | |
"execution_count": 69, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"text_clf.predict(['Hi, How are you today?'])" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Lo predijo como no spam, ahora generaremos un spam" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 70, | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"data": { | |
"text/plain": [ | |
"array(['spam'], dtype=object)" | |
] | |
}, | |
"execution_count": 70, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"text_clf.predict([\"You're the 200 winner of $2000 dolars, call now\"])" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Y eso fue un spam, por lo que el modelo funciona bien. " | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"## 7. Agradecimientos\n", | |
"\n", | |
"Si llegaste hasta aquí, te felicito, espero hayas disfrutado de este documento, apoyame, comparte para que más gente se entere de esta hermosa practica, nos vemos en el siguiente documento. :D" | |
] | |
} | |
], | |
"metadata": { | |
"kernelspec": { | |
"display_name": "Python 3", | |
"language": "python", | |
"name": "python3" | |
}, | |
"language_info": { | |
"codemirror_mode": { | |
"name": "ipython", | |
"version": 3 | |
}, | |
"file_extension": ".py", | |
"mimetype": "text/x-python", | |
"name": "python", | |
"nbconvert_exporter": "python", | |
"pygments_lexer": "ipython3", | |
"version": "3.7.2" | |
} | |
}, | |
"nbformat": 4, | |
"nbformat_minor": 2 | |
} |