You could use the Query API from SEC API to batch-retrieve 10-Ks, then use the Render API to download the filings and add your script to extract the data. Awesome workflow!
I am able to get the code to work when I download from the SEC website; however, the SEC is no longer allowing mass loops to collect data (at least that is what I was told). I found a website that has every 10-K available for download, and I saved those onto my personal computer. I changed the requests call to open and read the local file, but now I'm getting the error "KeyError: 'item1a'". I've tried different variations such as "Item 1A." etc. with no luck. Is there another way to get this code to work using the SEC downloads? Downloads are from https://drive.google.com/drive/folders/1tZP9A0hrAj8ptNP3VE9weYZ3WDn9jHic. Thank you!
This can also be done with the Item Extraction API now.
from sec_api import ExtractorApi
extractorApi = ExtractorApi("YOUR_API_KEY")
# Tesla 10-K filing
filing_url = "https://www.sec.gov/Archives/edgar/data/1318605/000156459021004599/tsla-10k_20201231.htm"
# get the standardized and cleaned text of section 1A "Risk Factors"
section_text = extractorApi.get_section(filing_url, "1A", "text")
# get the original HTML of section 7 "Management’s Discussion and Analysis of Financial Condition and Results of Operations"
section_html = extractorApi.get_section(filing_url, "7", "html")
print(section_text)
print(section_html)
Docs: https://sec-api.io/docs/sec-filings-item-extraction-api
Hey, thanks for the code. It's wonderful.
One question, I can get information for most sections. However, for Item 1 (business section), I can't seem to get the information.
item_1_raw = document['10-K'][pos_dat['start'].loc['item1']:pos_dat['start'].loc['item1a']]
I receive a NoneType back. Any ideas?
I am having the same problem. Would be great to get any ideas.
@Onapmek and @marcelinochamon
To fix the NoneType, include headers in the request, see below:
response = requests.get(url, headers={'User-Agent': 'Mozilla'})
or, better, a fuller one:
headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36"}
@sash236 This unfortunately doesn't fix the issue for me. I already used a header to successfully retrieve the 10-K text file.
I can get the sections shown in OP's example, but I can't seem to retrieve Item 1 itself.
I don't know enough regex to fix the issue, but what I noticed is that the start position for Item 1 is extremely off:
# Set item as the dataframe index
pos_dat.set_index('item', inplace=True)
# display the dataframe
pos_dat
Outcome:

          start      end
item
item2    198174   198182
item1   2360825  2360832
@Onapmek Ok, I found out what was going on. When the model looks for Item 1, it also picks up Item 11, Item 12, etc., and since it keeps the latest match, Item 1's start ends up after Item 1A's or Item 2's, so the slice of course returns None. I have an ugly fix: for the position of Item 1, I selected the position of the latest "Item 1" found before the position of the latest "Item 1A" found.
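A less ugly alternative, sketched here under the assumption that the item regex looks roughly like the one used in this thread: add a negative lookahead so the item number cannot be followed by another digit or letter, which stops "Item 1" from matching inside "Item 11". The longer alternatives ("1a", "1b") must come before "1" so they win.

import re

# Sketch (assumes the thread's regex style, not the gist's exact pattern):
# (?![0-9a-z]) rejects "Item 11"/"Item 12" when we only want "Item 1".
item_regex = re.compile(
    r'>item(\s|&#160;|&nbsp;)(1a|1b|1|2|7a|7|8)(?![0-9a-z])\.?',
    re.IGNORECASE,
)
sample = '>Item 1. Business >Item 1A. Risk Factors >Item 11. Executive Compensation'
print([m.group(0) for m in item_regex.finditer(sample)])
# ['>Item 1.', '>Item 1A.']  -- "Item 11" is no longer picked up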
@Onapmek - I always thought that as accurate as regex is, it is also sensitive, which causes reliability issues. Can the regex approach be made more reliable?
Secondly, some of these "Items" contain tables. I wonder how the text is extracted while excluding the tables? (One possible approach is sketched after this comment.)
@anshoomehra - could you elaborate on this?
Appreciate your work!
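On the tables question, a minimal sketch of one common approach, assuming BeautifulSoup is available: remove the table elements before extracting text. section_html stands in for the HTML of one item, e.g. from the extractor example earlier in the thread.

from bs4 import BeautifulSoup

# Sketch: drop <table> elements so tabular data does not end up
# interleaved with the prose when we call get_text().
soup = BeautifulSoup(section_html, "lxml")
for table in soup.find_all("table"):
    table.decompose()  # removes the table and everything inside it
text_without_tables = soup.get_text(separator=" ", strip=True)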
In the 3rd cell you reference document, which isn't defined in the two cells above. How did you get it?
Hi Bill, thank you for sharing this. Have you had the chance to verify that this is a complete list of 10-Ks? Have you had the chance to validate the data? May I kindly ask which website it was? Happy to take it offline if you prefer. Thanks!
May I know how to remove the footer information, "Apple Inc. | 2018 Form 10-K |", as well as the page numbers, from the generated text?
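One possible approach, sketched under the assumption that the footer appears as a literal string followed by a page number in the extracted text: substitute it away with a regex.

import re

# Sketch: strip the repeated page footer and trailing page number.
# Collapse any leftover extra whitespace afterwards if needed.
footer_re = re.compile(r'Apple Inc\.\s*\|\s*2018 Form 10-K\s*\|\s*\d*')
sample = 'and other risks. Apple Inc. | 2018 Form 10-K | 7 The Company believes'
print(footer_re.sub(' ', sample))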
Thanks for this! I've followed the steps to get historic numeric data and made a free API in case anyone else wants the data for training AI etc.
https://rapidapi.com/alexventisei2/api/sec-api2
I think the line below assumes the same number of entries for all items, which is not necessarily the case - for example NYT: there are more Item 1A matches than 1B, and the approach does not work. I would also add re.IGNORECASE to the re.compile:
pos_dat = test_df.sort_values('start', ascending=True).drop_duplicates(subset=['item'], keep='last')
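For reference, a sketch of those two suggestions, assuming the notebook's item regex looks roughly like this; the order check catches the unequal-match-count problem instead of silently mis-slicing:

import re

# Sketch: compile case-insensitively so 'Item 1A', 'ITEM 1A' and 'Item 1a'
# all match (with IGNORECASE the all-caps second branch becomes redundant).
regex = re.compile(r'>Item(\s|&#160;|&nbsp;)(1A|1B|7A|7|8)\.{0,1}', re.IGNORECASE)

# Sanity check on pos_dat from the line above: after the dedup, start
# positions should increase in document order; if not, keep='last'
# grabbed a stray match (e.g. an extra 1A from the exhibits).
if not pos_dat.sort_values('item')['start'].is_monotonic_increasing:
    print("warning: item positions out of order; revisit drop_duplicates")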
This was very helpful, thank you for taking the time to post this
Amazing! Thanks for sharing.
I have an HTML URL, but I don't know how to get the .txt URL of the 10-K file; after that I'd be able to use the notebook code above.
Can anyone help me, please?
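One way to do this, assuming the usual EDGAR Archives layout: the full-submission .txt file sits in the same accession folder as the .htm document, named after the accession number with dashes re-inserted in a 10-2-6 split. A sketch, reusing the Tesla URL from earlier in the thread:

# Sketch: derive the full-submission .txt URL from an EDGAR document URL.
# Assumes the standard Archives layout; verify the result in a browser.
html_url = "https://www.sec.gov/Archives/edgar/data/1318605/000156459021004599/tsla-10k_20201231.htm"

folder, _doc = html_url.rsplit("/", 1)   # .../000156459021004599
acc = folder.rsplit("/", 1)[1]           # 18-digit accession number
acc_dashed = f"{acc[:10]}-{acc[10:12]}-{acc[12:]}"
txt_url = f"{folder}/{acc_dashed}.txt"
print(txt_url)
# https://www.sec.gov/Archives/edgar/data/1318605/000156459021004599/0001564590-21-004599.txt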
Jesus, you saved my life!
Thanks so much. This was driving me crazy and you saved my ass.