# Brian Blaylock
# Requires `s3fs`
# Website: https://s3fs.readthedocs.io/en/latest/
# In Anaconda, download via conda-forge.
import numpy as np
import s3fs

# Use anonymous credentials to access public data
fs = s3fs.S3FileSystem(anon=True)

# List contents of the GOES-16 bucket.
fs.ls('s3://noaa-goes16/')

# List specific files of GOES-17 CONUS data (multiband format) for a certain hour
# Note: the `s3://` prefix is not required
files = np.array(fs.ls('noaa-goes17/ABI-L2-MCMIPC/2019/240/00/'))
print(files)

# Download the first file, and give it the same name (without the directory structure)
fs.get(files[0], files[0].split('/')[-1])
Anyone else get `ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)` when running line 12 (`fs.ls('s3://noaa-goes16/')`)?
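That error usually means Python cannot find a trusted certificate bundle, so upgrading `certifi` is worth trying first. As a debugging-only sketch (assuming s3fs forwards `client_kwargs` to the underlying botocore client), you can also disable verification to confirm the diagnosis:

import s3fs

# Assumption: s3fs passes `client_kwargs` through to botocore's create_client.
# verify=False turns off TLS certificate checking -- use only for debugging.
fs = s3fs.S3FileSystem(anon=True, client_kwargs={'verify': False})
fs.ls('s3://noaa-goes16/')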
@temidayooniosun I wrote some code that might help:
https://github.com/eduardoferreira97/download_data_noaa
Hey, nice work @joaohenry23! Pretty useful stuff you have put together. I have some additional RGB recipes in my GOES-2-Go package.
Thanks for your comments @blaylockbk. I'm glad you liked the package, especially since your download_GOES_AWS.py script was the inspiration for creating the GOES package and also gave me some ideas for my own download script. In recognition of that contribution, I mention you in my script's help.
Your GOES-2-Go package is great! Congratulations!
Thank you for sharing your package and for all your contributions!
The `fs.get()` line used to download the files does not work when the Python script is run from crontab. Please help.
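A common cause is that cron runs the script from a different working directory, so the relative output path passed to `fs.get()` lands somewhere unexpected. A minimal sketch, assuming that is the issue (the output directory below is hypothetical):

import os
import s3fs

OUT_DIR = '/home/user/goes_data'  # hypothetical absolute output directory
fs = s3fs.S3FileSystem(anon=True)
files = fs.ls('noaa-goes17/ABI-L2-MCMIPC/2019/240/00/')

# Write to an absolute path so the result does not depend on cron's cwd.
fs.get(files[0], os.path.join(OUT_DIR, files[0].split('/')[-1]))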
Just wanted to provide a version that loads the data into memory:

from io import BytesIO

import metpy  # needed to register the .metpy accessor on xarray objects
import s3fs
import xarray as xr

fs = s3fs.S3FileSystem(anon=True)
fs.ls('s3://noaa-goes16/')
files = fs.ls('noaa-goes16/ABI-L1b-RadC/2019/240/00/')

# Read the first file into memory and open it as an xarray Dataset.
with fs.open(files[0], 'rb') as f:
    ds = xr.open_dataset(BytesIO(f.read()), engine='h5netcdf')
ds = ds.drop_vars(["x_image", "y_image"]).metpy.parse_cf()
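From there you can work with the dataset entirely in memory. A quick usage sketch (assuming the ABI L1b Rad product, whose radiance variable I take to be named `Rad`):

import matplotlib.pyplot as plt

# Plot the radiance field; `Rad` is assumed to be the variable name in ABI-L1b-Rad files.
ds['Rad'].plot()
plt.show()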
This code only downloads the first file; what if I want to download all the files?
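A minimal sketch for downloading every file in the hour's listing with a simple loop (the local output directory name is hypothetical):

import os
import s3fs

fs = s3fs.S3FileSystem(anon=True)
files = fs.ls('noaa-goes16/ABI-L1b-RadC/2019/240/00/')

out_dir = 'goes_data'  # hypothetical local output directory
os.makedirs(out_dir, exist_ok=True)

# Download each file, keeping just the base filename.
for f in files:
    fs.get(f, os.path.join(out_dir, f.split('/')[-1]))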