Fetching all the "log" files from a website
Setup:
Create an empty file named extract-files-from-web.sh on Unix/macOS/Linux, or on Windows with Git Bash / Cygwin, and paste the script below into it.
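For example, from a shell the file can be created and made executable like this (a minimal sketch; the file name simply matches the one used below):

touch extract-files-from-web.sh     # create the empty script file
chmod +x extract-files-from-web.sh  # make it runnable as ./extract-files-from-web.sh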
Usage:
[me@myserver myfolder]$ ./extract-files-from-web.sh http://my.server.loc/hello/world/

extract-files-from-web.sh:

#!/bin/bash
if [ $# -eq 0 ]; then
    echo >&2 "Usage: ./extract-files-from-web.sh <server address, e.g. http://my.server.loc/hello/world/>"
    exit 1
fi

url=${1%/}   # strip any trailing slash so one can be appended cleanly

# Scrape the directory listing, pull out the href targets, and keep only
# entries that start with a letter (skips ../ and ?C=N-style sort links).
for file in $(curl -s "$url/" |
        grep href |
        sed 's/.*href="//' |
        sed 's/".*//' |
        grep '^[a-zA-Z].*'); do
    # Download each listed file into the current directory under its own name
    curl -s -O "$url/$file"
done
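
To see what the pipeline extracts, consider a hypothetical line from a server's auto-generated directory listing (the file name and columns are only an illustration):

<a href="app.log">app.log</a>    2021-01-01 12:00   4K

grep href keeps only the lines that contain links, the first sed cuts everything up to and including href=", the second sed cuts everything from the closing quote onward, and the final grep drops entries that do not start with a letter (such as the ../ parent link or ?C=N sort links), so app.log is what curl -O ends up downloading.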
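
The script as written downloads every linked file, not just the logs. If only the .log files are wanted, as the title suggests, the last grep in the pipeline can be tightened; a minimal sketch, assuming the log files actually end in .log:

grep '^[a-zA-Z].*\.log$'    # keep only entries that end in .log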