- Make sure you are in the folder that contains the Dockerfile.
- If your folder is /my-docker-image/, there should be 2 files in it:

  /my-docker-image
  |--- Dockerfile
  |--- requirements.txt
docker build -t tensorflow-av .
Sending build context to Docker daemon 3.584kB
Step 1/6 : FROM python:3.6
---> 968120d8cbe8
Step 2/6 : RUN apt-get update
---> Running in 20271eafe0d0
Get:1 http://security.debian.org jessie/updates InRelease [63.1 kB]
Ign http://deb.debian.org jessie InRelease
Get:2 http://deb.debian.org jessie-updates InRelease [145 kB]
Contents of requirements.txt:

bleach==1.5.0
certifi==2016.2.28
cycler==0.10.0
decorator==4.1.2
entrypoints==0.2.3
html5lib==0.9999999
ipykernel==4.6.1
ipython==6.2.1
ipython-genutils==0.2.0
ipywidgets==7.0.3
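The build log above shows six steps starting from python:3.6 with an apt-get update; a plausible Dockerfile matching it might look like the following sketch (everything after step 2 is an assumption based on the requirements.txt above, not the actual file):

```dockerfile
FROM python:3.6

RUN apt-get update

# Assumed layout: copy the requirements list and install it
COPY requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python"]
```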
ssh-keygen -t rsa -C "<email-id>@gmail.com" -f ~/.ssh/id_rsa_prato_git   # generate the key pair (-f names it to match the ssh-add below)
ssh-add ~/.ssh/id_rsa_prato_git   # load the key into the ssh-agent
ssh-add -D                        # delete all identities from the agent
ssh-add -l                        # list the identities the agent currently holds
vim config                        # edit the SSH config (assuming the current directory is ~/.ssh)
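The `vim config` step presumably opens ~/.ssh/config to tell SSH which key to use for which host; a minimal sketch, assuming the key above is meant for GitHub (the host entry is illustrative):

```
Host github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_prato_git
    IdentitiesOnly yes
```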
Just a quickie test in Python 3 (using Requests) to see whether Google Cloud Vision can effectively OCR a scanned data table and preserve its structure, the way products such as ABBYY FineReader can OCR an image and provide Excel-ready output.

The short answer: no. While Cloud Vision provides bounding-polygon coordinates in its output, it doesn't provide them at the word or region level, which would be needed to then calculate the data delimiters.

On the other hand, the OCR quality is pretty good if you just need to identify text anywhere in an image, without regard to its physical coordinates. I've included two examples:
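The test hits the Vision REST endpoint (`images:annotate` with a `TEXT_DETECTION` feature, per the public API); a sketch of how the request body can be assembled before POSTing it with Requests — the helper name is mine, and a real call needs a valid API key:

```python
import base64

VISION_URL = "https://vision.googleapis.com/v1/images:annotate"

def build_vision_request(image_bytes, feature="TEXT_DETECTION"):
    """Build the JSON body for a Cloud Vision annotate call.

    The image is sent inline as base64, per the Vision REST API.
    """
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("utf-8")},
            "features": [{"type": feature}],
        }]
    }

# To actually call the API (requires a valid API key):
# import requests
# body = build_vision_request(open("road_signs.jpg", "rb").read())
# resp = requests.post(VISION_URL, params={"key": API_KEY}, json=body)
# text = resp.json()["responses"][0]["textAnnotations"][0]["description"]
```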
### 1. A low-resolution photo of road signs
Attaching a new EBS volume to an EC2 instance and mounting it:

lsblk                           # list block devices; the new volume shows up as xvdf
sudo file -s /dev/xvdf          # "data" means the volume has no filesystem yet
sudo file -s /dev/xvda1         # the root volume, by contrast, reports a filesystem
sudo mkfs -t ext4 /dev/xvdf     # create an ext4 filesystem on the new volume
sudo mkdir datasets             # create a mount point
sudo mount /dev/xvdf datasets   # mount the volume there
ls
cd datasets/
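To make the mount survive a reboot, an entry can be added to /etc/fstab; a sketch, assuming the mount point is /home/ubuntu/datasets (the absolute path is an assumption — the commands above create it relative to the current directory, and a UUID from `blkid` is more robust than the raw device name):

```
/dev/xvdf   /home/ubuntu/datasets   ext4   defaults,nofail   0   2
```

The `nofail` option keeps the instance from hanging at boot if the volume is ever detached.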
There might be a case when you need to work from home or another remote place away from your office. You have all the docker-machines created there and want to get them onto your laptop/home system. Follow the steps below:

- Copy .docker/machine/machines from the office system into .docker/machine/machines on the home system.
- In each machine's config file (e.g. .docker/machine/machines/docker1m/config.json), update every path from /Users/office/.docker/... to /Users/home/.docker/...
- Run docker-machine ls to confirm.

To create a docker-machine on Azure:

docker-machine create --driver azure --azure-subscription-id <subs-id> --azure-location <location, e.g. eastus> --azure-resource-group <resource-group-name; created automatically if it does not already exist> --azure-size <vm-size> <machine-name>
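The config.json path rewrite described above can be scripted with sed; a sketch using a scratch copy (point MACHINES at ~/.docker/machine/machines for real use — the JSON content and machine name here are illustrative):

```shell
# Demo directory standing in for ~/.docker/machine/machines
MACHINES=$(mktemp -d)
mkdir -p "$MACHINES/docker1m"
echo '{"AuthOptions": {"CertDir": "/Users/office/.docker/machine/certs"}}' > "$MACHINES/docker1m/config.json"

# Rewrite the office base path to the home base path in every machine's config.json
for f in "$MACHINES"/*/config.json; do
  [ -e "$f" ] || continue
  sed -i.bak 's|/Users/office/.docker|/Users/home/.docker|g' "$f"
done
```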
NOTE: Every resource group has a limit of up to 10 CPU cores, so if you run into issues creating new instances, the solution is to create a new resource group.
VM Sizes:
Converting list of lists/zip(list) to a CSV
import csv

# zip pairs each image id with its predicted label and raw predictions
kaggle_submission = zip(image_id, label_predict, predictions)
with open("kaggle_submission.csv", "w", newline="") as f:
    fileWriter = csv.writer(f, dialect='excel')
    # unpack in the same order as the zip above
    for img_id, label, pred in kaggle_submission:
        row = [img_id, label, pred]
        fileWriter.writerow(row)
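A self-contained run of the snippet above, with made-up ids and predictions standing in for the real model outputs (the variable names mirror the snippet; the data and header row are illustrative only):

```python
import csv

# Illustrative data standing in for the real model outputs
image_id = ["img_001", "img_002", "img_003"]
label_predict = ["cat", "dog", "cat"]
predictions = [0.91, 0.87, 0.78]

with open("kaggle_submission.csv", "w", newline="") as f:
    writer = csv.writer(f, dialect="excel")
    writer.writerow(["image_id", "label", "prediction"])  # header row
    for img_id, label, pred in zip(image_id, label_predict, predictions):
        writer.writerow([img_id, label, pred])
```

Passing `newline=""` to `open` is the documented way to use `csv.writer` in Python 3; without it, extra blank lines can appear on Windows.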