Importing Excel table into Postgres
#!/usr/bin/env SQL
# Importing an Excel table into Postgres
# 1) Save the Excel file in .csv format with a single header row
#    ex: example_file.csv
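#    (hypothetical contents for illustration; the columns match the table created in step 4)
#    gid,province,name,pop1996
#    1,ON,Ontario,11000000
#    2,QC,Quebec,7300000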
# 2) Save the file in C:/PostgreSQL/9.3/data (important that the file sits inside "data"
#    so Postgres has the right file permissions to read it)
# 3) Create an empty table with a schema capable of storing the data
# 4) ex:
create table example_table(gid numeric, province character varying(10), "name" character varying(68),
    pop1996 numeric);
# 5) Now the csv file can be imported into the empty table with the following query:
COPY example_table(gid, province, "name", pop1996)
FROM 'csv/example_file.csv'
WITH DELIMITER ','
CSV HEADER;
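# Note: COPY ... FROM reads the file on the server side, which is why step 2
# places it under the data directory. As a client-side alternative, psql's
# \copy meta-command reads the file from the machine psql runs on (the path
# is relative to psql's working directory) and needs no server file permissions:
\copy example_table(gid, province, "name", pop1996) FROM 'example_file.csv' WITH DELIMITER ',' CSV HEADER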
# 6) Don't forget to add a primary key to the new table
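#    ex (assuming gid uniquely identifies each row):
ALTER TABLE example_table ADD PRIMARY KEY (gid);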
When bulk loading, dropping and recreating indexes is far faster than leaving them in place and forcing thousands or millions of individual index updates.
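A sketch of that pattern for the example table above, assuming a hypothetical index on province:

DROP INDEX IF EXISTS example_table_province_idx;
-- run the bulk COPY from step 5 here
CREATE INDEX example_table_province_idx ON example_table (province);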
Do a VACUUM ANALYZE before and after bulk loading the data, to be safe. If you dropped indexes, be sure to recreate them before running the ANALYZE.
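For the example table, that bracketing might look like this (VACUUM cannot run inside a transaction block, so issue each statement on its own):

VACUUM ANALYZE example_table;
-- DROP INDEX / COPY / CREATE INDEX from the sketch above
VACUUM ANALYZE example_table;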
See http://blog.gtuhl.com/2009/04/18/bulk-data-loading-with-postgresql/