Improve INSERT performance in RDS on AWS?
There are two ways to load 7000 rows efficiently (read: quickly); see the sketch after this list.
LOAD DATA INFILE -- after you have built a 7000-line CSV file.
"Batch" INSERT -- like INSERT INTO t (a,b) VALUES (1,2),(5,6),(9,2), ...; -- be cautious about the number of rows per statement: 100 to 1000 is a good range.
max_allowed_packet = 536870912 -- NO; 512M is far too large for a tiny 2GB VM; change it to 16M. Other settings worth checking:
key_buffer_size = 10M -- keep this small if you are not using MyISAM tables.
innodb_buffer_pool_size = 1G -- roughly half the RAM on a 2GB instance.
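On RDS you cannot edit my.cnf directly; these settings are changed through the instance's DB parameter group. You can at least verify what the instance is currently using with a plain query, assuming you can connect with a MySQL client:

```sql
-- Inspect the current values; sizes are reported in bytes.
SHOW VARIABLES
WHERE Variable_name IN ('max_allowed_packet',
                        'key_buffer_size',
                        'innodb_buffer_pool_size');
```

When you set max_allowed_packet to 16M in the parameter group, that corresponds to 16777216 bytes.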