Below is some working code I used to read a large tab-delimited data file. The file was over 3 GB uncompressed and couldn't be loaded into memory all at once on a laptop with 8 GB of RAM.
There are a lot of different ways to handle insufficient-memory problems in pandas. In this case I used the built-in `chunksize` parameter of `read_csv`, which returns an iterator of smaller DataFrames instead of loading the whole file at once. I iterated over the chunks and then concatenated them into a single DataFrame.
import pandas as pd
. . .
cols = [...]
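A minimal, runnable sketch of the pattern described above. The file path, column names, and chunk size here are placeholders I made up for illustration; in practice the path would point at the large file on disk and the chunk size would be much larger.

```python
import pandas as pd

# Write a small sample tab-delimited file so this sketch is self-contained;
# in practice PATH would point at the large (3+ GB) file on disk.
PATH = "sample.tsv"
pd.DataFrame({"id": range(10), "value": range(10), "extra": range(10)}).to_csv(
    PATH, sep="\t", index=False
)

cols = ["id", "value"]  # hypothetical column subset; read only what you need

chunks = []
# Passing chunksize to read_csv returns an iterator of DataFrames
# rather than loading the entire file into memory at once.
for chunk in pd.read_csv(PATH, sep="\t", usecols=cols, chunksize=4):
    # Optionally filter or aggregate each chunk here to cut memory further.
    chunks.append(chunk)

df = pd.concat(chunks, ignore_index=True)
print(df.shape)  # (10, 2)
```

Restricting the read to the needed columns with `usecols` compounds the savings: each chunk carries only the columns you keep, so peak memory stays well below the full file size.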