@users = User.all.includes(:profile)
# SELECT * FROM users
# SELECT * FROM profiles WHERE profiles.user_id IN (...)
Counts of things that depend on complicated queries or computations are usually good candidates for denormalization.
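For instance, Rails' built-in counter_cache keeps a denormalized count column up to date for you. A minimal sketch (model names are illustrative; it requires a posts_count integer column on users):

class Post < ActiveRecord::Base
  # Increments/decrements users.posts_count automatically on create/destroy
  belongs_to :user, :counter_cache => true
end

class User < ActiveRecord::Base
  has_many :posts
end

# user.posts.size reads the cached posts_count column
# instead of issuing a COUNT(*) query each time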
# Slow: loads every User into memory, then filters in Ruby
User.all.select { |u| u.active }.each do |u|
  # ...
end

vs

# Fast: lets the database do the filtering
User.where(:active => true).each do |u|
  # ...
end
Consider using #select or #pluck to load only the fields you need. Prove this is actually helping before you adopt the strategy.
# Returns Person objects with only the id & name attributes loaded
Person.select(:id, :name)
# Returns an array of ids for the first 100 active persons: [1, 2, 4, 6, ...]
Person.active.limit(100).pluck(:id)
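One way to prove it: time both versions. A quick sketch using Ruby's standard-library Benchmark:

require 'benchmark'

full_objects = Benchmark.realtime { Person.active.limit(100).to_a }
ids_only     = Benchmark.realtime { Person.active.limit(100).pluck(:id) }
puts "full objects: #{full_objects}s, pluck: #{ids_only}s"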
Given this model:
class User < ActiveRecord::Base
  has_many :friends
  has_many :posts
end
How can we find the posts written by a user's friends?
Slow, roundabout approach
u = User.find(params[:id])
posts = []
u.friends.includes(:posts).each do |f|
  posts << f.posts
end
# posts is now a nested array: [[p, p, p], [p]]
posts.flatten.sort_by(&:created_at).reverse # newest first
Getting to the point, with speed
Post.order("created_at DESC").where(:poster_id => u.friends.pluck(:id))
If the same queries are getting slower as you add more data to a table, you'd likely benefit from an index.
Indexes most commonly track foreign keys (e.g. company_id), but they're useful for any field you search against frequently. For example, an index on users.email would speed up finding a user by email, which starts to matter once you reach a few thousand users.
Without an index, the database must check every single row in the table (called a table scan).
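Adding one is a single migration; a minimal sketch (the :unique option assumes emails must be distinct):

class AddIndexToUsersEmail < ActiveRecord::Migration
  def change
    add_index :users, :email, :unique => true
  end
end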
- Memoization: @foo ||= expensive_computation
- Rails.cache (often backed by memcached)
- Redis or similar: supports data structures like lists, hashes & sets, with built-in operations such as membership checks (think .include?)
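A sketch showing the first two layers side by side (compute_stats is a hypothetical expensive method):

class User < ActiveRecord::Base
  # Layer 1: memoize for the lifetime of this object (one request)
  def stats
    @stats ||= compute_stats
  end

  # Layer 2: share across requests & processes via Rails.cache
  def cached_stats
    Rails.cache.fetch(["user-stats", id], :expires_in => 1.hour) do
      compute_stats
    end
  end
end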
- Switch slow work to background jobs (sketch below)
- This includes email delivery & API requests
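A minimal sketch, assuming Sidekiq as the queue backend (UserMailer.welcome is a hypothetical mailer):

class WelcomeEmailJob
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome(user).deliver # .deliver_now on Rails 4.2+
  end
end

# In the controller: enqueue & return to the user immediately
WelcomeEmailJob.perform_async(user.id)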
This is especially important for slower (mobile) clients where latency is an issue. Imagine the cost of 80ms of latency on every request when a page depends on 12 CSS files and 8 JS files...ugh!
The asset pipeline is a great example to follow: it concatenates & fingerprints assets so each page needs only a couple of requests.
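In production the pipeline serves precompiled, fingerprinted bundles; the relevant settings are standard Rails config (shown for illustration):

# config/environments/production.rb
config.assets.compile = false # serve only precompiled assets
config.assets.digest = true   # fingerprinted filenames allow far-future caching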
- Lazy load via JS
- Image resizing
- Image sprites