# app/models/post.rb
class Post < ActiveRecord::Base
  searchable :auto_index => false, :auto_remove => false do
    text :title
    text :body
  end

  # Defer Solr index updates to Resque instead of indexing inline.
  after_commit :resque_solr_update, :if => :persisted?
  before_destroy :resque_solr_remove

  protected

  def resque_solr_update
    Resque.enqueue(SolrUpdate, self.class.to_s, id)
  end

  def resque_solr_remove
    Resque.enqueue(SolrRemove, self.class.to_s, id)
  end
end

# lib/jobs/solr_update.rb
class SolrUpdate
  @queue = :solr

  def self.perform(classname, id)
    classname.constantize.find(id).solr_index
  end
end

# lib/jobs/solr_remove.rb
class SolrRemove
  @queue = :solr

  def self.perform(classname, id)
    Sunspot.remove_by_id(classname, id)
  end
end
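A practical note that is not part of the gist itself (an assumption about a typical Rails setup): the job classes under lib/jobs have to be loadable by both the app and the Resque workers. One way is to add that directory to the autoload paths:

    # config/application.rb (sketch; adjust to how your app loads lib/)
    config.autoload_paths += %W(#{config.root}/lib/jobs)

Workers also need the Rails environment loaded so the models and the Sunspot session are available, e.g. by running the resque:work rake task together with the environment task.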
Nice snippet. I decided to use solr_index!; this way it is easier to identify and rerun jobs from Resque, perhaps with resque-web or anything else.

def self.perform(klass, id)
  klass.find(id).solr_index!
end
@ilyakatz: solr_index! is just a solr_index call plus a commit. I highly recommend not using this in production, as a big burst in commits could easily overwhelm your index and start throwing 503 errors. Better to omit commits entirely and use the autoCommit setting in your solrconfig.xml, especially since you're already accepting some index latency by queuing. Of course, imho, ymmv, etc :)
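For reference, autoCommit lives under updateHandler in solrconfig.xml; a minimal sketch (the values here are illustrative placeholders, not from this thread):

    <!-- solrconfig.xml, inside <updateHandler>: commit on a timer or doc count
         instead of issuing explicit commits from the app -->
    <autoCommit>
      <maxDocs>10000</maxDocs> <!-- commit after this many pending documents -->
      <maxTime>60000</maxTime> <!-- ...or after this many milliseconds -->
    </autoCommit>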
@nz ah I see, ok, you convinced me :)
We were also getting "stack level too deep" errors when we queued the job to Resque, as it doesn't like serializing ActiveRecord classes. We now just pass the class name as a string, like so:

Resque.enqueue(SolrUpdate, self.class.to_s, id)

and then constantize it when we actually perform the job, like so:
class SolrUpdate
  @queue = :solr_update

  def self.perform(classname, id)
    classname.constantize.find(id).solr_index
  end
end
Sounds like I'll need to work on a more general solution to this. Thanks for sharing your version, @ciaranlee; that looks pretty good.
Would be happy to merge something like this into Sunspot proper. If any of you want to put that together, branch it off 1-3-stable and /cc me in the pull request.
I'll try to do that over the next few days!
When removing a record from the index, the object might already be destroyed by the time the worker calls find. You can use Sunspot.remove_by_id instead:
class SolrRemove
  @queue = :solr

  def self.perform(classname, id)
    Sunspot.remove_by_id classname, id
  end
end
Good point, @gudleik, thanks! Updated.
I just released a gem for doing just this called sunspot-queue (https://github.com/gaffneyc/sunspot-queue).
Great snippet, just what I was looking for, thanks. However, Nick, when is the Sunspot.commit performed in this scenario?
@semmin, usually best to avoid issuing explicit commits and instead rely on your server's autoCommit setting in the solrconfig.xml.
I had to modify line 10 as follows: after_commit :resque_solr_update, if: :persisted?, since resque_solr_update was triggered on delete operations.
Thanks, @semmin!
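As an aside, a sketch of an alternative guard (my illustration, not what @semmin used, and assuming a Rails version where after_commit supports the :on option):

    # app/models/post.rb -- same intent as :if => :persisted?:
    # only enqueue index updates on create and update, never on destroy
    after_commit :resque_solr_update, :on => :create
    after_commit :resque_solr_update, :on => :update
    before_destroy :resque_solr_remove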
What do you think of adding this at the end of the Resque jobs, for specs and local development to work somewhat serially?
unless Rails.env.production?
  Sunspot.commit
end
I made a minor change:
using the model name caused "stack level too deep", so I changed it to self.
Also, instead of putting the files in lib/jobs, I put them in app/workers/.
Cheers.
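For illustration only (my reading of the change described above, not the commenter's exact code), the enqueue call ends up passing self.class rather than a hard-coded model constant:

    # Hypothetical before/after for the "changed it to self" note above
    # Before (hard-coded model name):
    #   Resque.enqueue(SolrUpdate, Post.to_s, id)
    # After (works for whichever model the callback runs on):
    Resque.enqueue(SolrUpdate, self.class.to_s, id)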