Gist jkutner/1659141, created January 22, 2012 22:26
Log from the second node in a TorqueBox cluster
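The JBAS010281 failures below occur because both instances report the same member name, largo, so the second node sees a duplicate of itself in the cluster view ([largo/web, largo/web]). JBoss AS 7 derives the node name from the hostname by default, so two instances on one machine collide. A minimal sketch of how the second instance might instead be launched with a unique node name and a port offset, assuming the nodes are started through the bundled standalone.sh (the actual launch command is not part of this gist; largo2 and the offset value are made-up examples):

    # Hypothetical: give the second node its own identity so the largo/web member name is unique per instance
    $JBOSS_HOME/bin/standalone.sh --server-config=standalone-ha.xml \
        -Djboss.node.name=largo2 \
        -Djboss.socket.binding.port-offset=100
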
16:24:09,870 INFO [org.jboss.as.clustering.CoreGroupCommunicationService.lifecycle.web] (Incoming-4,null) JBAS010267: New cluster view for partition web (id: 1, delta: 1, merge: false) : [largo/web, largo/web]
16:24:09,873 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-4,null) ISPN000094: Received new cluster view: [largo/web|1] [largo/web, largo/web]
16:24:10,014 INFO [org.jboss.as.clustering.CoreGroupCommunicationService.lifecycle.twitalytics-knob.yml] (Incoming-9,null) JBAS010267: New cluster view for partition twitalytics-knob.yml (id: 1, delta: 1, merge: false) : [largo/twitalytics-knob.yml, largo/twitalytics-knob.yml]
16:24:10,014 INFO [org.projectodd.polyglot.hasingleton] (AsynchViewChangeHandler Thread) inquire if we should be master
16:24:10,015 INFO [org.projectodd.polyglot.hasingleton] (AsynchViewChangeHandler Thread) Becoming HASingleton master.
16:24:07,042 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) Starting deployment of "twitalytics-knob.yml"
16:24:07,303 INFO [org.torquebox.core] (MSC service thread 1-4) evaling: "/Users/jkutner/workspace/twitalytics/config/torquebox.rb"
16:24:07,648 ERROR [stderr] (MSC service thread 1-4) STOMP HOSTS: [localhost]
16:24:07,867 WARN [org.torquebox.db] (MSC service thread 1-3) Not enabling XA for unknown adapter type: sqlite3
16:24:07,949 ERROR [stderr] (MSC service thread 1-1) webhosts: []
16:24:07,949 ERROR [stderr] (MSC service thread 1-1) stomphosts: [localhost]
16:24:07,950 ERROR [stderr] (MSC service thread 1-1) DEPLOY STANDALONE SESSION MANAGER
16:24:08,193 INFO [org.projectodd.polyglot.hasingleton] (MSC service thread 1-5) Start HASingletonCoordinator
16:24:08,198 INFO [org.projectodd.polyglot.hasingleton] (MSC service thread 1-5) Connect to twitalytics-knob.yml
16:24:08,211 INFO [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-1) trying to deploy queue jms.topic./topics/statuses
16:24:08,216 INFO [org.torquebox.stomp.binding] (MSC service thread 1-8) Advertising STOMP binding: ws://localhost:8675/
16:24:08,289 INFO [org.torquebox.core.runtime.SharedRubyRuntimePool] (MSC service thread 1-6) Starting web runtime pool asynchronously
16:24:08,293 INFO [org.torquebox.core.runtime.SharedRubyRuntimePool] (MSC service thread 1-8) Deferring start for services runtime pool.
16:24:08,297 INFO [org.torquebox.core.runtime] (Thread-99) Creating ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: web)
16:24:08,398 INFO [org.jboss.as.messaging] (MSC service thread 1-1) JBAS011601: Bound messaging object to jndi name java:/topics/statuses
16:24:08,405 INFO [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-3) trying to deploy queue jms.queue./queues/torquebox/twitalytics/tasks/torquebox_backgroundable
16:24:08,412 INFO [org.torquebox.core.runtime] (pool-5-thread-1) Creating ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: stomplets)
16:24:08,454 INFO [org.torquebox.core.runtime] (MSC service thread 1-2) Creating ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: services)
16:24:08,462 INFO [org.jboss.as.messaging] (MSC service thread 1-3) JBAS011601: Bound messaging object to jndi name java:/queues/torquebox/twitalytics/tasks/torquebox_backgroundable
16:24:08,756 INFO [org.quartz.core.SchedulerSignalerImpl] (MSC service thread 1-4) Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
16:24:08,759 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Quartz Scheduler v.1.8.5 created.
16:24:08,761 INFO [org.quartz.simpl.RAMJobStore] (MSC service thread 1-4) RAMJobStore initialized.
16:24:08,763 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler meta-data: Quartz Scheduler (v1.8.5) 'JobScheduler$twitalytics-knob.yml' with instanceId 'largo.local1327271048673'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 3 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
16:24:08,763 INFO [org.quartz.impl.StdSchedulerFactory] (MSC service thread 1-4) Quartz scheduler 'JobScheduler$twitalytics-knob.yml' initialized from an externally provided properties instance.
16:24:08,764 INFO [org.quartz.impl.StdSchedulerFactory] (MSC service thread 1-4) Quartz scheduler version: 1.8.5
16:24:08,764 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) JobFactory set to: org.torquebox.jobs.RubyJobProxyFactory@307c329d
16:24:08,764 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler JobScheduler$twitalytics-knob.yml_$_largo.local1327271048673 started.
16:24:09,212 INFO [org.torquebox.core.runtime] (pool-8-thread-1) Creating ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: messaging)
16:24:09,214 INFO [org.torquebox.core.runtime] (pool-8-thread-2) Creating ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: messaging)
16:24:09,532 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-7) receive buffer of socket java.net.DatagramSocket@64564546 was set to 20MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
16:24:09,533 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-7) receive buffer of socket java.net.MulticastSocket@569fc9fe was set to 25MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
16:24:09,547 INFO [stdout] (MSC service thread 1-7)
16:24:09,587 INFO [stdout] (MSC service thread 1-7) -------------------------------------------------------------------
16:24:09,587 INFO [stdout] (MSC service thread 1-7) GMS: address=largo/web, cluster=web, physical address=192.168.6.201:55200
16:24:09,588 INFO [stdout] (MSC service thread 1-7) -------------------------------------------------------------------
16:24:09,843 INFO [stdout] (MSC service thread 1-5)
16:24:09,846 INFO [stdout] (MSC service thread 1-5) -------------------------------------------------------------------
16:24:09,849 INFO [stdout] (MSC service thread 1-5) GMS: address=largo/twitalytics-knob.yml, cluster=twitalytics-knob.yml, physical address=192.168.6.201:55200
16:24:09,850 INFO [stdout] (MSC service thread 1-5) -------------------------------------------------------------------
16:24:10,088 INFO [org.jboss.as.clustering.CoreGroupCommunicationService.twitalytics-knob.yml] (MSC service thread 1-5) JBAS010207: Number of cluster members: 2
16:24:10,089 INFO [org.jboss.as.clustering.CoreGroupCommunicationService.twitalytics-knob.yml] (MSC service thread 1-5) JBAS010268: New cluster view for partition twitalytics-knob.yml: 1 (org.jboss.as.clustering.CoreGroupCommunicationService$GroupView@15475116 delta: 0, merge: false)
16:24:10,154 INFO [org.jboss.as.clustering.CoreGroupCommunicationService.web] (MSC service thread 1-4) JBAS010207: Number of cluster members: 2
16:24:10,155 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-4) MSC00001: Failed to start service jboss.cluster.web: org.jboss.msc.service.StartException in service jboss.cluster.web: java.lang.IllegalStateException: JBAS010281: Found member largo/web in current view that duplicates us (largo/web). This node cannot join partition until duplicate member has been removed
at org.jboss.as.clustering.CoreGroupCommunicationServiceService.start(CoreGroupCommunicationServiceService.java:83)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1824) [jboss-msc-1.0.1.GA.jar:1.0.1.GA]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1759) [jboss-msc-1.0.1.GA.jar:1.0.1.GA]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]
at java.lang.Thread.run(Thread.java:680) [:1.6.0_29]
Caused by: java.lang.IllegalStateException: JBAS010281: Found member largo/web in current view that duplicates us (largo/web). This node cannot join partition until duplicate member has been removed
at org.jboss.as.clustering.CoreGroupCommunicationService.verifyNodeIsUnique(CoreGroupCommunicationService.java:1198)
at org.jboss.as.clustering.CoreGroupCommunicationService.startService(CoreGroupCommunicationService.java:916)
at org.jboss.as.clustering.CoreGroupCommunicationService.start(CoreGroupCommunicationService.java:806)
at org.jboss.as.clustering.CoreGroupCommunicationServiceService.start(CoreGroupCommunicationServiceService.java:81)
... 5 more
16:24:10,514 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-5) MSC00001: Failed to start service jboss.deployment.unit."twitalytics-knob.yml".ha-singleton.coordinator: org.jboss.msc.service.StartException in service jboss.deployment.unit."twitalytics-knob.yml".ha-singleton.coordinator: java.lang.IllegalStateException: JBAS010281: Found member largo/twitalytics-knob.yml in current view that duplicates us (largo/twitalytics-knob.yml). This node cannot join partition until duplicate member has been removed
at org.projectodd.polyglot.hasingleton.HASingletonCoordinatorService.start(HASingletonCoordinatorService.java:52)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1824) [jboss-msc-1.0.1.GA.jar:1.0.1.GA]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1759) [jboss-msc-1.0.1.GA.jar:1.0.1.GA]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]
at java.lang.Thread.run(Thread.java:680) [:1.6.0_29]
Caused by: java.lang.IllegalStateException: JBAS010281: Found member largo/twitalytics-knob.yml in current view that duplicates us (largo/twitalytics-knob.yml). This node cannot join partition until duplicate member has been removed
at org.jboss.as.clustering.CoreGroupCommunicationService.verifyNodeIsUnique(CoreGroupCommunicationService.java:1198)
at org.jboss.as.clustering.CoreGroupCommunicationService.startService(CoreGroupCommunicationService.java:916)
at org.jboss.as.clustering.CoreGroupCommunicationService.start(CoreGroupCommunicationService.java:806)
at org.projectodd.polyglot.hasingleton.HASingletonCoordinator.start(HASingletonCoordinator.java:52)
at org.projectodd.polyglot.hasingleton.HASingletonCoordinatorService.start(HASingletonCoordinatorService.java:50)
... 5 more
16:24:11,309 WARN [org.infinispan.config.ConfigurationValidatingVisitor] (MSC service thread 1-5) ISPN000152: Passivation configured without a valid eviction policy. This could mean that the cache store will never get used unless code calls Cache.evict() manually.
16:24:11,394 WARN [com.arjuna.ats.jta] (Periodic Recovery) ARJUNA016037: Could not find new XAResource to use for recovering non-serializable XAResource XAResourceRecord < resource:null, txid:< formatId=131077, gtrid_length=29, bqual_length=36, tx_uid=0:ffffc0a806c9:-1b6d9bd:4f1c86ff:15, node_name=1, branch_uid=0:ffffc0a806c9:-1b6d9bd:4f1c86ff:16, subordinatenodename=null, eis_name=unknown eis name >, heuristic: TwoPhaseOutcome.FINISH_OK com.arjuna.ats.internal.jta.resources.arjunacore.XAResourceRecord@2ad80b42 >
16:24:11,395 WARN [com.arjuna.ats.jta] (Periodic Recovery) ARJUNA016038: No XAResource to recover < formatId=131077, gtrid_length=29, bqual_length=36, tx_uid=0:ffffc0a806c9:-1b6d9bd:4f1c86ff:15, node_name=1, branch_uid=0:ffffc0a806c9:-1b6d9bd:4f1c86ff:16, subordinatenodename=null, eis_name=unknown eis name >
16:24:11,588 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-5) ISPN000078: Starting JGroups Channel
16:24:11,590 WARNING [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (MSC service thread 1-5) Channel Muxer already has a default up handler installed (org.jboss.as.clustering.jgroups.ClassLoaderAwareUpHandler@1706f7ec) but now it is being overridden
16:24:11,591 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-5) ISPN000094: Received new cluster view: [largo/web|1] [largo/web, largo/web]
16:24:11,592 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-5) ISPN000079: Cache local address is largo/web, physical addresses are [192.168.6.201:55200]
16:24:11,657 INFO [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-5) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.0.CR1
16:24:11,844 INFO [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-4) ISPN000031: MBeans were successfully registered to the platform mbean server.
16:24:12,033 INFO [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-5) ISPN000031: MBeans were successfully registered to the platform mbean server.
16:24:12,117 INFO [org.jboss.as.clustering] (MSC service thread 1-5) JBAS010301: Started repl cache from web container
16:24:12,120 INFO [org.jboss.as.clustering] (MSC service thread 1-4) JBAS010301: Started registry cache from web container
16:24:14,212 INFO [org.torquebox.core.runtime] (Thread-99) Initialize? true
16:24:14,219 INFO [org.torquebox.core.runtime] (Thread-99) Initializer=org.torquebox.web.rails.RailsRuntimeInitializer@5f280b6e
16:24:14,234 INFO [org.torquebox.core.runtime] (pool-5-thread-1) Initialize? true
16:24:14,234 INFO [org.torquebox.core.runtime] (pool-5-thread-1) Initializer=org.torquebox.web.rails.RailsRuntimeInitializer@5f280b6e
16:24:14,247 INFO [org.torquebox.core.runtime] (MSC service thread 1-2) Initialize? true
16:24:14,247 INFO [org.torquebox.core.runtime] (MSC service thread 1-2) Initializer=org.torquebox.web.rails.RailsRuntimeInitializer@5f280b6e
16:24:14,579 INFO [org.torquebox.core.runtime] (pool-8-thread-1) Initialize? true
16:24:14,580 INFO [org.torquebox.core.runtime] (pool-8-thread-1) Initializer=org.torquebox.web.rails.RailsRuntimeInitializer@5f280b6e
16:24:14,594 INFO [org.torquebox.core.runtime] (pool-8-thread-2) Initialize? true
16:24:14,594 INFO [org.torquebox.core.runtime] (pool-8-thread-2) Initializer=org.torquebox.web.rails.RailsRuntimeInitializer@5f280b6e
16:24:14,897 INFO [org.torquebox.core.runtime.BundlerAwareRuntimeInitializer] (MSC service thread 1-2) Setting up Bundler
16:24:14,912 INFO [org.torquebox.core.runtime.BundlerAwareRuntimeInitializer] (Thread-99) Setting up Bundler
16:24:14,914 INFO [org.torquebox.core.runtime.BundlerAwareRuntimeInitializer] (pool-5-thread-1) Setting up Bundler
16:24:15,145 INFO [org.torquebox.core.runtime.BundlerAwareRuntimeInitializer] (pool-8-thread-1) Setting up Bundler
16:24:15,245 INFO [org.torquebox.core.runtime.BundlerAwareRuntimeInitializer] (pool-8-thread-2) Setting up Bundler
16:24:44,252 INFO [org.torquebox.core.runtime] (MSC service thread 1-2) Created ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: services) in 35.79s
16:24:44,295 INFO [org.torquebox.core.runtime] (pool-8-thread-2) Created ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: messaging) in 35.08s
16:24:44,356 INFO [org.torquebox.core.runtime] (Thread-99) Created ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: web) in 36.05s
16:24:44,382 INFO [org.torquebox.core.runtime] (pool-5-thread-1) Created ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: stomplets) in 35.97s
16:24:44,495 INFO [org.torquebox.core.runtime] (pool-8-thread-1) Created ruby runtime (ruby_version: RUBY1_8, compile_mode: JIT, app: twitalytics, context: messaging) in 35.28s
16:24:44,701 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015856: Undeploy of deployment "twitalytics-knob.yml" was rolled back with failure message {"JBAS014671: Failed services" => {"jboss.cluster.web" => "org.jboss.msc.service.StartException in service jboss.cluster.web: java.lang.IllegalStateException: JBAS010281: Found member largo/web in current view that duplicates us (largo/web). This node cannot join partition until duplicate member has been removed","jboss.deployment.unit.\"twitalytics-knob.yml\".ha-singleton.coordinator" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"twitalytics-knob.yml\".ha-singleton.coordinator: java.lang.IllegalStateException: JBAS010281: Found member largo/twitalytics-knob.yml in current view that duplicates us (largo/twitalytics-knob.yml). This node cannot join partition until duplicate member has been removed"}}
16:24:44,701 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report
JBAS014777: Services which failed to start: service jboss.cluster.web: org.jboss.msc.service.StartException in service jboss.cluster.web: java.lang.IllegalStateException: JBAS010281: Found member largo/web in current view that duplicates us (largo/web). This node cannot join partition until duplicate member has been removed
service jboss.deployment.unit."twitalytics-knob.yml".ha-singleton.coordinator: org.jboss.msc.service.StartException in service jboss.deployment.unit."twitalytics-knob.yml".ha-singleton.coordinator: java.lang.IllegalStateException: JBAS010281: Found member largo/twitalytics-knob.yml in current view that duplicates us (largo/twitalytics-knob.yml). This node cannot join partition until duplicate member has been removed
16:24:44,710 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) {"JBAS014653: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-2" => {"JBAS014671: Failed services" => {"jboss.cluster.web" => "org.jboss.msc.service.StartException in service jboss.cluster.web: java.lang.IllegalStateException: JBAS010281: Found member largo/web in current view that duplicates us (largo/web). This node cannot join partition until duplicate member has been removed","jboss.deployment.unit.\"twitalytics-knob.yml\".ha-singleton.coordinator" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"twitalytics-knob.yml\".ha-singleton.coordinator: java.lang.IllegalStateException: JBAS010281: Found member largo/twitalytics-knob.yml in current view that duplicates us (largo/twitalytics-knob.yml). This node cannot join partition until duplicate member has been removed"}}}}
16:24:44,855 INFO [org.jboss.as.messaging] (MSC service thread 1-2) JBAS011605: Unbound messaging object to jndi name java:/queues/torquebox/twitalytics/tasks/torquebox_backgroundable
16:24:44,856 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-3) Scheduler JobScheduler$twitalytics-knob.yml_$_largo.local1327271048673 shutting down.
16:24:44,867 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-3) Scheduler JobScheduler$twitalytics-knob.yml_$_largo.local1327271048673 paused.
16:24:44,870 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-3) Scheduler JobScheduler$twitalytics-knob.yml_$_largo.local1327271048673 shutdown complete.
16:24:46,098 ERROR [stderr] (RubyThread-146: /Users/jkutner/workspace/twitalytics/app/services/twitter_stream_service.rb:19) java.lang.UnsatisfiedLinkError: Native Library /private/var/folders/b6/1g6wmnr109l8q9xfwb60qy5r0000gn/T/sqlite-3.7.2-libsqlitejdbc.jnilib already loaded in another classloader
16:24:48,024 INFO [org.jboss.as.messaging] (MSC service thread 1-2) JBAS011605: Unbound messaging object to jndi name java:/topics/statuses
16:24:48,051 INFO [org.infinispan.eviction.PassivationManagerImpl] (MSC service thread 1-8) ISPN000029: Passivating all entries to disk
16:24:48,072 INFO [org.infinispan.eviction.PassivationManagerImpl] (MSC service thread 1-8) ISPN000030: Passivated 1 entries in 15 milliseconds
16:24:48,086 INFO [org.jboss.as.clustering] (MSC service thread 1-3) JBAS010302: Stopped registry cache from web container
16:24:48,101 INFO [org.jboss.as.server.deployment] (MSC service thread 1-4) Stopped deployment twitalytics-knob.yml in 3396ms
16:24:48,121 INFO [org.jboss.as.clustering] (MSC service thread 1-8) JBAS010302: Stopped repl cache from web container
16:24:48,240 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000082: Stopping the RpcDispatcher
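
The JGroups warnings at 16:24:09 are separate from the join failure: the OS capped the UDP receive buffers at 65.51KB even though 20-25MB was requested. The warning itself points at net.core.rmem_max on Linux; a hedged example of raising it follows (the value is an assumption sized to the ~25MB request, and this host appears to be macOS, where the sysctl names differ):

    # Assumed Linux-only tuning; allows the ~25MB receive buffer JGroups asked for
    sysctl -w net.core.rmem_max=26214400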