Created July 17, 2020 01:11
2020-07-17 08:57:45,864 [INFO ] main org.pytorch.serve.ModelServer -
Torchserve version: 0.1.1
TS Home: /home/lili/env-huggface/lib/python3.6/site-packages
Current directory: /home/lili/codes/huggface-transformer/test2
Temp directory: /tmp
Number of GPUs: 1
Number of CPUs: 8
Max heap size: 7938 M
Python executable: /home/lili/env-huggface/bin/python3.6
Config file: N/A
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Model Store: /home/lili/codes/huggface-transformer/test2/model_store
Initial Models: order.mar
Log dir: /home/lili/codes/huggface-transformer/test2/logs
Metrics dir: /home/lili/codes/huggface-transformer/test2/logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 1
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Prefer direct buffer: false
2020-07-17 08:57:45,872 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: order.mar
2020-07-17 08:57:51,285 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model order
2020-07-17 08:57:51,285 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model order
2020-07-17 08:57:51,285 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model order loaded.
2020-07-17 08:57:51,285 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: order, count: 1
2020-07-17 08:57:51,295 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2020-07-17 08:57:51,347 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://127.0.0.1:8080
2020-07-17 08:57:51,348 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel.
2020-07-17 08:57:51,349 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081
2020-07-17 08:57:51,398 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000
2020-07-17 08:57:51,399 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]22653
2020-07-17 08:57:51,399 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-07-17 08:57:51,399 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.8
2020-07-17 08:57:51,399 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change null -> WORKER_STARTED
2020-07-17 08:57:51,401 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2020-07-17 08:57:51,409 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000.
2020-07-17 08:57:51,854 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - PyTorch version 1.5.0+cu101 available.
2020-07-17 08:57:53,014 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - TensorFlow version 2.2.0 available.
2020-07-17 08:57:53,309 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - model: /tmp/models/5886359598784a97ace9c91df12d99590ade3efe/best_model.bin
2020-07-17 08:57:53,309 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2020-07-17 08:57:53,309 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-07-17 08:57:53,309 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 175, in <module>
2020-07-17 08:57:53,309 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-07-17 08:57:53,309 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 147, in run_server
2020-07-17 08:57:53,309 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-07-17 08:57:53,310 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 111, in handle_connection
2020-07-17 08:57:53,310 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-07-17 08:57:53,310 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 84, in load_model
2020-07-17 08:57:53,310 [INFO ] epollEventLoopGroup-4-1 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2020-07-17 08:57:53,310 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-07-17 08:57:53,310 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2020-07-17 08:57:53,310 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_loader.py", line 102, in load
2020-07-17 08:57:53,310 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context)
2020-07-17 08:57:53,311 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/5886359598784a97ace9c91df12d99590ade3efe/testtorchserving.py", line 129, in handle
2020-07-17 08:57:53,311 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise e
2020-07-17 08:57:53,311 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1668)
    at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:129)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.base/java.lang.Thread.run(Thread.java:832)
2020-07-17 08:57:53,311 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/5886359598784a97ace9c91df12d99590ade3efe/testtorchserving.py", line 118, in handle
2020-07-17 08:57:53,312 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2020-07-17 08:57:53,312 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/5886359598784a97ace9c91df12d99590ade3efe/testtorchserving.py", line 49, in initialize
2020-07-17 08:57:53,313 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model = torch.load(model_dir+"/best_model.bin")
2020-07-17 08:57:53,313 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: order, error: Worker died.
2020-07-17 08:57:53,313 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/torch/serialization.py", line 593, in load
2020-07-17 08:57:53,313 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-07-17 08:57:53,313 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
2020-07-17 08:57:53,313 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/torch/serialization.py", line 773, in _legacy_load
2020-07-17 08:57:53,313 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result = unpickler.load()
2020-07-17 08:57:53,313 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stderr
2020-07-17 08:57:53,313 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stdout
2020-07-17 08:57:53,313 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - AttributeError: Can't get attribute 'OrderClassifier' on <module '__main__' from '/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py'>
2020-07-17 08:57:53,313 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stdout
2020-07-17 08:57:53,314 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 1 seconds.
2020-07-17 08:57:53,331 [INFO ] W-9000-order_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stderr
2020-07-17 08:57:54,402 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000
2020-07-17 08:57:54,402 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]22688
2020-07-17 08:57:54,402 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-07-17 08:57:54,402 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-07-17 08:57:54,402 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.8
2020-07-17 08:57:54,402 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2020-07-17 08:57:54,403 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000.
2020-07-17 08:57:54,826 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - PyTorch version 1.5.0+cu101 available.
2020-07-17 08:57:56,022 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - TensorFlow version 2.2.0 available.
2020-07-17 08:57:56,276 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - model: /tmp/models/5886359598784a97ace9c91df12d99590ade3efe/best_model.bin
2020-07-17 08:57:56,276 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2020-07-17 08:57:56,276 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-07-17 08:57:56,276 [INFO ] epollEventLoopGroup-4-2 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2020-07-17 08:57:56,276 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 175, in <module>
2020-07-17 08:57:56,276 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-07-17 08:57:56,276 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2020-07-17 08:57:56,276 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 147, in run_server
2020-07-17 08:57:56,276 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1668)
    at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:129)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.base/java.lang.Thread.run(Thread.java:832)
2020-07-17 08:57:56,276 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-07-17 08:57:56,277 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: order, error: Worker died.
2020-07-17 08:57:56,277 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 111, in handle_connection
2020-07-17 08:57:56,277 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-07-17 08:57:56,277 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-07-17 08:57:56,277 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 84, in load_model
2020-07-17 08:57:56,277 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stderr
2020-07-17 08:57:56,277 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-07-17 08:57:56,277 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stdout
2020-07-17 08:57:56,277 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 1 seconds.
2020-07-17 08:57:56,277 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_loader.py", line 102, in load
2020-07-17 08:57:56,277 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stdout
2020-07-17 08:57:56,295 [INFO ] W-9000-order_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stderr
2020-07-17 08:57:57,364 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000
2020-07-17 08:57:57,365 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]22709
2020-07-17 08:57:57,365 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-07-17 08:57:57,365 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-07-17 08:57:57,365 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.8
2020-07-17 08:57:57,365 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2020-07-17 08:57:57,366 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000.
2020-07-17 08:57:57,791 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - PyTorch version 1.5.0+cu101 available.
2020-07-17 08:57:58,969 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - TensorFlow version 2.2.0 available.
2020-07-17 08:57:59,224 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - model: /tmp/models/5886359598784a97ace9c91df12d99590ade3efe/best_model.bin
2020-07-17 08:57:59,224 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2020-07-17 08:57:59,224 [INFO ] epollEventLoopGroup-4-3 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2020-07-17 08:57:59,224 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-07-17 08:57:59,224 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2020-07-17 08:57:59,225 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 175, in <module>
2020-07-17 08:57:59,225 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1668)
    at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:129)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.base/java.lang.Thread.run(Thread.java:832)
2020-07-17 08:57:59,225 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-07-17 08:57:59,225 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: order, error: Worker died.
2020-07-17 08:57:59,225 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 147, in run_server
2020-07-17 08:57:59,225 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-07-17 08:57:59,225 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-07-17 08:57:59,225 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 111, in handle_connection
2020-07-17 08:57:59,225 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stderr
2020-07-17 08:57:59,225 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-07-17 08:57:59,225 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stdout
2020-07-17 08:57:59,225 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 84, in load_model
2020-07-17 08:57:59,225 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 2 seconds.
2020-07-17 08:57:59,225 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stdout
2020-07-17 08:57:59,243 [INFO ] W-9000-order_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stderr
2020-07-17 08:58:01,311 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000
2020-07-17 08:58:01,312 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]22733
2020-07-17 08:58:01,312 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-07-17 08:58:01,312 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-07-17 08:58:01,312 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.8
2020-07-17 08:58:01,312 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2020-07-17 08:58:01,313 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000.
2020-07-17 08:58:01,732 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - PyTorch version 1.5.0+cu101 available.
2020-07-17 08:58:02,891 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - TensorFlow version 2.2.0 available.
2020-07-17 08:58:03,148 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - model: /tmp/models/5886359598784a97ace9c91df12d99590ade3efe/best_model.bin
2020-07-17 08:58:03,148 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2020-07-17 08:58:03,148 [INFO ] epollEventLoopGroup-4-4 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2020-07-17 08:58:03,148 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-07-17 08:58:03,148 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 175, in <module>
2020-07-17 08:58:03,148 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2020-07-17 08:58:03,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-07-17 08:58:03,149 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1668)
    at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:129)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.base/java.lang.Thread.run(Thread.java:832)
2020-07-17 08:58:03,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 147, in run_server
2020-07-17 08:58:03,149 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: order, error: Worker died.
2020-07-17 08:58:03,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-07-17 08:58:03,149 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-07-17 08:58:03,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 111, in handle_connection
2020-07-17 08:58:03,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-07-17 08:58:03,149 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stderr
2020-07-17 08:58:03,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 84, in load_model
2020-07-17 08:58:03,149 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stdout
2020-07-17 08:58:03,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-07-17 08:58:03,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stdout
2020-07-17 08:58:03,149 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 3 seconds.
2020-07-17 08:58:03,167 [INFO ] W-9000-order_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stderr
2020-07-17 08:58:06,232 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000
2020-07-17 08:58:06,232 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]22754
2020-07-17 08:58:06,233 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-07-17 08:58:06,233 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.8
2020-07-17 08:58:06,233 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-07-17 08:58:06,233 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2020-07-17 08:58:06,234 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000.
2020-07-17 08:58:06,658 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - PyTorch version 1.5.0+cu101 available.
2020-07-17 08:58:07,851 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - TensorFlow version 2.2.0 available.
2020-07-17 08:58:08,104 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - model: /tmp/models/5886359598784a97ace9c91df12d99590ade3efe/best_model.bin
2020-07-17 08:58:08,104 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2020-07-17 08:58:08,104 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-07-17 08:58:08,104 [INFO ] epollEventLoopGroup-4-5 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2020-07-17 08:58:08,104 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 175, in <module>
2020-07-17 08:58:08,104 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-07-17 08:58:08,104 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2020-07-17 08:58:08,104 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 147, in run_server
2020-07-17 08:58:08,104 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1668)
    at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:129)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.base/java.lang.Thread.run(Thread.java:832)
2020-07-17 08:58:08,104 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-07-17 08:58:08,105 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: order, error: Worker died.
2020-07-17 08:58:08,105 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 111, in handle_connection
2020-07-17 08:58:08,105 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-07-17 08:58:08,105 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2020-07-17 08:58:08,105 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 84, in load_model
2020-07-17 08:58:08,105 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size)
2020-07-17 08:58:08,105 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stderr
2020-07-17 08:58:08,105 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stdout
2020-07-17 08:58:08,105 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_loader.py", line 102, in load
2020-07-17 08:58:08,105 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 5 seconds.
2020-07-17 08:58:08,105 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stdout
2020-07-17 08:58:08,124 [INFO ] W-9000-order_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stderr
2020-07-17 08:58:13,191 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000
2020-07-17 08:58:13,192 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]22787
2020-07-17 08:58:13,192 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-07-17 08:58:13,192 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.8
2020-07-17 08:58:13,192 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2020-07-17 08:58:13,192 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2020-07-17 08:58:13,193 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000.
2020-07-17 08:58:13,652 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - PyTorch version 1.5.0+cu101 available.
2020-07-17 08:58:14,809 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - TensorFlow version 2.2.0 available.
2020-07-17 08:58:15,063 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - model: /tmp/models/5886359598784a97ace9c91df12d99590ade3efe/best_model.bin
2020-07-17 08:58:15,063 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2020-07-17 08:58:15,063 [INFO ] epollEventLoopGroup-4-6 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2020-07-17 08:58:15,063 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2020-07-17 08:58:15,063 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 175, in <module>
2020-07-17 08:58:15,063 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2020-07-17 08:58:15,063 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2020-07-17 08:58:15,064 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1668)
    at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435)
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:129)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.base/java.lang.Thread.run(Thread.java:832)
2020-07-17 08:58:15,064 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 147, in run_server
2020-07-17 08:58:15,064 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: order, error: Worker died.
2020-07-17 08:58:15,064 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2020-07-17 08:58:15,064 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2020-07-17 08:58:15,064 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 111, in handle_connection
2020-07-17 08:58:15,064 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg) | |
2020-07-17 08:58:15,064 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 84, in load_model | |
2020-07-17 08:58:15,064 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stderr | |
2020-07-17 08:58:15,064 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size) | |
2020-07-17 08:58:15,064 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stdout | |
2020-07-17 08:58:15,064 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 8 seconds. | |
2020-07-17 08:58:15,064 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_loader.py", line 102, in load | |
2020-07-17 08:58:15,064 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stdout | |
2020-07-17 08:58:15,081 [INFO ] W-9000-order_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stderr | |
2020-07-17 08:58:23,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000 | |
2020-07-17 08:58:23,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]22813 | |
2020-07-17 08:58:23,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started. | |
2020-07-17 08:58:23,149 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.8 | |
2020-07-17 08:58:23,149 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STOPPED -> WORKER_STARTED | |
2020-07-17 08:58:23,149 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000 | |
2020-07-17 08:58:23,150 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000. | |
2020-07-17 08:58:23,574 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - PyTorch version 1.5.0+cu101 available. | |
2020-07-17 08:58:24,738 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - TensorFlow version 2.2.0 available. | |
2020-07-17 08:58:24,990 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - model: /tmp/models/5886359598784a97ace9c91df12d99590ade3efe/best_model.bin | |
2020-07-17 08:58:24,990 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died. | |
2020-07-17 08:58:24,990 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last): | |
2020-07-17 08:58:24,990 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 175, in <module> | |
2020-07-17 08:58:24,990 [INFO ] epollEventLoopGroup-4-7 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED | |
2020-07-17 08:58:24,990 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server() | |
2020-07-17 08:58:24,990 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 147, in run_server | |
2020-07-17 08:58:24,990 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED | |
2020-07-17 08:58:24,990 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket) | |
2020-07-17 08:58:24,991 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died. | |
java.lang.InterruptedException | |
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1668) | |
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435) | |
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:129) | |
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) | |
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) | |
at java.base/java.lang.Thread.run(Thread.java:832) | |
2020-07-17 08:58:24,991 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 111, in handle_connection | |
2020-07-17 08:58:24,991 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: order, error: Worker died. | |
2020-07-17 08:58:24,991 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg) | |
2020-07-17 08:58:24,991 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STARTED -> WORKER_STOPPED | |
2020-07-17 08:58:24,991 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 84, in load_model | |
2020-07-17 08:58:24,991 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size) | |
2020-07-17 08:58:24,991 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stderr | |
2020-07-17 08:58:24,991 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_loader.py", line 102, in load | |
2020-07-17 08:58:24,991 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stdout | |
2020-07-17 08:58:24,991 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context) | |
2020-07-17 08:58:24,991 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stdout | |
2020-07-17 08:58:24,991 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 13 seconds. | |
2020-07-17 08:58:25,008 [INFO ] W-9000-order_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stderr | |
2020-07-17 08:58:38,075 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000 | |
2020-07-17 08:58:38,075 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]22845 | |
2020-07-17 08:58:38,075 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started. | |
2020-07-17 08:58:38,075 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.8 | |
2020-07-17 08:58:38,075 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STOPPED -> WORKER_STARTED | |
2020-07-17 08:58:38,075 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000 | |
2020-07-17 08:58:38,076 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000. | |
2020-07-17 08:58:38,496 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - PyTorch version 1.5.0+cu101 available. | |
2020-07-17 08:58:39,662 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - TensorFlow version 2.2.0 available. | |
2020-07-17 08:58:39,918 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - model: /tmp/models/5886359598784a97ace9c91df12d99590ade3efe/best_model.bin | |
2020-07-17 08:58:39,918 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died. | |
2020-07-17 08:58:39,918 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last): | |
2020-07-17 08:58:39,918 [INFO ] epollEventLoopGroup-4-8 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED | |
2020-07-17 08:58:39,918 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 175, in <module> | |
2020-07-17 08:58:39,918 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server() | |
2020-07-17 08:58:39,918 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED | |
2020-07-17 08:58:39,918 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 147, in run_server | |
2020-07-17 08:58:39,919 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died. | |
java.lang.InterruptedException | |
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1668) | |
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435) | |
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:129) | |
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) | |
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) | |
at java.base/java.lang.Thread.run(Thread.java:832) | |
2020-07-17 08:58:39,919 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket) | |
2020-07-17 08:58:39,919 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: order, error: Worker died. | |
2020-07-17 08:58:39,919 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 111, in handle_connection | |
2020-07-17 08:58:39,919 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STARTED -> WORKER_STOPPED | |
2020-07-17 08:58:39,919 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg) | |
2020-07-17 08:58:39,919 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 84, in load_model | |
2020-07-17 08:58:39,919 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size) | |
2020-07-17 08:58:39,919 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stderr | |
2020-07-17 08:58:39,919 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stdout | |
2020-07-17 08:58:39,919 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_loader.py", line 102, in load | |
2020-07-17 08:58:39,919 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 21 seconds. | |
2020-07-17 08:58:39,919 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stdout | |
2020-07-17 08:58:39,936 [INFO ] W-9000-order_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stderr | |
2020-07-17 08:59:01,004 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000 | |
2020-07-17 08:59:01,004 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]22886 | |
2020-07-17 08:59:01,004 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started. | |
2020-07-17 08:59:01,004 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.8 | |
2020-07-17 08:59:01,004 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STOPPED -> WORKER_STARTED | |
2020-07-17 08:59:01,004 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000 | |
2020-07-17 08:59:01,063 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000. | |
2020-07-17 08:59:01,523 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - PyTorch version 1.5.0+cu101 available. | |
2020-07-17 08:59:02,678 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - TensorFlow version 2.2.0 available. | |
2020-07-17 08:59:02,932 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - model: /tmp/models/5886359598784a97ace9c91df12d99590ade3efe/best_model.bin | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died. | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last): | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 175, in <module> | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server() | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 147, in run_server | |
2020-07-17 08:59:02,933 [INFO ] epollEventLoopGroup-4-9 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket) | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 111, in handle_connection | |
2020-07-17 08:59:02,933 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED | |
2020-07-17 08:59:02,933 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died. | |
java.lang.InterruptedException | |
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1668) | |
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435) | |
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:129) | |
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) | |
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) | |
at java.base/java.lang.Thread.run(Thread.java:832) | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg) | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_service_worker.py", line 84, in load_model | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size) | |
2020-07-17 08:59:02,933 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: order, error: Worker died. | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/lili/env-huggface/lib/python3.6/site-packages/ts/model_loader.py", line 102, in load | |
2020-07-17 08:59:02,933 [DEBUG] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-order_1.0 State change WORKER_STARTED -> WORKER_STOPPED | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - entry_point(None, service.context) | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/5886359598784a97ace9c91df12d99590ade3efe/testtorchserving.py", line 129, in handle | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise e | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/5886359598784a97ace9c91df12d99590ade3efe/testtorchserving.py", line 118, in handle | |
2020-07-17 08:59:02,933 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stderr | |
2020-07-17 08:59:02,933 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context) | |
2020-07-17 08:59:02,934 [WARN ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-order_1.0-stdout | |
2020-07-17 08:59:02,934 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/tmp/models/5886359598784a97ace9c91df12d99590ade3efe/testtorchserving.py", line 49, in initialize | |
2020-07-17 08:59:02,934 [INFO ] W-9000-order_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stdout | |
2020-07-17 08:59:02,934 [INFO ] W-9000-order_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 34 seconds. | |
2020-07-17 08:59:02,952 [INFO ] W-9000-order_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-order_1.0-stderr | |
2020-07-17 08:59:30,175 [INFO ] epollEventLoopGroup-2-2 org.pytorch.serve.ModelServer - Management model server stopped. | |
2020-07-17 08:59:30,175 [INFO ] epollEventLoopGroup-2-1 org.pytorch.serve.ModelServer - Inference model server stopped. | |
2020-07-17 08:59:32,401 [INFO ] main org.pytorch.serve.ModelServer - Torchserve stopped. |