Simon Mo (simon-mo)
ip              count  is_travis_ip
104.196.57.92   18199  True
35.227.97.188   15061  True
34.74.253.255   10694  True
35.196.72.151    8435  True
35.237.8.208     8208  True
35.196.158.85    7857  True
35.237.212.185   6797  True
52.179.196.222   5702  False
35.237.56.208    5599  True
pip install uvicorn beautifulsoup4 requests aiohttp lxml scrapy
  • compute.py pings mock_server.py and figures out the views per second
  • mock_server.py can be started with uvicorn mock_server:app
  • scrapy_spider is the existing solution; run it with scrapy runspider scrapy_spider.py
  • sync_spider uses requests and fires requests synchronously
  • async_spider uses aiohttp and fires 4 requests concurrently at a time
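The bounded-concurrency pattern async_spider relies on (at most 4 requests in flight) can be sketched with a plain asyncio semaphore; `fetch` here is a stand-in for an aiohttp `session.get` call, and the URLs are made up for illustration:

```python
import asyncio

CONCURRENCY = 4  # matches the "4 requests at a time" limit above

async def fetch(url):
    # stand-in for: async with session.get(url) as resp: return await resp.text()
    await asyncio.sleep(0.01)
    return f"body of {url}"

async def crawl(urls):
    sem = asyncio.Semaphore(CONCURRENCY)

    async def bounded_fetch(url):
        async with sem:  # at most CONCURRENCY coroutines pass this point at once
            return await fetch(url)

    # gather preserves the order of the input urls in its results
    return await asyncio.gather(*(bounded_fetch(u) for u in urls))

pages = asyncio.run(crawl([f"http://example.com/{i}" for i in range(10)]))
print(len(pages))
```

Swapping the simulated `fetch` for a real aiohttp call keeps the same structure; only the body of `fetch` changes.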
from __future__ import print_function, absolute_import, division
import rpc
import os
import sys
import numpy as np
import torch
from torchvision import transforms
from torch.autograd import Variable
from PIL import Image, ImageEnhance
import logging
<!DOCTYPE html>
<html>
<head>
<style>
.error {
color: red;
}
</style>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm//vega@5"></script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm//[email protected]"></script>
== Status ==
Memory usage on this node: 37.0/480.3 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/640 CPUs, 0/8 GPUs, 0.0/2526.95 GiB heap, 0.0/128.42 GiB objects
Result logdir: /home/ubuntu/ray_results/atari-impala
Number of trials: 4 (4 TERMINATED)
+----------------------------------------+------------+-------+-------------------------+--------+------------------+-------------+----------+
| Trial name                             | status     | loc   | env                     |   iter |   total time (s) |   timesteps |   reward |
|----------------------------------------+------------+-------+-------------------------+--------+------------------+-------------+----------|
| IMPALA_BreakoutNoFrameskip-v4_511a1830 | TERMINATED |       | BreakoutNoFrameskip-v4  |     39 |          699.024 |     3050000 |    44.21 |
+----------------------------------------+------------+-------+-------------------------+--------+------------------+-------------+----------+
import asyncio
import ray

ray.init()

@ray.remote
class AsyncWorker:
    async def do_work(self):
        print("Started")
        await asyncio.sleep(0.5)  # simulate network I/O
@ray.remote
class AsyncActor:
    def __init__(self):
        ...

    async def some_function(self):
        ...
        await asyncio.sleep(1)
        ...
# Ray futures can be awaited directly.
result = await object_id_1
result = await task.remote()

# Use asyncio.wait to wait with a timeout
done_futures, not_done_futures = await asyncio.wait([ObjectIDs...], timeout=...)

# Use asyncio.gather to wait for a batch
# (gather takes awaitables as positional arguments, not a list)
results = await asyncio.gather(object_id_1.future(), object_id_2.future())
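Outside Ray, `asyncio.wait` and `asyncio.gather` behave the same way on plain coroutines. A minimal asyncio-only sketch (the task bodies and delays are made up for illustration) shows the done/not-done split under a timeout, then gathers what remains:

```python
import asyncio

async def slow_task(i, delay):
    await asyncio.sleep(delay)
    return i

async def main():
    # tasks finish at 0.0s, 0.1s, 0.2s, 0.3s
    tasks = [asyncio.create_task(slow_task(i, 0.1 * i)) for i in range(4)]

    # asyncio.wait with a timeout splits tasks into done / not-done sets
    done, not_done = await asyncio.wait(tasks, timeout=0.25)

    # asyncio.gather collects the remaining results
    remaining = await asyncio.gather(*not_done)
    return sorted(t.result() for t in done), remaining

done_results, remaining_results = asyncio.run(main())
print(done_results, remaining_results)
```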
@ray.remote
class LoadBalancer:
    async def proxy_request(self, query):
        actor = self.choose_actor()
        return await actor.execute.remote(query)
# before
async def compute_intensive_coroutine():
    do_work()  # CPU-heavy task here
    ...        # blocks the event loop
    ...        # threading doesn't help because of the GIL

# after
async def compute_intensive_coroutine():
    await do_work.remote()  # offload the task to a Ray worker
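For code that cannot take a Ray dependency, the same offloading idea can be approximated with asyncio's executor support; this sketch (function names are illustrative) hands the heavy call to `loop.run_in_executor` so the event loop stays responsive. A thread pool is used here so the example is self-contained, but note that for pure-Python CPU work a ProcessPoolExecutor, or a Ray task as above, is what actually escapes the GIL:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def do_work(n):
    # stand-in for CPU-heavy work; threads share the GIL, so for pure-Python
    # loops like this a ProcessPoolExecutor (or a Ray task) is the real fix
    return sum(i * i for i in range(n))

async def compute_intensive_coroutine(pool):
    loop = asyncio.get_running_loop()
    # the event loop is free to run other coroutines while the pool computes
    return await loop.run_in_executor(pool, do_work, 1000)

async def main():
    with ThreadPoolExecutor(max_workers=2) as pool:
        return await asyncio.gather(compute_intensive_coroutine(pool),
                                    compute_intensive_coroutine(pool))

results = asyncio.run(main())
print(results)
```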