@patx
Created January 29, 2025 00:35
MicroPie vs FastAPI Benchmarks (with wrk)

FastAPI

The code used:

```python
from fastapi import FastAPI
from typing import Optional

app = FastAPI()

@app.get("/")
async def read_root():
    return "Hello ASGI World!"

@app.get("/index")
async def read_index(name: Optional[str] = None):
    if name:
        return f"Hello {name}"
    return "Hello FastAPI World!"
```

The results:

```
Running 30s test @ http://127.0.0.1:8000/
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    49.15ms    7.67ms 102.39ms   78.03%
    Req/Sec   510.33     57.22   640.00     64.42%
  61027 requests in 30.04s, 8.39MB read
Requests/sec:   2031.28
Transfer/sec:    285.95KB
Running 30s test @ http://127.0.0.1:8000/
  4 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    91.97ms   36.16ms 365.85ms   72.52%
    Req/Sec   551.72    121.33   820.00     73.33%
  65928 requests in 30.03s, 9.06MB read
Requests/sec:   2195.34
Transfer/sec:    309.07KB
```

MicroPie

The code used:

```python
from MicroPie import Server

class Root(Server):
    async def index(self, name=None):
        if name:
            return f'Hello {name}'
        return 'Hello ASGI World!'

app = Root()
```

The results:

```
Running 30s test @ http://127.0.0.1:8000/
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    44.58ms    2.89ms 101.74ms   90.52%
    Req/Sec   562.74     29.75   636.00     76.58%
  67283 requests in 30.04s, 17.02MB read
Requests/sec:   2239.68
Transfer/sec:    580.29KB
Running 30s test @ http://127.0.0.1:8000/
  4 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    81.46ms   11.84ms 142.96ms   68.89%
    Req/Sec   615.61     66.25   838.00     71.75%
  73571 requests in 30.03s, 18.61MB read
Requests/sec:   2450.02
Transfer/sec:    634.76KB
```

The ASGI Server

These tests were run with uvicorn on a StarLabs Starlite MkIV. The command used to start uvicorn was:

```shell
uvicorn app:app --workers 4
```
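The exact wrk invocations aren't shown above, but based on the reported output (4 threads, 100 or 200 connections, 30-second runs), the benchmarks were presumably run as:

```shell
# Presumed wrk invocations, inferred from the reported thread/connection/
# duration figures. Assumes wrk is installed and the app is already
# running under uvicorn as shown above.
wrk -t4 -c100 -d30s http://127.0.0.1:8000/   # first run: 100 connections
wrk -t4 -c200 -d30s http://127.0.0.1:8000/   # second run: 200 connections
```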

Results Table

100 Connections

| Framework | Avg Latency | Max Latency | Requests/sec | Data Transferred | Transfer Rate |
|-----------|-------------|-------------|--------------|------------------|---------------|
| FastAPI   | 49.15 ms    | 102.39 ms   | 2031.28      | 8.39 MB          | 285.95 KB/s   |
| MicroPie  | 44.58 ms    | 101.74 ms   | 2239.68      | 17.02 MB         | 580.29 KB/s   |

200 Connections

| Framework | Avg Latency | Max Latency | Requests/sec | Data Transferred | Transfer Rate |
|-----------|-------------|-------------|--------------|------------------|---------------|
| FastAPI   | 91.97 ms    | 365.85 ms   | 2195.34      | 9.06 MB          | 309.07 KB/s   |
| MicroPie  | 81.46 ms    | 142.96 ms   | 2450.02      | 18.61 MB         | 634.76 KB/s   |
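As a quick sanity check on the 100-connection figures, the reported requests-per-second values should match total requests divided by test duration, and the data-transferred column gives an approximate bytes-per-response figure (wrk reports MB as 1024 × 1024 bytes). The snippet below derives both from the numbers above; note that MicroPie moved roughly twice as many bytes per response, which likely reflects differences in response headers and body framing between the two stacks rather than anything measured directly here.

```python
# Derive req/s and bytes-per-response from the 100-connection wrk output above.
MB = 1024 * 1024

runs = {
    # framework: (total requests, duration in s, reported req/s, data read in MB)
    "FastAPI":  (61027, 30.04, 2031.28, 8.39),
    "MicroPie": (67283, 30.04, 2239.68, 17.02),
}

for name, (reqs, dur, reported_rps, mb) in runs.items():
    derived_rps = reqs / dur          # should agree with wrk's Requests/sec line
    bytes_per_resp = mb * MB / reqs   # average response size on the wire
    print(f"{name}: {derived_rps:.1f} req/s "
          f"(reported {reported_rps}), ~{bytes_per_resp:.0f} B/response")
    # the derived rate matches the reported one to within 1 req/s
    assert abs(derived_rps - reported_rps) < 1
```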

Conclusion

At both concurrency levels, MicroPie handled roughly 10% more requests per second than FastAPI with lower average latency, and its maximum latency stayed far more stable under load at 200 connections (142.96 ms vs 365.85 ms). MicroPie's lightweight design gives it an edge in raw performance for scenarios where simplicity and speed are the priority. FastAPI, however, offers broader functionality, including built-in request validation and interactive API documentation, making it the more versatile choice for complex projects.
