
@derekr
Last active November 12, 2025 22:44
SSE FAQ

This is just a stub doc for collecting any SSE-related tips/tricks/considerations others in the Datastar community want to share. While it will likely serve as a helpful resource itself, the content will be used to author a more user-friendly doc or artifact to reference when learning and leveraging SSE.

SSE FAQ

Are there any SSE gotchas?

There are some known gotchas for SSE documented here:

@AndrooFrowns

Here are my thoughts on a possible way to structure it, based on FAQs we see in the Discord. There's probably some room to defer to other parts of the documentation or omit things, but I haven't yet taken the time to reconcile where that makes sense to me. I removed the "big red warning" language since I wasn't sure it would always stay red in the future, or whether people using assistive devices or with colorblindness would easily identify the color.


SSE FAQ

If any information here contradicts the standards or MDN, assume those are more authoritative than this FAQ.

Is SSE required to use Datastar?

No, you can use Datastar with regular HTTP responses as long as you adjust the headers appropriately. See (link to patching elements)
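
For illustration, here is a minimal sketch of a non-SSE handler in Go, assuming Datastar will accept a plain text/html body for element patching; the route, the #notice element, and the content are made up, and the patching-elements docs are the authority on the exact headers Datastar expects:

```go
// Minimal sketch: a regular, finite HTTP response instead of an event stream.
// Assumes Datastar can patch elements from a plain text/html body; the route
// and the #notice element are hypothetical.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/notice", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		// An ordinary response body: the element to patch, nothing more.
		fmt.Fprint(w, `<div id="notice">Saved!</div>`)
	})
	http.ListenAndServe(":8080", nil)
}
```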

Do I have to keep my SSE connection long-lived?

No, SSE just gives you the option to send zero or more events over a response. If your needs are met after zero or one event has been added to the stream, terminate the connection.
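
As a concrete sketch (plain Go net/http, no SDK; the route and event payload are made up), a handler that sends a single event and then ends the response:

```go
// Minimal sketch of a short-lived SSE response: set the stream headers, write
// one event, and return so the connection closes. The event name and data are
// generic SSE; a Datastar SDK would emit its own event types for you.
package main

import (
	"fmt"
	"net/http"
)

func oneShot(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")

	// A single event; the blank line terminates it.
	fmt.Fprint(w, "event: message\ndata: hello\n\n")

	// Returning from the handler ends the response and closes the stream.
}

func main() {
	http.HandleFunc("/once", oneShot)
	http.ListenAndServe(":8080", nil)
}
```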

Are there any SSE gotchas?

@AndrooFrowns

AndrooFrowns commented Nov 11, 2025

Oh, I realized the way I mentioned the 100-connection limit is misleading: on HTTP/2 it is 100 by default but negotiated between client and server. My phrasing implies it's always 100.

@Regaez

Regaez commented Nov 11, 2025

Here are a few things I've picked up along the way while using SSE that may help others:

  1. Compression should be done at the application level, not by a reverse proxy. Otherwise, depending on the frequency and content size of your event payloads, number of simultaneous connections, and their duration, you could potentially send large amounts of unnecessary data between your application and the reverse proxy. It also increases memory usage in both places, raising server resource demands.
  2. Brotli compression should be preferred, if possible, as it can enable insane compression ratios (think 100s:1) when your response is well suited to it (i.e. the structure of your HTML is fairly similar across events and the SSE stream is long-lasting). This can result in barely any data transfer to the client over the lifetime of the connection.
  3. SSE responses cannot be cached, for example with the standard Cache-Control header. Compression is basically the closest you can get to caching, but it is scoped per connection.
  4. If not using the SDKs, take care to ensure there are no errant newlines in your data response. A payload line missing its data: elements prefix, for example, will break the event parsing and result in no DOM update at all. The SDKs handle this for you automatically (see the sketch after this list).
  5. For anyone using nginx as a reverse proxy, basically what this top answer says: https://serverfault.com/questions/801628/for-server-sent-events-sse-what-nginx-proxy-configuration-is-appropriate
  6. If you open a long-lived SSE connection on page load, i.e. to send updates later (e.g. the "fat morph" strategy), and you are using Brotli compression on the stream, you may want to consider sending an initial event with content identical to what is already present on the page in order to "warm up" the compression window, provided your page structure/content supports it. This may result in smaller data payloads for future updates, potentially improving the "feel"/responsiveness of your UI.
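
To make point 4 concrete, here is a minimal Go sketch of the per-line data: prefixing; the writeEvent helper and the event name are hypothetical, and the SDKs do this for you:

```go
// Minimal sketch: every line of a multi-line payload must carry its own
// "data:" prefix; a raw newline without one terminates the event early and
// the parser drops it. Helper and event name are hypothetical.
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

func writeEvent(w io.Writer, event, payload string) {
	fmt.Fprintf(w, "event: %s\n", event)
	// One "data:" line per source line, so the payload contains no bare newlines.
	for _, line := range strings.Split(payload, "\n") {
		fmt.Fprintf(w, "data: %s\n", line)
	}
	// A blank line ends the event.
	fmt.Fprint(w, "\n")
}

func main() {
	html := "<div id=\"list\">\n  <p>item one</p>\n  <p>item two</p>\n</div>"
	writeEvent(os.Stdout, "message", html) // in practice, pass your http.ResponseWriter
}
```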

@nickchomey

> Compression should be done at the application level, not by a reverse proxy. Otherwise, depending on the frequency and content size of your event payloads, number of simultaneous connections, and their duration, you could potentially send large amounts of unnecessary data between your application and the reverse proxy. It also increases memory usage in both places, raising server resource demands.

What if the reverse proxy is on the same server, e.g. Caddy in front of your app?

> Brotli compression should be preferred, if possible, as it can enable insane compression ratios (think 100s:1) when your response is well suited to it (i.e. the structure of your HTML is fairly similar across events and the SSE stream is long-lasting). This can result in barely any data transfer to the client over the lifetime of the connection.

What about zstd for browsers that support it?

