Last active: August 27, 2025 18:16
Example usage code for an Event loop in Jai.
main :: () {
    /*
        Somewhere in main we start the event loop on a separate thread.
        The event loop enables cooperative multi-tasking between tasks. A task is a coroutine.
        We call this event loop 'Overloop'.
    */

    // Prepare some user data for the event loop (*void)
    data := New(Send_Msg_Coroutine_Data, false);
    data.* = .{
        overchat = oc,
        chat     = c,
    };
    /*
        Add work to the event loop. Can be called from any thread.

        'proc' is the actual work function, called on the event loop thread each tick of the event loop.
        Once 'proc' signals that it is finished, the work item is added to a list of finished work.
        The main thread consumes any finished work each frame, calls 'finished_proc' on the main thread
        (if one was given), then frees the work.

        Each work item has its own memory pool that is freed when the item signals it is fully finished.
        This enables you to *not* care about freeing anything. GC at home.

        A work item can create and 'depend' on other work items. A work item is only called once
        everything it depends on has finished. This enables coroutines to wait on other long-running
        coroutines and to use their results without blocking the thread.
        A work item and all its children use the same memory pool and are all freed at once when the
        parent finishes.

        Temp is reset after calling/ticking each work item.
    */
    ol_add_work(*context.state.overloop, data.*, proc=overchat_send_msg_proc, finished_proc=overchat_send_msg_finished_proc);
}
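/*
    A sketch of the dependency feature described above: a parent work item spawning a
    child work item it depends on. This is commented out because the function name
    'ol_add_child_work' and the identifiers 'child_data'/'child_proc' are assumptions
    for illustration; the real Overloop API for dependencies isn't shown in this gist.

parent_proc :: (ol: *Overloop, work: *Overloop_Work) -> (Overloop_Proc_Result) {
    ol_coroutine(
        #code {
            // Spawn a child work item. This parent is not ticked again
            // until the child signals it is finished (name assumed).
            ol_add_child_work(ol, work, child_data, proc=child_proc);
        },
        #code {
            // Runs only after child_proc has finished; the child shares the
            // parent's memory pool, and both are freed when the parent finishes.
        },
    );
    ol_yield_done();
}
*/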
/*
    This function is an event loop work proc (aka coroutine proc).
    It sends chat messages to the OpenAI API to talk to GPT-4o, and streams back
    the AI response until it is done or errors out.
*/
overchat_send_msg_proc :: (ol: *Overloop, work: *Overloop_Work) -> (Overloop_Proc_Result) {

    ok: bool;
    data := safe_cast(work.data, *Send_Msg_Coroutine_Data);

    // Yielding done signals that we are finished; the event loop will NOT call this proc again.
    if !data.chat.msgs.count ol_yield_done();
    /*
        Each '#code' block is a step/tick run by the event loop.
        On the first call of this proc the first code block runs, on the second call the second block runs, and so on.

        'ol_yield_done()' can be called to return and end the coroutine early. No further calls will be made.
        'ol_yield_repeat()' can be called to return but tell the event loop to call the SAME step again.
        This lets you create loops on the event loop, which enables long-running tasks of unknown
        duration, like network requests, without blocking it.
    */
    ol_coroutine(
        // First tick
        #code {
            // Prepare network request to OpenAI
            completion_req := Chat_Completion_Req.{
                model                 = GPT_4O,
                messages              = .[],
                max_completion_tokens = 8192,
                temperature           = 1,
                stream                = true,
            };

            ok=, async_handle := create_chat_completion_async(data.overchat.oai_client, completion_req);
            if !ok {
                log("failed to create chat completion", flags=.ERROR);
                ol_yield_done();
            }

            // Do initial tick (uses async CURL underneath)
            data.async_handle = async_handle;
            ok=, done, resps := tick_chat_completion_async(data.async_handle);
            if !ok {
                log("failed to tick chat completion", flags=.ERROR);
                ol_yield_done();
            }

            if resps.count {
                // Do stuff with streamed AI response
            }

            // In case we got the full response in one call!
            if done {
                data.success = true;
                ol_yield_done();
            }
        },
        // Second tick.
        // Loops (once per event loop run) until done streaming from OpenAI or an error occurs.
        #code {
            // Tick till done
            ok=, done, resps := tick_chat_completion_async(data.async_handle);
            if !ok {
                log("failed to tick chat completion", flags=.ERROR);
                ol_yield_done();
            }

            if resps.count {
                // Do stuff with streamed AI response
            }

            // Not done, so tell the event loop to call this same code block next time.
            if !done ol_yield_repeat();

            // All done!
            // No need for an explicit ol_yield_done; that happens implicitly when all code blocks are done :)
            data.success = true;
        },
    );

    // Just to stop the compiler from complaining about not all paths returning.
    ol_yield_done();
}