- By Edmond Lau
- Highly Recommended 👍
- http://www.theeffectiveengineer.com/
- Effective engineers are the people who get things done; they produce results.
<!-- livebook:{"persist_outputs":true} -->

# Distribute

## Section

A small toy to show how you might, given a stream, do a "fan out", processing different elements in separate streams. Powered by simple primitives like `Stream.resource` and `spawn_link`.
```elixir
defmodule Distribute do
  # Fan the elements of `enum` out to `n` linked worker processes,
  # routing element i to worker rem(i, n). Workers are left running
  # when the stream ends; this is a toy, not a supervised pipeline.
  def fan_out(enum, n, fun) do
    workers = for _ <- 1..n, do: spawn_link(fn -> worker_loop(fun) end)

    enum
    |> Stream.with_index()
    |> Stream.each(fn {el, i} -> send(Enum.at(workers, rem(i, n)), {:el, el}) end)
    |> Stream.run()
  end

  defp worker_loop(fun) do
    receive do
      {:el, el} ->
        fun.(el)
        worker_loop(fun)
    end
  end
end
```
```elixir
defmodule OperationsTest do
  use ExUnit.Case, async: true

  # Generate a unique module name for each test run, so tests that
  # define modules at runtime don't collide.
  def make_mod() do
    String.to_atom("Elixir.Test#{System.unique_integer([:positive])}")
  end

  describe "operation/2" do
    setup do
      mod = make_mod()
      {:ok, mod: mod}
    end
  end
end
```
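The intro names `Stream.resource` as one of the primitives behind the fan-out, but it never appears in the surviving code. As an illustrative, hedged sketch (the module name and message shapes here are mine, not from the original), `Stream.resource` can do the inverse of the fan-out: turn a process mailbox back into a lazy stream.

```elixir
defmodule MailboxStream do
  # Illustrative sketch: lazily pull `{:el, value}` messages from the
  # calling process's mailbox until a :done message arrives.
  def stream do
    Stream.resource(
      fn -> :ok end,                 # start_fun: no setup needed
      fn acc ->
        receive do
          {:el, el} -> {[el], acc}   # emit one element
          :done -> {:halt, acc}      # stop the stream
        end
      end,
      fn _acc -> :ok end             # after_fun: no cleanup
    )
  end
end

# Usage: queue a few messages, then consume them lazily.
send(self(), {:el, 1})
send(self(), {:el, 2})
send(self(), :done)
MailboxStream.stream() |> Enum.to_list()
# => [1, 2]
```

Because the `receive` sits inside `Stream.resource`'s next-fun, nothing is read from the mailbox until a consumer such as `Enum.to_list/1` actually drives the stream.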
```javascript
import { useEffect, useRef } from 'react';

// `onTimeout` is an assumed callback parameter (not in the original
// fragment), fired after `timeoutInHours` of inactivity.
const useInactivityTimeout = (timeoutInHours, onTimeout) => {
  const timeoutInMillis = timeoutInHours * 60 * 60 * 1000;
  const timer = useRef(null);

  useEffect(() => {
    // Re-arm the timer only while the user is actively interacting,
    // where the User Activation API is available.
    if (navigator.userActivation && navigator.userActivation.isActive) {
      if (timer.current) clearTimeout(timer.current);
      timer.current = setTimeout(onTimeout, timeoutInMillis);
    }
    return () => clearTimeout(timer.current);
  }, [timeoutInMillis, onTimeout]);
};
```
```elixir
@default_timeout 100
@check_interval 10

# Test helper: poll `fun` every @check_interval ms until it returns a
# truthy value, or give up once `timeout` ms have elapsed.
defp wait_for(fun, timeout \\ @default_timeout) do
  cond do
    fun.() ->
      :ok

    timeout <= 0 ->
      {:error, :timeout}

    true ->
      Process.sleep(@check_interval)
      wait_for(fun, timeout - @check_interval)
  end
end
```
```c
#include <stdio.h>
#include <windows.h>

#pragma comment(lib, "winmm.lib")

/* No-op key handler. */
void Nothing(WORD wKey)
{
    (void)wKey; /* deliberately unused */
}

/* Print the virtual-key code of the pressed key. */
void PrintKey(WORD wKey)
{
    printf("key: 0x%02X\n", wKey);
}
```
I recently had several days of extremely frustrating experiences with service workers. Here are a few things I've since learned which would have made my life much easier but which aren't particularly obvious from most of the blog posts and videos I've seen.
I'll add to this list over time – suggested additions welcome in the comments or via twitter.com/rich_harris.
Chrome 51 has some pretty wild behaviour related to console.log in service workers. Canary doesn't, and it has a load of really good service worker related stuff in devtools.
Below is a summary of diverse use cases where companies fine-tuned large language models (LLMs) to solve business challenges that previous methods struggled with. Each case highlights the challenge, the fine-tuning approach, and the key results achieved.
Summary of Fine-Tuning Success Cases
| Use Case | Key Results | Source Link |
|---|---|---|
| Wealth Management Assistant (Finance) | 98% advisor adoption; document access up from 20% to 80% | OpenAI & Morgan Stanley |
| Insurance Claims AI (Insurance) | 30% accuracy improvement vs. generic LLMs | [Insurance News (EXL)](https://www.insurancenews.c |
# Manus AI Assistant Capabilities

## Overview

I am an AI assistant designed to help users with a wide range of tasks using various tools and capabilities. This document provides a more detailed overview of what I can do while respecting proprietary information boundaries.

## General Capabilities

### Information Processing

- Answering questions on diverse topics using available information
- Conducting research through web searches and data analysis