- Use your application extensively to build intuition about failure modes
- Define 3-4 dimensions based on observed or anticipated failures
- Create structured tuples covering your priority failure scenarios
- Generate natural language queries from each tuple using a separate LLM call
- Scale to more examples across your most important failure hypotheses (we suggest at least ~100)
- Test and iterate on the most critical failure modes first, and keep generating examples until new ones stop surfacing new failure modes (theoretical saturation)
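The tuple-to-query steps above can be sketched in Python. The dimensions, the prompt wording, and the `call_llm` function are all hypothetical stand-ins — substitute dimensions drawn from your own observed failures and whatever LLM client you actually use:

```python
from itertools import product

# Hypothetical dimensions; replace with ones based on your observed failures.
DIMENSIONS = {
    "persona": ["new user", "power user"],
    "task": ["refund request", "account deletion"],
    "tone": ["polite", "frustrated"],
}

def generate_tuples():
    """Cartesian product of dimension values -> structured test tuples."""
    keys = list(DIMENSIONS)
    return [dict(zip(keys, combo)) for combo in product(*DIMENSIONS.values())]

def tuple_to_prompt(t):
    """Prompt asking an LLM to turn one tuple into a natural-language query."""
    desc = ", ".join(f"{k}={v}" for k, v in t.items())
    return f"Write a realistic user query for a support bot. Scenario: {desc}."

tuples = generate_tuples()
prompts = [tuple_to_prompt(t) for t in tuples]
# Each prompt then goes to a separate LLM call, e.g.:
# queries = [call_llm(p) for p in prompts]  # call_llm is hypothetical
```

With three dimensions of two values each this yields eight tuples; in practice you would prune combinations that don't correspond to plausible failure scenarios before generating queries.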
When writing Elixir code, prefer the following style guidelines:

1. Elixir developers tend to create many small functions in their modules. I DO NOT LIKE THIS. Instead, create functions that fully capture a conceptual task, even if that makes the function longer. A good rule of thumb: if a private function is only called once within a module, it should have been inlined.

For example:

DON'T DO THIS:
| Name | Input | Output |
|---|---|---|
| Gemini 2.0 Flash-Lite | $0.075 | $0.30 |
| Mistral 3.1 Small | $0.10 | $0.30 |
| Gemini 2.0 Flash | $0.10 | $0.40 |
| ChatGPT 4.1-nano | $0.10 | $0.40 |
| DeepSeek v3 (old) | $0.14 | $0.28 |
| ChatGPT 4o-mini | $0.15 | $0.60 |
| DeepSeek v3 | $0.27 | $1.10 |
| Grok 3-mini | $0.30 | $0.50 |
| ChatGPT 4.1-mini | $0.40 | $1.60 |
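A quick way to compare models from a table like this is to estimate the cost of a concrete workload. The table does not state units, so the sketch below *assumes* the common per-million-token convention; treat that as an illustrative assumption, not a fact from the table:

```python
# Assumes prices are USD per million tokens (the table doesn't state units).
PRICES = {
    "Gemini 2.0 Flash-Lite": (0.075, 0.30),
    "DeepSeek v3": (0.27, 1.10),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Rough cost: (tokens / 1e6) * price-per-million, input plus output."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# e.g. 2M input tokens and 0.5M output tokens on Flash-Lite:
cost = estimate_cost("Gemini 2.0 Flash-Lite", 2_000_000, 500_000)
```

Because output tokens are typically several times pricier than input tokens, output-heavy workloads can reorder the ranking implied by input price alone.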
# Manus AI Assistant Capabilities

## Overview

I am an AI assistant designed to help users with a wide range of tasks using various tools and capabilities. This document provides a more detailed overview of what I can do while respecting proprietary information boundaries.

## General Capabilities

### Information Processing

- Answering questions on diverse topics using available information
- Conducting research through web searches and data analysis
Below is a summary of diverse use cases where companies fine-tuned large language models (LLMs) to solve business challenges that previous methods struggled with. Each case highlights the challenge, the fine-tuning approach, and the key results achieved.
Summary of Fine-Tuning Success Cases
| Use Case | Key Results | Source Link |
|---|---|---|
| Wealth Management Assistant (Finance) | 98% advisor adoption; document access up from 20% to 80% | OpenAI & Morgan Stanley |
| Insurance Claims AI (Insurance) | 30% accuracy improvement vs. generic LLMs | [Insurance News (EXL)](https://www.insurancenews.c |
I recently had several days of extremely frustrating experiences with service workers. Here are a few things I've since learned that would have made my life much easier, but which aren't particularly obvious from most of the blog posts and videos I've seen.
I'll add to this list over time – suggested additions welcome in the comments or via twitter.com/rich_harris.
Chrome 51 has some pretty wild behaviour related to console.log in service workers. Canary doesn't, and it has a load of really good service worker related stuff in devtools.
```c
#include <stdio.h>
#include <windows.h>

#pragma comment(lib, "winmm.lib")

/* Deliberately empty handler: ignores the key event. */
void Nothing(WORD wKey)
{
}

void PrintKey(WORD wKey)
```
```elixir
@default_timeout 100
@check_interval 10

# Test Helpers
defp wait_for(fun, timeout \\ @default_timeout) do
  start_time = System.monotonic_time(:millisecond)
  ref = make_ref()

  try do
```
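The helper above polls a function until its condition holds or a timeout elapses. A loose Python translation of the same pattern, using the same default timings (this is a sketch of the polling idea, not the module's actual code):

```python
import time

DEFAULT_TIMEOUT = 0.1   # 100 ms, mirroring @default_timeout
CHECK_INTERVAL = 0.01   # 10 ms, mirroring @check_interval

def wait_for(fun, timeout=DEFAULT_TIMEOUT):
    """Poll `fun` until it returns a truthy value or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        result = fun()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within timeout")
        time.sleep(CHECK_INTERVAL)
```

Using a monotonic clock here matters: wall-clock time can jump (NTP adjustments, DST), which would make the deadline unreliable in a test helper.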
```javascript
import { useEffect, useRef } from 'react';

const useInactivityTimeout = (timeoutInHours) => {
  const timeoutInMillis = timeoutInHours * 60 * 60 * 1000;
  const timer = useRef(null);

  useEffect(() => {
    if (navigator.userActivation) {
      if (navigator.userActivation.isActive) {
        if (timer.current) {
```
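The hook above implements a reset-on-activity timeout: each burst of user activity cancels the pending timer and starts a fresh one, so the callback only fires after a full quiet interval. The same pattern can be sketched in Python with `threading.Timer` (class and method names here are illustrative, not from the original hook):

```python
import threading

class InactivityTimeout:
    """Fire `callback` once no activity has been reported for `timeout` seconds.

    Mirrors the hook above: each call to `activity()` cancels the pending
    timer and arms a fresh one, just as user activation resets the timer ref.
    """

    def __init__(self, timeout, callback):
        self.timeout = timeout
        self.callback = callback
        self._timer = None

    def activity(self):
        """Report user activity: restart the inactivity countdown."""
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.callback)
        self._timer.daemon = True
        self._timer.start()

    def cancel(self):
        """Stop the countdown entirely (e.g. on logout or unmount)."""
        if self._timer is not None:
            self._timer.cancel()
```

As in the React version, the important invariant is that at most one timer is pending at a time; forgetting to cancel the old one before arming a new one would make the callback fire from stale activity.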