- Open the ChatGPT Codex task setup panel. This is where you configure your environment before starting a task.
- Locate the "Setup Script" field. You'll see a note that internet access is disabled after the script runs.
```javascript
// Website you intended to retrieve for users.
const upstream = 'api.openai.com'

// Custom pathname for the upstream website.
const upstream_path = '/'

// Website you intended to retrieve for users using mobile devices.
const upstream_mobile = upstream

// Countries and regions where you wish to suspend your service.
```
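The lines above only declare configuration. As a hedged sketch of how such a worker might use them (the helper name `rewriteToUpstream` is my own, not part of the original worker), here is the URL-rewriting step in isolation:

```javascript
// Hypothetical helper (not from the original worker): map an incoming
// request URL onto the configured upstream host. This is plain URL
// manipulation, so it also runs outside the Workers runtime.
const upstream = 'api.openai.com';
const upstream_path = '/';

function rewriteToUpstream(requestUrl) {
  const url = new URL(requestUrl);
  url.protocol = 'https:';
  url.host = upstream;
  // With upstream_path = '/', the original pathname is kept as-is.
  if (upstream_path !== '/') {
    url.pathname = upstream_path + url.pathname;
  }
  return url.toString();
}

console.log(rewriteToUpstream('https://my-proxy.example.com/v1/models'));
// → https://api.openai.com/v1/models
```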
- In general, binaries built just for the x86 architecture will automatically be run in x86 mode.
- You can force apps into Rosetta 2 / x86 mode by right-clicking the app, clicking Get Info, and checking "Open using Rosetta".
- You can force command-line apps by prefixing them with `arch -x86_64`, for example `arch -x86_64 go`.
- Running a shell in this mode means you don't have to prefix commands: `arch -x86_64 zsh`, then `go` or whatever.
- Don't just immediately install Homebrew as usual. It should most likely be installed in x86 mode. Not all toolchains and libraries properly support M1 arm64 chips just yet.
```javascript
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

// Following code is a modified version of that found at https://blog.cloudflare.com/dronedeploy-and-cloudflare-workers/

/**
 * Fetch and log a request
 * @param {Request} request
 */
```
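The doc comment above is cut off before the function body. A minimal sketch of a fetch-and-log handler consistent with it (assuming only the Workers runtime globals `fetch` and `console`; this is my reconstruction, not the original code) might be:

```javascript
// Minimal sketch of a fetch-and-log handler matching the doc comment above.
// Assumes the Workers runtime globals `fetch` and `console`.
async function handleRequest(request) {
  console.log(`Got request: ${request.method} ${request.url}`);
  const response = await fetch(request);
  console.log(`Response status: ${response.status}`);
  return response;
}
```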
```yaml
Parameters:
  Env:
    Description: An environment name that will be prefixed to resource names
    Type: String
    AllowedValues: ["development", "production"]

Resources:
  NetworkRole:
    Type: "AWS::IAM::Role"
    Properties:
```

```yaml
Version: 2012-10-17
Statement:
  - Effect: Allow
    Action:
      - 'cloudformation:CreateChangeSet'
      - 'cloudformation:DescribeChangeSet'
      - 'cloudformation:ExecuteChangeSet'
      - 'cloudformation:DescribeStacks'
    Resource:
      - 'arn:aws:cloudformation:<region>:<account_no>:stack/<roles_permission_stack_name>/*'
```
```javascript
// Usage of AWS.IotData in Lambdas
//
// This example assumes some things:
// 1. You have an environment variable AWS_IOT_ENDPOINT. This is the URL that you can find in the AWS IoT dashboard settings.
// 2. The Lambda and your AWS IoT devices are on the same account and region.
const AWS = require('aws-sdk');
const iotData = new AWS.IotData({ endpoint: process.env.AWS_IOT_ENDPOINT });

const handler = (event, context) => {
```
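The handler body is truncated above. As a hedged sketch of what it might do, here is a publish call using the aws-sdk v2 `IotData#publish` callback API, wrapped in a promise; the topic name and the wrapper function are my own assumptions, not part of the original:

```javascript
// Hedged sketch: publish a message to a device topic via AWS.IotData#publish
// (aws-sdk v2 callback API). The client is passed in, so this also works
// with a stub outside AWS.
const publishToDevice = (iot, message) =>
  new Promise((resolve, reject) => {
    iot.publish(
      {
        topic: 'devices/example/commands', // hypothetical topic name
        payload: JSON.stringify(message),
        qos: 0,
      },
      (err, data) => (err ? reject(err) : resolve(data))
    );
  });
```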
```bash
#!/bin/bash
# creates a GitHub release (draft) and adds pre-built artifacts to the release
# after running this script the user should manually check the release in GitHub, optionally edit it, and publish it
# args: :version_number (the version number of this release), :body (text describing the contents of the tag)
# example usage: ./gh_release_bamboo.sh "1.0.0" "Release notes: ..."
# => name: nRF5-ble-driver_<platform_name>_1.0.0_compiled-binaries.zip, example: nRF5-ble-driver_win-64_2.0.1_compiled-binaries.zip
# to ensure that bash is used: https://answers.atlassian.com/questions/28625/making-a-bamboo-script-execute-using-binbash
```
This article shows how to apply Node.js Streams and a bit of reactive programming to a real(tm) problem. The article is intended to be highly practical and oriented toward an intermediate reader, so I intentionally omit some basic explanations. If you miss something, try checking the API documentation or a retelling of it (e.g. this one).

So, let's start with the problem description. We need to implement a simple web scraper which grabs all the data from some REST API, processes the data somehow, and inserts it into our database. For simplicity, I omit the details about the actual database and REST API (in real life it was the API of some travel fare aggregator website and a Pg database).

Consider we have two functions (the code of the IO-simulator functions and the rest of the article code is here):
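As a hedged stand-in for one of those IO simulators (the record shape and delay are my own assumptions, not the article's actual code), a paging API simulator could look like:

```javascript
// Hypothetical IO simulator: resolve a page of `count` fake records
// starting at offset `n`, after a short async delay.
function getAPI(n, count) {
  const records = Array.from({ length: count }, (_, i) => ({ id: n + i }));
  return new Promise(resolve => setTimeout(() => resolve(records), 10));
}
```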
```javascript
getAPI(n, count) // pseudo API call
```

A few conversations have circled around user-side structural profiling. For context, see React PR #7549: "Show React events in the timeline when ReactPerf is active".
One particular concern is the measurement overhead. This gist has a benchmarking script (measure.js) for evaluating the overhead, along with initial results: about 0.65µs per mark() call, which works out to roughly 1ms of overhead for 1500 mark() calls.
