dlew / File.java
Created March 1, 2016 20:46
Automated onError() message generation
// Assumed imports for this RxJava 1.x snippet:
// import rx.exceptions.OnErrorNotImplementedException;
// import rx.functions.Action1;
public static Action1<Throwable> crashOnError() {
    // Capture a throwable at subscribe() time so the crash message can point
    // at the subscription site rather than where the error was emitted.
    final Throwable checkpoint = new Throwable();
    return throwable -> {
        StackTraceElement[] stackTrace = checkpoint.getStackTrace();
        StackTraceElement element = stackTrace[1]; // First element after crashOnError()
        String msg = String.format("onError() crash from subscribe() in %s.%s(%s:%s)",
                element.getClassName(),
                element.getMethodName(),
                element.getFileName(),
                element.getLineNumber());
        // The gist is truncated here; rethrowing with the generated message is
        // the assumed completion.
        throw new OnErrorNotImplementedException(msg, throwable);
    };
}
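
A minimal usage sketch, assuming RxJava 1.x; someObservable and handleItem are illustrative names, not from the gist. Passing crashOnError() as the onError handler means any error crashes with a message naming this subscribe() call site:

    // Hypothetical call site: an onError() here now crashes with a message
    // pointing at this file and line.
    someObservable.subscribe(
            item -> handleItem(item),
            crashOnError());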

FWIW: I (@rondy) am not the creator of the content shared here, which is an excerpt from Edmond Lau's book. I simply copied and pasted it from another location and saved it as a personal note before it gained popularity on news.ycombinator.com. Unfortunately, I cannot recall the exact origin of the original source, nor was I able to find the author's name, so I can't provide the appropriate credits.


Effective Engineer - Notes

What's an Effective Engineer?

0xabad1dea / copilot-risk-assessment.md
Last active September 11, 2023 10:21
Risk Assessment of GitHub Copilot

Risk Assessment of GitHub Copilot

0xabad1dea, July 2021

This is a rough draft and may be updated with more examples.

GitHub was kind enough to grant me swift access to the Copilot test phase despite me @'ing them several hundred times about ICE. I would like to examine it not in terms of productivity, but security. How risky is it to allow an AI to write some or all of your code?

Ultimately, a human being must take responsibility for every line of code that is committed. AI should not be used for "responsibility washing." However, Copilot is a tool, and workers need their tools to be reliable. A carpenter doesn't have to