I like the direction of minimizing error handling boilerplate and repetitive code. But I think the proposed solution is too heavy-handed, especially for the most common case of simply returning an error as is when a called function returns one:
func foo() (int, error) {
    result, err := bar()
    // If bar() returned an error, just return it as is
    if err != nil {
        return result, err
    }
    // do stuff
    return result, nil
}
The code example in the proposal, which opens files for reading and writing and needs to clean them up, is important, but it doesn't represent the majority of code.
In this discussion I'll first highlight problems with the suggested solution and then propose alternatives.
I disagree with this sentence: "In general Go programs have too much code checking errors and not enough code handling them". Most well-structured and adaptive programs check every error, but funnel error handling into a few hubs. Deep chains of functions may call each other and just propagate errors up (perhaps with additional context), and then at some point there is central error-handling code that performs operations like the following (a sketch of such a hub follows the list):
- Converting errors to higher-level errors
- Wrapping errors with additional metadata
- Translating Go-level errors to an external API (e.g. HTTP status code + description)
- Logging the error to the console, a file or a remote logging service
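Here is a minimal sketch of such a hub for an HTTP service; the names (handleErrors, ErrNotFound) and the status mapping are my own illustration, not part of the proposal:

import (
    "errors"
    "log"
    "net/http"
)

// ErrNotFound is an assumed sentinel error that lower layers return unmodified.
var ErrNotFound = errors.New("not found")

// handleErrors is the hub: it translates Go-level errors to an external API
// (HTTP status + description) and logs them in one place.
func handleErrors(h func(w http.ResponseWriter, r *http.Request) error) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        err := h(w, r)
        if err == nil {
            return
        }
        log.Printf("request %s failed: %v", r.URL.Path, err)
        if err == ErrNotFound {
            http.Error(w, "resource not found", http.StatusNotFound)
            return
        }
        http.Error(w, "internal error", http.StatusInternalServerError)
    }
}

The deep call chains below such a wrapper just check and propagate errors; all the translation and logging lives in this one place.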
Let's consider some broad categories of errors and see how they can be handled in theory:
- Invalid input (user-provided, corrupted files, wrong format of payload)
- Operational problems (invalid configuration, missing environment variables, out of memory, out of disk space, dropped connections)
- Bugs

And how each category can be handled:
- Input should be checked, and if it's invalid just return an error (error checking, no handling)
- Operational problems are checked, and when discovered all the program can do is hope they're intermittent and retry the last operation (error checking, central error handling; a retry sketch follows this list)
- Bugs, by definition, weren't designed for, so the program can't do anything about them
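To illustrate the "central retry" style of handling operational problems, here is a rough sketch; the function name and the backoff policy are mine, not part of the proposal:

import (
    "fmt"
    "time"
)

// withRetry runs op up to attempts times. Callees just return errors;
// this single hub decides whether to try the operation again.
func withRetry(attempts int, op func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = op(); err == nil {
            return nil
        }
        time.Sleep(time.Second << uint(i)) // naive exponential backoff, illustrative only
    }
    return fmt.Errorf("giving up after %d attempts: %v", attempts, err)
}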
The bottom line is that error checking is the dominant aspect of dealing with errors and in most cases the code that detects the error should just propagate it to the caller.
I agree with this sentence, but think it's outside the scope of this proposal: "Functions should typically include relevant information about their arguments in their errors, like os.Open returning the name of the file being opened. Returning the error unmodified produces a failure without any information about the sequence of operations that led to the error."
Including the stack trace as well as the state of each stack frame (which would include, for example, the filename parameter in a call to os.Open) is often incredibly useful. Because it's so important and so generic, it should be supported by the language or a library (as an option) and not left to the developer to implement whenever they return an error.
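To illustrate, a library could capture at least the call stack (though not the full frame state) at the point an error is wrapped; the package and names below are hypothetical, not an existing library:

import "runtime"

// withStack pairs an error with the program counters captured where it was wrapped.
type withStack struct {
    err error
    pcs []uintptr
}

func (w *withStack) Error() string { return w.err.Error() }

// Wrap records the caller's stack so that central handling code can format it
// later with runtime.CallersFrames, instead of every call site adding context by hand.
func Wrap(err error) error {
    if err == nil {
        return nil
    }
    pcs := make([]uintptr, 32)
    n := runtime.Callers(2, pcs) // skip runtime.Callers and Wrap itself
    return &withStack{err: err, pcs: pcs[:n]}
}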
Anyway, moving on to the specifics of the proposal.
Many other responses to the proposal listed issues with adding the check construct:
- new keyword (may conflict with names in existing programs)
- there is nothing like it in Go so far
- it can make code more difficult to parse
I have another objection, which is extra horizontal scrolling and reduced readability. After seeing the = or := sign I have to scroll horizontally past the check keyword to find the called function. It's not a big deal, but it breaks the flow and introduces irregularity into the syntax:
a := 1
r := check foo()
The later examples of multiple calls to checked functions inside one line are even worse and make the code barely readable:
fmt.Println("result:", check strconv.Atoi(x) + check strconv.Atoi(y))
It makes the code very hard to read, and it's also easy to miss a check in the middle of the expression.
Oh, one more issue: the number of assigned variables is one less than the number of return values in the called function's signature, because the err value is not assigned. This is confusing compared to non-checked calls:
// returns 3 values
func bar() (int, int, error) {
    ...
}

func foo() (int, error) {
    // Only assigns two return values
    x, y := check bar()
}
I understand and agree with the motivation behind handlers, but they add so much complexity that I cringe at the thought of reading and debugging code that uses them liberally. Following a handler chain and figuring out which handler calls return and which handler only transforms the error is painful. The scoping and ordering rules of handler chains are another huge cognitive load, both when learning the language and later when troubleshooting a problem. Even this function from the proposal, with 6 lines of code and clear comments, is not trivial to grasp:
func process(user string, files chan string) (n int, err error) {
    handle err { return 0, fmt.Errorf("process: %v", err) } // handler A
    for i := 0; i < 3; i++ {
        handle err { err = fmt.Errorf("attempt %d: %v", i, err) } // handler B
        handle err { err = moreWrapping(err) } // handler C
        check do(something()) // check 1: handler chain C, B, A
    }
    check do(somethingElse()) // check 2: handler chain A
}
Grokking the difference between defer and handle blocks (handle blocks don't accumulate in loops, because they are lexically scoped) is another major source of confusion.
The default handler is cool by the way.
OK. Enough complaining. Let's see how we can preserve most of the benefits of the proposal while avoiding the pitfalls.
Go already uses _ as a special symbol for return values: it means "ignore this return value and don't subject it to the rule that every value returned from a function must be used". Let's take advantage of the same mechanism and use a special symbol like # to mean "return the error if present". This code:
func foo() (int, error) {
    x, y # := bar()
    result := baz(x, y)
    return result, nil
}
is equivalent to this Go 1 code:
func foo() (int, error) {
    x, y, err := bar()
    if err != nil {
        return 0, err
    }
    result := baz(x, y)
    return result, nil
}
When an error occurs, the zero values of the unnamed return values are returned. This is similar to the proposal's check + default handler, but there is no special check keyword:
func foo() (int, error) {
    x, y := check bar()
    result := baz(x, y)
    return result, nil
}
The behavior for named return values is similar, except the result variable is not zeroed out:
func foo() (result int, err error) {
    x, y # := bar()
    result = baz(x, y)
    return result, nil
}
It is equivalent to this Go 1 code:
func foo() (result int, err error) {
    x, y, err := bar()
    if err != nil {
        return
    }
    result = baz(x, y)
    return result, nil
}
This covers the simplest and most common case of error checking (and handling, which is just bailing out with the error as is). It is also backwards compatible: if you choose to assign the error value of the call to bar() to a named variable like err, then you can just check it yourself like you do today.
OK. The simple case is covered with the concise # symbol. What about error handling, and even nested/chained error handling? I propose to simply use functions as error handlers. An error-handling function will be invoked by the # symbol and its return value is always error. The rules are similar to the default # handling: if the calling function has unnamed return values they will be zeroed out; if there are named return values they will retain their values. Let's rewrite some examples from the proposal.
This example:
func process(user string, files chan string) (n int, err error) {
    handle err { return 0, fmt.Errorf("process: %v", err) } // handler A
    for i := 0; i < 3; i++ {
        handle err { err = fmt.Errorf("attempt %d: %v", i, err) } // handler B
        handle err { err = moreWrapping(err) } // handler C
        check do(something()) // check 1: handler chain C, B, A
    }
    check do(somethingElse()) // check 2: handler chain A
}
will be written as:
func process(user string, files chan string) (n int, err error) {
    // handler A
    handle1 := func(err error) (int, error) { return 0, fmt.Errorf("process: %v", err) }
    // handlers B + C
    for i := 0; i < 3; i++ {
        handle2 := func(err error) (int, error) {
            err = moreWrapping(err)
            // i is available in the closure
            err = fmt.Errorf("attempt %d: %v", i, err)
            // explicitly call handle1 to chain
            return handle1(err)
        }
        #handle2 := do(something()) // check 1: handler chain C, B, A
    }
    #handle1 := do(somethingElse()) // check 2: handler chain A
}
This form is a little more verbose than the proposal's error handlers, but the verbosity comes with almost zero magic:
- We use plain functions.
- Multiple errors can be handled by the same error-handling function.
- It is very clear which handler handles each error (it's right there at the call site).
- There is no need to scroll up and look for error handlers in the enclosing scope.
- Handler chaining is done by explicit function calls (handle2 calls handle1 to chain).
- It is more flexible, for example when we don't want the top-level handler to be called.
- It's possible to define the handler functions outside the function to avoid clutter.
- It's possible to share a handler function across multiple functions (when defined externally).
If a common handler is useful for many different functions it can be defined as an external function:
func mistake(err error) error {
    fmt.Println("I've made a huge mistake: ", err)
    return err
}

func foo1() error {
    #mistake := bar1()
    return nil
}

func foo2() error {
    #mistake := bar2()
    return nil
}
All the complex semantics and interaction between handlers, handler chains and defers simply vanish.
👍 💯 Very good
I prefer to insert a comma before the special symbol # unless it's the leftmost operand. I think that a variable declaration for # is inappropriate.