This post presents an interesting way to make error handling more concise in Go 1.18.
One of the downsides to this approach is that deferred functions and recovering from panics are generally more expensive than plain if/return error handling. I decided to benchmark just how significant the penalty turns out to be.
For some boilerplate, we have a giveError function:
func giveError(b bool) (int, error) {
    if b {
        return 0, fmt.Errorf("given error")
    }
    return 1, nil
}
This returns an error if the input is true, and a nil error if the input is false.
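The benchmarks also rely on two helpers from that approach: CheckAndAssign, which panics if its error argument is non-nil and otherwise returns the value, and HandleErr, which is deferred to recover that panic and assign the error to a named return value. The linked post has the real code; as a rough sketch of the shape (the wrapper type and its name here are my assumptions):
// checkErr wraps errors raised by CheckAndAssign so that HandleErr
// can tell them apart from unrelated panics. (Assumed detail.)
type checkErr struct{ err error }

// CheckAndAssign returns v when err is nil, and panics otherwise.
func CheckAndAssign[T any](v T, err error) T {
    if err != nil {
        panic(checkErr{err})
    }
    return v
}

// HandleErr recovers a panic raised by CheckAndAssign and assigns the
// wrapped error to *err; any other panic is re-raised.
func HandleErr(err *error) {
    if r := recover(); r != nil {
        ce, ok := r.(checkErr)
        if !ok {
            panic(r)
        }
        *err = ce.err
    }
}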
The first set of benchmarks has a 50/50 chance of returning an error. We have one benchmark for the panic/defer approach, and one for if/return:
func BenchmarkHandler50p(b *testing.B) {
    for i := 0; i < b.N; i++ {
        func() (err error) {
            defer HandleErr(&err)
            v := CheckAndAssign(giveError(i%2 == 0))
            if v != 1 {
                b.Errorf("Unexpected v")
            }
            return
        }()
    }
}
func BenchmarkCondition50p(b *testing.B) {
    for i := 0; i < b.N; i++ {
        func() error {
            v, err := giveError(i%2 == 0)
            if err != nil {
                return err
            }
            if v != 1 {
                b.Errorf("Unexpected v")
            }
            return nil
        }()
    }
}
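All of the benchmarks in this post are run with the standard tooling, something along the lines of:
go test -bench=.
(The -16 suffix in the output below is GOMAXPROCS.)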
And the results:
BenchmarkHandler50p-16 9471970 129.6 ns/op
BenchmarkCondition50p-16 29270944 54.20 ns/op
So with a 50/50 chance of an error, the panic/defer approach is 2.4x slower than the if/return approach.
But what if we never actually return an error, and never have to deal with a panic?
func BenchmarkHandlerNoErr(b *testing.B) {
    for i := 0; i < b.N; i++ {
        func() (err error) {
            defer HandleErr(&err)
            v := CheckAndAssign(giveError(false))
            if v != 1 {
                b.Errorf("Unexpected v")
            }
            return
        }()
    }
}
func BenchmarkConditionNoErr(b *testing.B) {
    for i := 0; i < b.N; i++ {
        func() error {
            v, err := giveError(false)
            if err != nil {
                return err
            }
            if v != 1 {
                b.Errorf("Unexpected v")
            }
            return nil
        }()
    }
}
The results:
BenchmarkHandlerNoErr-16 229893313 5.251 ns/op
BenchmarkConditionNoErr-16 789040215 1.396 ns/op
With no errors to handle, the panic/defer approach is 3.7x slower, though the absolute overhead is only about 4ns per call: essentially the fixed cost of the defer itself.
What if we're handling all errors?
func BenchmarkHandlerAllErr(b *testing.B) {
    for i := 0; i < b.N; i++ {
        func() (err error) {
            defer HandleErr(&err)
            v := CheckAndAssign(giveError(true))
            if v != 1 {
                b.Errorf("Unexpected v")
            }
            return
        }()
    }
}
func BenchmarkConditionAllErr(b *testing.B) {
    for i := 0; i < b.N; i++ {
        func() error {
            v, err := giveError(true)
            if err != nil {
                return err
            }
            if v != 1 {
                b.Errorf("Unexpected v")
            }
            return nil
        }()
    }
}
The results:
BenchmarkHandlerAllErr-16 5001079 254.9 ns/op
BenchmarkConditionAllErr-16 10938622 107.2 ns/op
So here we're back in the range of the original test, at about 2.4x slower.
But none of these are very realistic scenarios. Most of the time I'm writing code that just bubbles errors up without examining them in the moment: a fair bit of code with a handful of error checks that very rarely hit actual errors. So what if we look at 50x more operations with no errors?
func BenchmarkHandlerNoErr50x(b *testing.B) {
    for i := 0; i < b.N; i++ {
        func() (err error) {
            defer HandleErr(&err)
            for j := 0; j < 50; j++ {
                v := CheckAndAssign(giveError(false))
                if v != 1 {
                    b.Errorf("Unexpected v")
                }
            }
            return
        }()
    }
}
func BenchmarkConditionNoErr50x(b *testing.B) {
    for i := 0; i < b.N; i++ {
        func() error {
            for j := 0; j < 50; j++ {
                v, err := giveError(false)
                if err != nil {
                    return err
                }
                if v != 1 {
                    b.Errorf("Unexpected v")
                }
            }
            return nil
        }()
    }
}
The results:
BenchmarkHandlerNoErr50x-16 53827771 18.66 ns/op
BenchmarkConditionNoErr50x-16 76254039 14.01 ns/op
Here the gap closes dramatically, with the panic/defer approach coming in at just a third slower than if/return.
While I don't think I'd take this approach for a 2.4x to 3.7x penalty, I think there's a strong case to be made that the readability benefits are worth a 33% penalty in functions that aren't particularly performance-sensitive. There are certainly low-level functions where I absolutely wouldn't take the performance hit, but in much of the code I write I think the tradeoff could be worth it.