Checking for errors after every line (like in Go) is the worst. I used to do that in C/C++ calling Win32 APIs. Know what happens when sloppy developers come along? They don’t bother checking, and you get really bizarre, impossible-to-debug problems because things fail in mysterious ways. At least with an exception, if you “forget” to catch it, it blows up in your face and the failure is obvious.
Sure, monads are cool and I’d be tempted to use them. They make it impossible to forget to check for errors, and if you don’t care you can just panic.
But JS is not Rust. And the default is obviously to use exceptions.
You’d have to rewrap every API under the sun. So for monads in JS to make sense, you need a lot of weird code that’s awkward to write with exceptions in order to justify the costs.
I’m not sure the example of doing a retry in the API is “enough” to justify the cost. Also in the example, I’m not sure you should retry. Retries can be dangerous especially if you pile them on top of other retries: https://devblogs.microsoft.com/oldnewthing/20051107-20/?p=33...
Monadic style or not, the `if err != nil return err` pattern destroys critical information for debugging. `try/catch` gives you a complete stacktrace. That stacktrace is often more valuable than the error message itself.
Moving from try:catch to errors as values was so refreshing. Same company, same developers, but suddenly people were actually _thinking_ of their errors. Proper debugging details and structured logging became default.
I assert that try:catch encourages lazy error handling leading to a worse debugging experience and longer mean time to problem discovery.
The nice thing about monads in JS with tools like neverthrow is that you can create the monad boundary wherever you like.
It becomes very similar to try-catch exception handling at the place you draw the boundary, then within the boundary it’s monad land.
If you haven’t wrapped it in a monad, chances are you wouldn’t have wrapped it in a try-catch either!
Don’t accept sloppy development practices regardless of what programming language you’re going to use.
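For what it's worth, a minimal sketch of that boundary idea, assuming neverthrow's ok/err/andThen/match API; the port-parsing functions and error types here are made up for illustration:

    import { err, ok, Result } from "neverthrow";

    type ParseError = { kind: "parse"; message: string };
    type PortRangeError = { kind: "range"; message: string };

    const parsePort = (raw: string): Result<number, ParseError> => {
      const n = Number(raw);
      return Number.isInteger(n)
        ? ok(n)
        : err({ kind: "parse", message: `not an integer: ${raw}` });
    };

    const checkRange = (n: number): Result<number, PortRangeError> =>
      n > 0 && n < 65536
        ? ok(n)
        : err({ kind: "range", message: `port out of range: ${n}` });

    // Monad land: failures flow through andThen with no try/catch in sight.
    const loadPort = (raw: string) => parsePort(raw).andThen(checkRange);

    // The boundary: outside callers get a plain value or a thrown Error,
    // much like the try/catch wrapper they would otherwise have written.
    export function getPortOrThrow(raw: string): number {
      return loadPort(raw).match(
        (port) => port,
        (e) => {
          throw new Error(`config error (${e.kind}): ${e.message}`);
        },
      );
    }

Everything inside loadPort stays in Result land; getPortOrThrow is the one place that converts back to the exception world.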
> An interesting debate emerged about the necessity of checking every possible error:
> In JS world this could be true, but for Rust (and statically typed compiled languages in general) this is actually not the case… GO pointers are the only exceptions to this. There are no nil check protection at compile level. But Rust, kotlin etc are solid.
Yes, it actually is the case. You cannot check/validate for every error, not even in Rust. I recommend getting over it.
For a stupid-simple example: you can't even check whether the disk is going to be full!
The disk being full is a real error you have to deal with, and it can happen at any line in your code through no fault of your own. It doesn't always happen at write(); it can also hit when you allocate pages for writing (e.g. as a SIGSEGV). You can't really do anything about this with code (aborting or unwinding will only ever annoy users), but you can do something.
We live in a multitasking world, so our users can deal with out-of-disk and out-of-memory errors by deleting files, adding more storage, closing other (lower priority) processes, paging/swapping, and so on. So you can wait: maybe alert the user/operator that there is trouble but then wait for the trouble to clear.
Also: dynamic-wind is a useful general-purpose programming technique that is awkward to emulate, and I personally dislike subclassing BackTrack from Error because of what can only be a lack of imagination.
> We live in a multitasking world, so our users can deal with out-of-disk and out-of-memory errors by deleting files, adding more storage, closing other (lower priority) processes, paging/swapping, and so on. So you can wait: maybe alert the user/operator that there is trouble but then wait for the trouble to clear.
That's a weird take. I've been working for multiple decades now with systems that have no UI to speak of; their end-users are barely aware that there's a whole system behind what they can see, and that's a good thing, because they only become aware of it when it causes them trouble.
I take from my programming mentor this stance on many things, including error handling: the best solution to a problem is to avoid it. That's something everybody knows, actually, but we can forget it when designing and programming because there are so many things to deal with and worry about. Making the thing barely work can be a challenge in itself.
For errors, this usually means: don't let them happen. E.g. avoid OOM by avoiding dynamic allocation as much as possible; statically pre-allocate everything, even if it means megabytes of unused reserved space. Don't design your serialization format with quotes around your keys just to allow "weird" key names, a feature that nobody will ever use and that creates opportunities for errors.
Of course it is not always possible, but don't miss the opportunity when it is.
> That's a weird take
I appreciate that, but...
> I've been working for multiple decades now with systems that have no UI to speak of; their end-users are barely aware that there's a whole system behind what they can see, and that's a good thing because they become aware of it when it causes them trouble.
Notice I said "user" not "end-user" or "customer".
This was not an accident.
In your system (as in mine) the "user" is the operator.
> the best solution to a problem is to avoid it.
That's your opinion man. I don't know if you can avoid everything (I certainly can't).
Something to consider is why Erlang people have been trying to get people to "let it crash" and just deal with that, because enumerating the solutions is sometimes easier than enumerating the problems.
That’s not his opinion, that’s the standard technique in systems programming. It’s why there’s software out there that does in fact never crash and shows consistent performance.
> For a stupid-simple example: You can't even check if disk is going to be full!
Isn’t this addressed by preallocating data files in advance of writing application data? It’s pretty common practice for databases, both to ensure space and sometimes for performance (by ensuring a contiguous extent allocation).
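A rough sketch of that in Node terms (assuming Node's synchronous fs API; databases typically use fallocate-style calls that Node doesn't expose directly, so this just writes zero blocks up front to force the space to be reserved):

    import { closeSync, openSync, writeSync } from "node:fs";

    // Reserve `bytes` of real disk space at `path` before the application
    // starts writing data, so an out-of-space condition shows up here rather
    // than mid-transaction.
    function preallocate(path: string, bytes: number): void {
      const fd = openSync(path, "w");
      const block = Buffer.alloc(1024 * 1024); // 1 MiB of zeros
      try {
        for (let written = 0; written < bytes; written += block.length) {
          writeSync(fd, block, 0, Math.min(block.length, bytes - written));
        }
      } finally {
        closeSync(fd);
      }
    }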
This. There are errors and states you cannot predict. As a grandchild comment says: it's easier to provide solutions than to list all the errors. Find your happy path and write code that steers you back onto it. The code will be shorter, less surprising, and actually describable. It's also testable, because you treat whole classes of errors consistently, so the number of error combinations is smaller.
JS aside, I recently tried my very best to introduce proper logging and error handling to an otherwise "look ma, no handlebars" codebase.
Call it a thought experiment. We start with a clean implementation that satisfies the requirements. It makes the bold assumption that every star in the universe will align to help us achieve the goal.
Now we add logging and error handling.
Despite my best intentions, years of experience, and a clean starting point, the outcome was a complete mess.
It brings back memories of 2006, when I was implementing deep linking for Wikia. I started with a "true to the documentation" implementation that was roughly 10 lines of code. After handling all the edge cases and browser incompatibilities, I ended up with a whopping 400 lines.
Doing exactly what the original 10 lines did, but cross-browser compatible.
> We start with a clean implementation that satisfies requirements ... Now we add logging and error handling.
If error handling and logging isn't necessary to satisfy requirements, why bother with them at all?
I guess I’ll ask, did you try using exceptions?
Handlebars like on a bike, or like the templating language?
The errors-as-values approach suffers from a problem similar to async/await: it's leaky. Once a function is altered to possibly return an error, its signature changes and every caller needs to be updated (potentially all the way up to main(), if the error is not handled before that).
This approach is great when:
* program requirements are clear
* correctness is more important than prototyping speed, because every error has to be handled
* no need for a concise stack trace, which would otherwise require an additional layer on top of simple tuples
* language itself has a great support for binding and mapping values, e.g. first class monads or a bind operator
Good job by the author on acknowledging that this error handling approach is not a silver bullet and has tradeoffs.
It’s only leaky if you do not consider failure cases to be as equally intrinsic to an interface’s definition as its happy-path return value :-)
Common Lisp has restarts in addition to exceptions. A restart works almost the same way as an exception, except that it allows the handler to resume execution from the place where the error happened. I wish we had this in modern widespread languages.
Like most things in C++, I wish the default were `nothrow`, and you added `throw` for a function that throws. There are so many functions that don't throw but aren't marked `nothrow`.
In my experience I've used exceptions for things that really should never fail, and optional for things that are more likely to.
The three things I wish were more standardized in the languages I use are:
1. Stacktraces with fields/context besides a string
2. Wrapping errors
3. Combining multiple errors
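For what it's worth, JS already ships primitives for all three; a small sketch (assuming ES2021+ Error `cause` and AggregateError; UploadError and its fields are hypothetical):

    // 1. Structured context on the error, not just a message string.
    class UploadError extends Error {
      constructor(
        message: string,
        public readonly context: { file: string; attempt: number },
        options?: { cause?: unknown },
      ) {
        super(message, options);
        this.name = "UploadError";
      }
    }

    // 2. Wrapping: keep the low-level error reachable via `cause`.
    const wrapped = new UploadError(
      "could not upload report",
      { file: "report.csv", attempt: 3 },
      { cause: new Error("ECONNRESET") },
    );

    // 3. Combining: AggregateError preserves every individual failure.
    const combined = new AggregateError([wrapped], "1 upload failed");

    console.log(wrapped.stack, wrapped.context, wrapped.cause, combined.errors);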
Observability tools give you this (as long as it can be handled and isn't a straight up panic).
Most of these proposals miss the point. Errors need a useful taxonomy, based on what to do about them. The question is what you do with an error after you've caught it. A breakdown like this is needed:
- Program is broken. Probably need to abort program. Example: subscript out of range.
- Data from an external source is corrupted. Probably need to unwind transaction but program can continue. Example: bad UTF-8 string from input.
- Connection to external device or network reports a problem.
-- Retryable. Wait and try again a few times. Example: HTTP 5xx errors.
-- Non-retryable. Give up now. Example: HTTP 4xx errors.
Python 2 came close to that, but the hierarchy for Python 3 was worse. They tried; all errors are subclasses of a standard error hierarchy, but it doesn't break down well into what's retryable and what isn't.
Rust never got this right, even with Anyhow.
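One way to sketch that taxonomy in code (TypeScript here, since the article under discussion is about JS; the names and the HTTP example are illustrative):

    // Classify errors by what the caller should do next, not by where they
    // came from.
    type ErrorAction =
      | { action: "abort"; reason: string }    // program is broken (e.g. bad subscript)
      | { action: "unwind"; reason: string }   // corrupt external data: roll back, keep running
      | { action: "retry"; afterMs: number }   // transient I/O problem, e.g. HTTP 5xx
      | { action: "giveUp"; reason: string };  // permanent I/O problem, e.g. HTTP 4xx

    function classifyHttpError(status: number): ErrorAction {
      return status >= 500
        ? { action: "retry", afterMs: 1000 }              // server-side, likely transient
        : { action: "giveUp", reason: `HTTP ${status}` }; // client error, retrying won't help
    }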
Severity is undecidable in the majority of library functions; it's decidable at the call site instead. That's why the language should provide sugar to pick the behaviour: exceptions (propagate as is, optionally decorate/wrap), refute (error value, result type), or mute/predicate-like (use a zero value, i.e. undefined in JS/TS).
> optionally decorate/wrap
If you are using exception handlers for transmitting errors instead of exceptions (i.e. what should have been a compiler error but wasn't detected until runtime), wrapping should be mandatory, or else you'll invariably leak implementation details, which is a horrid place to end up. Especially if you don't have something like checked exceptions to warn you that the implementation has changed.
There's no universal taxonomy of "this error is retryable, this one non-recoverable"; it's context dependent.
As a boring example, I might write something that detects when a resource gets hosted, e.g. goes from 404 -> 200.
The best I imagine you can do is be able to easily group each error and handle them appropriately.
Well you don't usually want double retry loops, and sometimes that subscript error is because the subscript came from input.
What to do with an error depends on who catches it. That's probably why Python got it wrong, and then Rust said "worse is better".
It's strange that they didn't write about the Erlang/Elixir approach of
1. returning a tuple with an ok or fail value (so errors as values) plus
2. pattern matching on return values (which makes error values bearable) possibly using the with do end macro plus
3. failing on unmatched errors and trying again to execute the failed operation (fail fast) thanks to supervision trees.
Maybe that's because the latter feature is not available nearly for free in most runtimes and because Erlang style pattern matching is also uncommon.
The approach requires a language that's built on those concepts and not one in which they are added unnaturally as an afterthought (the approach becomes burdensome.)
Pattern matching: https://hexdocs.pm/elixir/pattern-matching.html
With: https://hexdocs.pm/elixir/1.18.1/Kernel.SpecialForms.html#wi...
Supervisors: https://hexdocs.pm/elixir/1.18.1/supervisor-and-application....
try/catch, where you catch the right types of errors at the right level, is hard to beat.
However, many make the mistake of handling errors at the wrong level. This leads to really buggy, hard-to-reason-about code and, in some cases, really bad data inconsistency issues.
A rule of thumb is to never catch a specific error that you are not in a good position to handle correctly at that precise level of code. Just let it pass through.
This is called the "result pattern". I would not call this a novel concept. In C# we use this: https://github.com/ardalis/Result
Yes, I stopped reading at:
> The most common approach is the traditional try/catch method.
Weird to stop reading at a statement that is factually true.
Returning error codes was actually the first approach to error handling. Exceptions (try/catch) became widespread much later. The article got it backwards calling try/catch "traditional" and Go's approach "modern".
> Exceptions (try/catch) became widespread much later
Exceptions, complete with try-catch-finally, were developed in the 60s & 70s, and languages such as Lisp and COBOL both adopted them.
So I'm not sure what you're calling "much later" as they fully predate C89, which is about as far back as most people consider when talking about programming languages.
Typed try/catch was tried in Java. The typing was not well liked, and people voted with their feet for untyped exceptions. Euphoria turned to misery, and Go emerged with explicitly returned errors. Overall I would say that the return value and the error value shouldn't be split as in Go. A result type that forces the user to account for the error when accessing the return value is a much better approach. The compiler should make it fast.
I’m of the opinion that the best error handling is to not encounter the error in the first place.
That means good UX, intuitive interfaces, good affordances, user guidance (often, without requiring them to read text), and simplicity.
When an error is encountered, it needs to be reported to the user in as empathetic and useful a manner as possible. It also needs to be as “bare bones” simple as can reasonably be managed.
Designing for low error rates starts from the requirements. Good error reporting requires a lot of [early] input from non-technical stakeholders.
Errors often come from the fact that we build on an unreliable medium.
Lost packets, high latency, crashed disks, out of memory, etc.
You can talk to your users, sure, but you need to handle this stuff at some level either way. Shit happens!
Absolutely.
But we need to plan for it from Day One, and that can also include things like choosing good technology stacks.
Like I said, when inevitable errors happen, how we communicate (or, if possible, mitigate silently) the condition, is crucial.
[EDITED TO ADD] Note how any discussion of improving Quality of software is treated, hereabouts. Bit discouraging.
Correct. What errors can happen, PLUS how we communicate them (and what we do: roll back the transaction? etc.), PLUS how we ensure the correctness of both (a sane programming language, good idioms, testing, proofs, etc.).
Anyone who has maintained a shipping product can relate to this.
Often, there is disagreement over the definition of “bug.”
There’s the old joke, “That’s not a bug, it’s a feature!”, but I have frequently gotten “bug reports” stating that the fact that my app doesn’t do something the user wants is a “bug.”
They are sometimes correct. The original requirements were bad. Even though the app does exactly what it says on the tin, it is unsuitable for its intended demographic.
I call that a “bug.”
Somewhat related, from March: https://news.ycombinator.com/item?id=43297574
The blog lacks a review of one critical player, effect.ts: https://effect.website/docs/error-management/two-error-types...
Pretty sure this line: https://meowbark.dev/Better-error-handling#:~:text=return%20...
will immediately throw if b == 0, because

    a / b

is evaluated immediately, so execution never makes it into fromThrowable(). Does it need to be

    () => a / b

instead?

Similarly, withRetry()'s argument needs to have type "() => ResultAsync<T, ApiError>" -- at present, it is passed a result, and if that result is a RateLimit error, it will just return the same error again 1s later.
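For context, a small sketch of the deferred-evaluation point, assuming neverthrow's Result.fromThrowable (which wraps a function and only catches whatever that function throws when it is later invoked); divide/safeDivide are hypothetical names:

    import { Result } from "neverthrow";

    const divide = (a: number, b: number): number => {
      if (b === 0) throw new Error("division by zero");
      return a / b;
    };

    // Pass a function, not the already-evaluated expression: the throw then
    // happens inside fromThrowable's wrapper and comes back as an Err.
    const safeDivide = Result.fromThrowable(
      (a: number, b: number) => divide(a, b),
      (e) => new Error(`divide failed: ${String(e)}`),
    );

    safeDivide(10, 0).match(
      (value) => console.log("ok:", value),
      (error) => console.log("err:", error.message),
    );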
Very balanced post, thank you. Often these posts tout an approach and never consider the downsides.
> Lack of Type System Integration
Well, IIUC, Java had (and still has) something called “checked exceptions”, but people have, by and large, elected not to use those kinds of exceptions, since they make the rest of the code balloon out with enormous lists of exceptions, each of which must be changed when some library at the bottom of the stack changes slightly.
> each of which must be changed when some library at the bottom of the stack changes slightly.
I hate checked exceptions too, but in fairness to them this specific problem can be handled by intermediate code throwing its own exceptions rather than allowing the lower-level ones to bubble up.
In Go (which uses error values instead) the pattern (if one doesn’t go all the way to defining a new error type) is typically to do:
    if err := doSomething(…); err != nil {
        return fmt.Errorf("couldn’t do something: %w", err)
    }

which returns a new error which wraps the original one (and can be unwrapped to get it).

A similar pattern could be used in languages with checked exceptions.
> it makes the rest of the code balloon out with enormous lists of exceptions
That's mostly developer laziness: they write a layer that calls the exception-throwing code, but they don't want to think about how to model the problem in their own level of abstraction. "Leaking" them upwards by slapping on a "throws" clause is one of the lowest-effort reactions.
What ought to happen is that each layer has its own exception classes, capturing its own model for what kinds of things can go wrong and what kinds of distinctions are necessary. These would abstract-away the lower-level ones, but carrying them along as linked "causes" so that diagnostic detail isn't lost when it comes time for bug-reports.
Ex: If I'm writing a tool to try to analyze and recommend music that has to handle multiple different file types, I might catch an MP3 library's Mp3TagCorruptException and wrap it into my own FileFormatException.
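Illustrative only: the same layering idea in TypeScript (the article's language), using the standard `cause` option; mp3Library and the error names are hypothetical, echoing the comment above:

    declare const mp3Library: { readTags(path: string): Record<string, string> };

    class FileFormatError extends Error {
      constructor(message: string, options?: { cause?: unknown }) {
        super(message, options);
        this.name = "FileFormatError";
      }
    }

    function readTags(path: string): Record<string, string> {
      try {
        return mp3Library.readTags(path); // may throw an MP3-specific error
      } catch (e) {
        // Re-express the failure in this layer's vocabulary, keeping the
        // low-level error attached as the cause for diagnostics.
        throw new FileFormatError(`unreadable tags in ${path}`, { cause: e });
      }
    }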
It is laziness to an extent, sure, but that's a huge part of language design. We wouldn't use Java or C# or Python or any of these high level languages if we weren't lazy, after all, we'd be writing assembly like the silicon gods intended!
The problem with Java checked exceptions is they don't work well with interfaces, refactoring, or layering.
For interfaces you end up with stupid stuff like ByteArrayInputStream#reset claiming to throw an IOException, which it obviously never will. And then for refactoring & layering, it's typical that you want to handle errors either close to where they occurred or far from where they occurred, but checked exceptions force all the middle stack frames that don't have an opinion to also be marked. It's verbose and false-positives a lot (in that you write a function, hit compile, then go "ah, forgot to add <blah> to the list that gets forwarded along..." -> repeat).
It'd be better if it was the inverse, if anything, that exceptions are assumed to chain until a function is explicitly marked as an exception boundary.
The biggest annoyance with Java checked exceptions IME is that it’s impossible to define a method type that’s generic over the type of exception it throws.
Checked exceptions should indicate conditions that are expected to be handled by the caller. If a method is throwing a laundry list of checked exceptions then something went wrong in the design of that method’s interface.
> The biggest annoyance with Java checked exceptions IME is that it’s impossible to define a method type that’s generic over the type of exception it throws.
Exactly. If Stream methods like filter() and map() could automatically "lift" the checked exceptions thrown by their callback parameters into their own exception specifications, it would solve one of the language's biggest pain points (namely: Streams and checked exceptions, pick one).
I think it's fair to say that having some sort of syntactically lightweight sum or union type facility makes this way nicer than anything Java ever had -- subclassing isn't really a solution, because you often want something like:
TypeScript's type system would hypothetically make this pretty nice if there were a common Result type with compiler support.

Rust needs a bit more boilerplate to declare FooError, but the ? syntax automatically calling into(), and into() being free to rearrange errors it bubbles up really help a lot too.
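A hedged sketch of the kind of union-typed error channel being described, in plain TypeScript with no library; the names are illustrative:

    type ParseError = { kind: "parse"; line: number };
    type IoError = { kind: "io"; path: string };

    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

    // The error channel is just a union: a switch on `kind` narrows it, and
    // adding a new error variant surfaces as a compile error wherever the
    // switch is meant to be exhaustive.
    function describe(r: Result<object, ParseError | IoError>): string {
      if (r.ok) return "loaded";
      switch (r.error.kind) {
        case "parse":
          return `parse error at line ${r.error.line}`;
        case "io":
          return `I/O error reading ${r.error.path}`;
      }
    }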
The big problem with Java's checked exceptions was that you need to list all the exceptions on every function, every time.
Java's sealed interfaces enable typed errors.
https://blogs.oracle.com/javamagazine/post/java-sealed-class...
Although it is the opposite of syntactically lightweight.
I agree; Java is constitutionally incapable of being lightweight. I much prefer Typescript's union syntax. I'm glad Python copied it.
I love libraries that do a simple check and signal that it "failed" with a ThingWasNotTrueException.
In a surprising twist, Java has ConcurrentModificationException. And, to counter its own culture of exception misuse, the docs have a stern reminder that this exception is supposed to be thrown when there are bugs. You are not supposed to use it to, I dunno, iterate over the collection and bail out (control flow) based on getting this exception.