I have a new appreciation for Lenses in typed languages. When you first learn to parse untyped data like JSON into strongly, soundly typed data, Lenses seem wrong. But when you're presented with 1,000+ records of dynamically shaped JSON you can't predict, typed Lenses suddenly are The Solution.
In Elm, lenses are an anti-pattern. The compiler offers guarantees using either high-level or low-level functions in elm/json.
This encourages you to fight hard for the JSON you want in your Back-end For Front-end (BFF), making your front-end easier to write.
Still, even ReScript provides plenty of type-safe facilities, like Js.Json.parseExn and basic pattern matching.
Both languages, for good reason, eschew lenses because their typing is so powerful.
Yet when JSON comes from users, not APIs you can control, what recourse do you have? Lenses. Dynamic languages in particular have a wonderful array of options here.
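To make "Lenses" concrete, here's a minimal sketch in TypeScript. The `Lens` type and `prop` helper are hand-rolled for illustration, not from any particular library:

```typescript
// A lens pairs a getter and an immutable setter for one location in a structure.
type Lens<S, A> = {
  get: (s: S) => A;
  set: (a: A, s: S) => S;
};

// Build a lens focused on a single property of an object.
const prop = <S, K extends keyof S>(key: K): Lens<S, S[K]> => ({
  get: (s) => s[key],
  set: (a, s) => ({ ...s, [key]: a } as S),
});

type User = { name: string; age: number };
const nameLens = prop<User, "name">("name");

const user: User = { name: "Jesse", age: 42 };
const renamed = nameLens.set("Albus", user);
// The original object is untouched; lenses give you pure reads and writes.
```

The payoff is that `nameLens` is a first-class value: you can pass it around and compose it with other lenses to reach deep into unpredictable structures.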
While Python lags behind in the safe-digging department, with None-aware operators (PEP 505) still deferred, I feel like some of its lens libraries and associated documentation are amazing.
I almost quit twice, but I'm glad I stuck with it. While ReScript does offer community-written Lens libraries, I wanted to do it by hand. You can learn a lot about a language's ability to interact with dynamic data by creating your own isomorphism.
I.e. text -> JSON -> type -> JSON -> text
Meaning, parsing some JSON from a text file over the network into strong types, making some modifications, and converting it back to JSON and then text to send back to a server.
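A sketch of that round trip in TypeScript; the `Config` shape and its fields are invented for illustration:

```typescript
// Invented shape for illustration.
type Config = { retries: number; endpoint: string };

// text -> JSON -> type: parse, then validate into a known shape.
function decode(text: string): Config {
  const json: unknown = JSON.parse(text);
  const candidate = json as { retries?: unknown; endpoint?: unknown };
  if (
    typeof json !== "object" || json === null ||
    typeof candidate.retries !== "number" ||
    typeof candidate.endpoint !== "string"
  ) {
    throw new Error("unexpected JSON shape");
  }
  return { retries: candidate.retries, endpoint: candidate.endpoint };
}

// type -> JSON -> text: serialize the typed value back out.
function encode(config: Config): string {
  return JSON.stringify(config);
}

const incoming = '{"retries": 1, "endpoint": "/api"}';
const config = decode(incoming);
// Make a modification in typed land, then head back to text.
const outgoing = encode({ ...config, retries: config.retries + 1 });
```

Everything between `decode` and `encode` happens with full compiler support; the dynamic data only exists at the edges.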
Dynamic language libraries make this easy and fast.
However, the machinery around that inspection and modification is where errors can creep in. While it was a lot more work, I'm glad I stuck with types: they surface all the edge cases where shapes of data don't quite match up (e.g. null and undefined being 2 different types).
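TypeScript genuinely treats those as separate types, and a check has to handle each case. A contrived sketch (the `Incoming` shape is invented):

```typescript
// A field coming off the wire may be present, explicitly null, or absent entirely.
type Incoming = { middleName?: string | null };

function describeField(record: Incoming): string {
  if (record.middleName === undefined) return "field absent";
  if (record.middleName === null) return "field explicitly null";
  return `middle name: ${record.middleName}`;
}
```

In a dynamic language all three cases tend to fall through the same falsy branch; here the compiler nags you until each one has a home.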
I’ve seen it argued that, at least for most use cases, Lens libraries are too much complexity, and it’s easier to just use simple gets/sets with Array.map and Array.reduce.
Lenses show their power when you compose them, so I get the resistance if you're just doing simple parsing.
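For the simple case, native spreads and Array.map really are fine. A small example (the `Order` shape is invented):

```typescript
type Item = { sku: string; price: number };
type Order = { id: number; items: Item[] };

const order: Order = {
  id: 1,
  items: [{ sku: "a", price: 10 }, { sku: "b", price: 20 }],
};

// Plain spreads and Array.map: no library, perfectly readable for one level
// of nesting. Knock a dollar off every item without mutating the original.
const discounted: Order = {
  ...order,
  items: order.items.map((item) => ({ ...item, price: item.price - 1 })),
};
```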
Here’s an equivalent using focused:
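What follows is a hand-rolled approximation rather than focused's actual API, but it shows the shape of the lens version of the same discount:

```typescript
type Item = { sku: string; price: number };
type Order = { id: number; items: Item[] };

// Hand-rolled lens machinery; a library like focused would supply this part.
type Lens<S, A> = { get: (s: S) => A; set: (a: A, s: S) => S };

const itemsLens: Lens<Order, Item[]> = {
  get: (o) => o.items,
  set: (items, o) => ({ ...o, items }),
};

// "over" applies a function through a lens instead of get-then-set by hand.
const over = <S, A>(lens: Lens<S, A>, f: (a: A) => A) =>
  (s: S): S => lens.set(f(lens.get(s)), s);

const order: Order = {
  id: 1,
  items: [{ sku: "a", price: 10 }, { sku: "b", price: 20 }],
};

const discounted = over(itemsLens, (items) =>
  items.map((item) => ({ ...item, price: item.price - 1 })),
)(order);
```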
It saves you maybe a line of code. The value is more about the ability to compose those isos more easily. If you're not composing? Just use the native code.
What I was interested in was each and every possible problem in that original Promise chain: I need to know what went wrong so I can mark data according to which problem occurred, and some failures I can fix ahead of time with compiler support. TypeScript's variadic tuples can help here too, not just ReScript.
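A small illustration of what variadic tuples buy you; the helper name is mine, not from any library. Each position keeps its own type instead of collapsing into a union:

```typescript
// Variadic tuple types (TypeScript 4.0+) let a function preserve the exact
// type of each position instead of widening everything to an array.
function withTimestamp<T extends unknown[]>(...args: T): [number, ...T] {
  return [Date.now(), ...args];
}

const tagged = withTimestamp("parse failed", 404);
// tagged is typed [number, string, number], not (string | number)[],
// so each element of the error context keeps its precise type.
```

Promise.all is typed the same way in modern TypeScript, which is what makes it possible to keep every step of a chain's results (and failures) distinct.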
In conclusion, when I discovered Lenses, they provided a wonderful way to write pure code against dynamic data. As I moved to soundly typed languages, all the Lens libraries I saw seemed overcomplicated and dumb. Now I realize I was wrong: they have a cemented place when I cannot control the JSON.