I'm curious if you have any example of this? Even if it's hyperbole, I don't really see how.
1. You’re mapping or reducing some dataset
2. Your iteration logic does not branch a lot
3. You can express your transformation logic using higher-order functions (e.g. mapping a reduction operation across a multidimensional array, as in the sketch below)
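To make item 3 concrete, here's a minimal, hypothetical sketch (the matrix and numbers are invented): map a reduction (a row sum) across a multidimensional array, then reduce the row totals.

(def matrix [[1 2 3]
             [4 5 6]
             [7 8 9]])

;; map a reduction across the rows, then reduce the totals
(->> matrix
     (map #(reduce + %))  ;; => (6 15 24)
     (reduce +))          ;; => 45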
Some domains have a lot of this style of work—finance comes to mind—others do not. I suspect this is why I’ve personally seen a lot more of Clojure in finance circles than I have in other industries.
The 1:20+ is definitely not hyperbole though. Using transducers to stream lazy reductions of nested sequences; using case, cond->, and condp; anywhere you can lean on the clojure.core library. I don’t know how to give specific examples without giving a whole blog post of context, but 4 or 5 examples from the past year spring to mind.
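As a rough illustration of the transducer point (the data shape and names are invented for the example): a single transducing pass that flattens nested sequences, filters, and reduces without building intermediate collections.

(def customers
  [{:orders [{:paid? true :amount 10} {:paid? false :amount 5}]}
   {:orders [{:paid? true :amount 7}]}])

;; one pass: flatten orders, keep the paid ones, pull amounts, sum
(transduce (comp (mapcat :orders)
                 (filter :paid?)
                 (map :amount))
           + 0 customers)
;; => 17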
It’s also often the case that optimizing my Clojure code results in a significant reduction in lines of code, whereas optimizing Python code always resulted in an explosion of LoC.
Personally I find Python particularly egregious: map/filter/reduce are second-class (reduce is exiled to functools and multi-expression lambdas aren't a thing), Black formatting, and no safe nested property access. File length was genuinely one of the reasons I stopped using it. The ratio would not be so high with some other languages, e.g. JavaScript.
Even with Elixir though, many solutions require 5-10 times the number of lines for the same thing in Clojure. I just converted two functions yesterday that were 6 and 12 lines respectively in Clojure, and they are both 2 pages in Elixir (and would have been much longer in Python).
Usually these are problems where you need to run along a list and check neighboring elements. You can use amap or map-indexed but it's just not ergonomic or Clojure-y (vs for instance the imperative C++ iterator model)
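For concreteness, the indexed style being described might look something like this (a hypothetical sketch; neighbor-sums and the data are made up):

;; pair each element with its right neighbor via index lookups
(defn neighbor-sums [v]
  (map-indexed (fn [i x] (+ x (get v (inc i) 0))) v))

(neighbor-sums [1 2 3 4]) ;; => (3 5 7 4)

It works, but the index bookkeeping is exactly the part that doesn't feel Clojure-y.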
The best short example I can think of is Fibonacci
https://4clojure.oxal.org/#/problem/26/solutions
I find all the solutions hard to read. They're all ugly, and their performance characteristics are hard to know at a glance
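For reference (not one of the linked solutions, just the idiom that usually comes up), the classic self-referential lazy-seq version is short, but it arguably illustrates the "performance at a glance" complaint:

;; lazily builds the sequence by zipping it with its own tail
(def fibs (lazy-cat [1 1] (map + fibs (rest fibs))))

(take 8 fibs) ;; => (1 1 2 3 5 8 13 21)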
(loop [[a b c & more] coll]
  (when c (recur (apply list b c more))))
There’s also partition if you're working with transducers/threads/list comprehension (partition 3 1 coll)
Or if you need to apply more complicated transformations to the neighbors/cycle the neighbors (->> coll cycle rest (map xform) (map f coll))
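Putting those idioms together in runnable form (hypothetical data, just to show the shapes):

;; sliding window of three via partition
(map #(apply + %) (partition 3 1 [1 2 3 4 5]))
;; => (6 9 12)

;; pair each element with its right neighbor (wrapping) via cycle
(map + [1 2 3 4] (rest (cycle [1 2 3 4])))
;; => (3 5 7 5)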
Using map-indexed to look up related indices is something I don’t think I do anywhere in my codebase. Agreed that it’s not ergonomic.
EDIT: those Fibonacci functions are insane; even I don’t understand most of them. They’re far from the Clojure I would advocate for, most likely written for funsies with a very specific technical constraint in mind
You could do `(partition 5 1 coll)` and then average each window in the resulting seq. It's very easy to reason about. But I'm guessing the performance will be abysmal? You're getting a lazy seq, and each time you access a 5-neighbor set you're re-running down your coll building the 5-element subsets? Maybe if you start with an array type it'll be okay, but you're always coercing to seq, and to me that makes it hard to reason about.
Taking the first 5 elements and recurring on the list with the top element dropped is probably better, but I find that code hard to read. Maybe it's a familiarity issue..
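For what it's worth, here's a hypothetical sketch of both approaches (moving-average and moving-average* are made-up names), mostly to compare readability rather than performance:

;; partition version: lazy, easy to read
(defn moving-average [coll]
  (map #(/ (reduce + %) 5) (partition 5 1 coll)))

;; loop/recur version: explicit window, drops the head each step
(defn moving-average* [coll]
  (loop [[a b c d e :as window] coll
         acc []]
    (if e
      (recur (rest window) (conj acc (/ (+ a b c d e) 5)))
      acc)))

(moving-average (range 10))  ;; => (2 3 4 5 6 7)
(moving-average* (range 10)) ;; => [2 3 4 5 6 7]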
If you come across a post or an example that shows those differences, I would be very interested!
(defn report [date]
(let [[d w m q y] (-> (comp tier* recall* (partial c/shift date :day))
(map [1 7 30 90 365]))]
(reduce (fn [memo {:keys [card code]}]
(cond-> memo
true (update code (fnil update [0 0 0 0 0 0 0 0 0 0]) (q card) inc)
(<= 4 (d card)) (update-in [code 6] inc)
(<= 4 (w card)) (update-in [code 7] inc)
(<= 4 (m card)) (update-in [code 8] inc)
(<= 4 (y card)) (update-in [code 9] inc)))
{}
(k/index :intels))))
The Elixir code I was able to condense down to:
def report(facets, intels, day) do
[d, w, m, q, y] = for x <- [1, 7, 30, 90, 365], do: Date.shift(day, day: x)
Enum.reduce(intels, %{}, fn intel, acc ->
facet = Map.get(facets, intel.uuid, :zero)
[q0, q1, q2, q3, q4, q5, d4, w4, m4, y4] =
acc[intel.code] || [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
quarterly_tier = tier(facet, q)
Map.put(acc, intel.code, [
if(quarterly_tier == 0, do: q0 + 1, else: q0),
if(quarterly_tier == 1, do: q1 + 1, else: q1),
if(quarterly_tier == 2, do: q2 + 1, else: q2),
if(quarterly_tier == 3, do: q3 + 1, else: q3),
if(quarterly_tier == 4, do: q4 + 1, else: q4),
if(quarterly_tier == 5, do: q5 + 1, else: q5),
if(tier(facet, d) >= 4, do: d4 + 1, else: d4),
if(tier(facet, w) >= 4, do: w4 + 1, else: w4),
if(tier(facet, m) >= 4, do: m4 + 1, else: m4),
if(tier(facet, y) >= 4, do: y4 + 1, else: y4)
])
end)
end
It was much longer prior to writing this comment (I originally used multiple-arity helper functions), but it was only fair that I tried my best to get the Elixir version as concise as possible before sharing. It's still 2x the lines of effective code, substantially more verbose imho, and it required dedicated (minor) golfing to get it this far.
Replacing this report function (12 lines), one other function (6 lines), and the execution code (18 lines) left the logic spread across 3 Elixir modules, each over 100 lines. It's not entirely apples to oranges, but I'm trying to provide as much context as possible.
This is all just to say that the high effort in reading it is normally a result of information density, not complexity or syntax. There are real advantages to being able to see your entire problem space on a single page.
Also, writing Clojure can be incredibly terse, resulting in quite high-effort reading. Conversely, a lot of the time I can condense hundreds of lines of equivalent Python into 5 or 6 lines of Clojure. Having all of that functionality condensed into something you can fit in a tweet really helps for grokking larger parts of the dataflow, or even the larger system. So there are tradeoffs.
Plus, structural editing and the REPL really help with the “reading” experience (in quotes because it’s much more interactive than plain reading)