My boss said I was “ranting” the other day, when I was trying to explain what I see when I look at
|>, Elixir’s pipe operator. I guess that’s as valid a thing to share as any other …
What the pipeline operator strictly is:
Well, that’s very easy, and you know it already. The pipe operator is just a different way to express the first argument to a function:
some_function(arg1, arg2) == arg1 |> some_function(arg2)
That is, instead of some_function(arg1, arg2) we can write arg1 |> some_function(arg2).
And that’s it! That’s all the pipe “operator” does. Why do Elixir people make such a big deal of it? We’ll get to that, but first …
There are many kinds of functions. Not everything is the same
The most generic definition of a function I can think of is:
A function is some construct which, provided input arguments, “does” something, and can be invoked from somewhere else in the code
Please note that I didn’t say “returns something”, I said “does” something, because sometimes “doing” is not the same as “returning”. For instance (Python):
def print_my_age(age):
    print("This is your age: " + str(age))
You can see a construct (print_my_age), you see it has an input argument (age), and it “does” something: it prints the age provided as an argument.
With that in mind, it’s easy to see that there are many “function types”
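The same distinction exists in Elixir. Here is a minimal sketch (the module and function names are mine, purely for illustration):

```elixir
defmodule Kinds do
  # A “returner”: you call it for the value it gives back.
  def double(x), do: x * 2

  # A “doer”: you call it for its side effect (printing).
  # It still returns something (:ok), like every Elixir function.
  def announce(x), do: IO.puts("Got: #{inspect(x)}")
end
```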
The kind of functions I like in elixir
These are a few things I think should be expected of a function you use while programming in Elixir:
- It’s a black box: nothing that happens inside the function should alter anything outside the function’s scope
- It should always return something: I gotta say, Elixir functions are by design that way: whatever the last expression in the body evaluates to is what you get when calling the function
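That implicit-return rule can be sketched like this (the module and function names are just for illustration):

```elixir
defmodule Implicit do
  # There is no `return` keyword: the value of the last expression
  # in the body is what the caller receives.
  def describe(n) do
    label = if rem(n, 2) == 0, do: "even", else: "odd"
    "#{n} is #{label}"   # last expression, hence the return value
  end
end
```

So `Implicit.describe(3)` evaluates to `"3 is odd"`; no explicit return statement needed.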
This said, it’s important to know that it’s impossible for all functions to be black boxes. Some of the things we use on an everyday basis when programming don’t comply with this; think of a cache, for example:
Cachex.put(:some_cache, some_key, some_value)
Right there you have a function that does something outside its own scope (in my view, at least). The function communicates with some cache process named :some_cache and tells it to store some_value under some_key. This is more of a “doer” function than a “returner” one. This touches another basic concept of Elixir, which is:
A whole elixir application is just a bunch of processes messaging with each other: telling each other what to do and reacting to each other’s messages …
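A minimal, self-contained sketch of that idea, using nothing but raw spawn/send/receive (the message shapes here are my own invention):

```elixir
parent = self()

# A tiny process that waits for one instruction and reports back.
child =
  spawn(fn ->
    receive do
      {:add, x, y} -> send(parent, {:result, x + y})
    end
  end)

send(child, {:add, 1, 2})   # tell the child what to do

sum =
  receive do
    {:result, s} -> s       # react to the child’s answer
  end

IO.puts("Got #{sum}")       # prints "Got 3"
```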
When to use the pipe operator
But let’s keep focus. I think Elixir’s “pipe” operator
|> works best with the following types of functions:
- Functions that can be described as a verb
- Functions that are real blackboxes
- Functions that return a copy of their first argument with some transformations applied to it
In a nutshell:
The pipe operator is useful when applied to functions that can be described as “a transformation to their first argument”
Take for instance the cache function I talked about previously, and try to use the pipe operator on it:
:some_cache |> Cachex.put(some_key, some_value)
It works, but at least I don’t find it very readable, unless I consider :some_cache to be the “cache object” itself (again, there is no such thing as an “object” in Elixir; it’s a process), but :some_cache is really just an atom!! Personally, I like the traditional way better:
Cachex.put(:some_cache, some_key, some_value)
“Do something, and use the some_value arg to do it”. You can see that it doesn’t comply with what we said makes a function good to use the pipe operator on: it can’t be described as “doing some transformation on its first argument”.
But, take this function, for instance:
def add_pid(state, pid) do
  pids = state.pids ++ [pid]
  Map.put(state, :pids, pids)
end
It adds some pid to a key of the variable state, and returns a copy of state in which the key :pids holds one more pid. These two expressions are the same:
new_state = add_pid(state, pid) # more traditional
new_state = state |> add_pid(pid) # bingo!!
Personally, I like the second expression more. Now, if we keep that in mind, we can build long pipelines, which is where the pipe operator shines the most:
results = encoded_results |> decode() |> remove_small(arg) |> add_advertising(arg1, arg2)
That’s something I understand in one simple look: take some encoded search results, decode them, then remove the results that are considered “small”, and then add some advertising to them. Every function in that pipeline complies with what we called a “good pipeline” function before. Another way to see it is to notice that it’s all a bunch of transformations applied to encoded_results, and it doesn’t end in something completely unrelated; quite the opposite: results is very relatable to encoded_results. If the final result of a pipeline is not relatable to its beginning, then I can’t read it well.
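Such a pipeline reads well precisely because each step is a transformation of its first argument. Here is a hypothetical sketch of what pipeline-friendly functions like those could look like (the module, the function bodies, and the data are all made up for illustration):

```elixir
defmodule Results do
  # Each function takes the results as its first argument and
  # returns a transformed copy; that is what makes it pipeable.
  def decode(encoded), do: Enum.map(encoded, &String.trim/1)
  def remove_small(results, min), do: Enum.filter(results, &(String.length(&1) >= min))
  def add_advertising(results, ad), do: results ++ [ad]
end

[" elixir ", " ok ", "a"]
|> Results.decode()
|> Results.remove_small(2)
|> Results.add_advertising("ad here")
# => ["elixir", "ok", "ad here"]
```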
If the functions don’t comply with that, long pipelines, although they seem pretty, don’t add much semantics to the program; they’re just another way of writing things.
I may also add that doing your programming in a way that makes most of your functions “pipeable” will produce a “more functional”, “more Elixir-ish” program.
It’s not just about following the standard which is:
- Don’t use the pipe operator for one function call only
- In long pipelines, the first element is best as a variable, not a function call
Because ignoring those can produce code that is harder to read, and functions that don’t do Elixir any good.
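Both style points can be shown with real standard-library functions (the data is mine):

```elixir
# One function call only: the pipe adds nothing.
"elixir" |> String.upcase()
# Plain form reads better:
String.upcase("elixir")

# Long pipeline starting with a function call: harder to scan.
String.split("a,b,,c", ",") |> Enum.reject(&(&1 == "")) |> Enum.map(&String.upcase/1)

# Better: start the pipeline from a variable.
parts = String.split("a,b,,c", ",")
parts |> Enum.reject(&(&1 == "")) |> Enum.map(&String.upcase/1)
# => ["A", "B", "C"]
```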
Don’t use the pipe as another syntax, do your programming in a fashion that the pipe operator adds meaning to it. First, do the right functions, then, use the pipe to add semantics.
There are lots of kinds of functions; “functional programming” is not just doing everything with function calls, it’s about using a certain type of function.
I haven’t read this in any book, and I’m probably stepping on somebody’s toes. This is just me thinking about something.