Most professions define themselves by their capacity to produce at the least cost.
And IT may define itself by its high level of experts: people who can tell you what to do, but don't actually know how to do it.
People tend to think the best approach to computers is mathematics, especially the knowledge of functions (and functional programming).
But even if The Art of Computer Programming (Donald Knuth) is a very inspiring book that should be read, even if LISP and related ideas are worthy, they are wrong at the intuition level.
Computing is about cleaning, because no computer intuition can be built from math.
It is like cooking.
Everyone can cook. But a professional cook
- can make money out of it by mastering their costs (sustainability);
- cleans their kitchen all the time and organizes their work in a controlled fashion (guarantee of means);
- does not poison their customers (guarantee of results);
- can reproduce the same creation more than once, while keeping the ability to create (quality).
Every amateur can cook. But amateurs who clean efficiently are few. And cleaning all the time is the main hidden burden of professional cooking; without cleaning first, you cannot cooperate in a limited space without burdening your coworkers, and you risk contaminating the food.
People want to reduce professional cooking to chemistry and tours de main (poaching eggs, soufflés, risottos, emulsions, the Maillard reaction)... others to the capacity to feed people. However, that is the tip of the iceberg; it is all about cleaning.
And my point is that math is not even close to being the chemistry of computer science; it is the physics: dealing with ugliness. Hence a lot of the words we borrow from math are misleading.
Let's take the classical approach to what a "function" is and see why it is an unprofessional, misleading intuition.
A function is easiest to describe as something that maps an input space to an output space. Functions can be inverted, combined...
Let's define it in a meta-language that looks like Python:
def apply(n, func, arg):
    # apply func n times to arg
    if not n:
        return arg
    return apply(n - 1, func, func(arg))
We have a nice function that could be in every book as the simplest higher-order function possible.
Well, this is a lie. This is not a mathematical function, first because it does not map an input space to an output space, and second because it elides where most of the hard work is: being a janitor.
Why a computer function will always be mutable
I call incr an effector. Given an input, it gives you a result that can range from the expected one to the unexpected. It changes the computer's state.
Why can the result be unexpected?
With ASM it is easy. The registers hold bit values that, according to the operator, will be translated into various distinct operations: addition on floats or integers, BCD...
Computer integers/floats are segments whose size varies according to the computer architecture (from 4 to 128 bits, usually) and conventions. The mathematical inverse of incr is decr, defined by the following immutable rule:
incr(decr(x)) == decr(incr(x)) == x
Thus, if incr and decr were pure mathematical functions, then for all integers x there would be no way incr(decr(x)) could cause a problem:
incr . decr = decr . incr = identity
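On unbounded mathematical integers the identity really does hold. Here is a minimal sketch of that pure view, using Python's arbitrary-precision integers (incr and decr are hypothetical helpers for this illustration):

```python
def incr(x):
    return x + 1

def decr(x):
    return x - 1

# On Python's unbounded integers, incr and decr are exact inverses,
# no matter how far from zero we wander:
for x in (-10**30, 0, 10**30):
    assert incr(decr(x)) == decr(incr(x)) == x
```

This is the world the math books describe. The rest of this post is about why the real machine refuses to live in it.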
Since it is linear algebra, I could also optimize by reordering series of increments and decrements before composing the functions.
So I expect, for all integers n and m with n > m, and for every value of x, that all the following results are equal:
1) apply(m, decr, apply(n, incr, x))
2) apply(n, incr, apply(m, decr, x))
3) apply(n - m, incr, x)
4) n - m + x
5) x - m + n
I have used the mathematical commutativity of operations and what mathematics knows about the integer space. Most mathematicians I have met seem to ignore that not all mathematics is linear; there is an amazing lack of knowledge of things like Hilbert algebras, groups, rings... They know the words; they cannot recognize the idea when they see it.
In fact, none of these are equivalent.
The janitors of computing have to deal with the ugly part: the flawed mental models of experts.
If x is a "computer" integer, it belongs to a bounded segment, and incrementing or decrementing too much can generate an out-of-bounds error signaled by a flag that can be set after the operation on a register. Flags that modern languages mostly surface as errors or exceptions interrupting the code. Flags that can either raise a trap on the CPU, or just sit there, set, until you read them to learn that something bad happened.
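A minimal sketch of that silent wraparound, simulating an 8-bit signed register with ctypes (incr8 is a hypothetical name for this illustration):

```python
import ctypes

# Python ints are unbounded, but ctypes.c_int8 truncates to 8 bits,
# wrapping exactly like the CPU register would.
def incr8(x):
    return ctypes.c_int8(x + 1).value

MAXINT8 = 127
print(incr8(5))        # 6: the expected case
print(incr8(MAXINT8))  # -128: silent wraparound, the "flag" case
```

No exception, no warning: the result is simply wrong unless you know to check, which is exactly the janitorial work the pure-function picture hides.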
Moreover, if you use "bigint" (Perl, GMP, native in Python), you can consume more than your available memory, raising yet another case.
And the apply function, a naive tail recursion, can blow up my stack, raising another interesting case to handle. Functional addicts prefer recursion to simple for loops... because they don't use variables... instead they consume a precious resource called the call stack. Smart thinking: instead of one register, let's use unbounded memory that can explode in your face. Do functional programmers seem to care? Of course not. Theirs is a world of Aristotelian mathematics, pure and perfect. Errors come from the uneducated not understanding the beauty of pure theological concepts such as monads.
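A sketch of both sides of the argument, assuming CPython (which does not eliminate tail calls): the recursive apply blows the call stack, where a boring loop does not. apply_iter is a hypothetical name for the loop version:

```python
import sys

def apply(n, func, arg):
    # naive tail recursion: one stack frame per application
    if not n:
        return arg
    return apply(n - 1, func, func(arg))

def apply_iter(n, func, arg):
    # same contract, one loop variable, no stack growth
    for _ in range(n):
        arg = func(arg)
    return arg

def incr(x):
    return x + 1

try:
    apply(sys.getrecursionlimit() * 2, incr, 0)
except RecursionError:
    print("the call stack is a bounded segment too")

print(apply_iter(10**6, incr, 0))  # 1000000, no drama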
And last but not least, computers use human-generated input (often strings), raising yet another nightmarish circus of failures (wrong encodings, incorrect inputs, mis-copy-pasting, "blame my mistakes on the computer", yet another silly misguiding UI...) that requires a lot of work.
So, in real life, most of my code for this stupid "functional" example will be about transforming the inputs, checking their validity, respecting the non-commutativity of the real world, handling resources, handling expected and unexpected failures, and finding meaningful messages and ways of signaling what went wrong in the correct plane (logs, stderr, clearly redacted and internationalized exceptions).
Given n, m, x carefully chosen, I can get the expected result or an exception depending on how I order the operations. 1 != 2 != 3 != 4 != 5 when it comes to dealing with the boundaries, and I can face multiple different outputs from things I do not control.
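A sketch of that ordering problem, with hypothetical checked incr/decr effectors over a 32-bit segment: two mathematically identical compositions either return the expected result or explode, depending only on the order:

```python
MAXINT = 2**31 - 1
MININT = -(2**31)

def incr(x):
    # effector: refuses to leave the segment instead of silently wrapping
    if x >= MAXINT:
        raise OverflowError("incr would leave the segment")
    return x + 1

def decr(x):
    if x <= MININT:
        raise OverflowError("decr would leave the segment")
    return x - 1

def apply(n, func, arg):
    for _ in range(n):
        arg = func(arg)
    return arg

x, n, m = MAXINT - 1, 2, 1
# decr first, incr after: stays inside the segment the whole way
print(apply(n, incr, apply(m, decr, x)))  # 2147483647, i.e. MAXINT
# incr first: mathematically the same, KABOOM on a bounded segment
try:
    apply(m, decr, apply(n, incr, x))
except OverflowError as e:
    print("order matters:", e)
```

Commutativity died at the boundary: the intermediate value left the segment even though the final result would have fit.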
As such, the computer is mutable. The result given by a simple addition is a state.
Most pointy-haired bosses would say: forbid invalid input BEFORE processing.
Well, some of the inputs to my programs may be "external", like the input of a web service or a perf counter from an OS or a language runtime. I do not control them.
So basically, because computers do not work on spaces (which are infinite) but on segments (which are bounded), my so-called functions are NOT commutative. And worse, this non-commutativity can result in exceptions for operations whose results the CPU would have been able to represent.
incr(decr(decr(MAXINT))) == MAXINT - 1
decr(decr(incr(MAXINT))) => KABOOM
Thus, when combining carefully protected functions, I have to revalidate the inputs to make sure the combination will work. And there is still no guarantee that it will.
Also, bear in mind that humans are using the code. Some are distracted or incompetent (99%); others are evil or just bored (1%) and want to watch the world burn.
Every time I give the outer world access to a function and let the code crash, it can have unexpected results: my code may be incorporated into a web stack that may be configured poorly, firing a debug console for each uncaught exception. So most of my code is decoding strings, validating them, and carefully handling the exceptions, distinguishing the ones I control (out-of-bounds numbers) from the unexpected ones (failures, attacks, resource exhaustion).
So, by nature, what we call a function in computer science should more appropriately be called a model.
There should be 2 models:
- sensors, modeling user input or data coming from sensors, and their failures;
- effectors, modeling the expected output from an input, and properly signaling the whole range of possible outputs.
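A minimal sketch of these two models in Python; the names (sensor, effector) and the 32-bit segment are illustrative assumptions:

```python
MAXINT = 2**31 - 1
MININT = -(2**31)

def sensor(raw):
    # model the input: a string from the outer world, with its failures
    try:
        x = int(raw.strip())
    except (ValueError, AttributeError) as e:
        raise ValueError(f"unreadable input {raw!r}") from e
    if not MININT <= x <= MAXINT:
        raise ValueError(f"{x} is outside the segment [MININT; MAXINT]")
    return x

def effector(x):
    # model the output: signal the whole range of possible outcomes
    if x >= MAXINT:
        raise OverflowError("incr would leave the segment")
    return x + 1

print(effector(sensor("  41 ")))  # 42
```

Note how little of this is the "function" itself: one line of arithmetic, surrounded by janitorial work on domains of validity and failure signaling.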
Since physics is great, let's see how code should be handled.
Sensors/observers in physics all have probabilities of failure; they also have domains of validity.
Ex: x is not an integer; it is a bounded signed number belonging to the range [MININT; MAXINT].
Observables have a probability over [true, false] x [positive, negative], associated with costs. The role of a real project manager should be to clearly produce the matrix of costs for each (true, false) x (positive, negative) cell of every business model we code.
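A sketch of such a cost matrix; the probabilities and costs are invented numbers for illustration, not real business figures:

```python
# Costs of each (truth, prediction) cell, in some currency unit
costs = {
    ("true", "positive"): 0.0,     # correct detection: no extra cost
    ("false", "positive"): 5.0,    # false alarm: wasted investigation
    ("false", "negative"): 500.0,  # missed failure: the expensive one
    ("true", "negative"): 0.0,     # correct rejection
}

# Estimated probability of landing in each cell (sums to 1.0)
probabilities = {
    ("true", "positive"): 0.09,
    ("false", "positive"): 0.01,
    ("false", "negative"): 0.001,
    ("true", "negative"): 0.899,
}

# Expected cost per observation: sum of probability * cost over the cells
expected_cost = sum(probabilities[k] * costs[k] for k in costs)
print(round(expected_cost, 3))  # 0.55
```

The point of the matrix is that the rare cell (false negative) can dominate the expected cost even when its probability looks negligible.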
Our code's exceptions can happen in cases that are mathematically legitimate for the model. Ex: incr(decr(MAXINT)) should be MAXINT.
Since we combine pipelines of code, and since the correct ordering of operations can now be disrupted at the CPU level, we cannot guarantee the proper functioning of our code.
Hence errors should be considered not in a deterministic way but in a probabilistic way, with a proper appreciation of the costs of failure.
Project managers and coders should be versed in business/cost analysis and in the probabilities of rare events.
We also share our environment: code eating all the memory (or IO) in another container can balloon to the point of disrupting our operations (or maybe we suffer bit flips from row hammering).
External influences should also be considered as part of the costing.
Well, you see, being a real coder is being a janitor: we have to deal with a world of so-called experts and professionals messing things up for users, expecting us to deal with the mess they create, and hammering us down every time something goes wrong.