RFC 01: Human Handshaking Protocol for Instant Messaging (work in progress)

Abstract


Instant Messaging (IM) can be disruptive and cognitively hard to handle because it requires context switching. This results in 2 potentially counterproductive effects:

  • it lowers the quality of the conversation when both parties are not equally concentrated;
  • it can create an aversion towards the medium itself.

Since this is a human problem, this proposal is a human-based solution.

Proposal


When you want to talk to someone, you ask for «real availability», a «time slot», and a «summary» of what you want to talk about, given a «priority». It is in the interest of both parties to agree on something mutually beneficial.

The idea is to propose a multicultural, loosely formal flow of conversation for agreeing to talk in good conditions.
 

Implementation


Casual priority is fine and is the only proposed level.
Default arguments are:
  • time slot: 10 minutes (explained later). NEVER ask for more than 45 minutes;
  • summary: «What's up?» (salamalecs explained later);
  • priority: casual (unless you want people to dislike you).

 

Time negotiation


Ex: «hey man, can you spare some 10 minutes for me?»


The interrogative formulation should put your interlocutor at ease, so they understand they can refuse or postpone.

Asking for an explicit time slot helps your interlocutor answer truthfully.

If the receiver does not answer, it means he or she cannot.

Don't retry the opening message aggressively. Spacing the requests gracefully should be based on your history of conversation. If you have not talked to someone for over a year, don't expect the person to answer you back in 5 minutes, but rather within the same order of time since you last interacted.

If you really want to push, multiply the delay of each retry by an order of magnitude. The minimum time before re-pinging should be set according to how busy your interlocutor is, your proximity with the person, and your «average level of interaction» over a rough moving average of one month.

It should never go below 5 minutes for the first retry (with a good friend you interact with a lot) or 15 minutes for a good friend you have not talked to in years.

(try to find a rough, simple equation based on sociogram proximity; a sketch follows)
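
For illustration, here is a minimal Python sketch of such an equation, assuming a made-up proximity input (days since the last interaction) and the x2 retry factor discussed below; every constant is an arbitrary placeholder, not a calibrated social model:

from datetime import timedelta

def retry_delays(days_since_last_talk, max_retries=4):
    # Floor: 5 minutes for a close friend, 15 minutes otherwise.
    floor = timedelta(minutes=5 if days_since_last_talk <= 7 else 15)
    # Arbitrary placeholder: wait roughly 1 minute per day of silence.
    first = max(floor, timedelta(minutes=days_since_last_talk))
    # Each retry doubles the previous delay (the x2 factor).
    return [first * 2 ** i for i in range(max_retries)]

print(retry_delays(1))    # a friend you talk to daily: 5, 10, 20, 40 minutes
print(retry_delays(365))  # a year of silence: first retry after ~6 hours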

Summary/context

Announcing the context

At this point, the talk is NOT accepted.
A tad more negotiation may be needed.

It is nice for the person you want to interact with to get a short summary, so that they know whether the exchange will be «information» (asymmetric, with a higher volume from the emitter), «communication» (symmetric), or «advice» (asymmetric, but reversed).

Default is symmetric. Asymmetry is tiresome, and if your exchange is asymmetric you should think about NOT using IM.

Context: 

business-related / real-life-related / balanced

Default:  balanced.

If you use IM for business-related stuff, I don't think this proposal applies to you. There are multiple ISO norms for handling support. People also tend to dislike doing free consulting, interruptively, out of the blue. If you poke someone to ask them business-related stuff, you are probably asking for free consulting. Please, DON'T. There is no such thing as free beer. If you must, clearly propose a compensation, even a casual one at the beginning.

Ex: «Please, can you give me 10 minutes of your time between now and Thursday on IEEE 802.1Q? I will gladly buy you a coffee on Sunday for your help.»

Notice the importance of being polite. DON'T use imperative forms; they express orders. Use polite, structured forms. Give all the information in a single precise statement.

The more you need the advice, the less pushy you should be. It means you value this person a lot, and you should not alienate their goodwill.

Default: Salamalecs (work in progress)

When greeting each other, you can't help but notice that Muslims/Persians have an efficient, advanced human protocol for updating news on a social graph, called in French «salamalecs».
http://en.wikipedia.org/wiki/As-salamu_alaykum

I don't know about the religious part, but the human//cultural behaviour that results is clearly a handshaking protocol, and it seems pretty efficient.

I don't know how to transpose it into an occidental way of thinking yet, but I am working on it.

Receiver expected behaviour


In my opinion, people tend to answer too much.

You have a life and a context. If you trust the person poking you, you expect them to know the obvious:
  1. you may not have time to answer;
  2. you may be dealing with a lot of stuff;
  3. it may be unsafe (you may be driving, or at a job interview);
  4. you may not be interested in the topic, which does not mean you don't like the person.
Learn to not answer and not feel guilty.

In the old days we tended to send an ACK to every solicitation, because network delivery could fail (poorly configured SMTP, netsplits...) and we could not know whether the receiver was connected.

Today, we are receiving far more solicitations and we may forget about old messages.

If you did not answer, have faith in your interlocutor to re-poke you in a graceful way. The x2 factor between solicitations is based on the law of expected value («espérance»; reference to be found) when you have incomplete information about the measure of an event.
Believe me, mathematically it is pretty much a good idea to space every important solicitation by a 2x factor (kind of like DHCP_REQUEST retries).

Once the topic/time are accepted, you can begin the conversation.
Content negotiation SHOULD NOT exceed 4 lines/15 minutes (waiting/1st retry included). The speed of negotiation should give you a hint about the expected attention span of the receiver.
If you can't spare the time for negotiating, DON'T answer back. It is awkward for both parties.

Time agreement: When // for how long.


minimum time slot: 7 mins.

Experimentally this makes for better conversation: it lets you buffer the conversation in your head and raise the bandwidth.

Using a slow start that is casual and progressively gets into the subject can be regarded as the human counterpart of old-time modems negotiating for the best throughput.

Your interlocutor is NOT a computer. Civility and asking questions about the context will help you adapt; it is not wasted time. It is clever to ask for news that correlates with your receiver's ability to be intellectually available. Slow start means you should not chain the questions in one interaction.

ex: Are you fine? How are your kids? Is your job okay?

Multiple questions are NOT a good opening. Always serialize your opening.

Making a branch prediction with combined questions may give awful results.

What if the guy lost his wife and kids due to his tendency toward workaholism?

Once the time is agreed, you can set a hard limit by saying: «clock on».

It is cool to let the person with the busiest context call the «clock off».

It is fun to hold to your word about time. You'll learn in the process how time-devouring IM is.

A grace period after the clock is off is required to close the conversation gracefully with the usual polite formulations. It should be short and concise.

Ex:
A: thks, bye :)
B: my pleasure, @++

References: 


To be done

* netiquette (IETF RFC 1855?)
* multitasking considered harmful
* something about RS-232 or any actual low-level HW protocol could be fun;
* maybe finding an outdated, old-fashioned book with funny pictures and a pedantic title like «le guide de la politesse par l'amiral mes fesses» («the guide to politeness by admiral my ass») would be funny
* I really love salamalecs, so finding a good unbiased article by an anthropologist is a must
* putting a fake standardization committee reference, or creating one like HNETF, could be fun: Human NOT an Engineer Task Force, with a motto such as «we care about all that is way above the applicative OSI layer», to parody/pay homage to the IETF
* some SERIOUS hard data to back up my claims (x2 estimations, concentration spans, ...)

TODO 


format this as a PEP or RFC
make RFC 00 for defining the RFC format/way of interacting to make this evolve
specify it is a draft somewhere
find an IRC channel for discussing :)
corrections (grammar/spelling)
experiment, and share to get feedback; maybe it could actually work.
don't overdo it.
make a nice state/transition diagram
provide a full example with a timeline (copy/paste, prune, s/// of an actual conversation that worked this way).
add a paragraph about multiculturalism and the danger of expecting people to have the same expectations as you

EDIT: name it the salamalec protocol, I really love this idea.


DevOps are doomed to fail: you never scale NP problems

We live in a wonderful world: all new technologies have proven that old wisdom about avoiding NP problems was stupid.

The Travelling Salesman Problem? (which is not in NP, I know)
Well, doesn't Google Maps give you wonderful optimized routes?

And K-SAT?
What is K-SAT, by the way?

The SAT problem is the first problem that was proven NP-complete (see Wikipedia, youhou).

What does NP mean in computer science nowadays, translated into words devops/business people can understand?

It cannot scale by nature. 

Most devs reason as if we can always add CPU, bandwidth, memory to a computer.

The truth is the world is bounded. At least by one thing called money.

So here is what I am gonna do:
- first, try to help you understand the relationship between K-SAT and dependency resolution;
- then we are gonna see, roughly, what the underlying hidden problems are;
- then I am gonna tell you how we have cheated so far;
- then we will see that the nature of the problem is predictably in contradiction with currently accepted business practices.

Solving the problem of knowing which packages to install, and in which order, given their dependencies, is NP-complete.


The more correct way to explain this reduction is here.

So: K-SAT is the generic problem of solving a boolean equation with k parameters, where some parameters may in fact be expressions of other parameters (solutions of other equations); these are the cycles I will talk about later.

After all, booleans are the heart of a computer; it should be easy.

It seems easy as long as the equation is a tree. And aren't all our languages based on parsing an AST? Don't they work? So we could write a language for it.


Well, no. Computers manipulate data. A register does not tell you what the data is. Is it one variable of my equation, its name, its value, its relation to other addresses...?

The hard stuff in computer science is to make sense of data: to make it become information by handling the context.

Installing a package on a computer is in fact building a huge graph (40k nodes on Debian), and when a package is to be installed you begin by evaluating the first equation:
ready_to_install = union(dependencies == satisfied)
If false, you go to the dependency-solving stage.

For each dependency listed (to the nth order), build a list of the packages not yet installed that will be required.

Plan the installation of these packages with the actual solutions chosen (there may be more than one way to solve a dependency, so the equation has not one but potentially N solutions).

So... you have to evaluate them... recursively (because parameters are solutions of other equations)... then stack them... sometimes the solution is not good, so you backtrack to another solution, modify the stack... and so on.
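
Here is a minimal Python sketch of that evaluate/stack/backtrack loop, over a toy repository where each dependency can be satisfied by several candidate packages; all package names and the conflict table are hypothetical:

# Toy repository: package -> list of dependencies; each dependency is a
# list of alternative packages that can satisfy it.
REPO = {
    "app": [["liba", "liba-compat"], ["libb"]],
    "liba": [["libc"]],
    "liba-compat": [],
    "libb": [["libc"]],
    "libc": [],
}
CONFLICTS = {("liba", "liba-compat")}  # mutually exclusive pairs

def conflicts(pkg, installed):
    return any((pkg, q) in CONFLICTS or (q, pkg) in CONFLICTS for q in installed)

def resolve(pkg, installed=frozenset()):
    # Returns an install order for pkg, or None if unsatisfiable.
    if pkg in installed:
        return []
    if conflicts(pkg, installed):
        return None
    plan = []
    for alternatives in REPO[pkg]:
        for candidate in alternatives:  # try each solution of the sub-equation
            sub = resolve(candidate, installed | {pkg} | set(plan))
            if sub is not None:  # this branch works: stack it
                plan += sub
                break
        else:
            return None  # no alternative fits: backtrack
    return plan + [pkg]

print(resolve("app"))  # ['libc', 'liba', 'libb', 'app']

The naive search is exponential in the worst case: each dependency multiplies the number of branches to explore.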

And is it over?

Meh, not really.

What if package A is installed, package B requires A+1, and A & A+1 are mutually exclusive? (Small cycle, ex: on CentOS 6.4, git requires git-perl and git-perl requires git.)
What if package B requires A, C, D; C requires E; E requires F and G; and G requires B? This is a circular dependency, a cyclic graph.
In which order do you install the packages so that after every step all the software still works? (You don't have the right to wipe installed software to solve a version dependency.)
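
Detecting such a cycle, at least, is cheap; a standard depth-first search does it. A minimal Python sketch, using the CentOS git/git-perl example above:

def find_cycle(graph):
    # Classic DFS coloring: WHITE = unvisited, GRAY = on the current
    # path, BLACK = fully explored. A GRAY -> GRAY edge closes a cycle.
    WHITE, GRAY, BLACK = 0, 1, 2
    color, path = {}, []

    def dfs(node):
        color[node] = GRAY
        path.append(node)
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = dfs(dep)
                if cycle:
                    return cycle
        color[node] = BLACK
        path.pop()
        return None

    for node in graph:
        if color.get(node, WHITE) == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

print(find_cycle({"git": ["git-perl"], "git-perl": ["git"]}))
# ['git', 'git-perl', 'git']

The hard part, as we will see, is not finding the cycles but choosing among the combinations of partial solutions.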

Where is the catch?

Variable substitution can make the boolean equation unsatisfiable.

Ex: A & B = True. Given A, what is the value of B? Easy: True.

The given equation is the desired state of the system: packages A and B should be installed.

A is True because package A is installed.

What if B = ~A ?

The equation is not solvable. A trivial case that normally doesn't happen.

What if B requires C and D, D requires N, and N is exclusive of A?
(Example: A == apache, N == nginx, and software B requires nginx on 0.0.0.0:80.)

Testing for a cycle is easy given K determined vertices. Finding how to check all the possibilities given all the N sets of I partial solutions is quite a bit more complex.

This is known as DLL Hell!

It is also called software requirements (see ITIL, which makes a lot of fuss about this).

We are already facing small problems, but nothing that really matters. We have not yet talked about how we cheat, and why some of us are heading for a disaster.

Why do computer engineers avoid NP problems by the way?


The universe is bounded.

The data structure needed to solve dependency resolution is a graph.

The nodes are the packages (variables).
The edges are the logical expressions of the software requirements (A.version > B.version).

So, before talking about algorithms, just one basic fact:
in the worst case, when you add a node to a graph with n nodes, you add n-1 new edges.

Thus the total number of relations grows more than linearly (up to n(n-1)/2 edges for n nodes).

You still have to store the information ... in memory (for the work to be done fast).

Then, you have to detect the cyclic references. The first-order ones are easy.

But not always. There are ambiguities in the edges: A requires B.version > 1.1
and C requires B.version < 2.2 may conflict if B is only available in versions 1.0 and 3.0... so there is much more than meets the eye :)

And cycles can be bigger than the usual, classical two mutually exclusive packages.

But that is not all.

The normal algorithmic way to solve the equation is to build the graph and do a systematic evaluation of the cases.

In the worst case, the computing time grows explosively.


But we are not in the worst case: with my «blabla» OS it takes me 3s with 4k packages installed, and 3s with 41k packages installed.


Well, we cheat.

One part of the cheat is not going for the exact solution: K-SAT solvers are optimized for the known properties of real-world packages.

We cheat even more by relying on human beings.

Maintainers in most distributions are doing an excellent job of testing, fixing the bugs that OS users report, and keeping dependencies very minimal.
We are in a special case where the edges are not very dense.

The algorithm seems to scale. But... it can't... since we are changing the domain of validity of the K-SAT solvers we use: optimizations that rely on sparse connections//few requirements per software.

The DevOps problem is not ONE computer. It is a set of computers with different operating systems, and in-house developers who ignore what packaging is all about.

So you don't have one set of equations to solve your dependencies, you have n sets. And now the requirements may link to other sets of equations:
example: my Python program on server X requires nginx on the front end Y.

Oops, I don't have a graph of 40k nodes anymore, but 800k nodes now.
Do you want to compute the number of potential edges with me? No. It is huge.
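
A back-of-the-envelope check in Python, with the round numbers above:

# Worst-case potential edges in an undirected graph: n * (n - 1) / 2.
for n in (40_000, 800_000):
    print(f"{n} nodes -> {n * (n - 1) // 2:,} potential edges")
# 40000 nodes -> 799,980,000 potential edges
# 800000 nodes -> 319,999,600,000 potential edges (400x more for 20x the nodes)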

My set of dependencies has grown a lot. The input data of my algorithm has grown more than linearly, and the CPU time needed to solve the new problem will grow much faster.

If your apt-get install apache takes 3 seconds on your Ubuntu box, your Chef deployment will take 3 minutes.

And, in real life, there are still people installing software from source without a package manager (as if it was not complex enough).

So your data are possibly not even accurate.

To sum up:
We are tending to:
- multiply the number of edges more than linearly;
- increase the number of vertices more than linearly;
and feed that to an algorithm that, in the worst case, takes exponentially more time with more input. And we tend to move towards the worst case.

The time and complexity are increasing dramatically.

Why old wisdom matters!

“I tend to think the drawbacks of dynamic linking outweigh the advantages for many (most?) applications.” — John Carmack

The fashion on Android and OS X is to prefer statically built applications. It removes a lot of edges from the graph. It diminishes the software requirements... on the front end.

But smartphones and tablets are very much CPU/IO/battery bound, so we offload more and more computing to a distributed system called the cloud.

And let's zoom on the cloud system requirements.

Since we have outgrown the resources available on one computer, we are replacing the in-memory cache shared by multiple threads with distributed in-memory caches (memcached, mongo, redis...). We are adding software requirements. We are striping/caching/backing up data everywhere, at all levels.

Since we can't serve the application from one server anymore, we create cross-dependencies to raise the SLA.

Ex: adding a dependency on HAProxy for web applications.

For the SLA.

So your standalone computer needs no 99.9% SLA when it is shut down.

But now, since we don't know when you are gonna use it, where you are, we have to increase the backend's SLA.  

By the way, SLAs compound.

My CDN is 99.9%.
My Heroku is 99.9%.
My ISP is 99.9%.
So my SLA is now... between 99.9% and 99.3%: yep, you forgot to add the necessary links between your CDN, your Heroku, and your customers...
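
A quick Python sketch of how serial availabilities compound; the exact count of 99.9% links in the chain is hypothetical, but the shape of the result is not:

def composed_sla(*availabilities):
    # Services in series: the availabilities multiply.
    result = 1.0
    for a in availabilities:
        result *= a
    return result

print(f"{composed_sla(0.999, 0.999, 0.999):.4%}")  # CDN + Heroku + ISP: 99.7003%
print(f"{composed_sla(*[0.999] * 7):.4%}")         # plus four links: 99.3021%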

You need a 99.9% SLA. Cool: that is your upper bound.

But you are building in a growing uncertainty for the worst case.

Or you could expect more SLA from your provider.

What is this SLA beast?


Service Level Agreement: the availability of a service over a given time, on average.

99% SLA over one year ~= 3.65 days down.

Would you still use google/fb/twitter/whatever if it was down 4 days per year?

If your business loses 1% of a critical service (like mail), you get 1% less gross income.

So ... our modern distributed technologies are aiming at 99.999%

Mathematically, the composed SLA is thus a decreasing function of the number of components in series.

And these architectures are de facto based on increased requirements. They rely on an algorithm that is NP-complete.

Mathematically, dependency resolution is an exponentially time-consuming function, and you are feeding it more-than-linearly growing input.

So ....

Mathematically they are bound to intersect.

Just for the record: Chef recommends 30 min per run // the equivalent of an apt-get install that takes 3 to 45 seconds on your computer.

These are the corresponding downtimes:

Availability    per day       per month    per year
99.999%         00:00:00.9    00:00:26     00:05:15
99.99%          00:00:08      00:04:22     00:52:35
99.9%           00:01:26      00:43:49     08:45:56
99%             00:14:23      07:18:17     87:39:29
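
The table is easy to recompute; a Python sketch, assuming an average year of 365.25 days and a month of one twelfth of that:

from datetime import timedelta

YEAR = timedelta(days=365.25)

for sla in (0.99999, 0.9999, 0.999, 0.99):
    down = 1 - sla
    day, month, year = (down * t for t in (timedelta(days=1), YEAR / 12, YEAR))
    print(f"{sla:.3%}  {day}/day  {month}/month  {year}/year")
# e.g. 99.900%  0:01:26.400000/day  0:43:49.800000/month  8:45:57.600000/year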

So, well... imagine a distributed deployment going bad: what do you think happens to the SLA?
And do you trust people who say they never make any mistakes?

I don't say the days are near when this NP-complete aspect of software deployment will bite us.
I say these days exist.
I say the non-linear nature of the problem makes it impossible to predict when.
I say the phenomenon will be very abrupt due to its nature.
I say we are pushing towards the choices that will create the problem.
I say business analysts, companies, and CTOs will not see it coming.

And that is my last point:

Our scientific education makes us blind to non-linear problems

The first words of your science teachers, forgotten since, before they taught you any science/math, were: «most problems are not linear, but we only study the linear ones because they are the only ones for which we can easily make accurate predictions».

If you have a linear system you can predict, plan... and make money. Without one, well, you are playing the lottery.


What are non-linear things?
- weather (weather forecasting beyond 24 hours is still a scam, even though our computers have been crunching ever more data for 40 years);
- actuarial science/finance: selling products based on the probability that a connected problem will happen;
- resource consumption (coal, oil, fish, cows);
- biodiversity;
- cryptography (you search for symmetrical operations with asymmetrical CPU costs);
- floating-point behaviour ((a op b) op c == a op (b op c) is not always true);
- economy;
- coupled moving systems in classical physics (randomness can be obtained easily from predictable systems if you couple them correctly);
- quantum mechanics (it bounds the max frequency of the CPU);
- the movements of the planets (you can send me your exact solution for where the moon will be relative to the sun in one year, relative to a reference frame made of 3 distant stars);
- internet bandwidth when bought from a tier-1 provider;
- real life, sociology, politics, group dynamics...

You see a common point there?

We still have not solved these problems, and we do not learn how to solve them in our regular curriculum.

I don't say there are no solutions.
I say there is no solution yet. We will never find the solutions if we don't become aware of the problem.
Non-linear problems are not a computer problem. They are an intellectual problem that requires proper thinking.
It requires education.

We are pretending to live under the empire of necessity, but there is no necessity to accept this reign.

We try to build a new world with the wrong tools, because we make the false assumption that we can handle the problems we face with the methods we learned at school. We rely on the «giants' shoulders» to make the good tools. But since we are not well educated, we invest money in the wrong tools for our problems. Tools often made by the right guys for no actual problem.


Firstly, we should slow down our adoption of async/distributed systems.
Secondly, we should lower SLAs to reasonable levels. If a 2% service interruption on one of your production systems can kill you, your business is not reliable.
Lastly, we should understand that the more efficient our systems become, the more fragile they become.
It may be time to trade efficiency for durability. It may be time to slow down and enjoy all the progress we have made.

The crushing of Big Data by the Dude: Ultimate Data!

(Homage to The Big Lebowski.)

I was enjoying leisure and was pretty happy with myself, but annoyed.

Annoyed by the nonsense around big data. Big data should not be measured by how big your data is, but by how significant and simple the data that comes out is.

Big data should be the ultimate data.

Now, let's try to do what all good programmers do before coding.

Relax, close your eyes and imagine the better world for your ultimate customer.

Relax even more. Would I switch places with him?

Holy cow, yes!

But I can't take his place. It sucks to be me. I want to be that guy. So I imagine I become the boss, as a fraud. How would I keep my place?

I need something. Something like BIG data, so that I can easily, lazily make bucks. Big buck$. I need the most simply designed, evilish software to spare me the trouble of working.

That's how you think. And you are right.
 What is it?

Something that shows the business is alive. That I can read easily, pretending I am a wizard, and that no one else will understand.

Imagine a simple digital-clock-like device on the wall with one piece of info. What would you put on it?

The actual flux of income/outgoings, without any artificial filters.

Imagine how thrilling it must be to have a direct "load average" for your business.

Is there a bottleneck in your production? Your load will stagnate.
Is it day or night for your customers? You will see if it matters.
Is it the Christmas holidays? You will see if it matters on your load.

Is there a bug in a piece of software that results in gains? You can give the coder a bonus to keep making the real bucks.
Is there a feature that results in a direct gross loss? Well, even if it is correct, these people are gonna kill your business.

It would be damn fun and exciting. Just remember: when good news happens, you have to understand why.

You need a flux of raw information, maybe coming from other channels (newspapers, for instance), and a timeline of your value, to make the correlation correctly.

You can actually read the results in real time. You could even apply deconvolution filters to erase trends and seasonal activities to improve the results.
And, like a load average on a computer, every time scale makes a difference. But a skilled eye actually sees the patterns fast, so deconvolution may be overkill.
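
As an illustration only, the crudest possible version of that filter in Python: subtract a trailing moving average from a hypothetical cash-flow series, so the deviations stand out:

def moving_average(series, window):
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))  # trailing mean: crudest trend estimator
    return out

# Hypothetical hourly cash flow: a rising trend plus one anomaly at hour 20.
cash_flow = [100 + 2 * h + (50 if h == 20 else 0) for h in range(48)]
trend = moving_average(cash_flow, window=12)
residual = [c - t for c, t in zip(cash_flow, trend)]
print(max(range(48), key=residual.__getitem__))  # -> 20: the anomaly pops out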

And now you too can be the big dude with big bucks, with only one skill: reading a figure. A boss is just a guy who pays attention to always having a positive cash flow on at least one time scale he controls.

You have to respect laziness. So you relax more, 'cause you are not Rockefeller and you have high standards. When you are the boss, you will want to go to Tahiti by cargo boat.

So you think of delegating. You are lazy, but smart.

So you have to be able to delegate the coding without being conned. Because if you hire another you, you know it is easier to be well paid selling your boss IT crap than actually trying to provide real services.

Does it require bigger data?

Nope, it requires the ultimate big data: the direct result on the cash flow of every action, in real time, expressed as a figure. Or anything that correlates with it. One figure to rule them all: cash. We humans are good at out-of-the-box thinking. If I owned a hairdressing franchise, I think I could correlate the water consumption of all the franchises with my activity, with a certain confidence. That might be my ultimate data in this situation. Simple data correlated with my business. It won't be precise; I won't see the financial part.
But what about the finances if I am losing my customers faster than my hair?

It just requires smarter ways of getting data. Growing the size of the data in an uncontrolled way may not be the solution.

Smarter data may involve a little bit of an archeological domain called "empirical science".

It involves studying our math so as to sample a smaller set of data and be able to give results with their error margins.

Adding error margins to the ultimate data is also ultimate.

It enables a controlled tradeoff of cost against exactitude. Precision is about the quantity of information; exactitude is about its quality.

big data is precision.

But ultimate data is cost-effective, exact data handling, with its level of confidence.

Not nice unchecked figures labeled «exact», but less information that is more significant, with a reliable level of confidence: data you can trust, that has now leveled up to the rank of information! You want one small piece of information in real time: are my decisions good or bad? In fact, the trouble is that this figure is a glasshouse: everybody can also see the impact of your wrong decisions. A good tool should be dangerous, else there is no fun.

As your IT teams improve, your error margins will improve (on the condition that you cut some heads every time a value falls outside the previously stated confidence interval, among other precautions).

If you are really paranoid, add the IT-related cost per customer on a side channel.

Nowadays, data grows exponentially with the size of your graph. And, also because of the added dynamics, it grows more than linearly over its lifespan. And the more people use messaging-based systems for more operations, the more growth you add.

If we follow this reasoning, then big data is just overwhelmingly bigger data: the companies that buy it are doomed. Data consumes electricity, to say the least. The OPEX grows more than linearly with customers & providers. The bigger you become, the more vulnerable you are, until you reach a lock-in situation.

Your interest is to control your cost per customer. You want to diminish your cost per customer while your customer base grows, not the opposite. If I were your competitor, I would not contradict you but rather encourage you.

Anyway, the ultimate big data: I will build it with my friends, for us to become the big dudes, relaxed, with time to spare. It can be done. You just have to relax and focus on the essence of the data, not on its accidental nature. The essence lies in simplicity, clarity, and causality, in accordance with your goal.

Ultimate data is the simplest tool for measuring your successes and failures in real time, according to what matters the most. No more, no less.

How FSF (and free software zealots) miss the point of Free Software

My physics teacher used to say:
«why» is a religious question. The really important question is «how», since once you understand the mechanism of something you can improve it, while wondering about finality doesn't bring any useful answers that can be checked or used to influence the system we observe.

This is my first attack on the FSF: while I love the definition of free software given by the FSF, I am very reluctant to adopt its ideological views.

Just to refresh memories, the FSF's definition of free software is made of 4 freedoms that should be granted by the licence: non-discriminative usage, sharing, studying, and modifying the code, which should be provided when a software is released. It results in a single important property of this software: it can be forked.

This definition is accepted by everyone and is the reference (even for the OSI), and some weird spirits say that the FSF licence is a tinge more limiting on freedom of usage compared to other licences such as BSD.

The main distinction between Open Source and the FSF (GNU project) is the finality.

Open source is considered a pragmatic or materialistic approach where software is viewed as an economic externality for which the cost of development is so high, given the available resources, that sharing is just a means to decrease the cost while expanding the «frontier» of new problems to solve.
The FSF thinks software is about freedom. The bits of code we share, according to them, have much more power: they are the foundations of a new free society to empower citizens. They are the partisans of an «intelligent design» of software.

For them, we face a competition against bad proprietary software trying to enclose us in technological lock-in, due to the very nature of an economy based on externalities; according to the Chicago school, the software/OS industry should tend towards natural monopolies: the more you use my goods, the more my costs diminish, thus the more money I make, even if my product is crap, as long as it is adopted.
The FSF also thinks citizens can be free if they have the tools to express themselves, and sees computer networks as the climax of the modern Gutenberg press.

To sum up: for the FSF, free software is a synonym for free speech and a free society.

So, in order to avoid the lock-in, religious zealots focus on trying to provide alternatives to the «potential lock-in technologies» needed to build an independent, functioning OS:
- kernel (GNU HURD ;) );
- system (GNU bash, openSIP, GNU...);
- development (GCC, glibc, GNU Ada, mono, Guava, GNU...);
- security (GnuTLS, GPG...);
- desktop (Gnome...);
- office suite (Gnumeric :) );
...
It is accurate to say that current Linux distributions use a lot of GNU technologies brought up by the FSF in order to be functional. Hence the FSF's claim that one should not say Linux but GNU/Linux (the pronunciation is available as an .au file somewhere on the FSF sites and is worth a good laugh).

While we live under the empire of necessity (externalities), it is wrong to praise the necessity; there is no necessity to live under any empire, may it come from the forces of «right». We should totally consider getting rid of most FSF-sponsored software when it is harmful.

And that is my point of opposition with the FSF: I kind of agree with their views, but I strongly disagree with their way of trying to achieve their goals, which to me is counterproductive in terms of engineering and of education.

Hence the how vs the why approach.
   
The FSF codes alternatives to lock-in technologies without wondering whether:
  1. they are competent in the field;
  2. these technologies were beneficial to begin with.
«GnuTLS considered harmful» rants about how clueless the GnuTLS team is about C coding and code maintenance. Since security is kind of like walking in a very dense minefield, when the code behaves like a drunken man from the most basic coding point of view, you don't trust the code.

FSF zealots will say: not a problem, by the sheer property of openness and improvement, the code will tend towards better code.

Well, this is wishful thinking: crappy engineering, even with good QA, very rarely turns into good engineering in the end (Bf 110, Fulmar...).

And kaboom, you step on a mine: a bug in GnuTLS made it possible, for 5 years, to bypass certificate checking. The NSA really must fear the FSF's claim that being convinced of freeing society makes software that rocks.

And still the FSF spreads FUD (fear, uncertainty, doubt) about the dangers of proprietary software while promoting «safer» free software... Please!

Is GnuTLS an isolated case? Well, Gnome (for using C# notably everywhere), Gnumeric, GCC, glibc, mono... have received a lot of criticism for their engineering, and there are more. As Linus Torvalds says: all bugs can become security bugs, so the first rule of security is to adopt correct software.


Plus, some alternatives are even worse than no alternative at all.

Office suites bring to computers all the confusion of mixing what you mean with what you see, and the stupid "paper" analogy for documents is harmful. People should focus on the content. Software is gifted at applying lots of stupid rules: versioning, applying templates, typographic rules, hyperlinks, access control in a distributed environment. And we still use these bloatwares called office suites that make you write on a virtual piece of dumb paper.

Our computer-designed paper documents are, when you know the rules of typography, below what we can do with manual typesetting. Typographic rules are not for grumpy old men; they are a way to increase the speed of reading and the comprehension of written documents. Yet, with these awesome computers, we have documents that are pathetically less readable than what we could do.

The FSF is by definition reactionary and conservative, «proposing alternatives» to already-adopted lock-in technologies.

Their ideological blindness makes them back wrong, so-called «new technologies» like sheep. Maybe TLS is wrong. Maybe C is wrong; maybe traditions are wrong. Maybe they are right. But I am sure that blindly reacting to «proprietary lock-in» by quasi-systematically proposing a free software alternative is dumb. It sometimes helps the adoption of incorrect technologies.

Thus, and here is my conclusion: the FSF is harmful.

Against all evidence, free software zealots and fanatics are using the post-Snowden era as a way to advocate GNU/free software, saying that, by some property, because the people who code are «good people» validated by a Political Kommissar inquiring into their views, they make «good software».

Well, fuck no.

OpenSSL (which is not GNU) and GnuTLS are below-average crypto suites, and have received very harsh engineering criticism since the Snowden revelations.

I don't mean the proprietary alternatives inspire much confidence in me (like RSA's stuff).

I say we don't want «good or free» software. We want software that is well built and that we can therefore trust.

The FSF says open source, by enabling the «proprietarization» of software, is evil.

Well, before Windows adopted the BSD stack, its TCP/IP stack was much more vulnerable to sequence-prediction attacks. They may have changed their stack since Windows NT.
But what I can say is that it is better for the TCP/IP communication between two computers to be safe. TCP/IP doesn't care about Linux or Windows or BSD, and one compromised computer makes a lot of people insecure.

So, once more, I prefer Windows to use open source software that is well engineered, because it also, selfishly, helps me be safer.

And last and least: what matters in a piece of software is not what it is said to do, or any phantasmagorical values, but its correctness. The FSF is just doing marketing for its own chapel, a chapel to which I don't belong; it does not speak in my name, and I strongly oppose it.

I am a dysfunctional small part of free software, yes. I could even be rated a failure: I can totally live with that. But still, I am part of it.

The FSF should bear in mind that it owns neither free software nor its values. Even as insignificant a developer as I am holds opinions divergent from theirs, like a lot of devs who code instead of writing stupid blog posts as much as I do.

Magnificently defining the four freedoms of software does not give the FSF the right to pretend to speak for, or express the views of, the free software communities.

Collaborating on something does not imply we share a common view. And that is the freedom zero of free software they forgot: the radically non-discriminative freedom to use free software whatever your opinions are.

And for all of us who just «use» software for the pragmatic reason of having «correct» tools: when they push towards unsafe, «incorrect» software using security FUD, there is no way they don't piss us off a little bit.

We don't need a unified free software. We don't need political strength. We don't need a wider adoption of «free software» in security, for instance; we need a wider adoption of «correct» approaches to security, which will not be possible if the FSF encourages the adoption of poor technologies by providing even more broken alternatives to proprietary approaches (sometimes based on the open standards they cherish, like OAuth 2.0).

We don't need an alternative to the desktop «à la Windows or Apple»; we need a correct desktop.

We finally don't need more «adopters» who want free software everywhere; we just need more educated people who can make enlightened choices based not on fear but on understanding. And it would be great if they helped us find new disruptive approaches, based on really new technologies, to solve the old legacy problems left by crappy software and designs, some of them coming from GNU software.

And I am bored with their lack of culture in computer history: the first community aiming for users to be able to grow their own «vendor-independent solution» was not born in 1984 with Stallman, but with the SHARE user group in 1955.

Please, you don't get credit for rewriting history. You just look like an Orwellian dystopian movement, or acculturated religious people trapped in their closed mindset.

Ukraine: Peloponnesian War 2.0 or Cold War 2.0?

Do you know how the Cold War ended?

We think Reagan smartly triggered CCCP 1.0 into an arms race.

The SDI program triggered the Soviet program, and since the Russians were «intoxicated» with news of the over-performing results of the system, they set their economy at full throttle to try to counter the USA's ghost program; and since it was all smoke, they diverted so many resources from their real economy that they collapsed.

Why do I say it was a ghost program?

The ICBM interception program was based on a simple technology called tomography (using n 2D pictures to reconstruct a 3D scene, as used in MOCAP), and on trajectory prediction.

The reason why tomography was doomed from the beginning in this context is the reason why Patriot missiles failed to protect Israel from the Scud missiles in the 90's (the Kuwait war, probably). Scud engineering was bad: the mass was not evenly distributed, thus creating a precession movement, thus making the trajectory unpredictable, and making dynamic tomography a nightmare. So the Patriots, fully loaded with fuel, failed to intercept the almost harmless Scuds (they had no more fuel and almost no explosives) and killed the civilians they were supposed to protect. So the Patriots did not have «bugs», as the American scientific newspapers of the time said, but a design flaw: they were made to intercept well-engineered missiles. SDI was also based on this technology.

Now, 15 years later, I think the USA was not smart to trigger this fake arms race; but, as Eisenhower would have said, the USA is dominated by the military-industrial complex. They believed, against all evidence, that this program could work.

And someone wise stopped it. Unlike the French, who should have stopped the Rafale program a long time ago.

During the Cold War, Europe (England, West Germany, Poland, France) was scared like hell of taking a nuclear warhead to the side of the head by mistake. So they tried to deflate the «fear bubble» by playing a double game, transmitting info to each side so that neither was overstating the power of the other. Basically, European countries were either under USA or Soviet domination and were selectively betraying each side, trying to get the situation to cool down.

Why am I speaking of Cold war?

It is hard to tell, but the guts of the old man I am tell me we are back in a cold war: I recognize in Putin's ways some of the ways of the KGB, and I also recognize some of them in the USA.

Wait, wut! I said the USA makes me think of the KGB?


Sorry, but the KGB was well known for its intensive program of spying on its own citizens with very invasive techniques. For instance, when you hung up your phone, the microphone stayed open. This way the KGB could listen to everyone.
Doesn't it look familiar? Like the NSA program?

The only difference is that the «good guys» are now doing it for the same noble reasons the KGB had: identifying threats against the nations that share a common interest in defending noble ideals and power structures (apparatchiks or the 1%, it is all the same). The end justifies the means, they said.
Sorry to tell you, these were the KGB's justifications, and they were as legitimate as the NSA's current ones. At least the USA does not torture people, do they?

Now, the NSA program is exponentially costly and inefficient. They forgot the basics of physics once again.

Since I cannot find the content of the first lesson of «old» signal processing, let me use my memory instead of Wikipedia and Google:

When we were designing a measurement system in SP, before anything else we had to fill two matrices: one with the probabilities of true/false positives/negatives, and one putting a price tag on each event:

probability        True              False
Positive           aiming for 1.0    aiming for 0.0
Negative           aiming for 0.0    aiming for 0.0

cost or impact     True                            False
Positive           aiming for 1.0                  aiming for 0.0
                   (stuff detected accurately)
Negative           aiming for 0.0                  aiming for 0.0





At the very least, the detection system's costs had to be sustainable (a fixed operational cost per month was cool, or sometimes zero).
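
In code, the lesson of those two matrices boils down to one expected-value sum. A Python sketch with completely made-up numbers, just to show where mass surveillance hurts:

# Hypothetical per-event rates and unit costs for a mass-surveillance detector.
p_tp, p_fp, p_fn = 1e-7, 1e-3, 1e-7
probability = {
    ("positive", "threat"): p_tp,
    ("positive", "innocent"): p_fp,                    # false positive
    ("negative", "threat"): p_fn,                      # false negative
    ("negative", "innocent"): 1 - p_tp - p_fp - p_fn,  # true negative
}
cost = {
    ("positive", "threat"): -1e6,   # a real threat caught: huge saving
    ("positive", "innocent"): 1e3,  # an innocent traveller harassed
    ("negative", "threat"): 1e9,    # an undetected attack
    ("negative", "innocent"): 0.0,  # nothing happens
}

# Expected cost per screened event: sum of probability * cost.
print(sum(probability[e] * cost[e] for e in probability))
# ~100.9: dominated by the false negatives and the volume of false positives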


The problem with the NSA's technology is not its obvious capacity for detecting true positives, but the fact that it has too many false negatives and false positives, plus the system's cost is rising exponentially over time.

Let me put real examples in front of the concepts:

False negative: the cost of not detecting a threat (9/11, the Ukrainian invasion?, MH370?, the London bombings 10 years ago...).

False positive: various people reporting they cannot take a plane because they share a name with a terrorist, or having to justify to the secret services why they went on a website to buy a backpack and then on a site to buy fertilizer... (globally, all the normal citizens harassed on false assumptions of being terrorists, or because of identity mistakes).

The cost: in distributed systems we have diminishing returns. If you have a local hard drive and you 2x its size, you get 2x the data. On the cloud, for redundancy reasons, it is more like 2x the size = 1.3x the data (because of redundancy, fault tolerance, striping). The data are an integral of the flux over time; they accumulate. So the costs accumulate with a more-than-linear law.


All the money that DARPA/NSA/the government puts into research for their weapons tends to percolate into the real economy. The «cloud», «big data», «k-clustering» techniques are the new DARPAnet. The first DARPAnet was a success at getting real-time signals from all the radars across the USA to track potential Soviet bombers.
Here, the whole economy is intoxicated with a system that encourages scaling with systems based on diminishing returns. So it is an integral of an integral of people using resources more than linearly. This integral diverges and explodes over time (unless you are using string theory (why do I have difficulties with string theory?)).

As a result, this time it is not certain that Russia or whatever country is going to collapse its economy in the arms race, but the USA is surely doomed.


Will Russia or CCCP2.0 win?

Russia also has another problem. Their wealth comes from the USA/Europe using too many resources. And their country is blatantly a tinge more corrupt than «occidental» ones, so they too have to spend more and more resources to extract/access the resources they provide (add Hubbert's law).

So will Europe win? We are all part of a big negative feedback loop.

There is another possibility: I think we are entering a Peloponnesian War scenario.

All the nations taking part in the potential upcoming cold or hot wars are ready to trade their civilizational values to win. As Sparta/Russia (the dictatorship that protected the weakest) and Athens/USA (the democracy that became an imperialistic power) did.

Just for the record, the outcome of the Peloponnesian War is that afterwards neither Sparta nor Athens was ever again a major power.

The philosophical gain of liberalism, communism, socialism, and capitalism alike was a major breakthrough from the previous political philosophies: we stopped saying homo homini lupus est. These are the first ideologies to say that mankind may not be good, but we can make it good. So, basically, we trust citizens and mankind. Anyway, if we don't, we cannot face the changing future.

Whatever the USA, Russia, European countries, Muslim countries... are saying, they are no longer following any of these philosophies. They are back to the 17th century, where small elites (call them apparatchiks, the 1%, «l'élite», the pure... or whatever) basically distrust the citizens, and are willing to provoke huge conflicts as in the early 20th century.

That is a major problem: no future is possible without hope, nor without resources. The problem with the current situation is that we are disregarding the resources we are destroying that are necessary for living. The wealthiest don't care anyway; they might think poor people need to die to make the planet more sustainable, and that they themselves will be preserved.

I do find the idea appealing, but... well, a society that kills its production capacity may not have wealthy people for long.

And the more all countries' elites disregard the people, the more we destroy our civilizations.

So let's hope we are only facing a Cold War 2.0; but given the state of the economies (Russia, USA, China...) and the exponential diversion of resources towards non-productive goals, I guess we are pretty much headed toward a collapse of our civilization.

And if you are interested, I can develop my conclusion and show that the root of the problem is Socratic philosophy. The Athenians were right to put him on trial for his life, even if it was too late to heal their society.

Backing up contacts from an iPhone

For the impatient:

- connect to your iPhone (ssh) and transfer AddressBook.sqlitedb from your home dir;
- locally, run sqlite3 AddressBook.sqlitedb;
- type:

.output address.csv
.mode csv

-- one line per phone/mail value, joined to the person who owns it
select
    p.First, p.Last, p.Organization, p.DisplayName,
    CASE WHEN v.label > 5 THEN 'mail' ELSE 'phone' END,
    v.value
from ABMultiValue as v, ABPerson as p
where p.ROWID = v.record_id;

 And that's done
 

I am pretty sure there are apps to back up your contacts from your phone.

However, I lately discovered my iPhone was illegal and dangerous: it was jailbroken.

In order to strictly follow the law, and not at all because it no longer works and fails in a scary fashion that makes me think it is dying, I bought an Android to replace it.

One thing occurred to me: had I had no warning, I would have been dead without the contacts I carelessly forgot to back up. So I decided to opt for a long-lasting technology: a paper notebook.

So first, I used the ssh server on my iPhone to connect (no, my password is not alpine). I found there was a very nice AddressBook.sqlitedb in my home dir, and copied it to my Linux box.

I wanted to make a demo of sqlsoup, but on Linux Mint sqlsoup support for sqlite is broken.

Anyway, I needed to learn sqlite in less than 2 hours. Because sqlite cannot be as infuriating as "NoStandardEspeciallyNotSqlBecauseReinventingTheWheelPoorlyIsSoCool".

I did a simple introspection (.schema)

sqlite> .schema
...
CREATE TABLE ABMultiValue (UID INTEGER PRIMARY KEY, record_id INTEGER, property INTEGER, identifier INTEGER, label INTEGER, value TEXT);
CREATE TABLE ABMultiValueEntry (parent_id INTEGER, key INTEGER, value TEXT, UNIQUE(parent_id, key));
CREATE TABLE ABMultiValueEntryKey (value TEXT, UNIQUE(value));
CREATE TABLE ABMultiValueLabel (value TEXT, UNIQUE(value));
CREATE TABLE ABPerson (ROWID INTEGER PRIMARY KEY AUTOINCREMENT, First TEXT, Last TEXT, Middle TEXT, FirstPhonetic TEXT, MiddlePhonetic TEXT, LastPhonetic TEXT, Organization TEXT, Department TEXT, Note TEXT, Kind INTEGER, Birthday TEXT, JobTitle TEXT, Nickname TEXT, Prefix TEXT, Suffix TEXT, FirstSort TEXT, LastSort TEXT, CreationDate INTEGER, ModificationDate INTEGER, CompositeNameFallback TEXT, ExternalIdentifier TEXT, StoreID INTEGER, DisplayName TEXT, ExternalRepresentation BLOB, FirstSortSection TEXT, LastSortSection TEXT, FirstSortLanguageIndex INTEGER DEFAULT 2147483647, LastSortLanguageIndex INTEGER DEFAULT 2147483647);
CREATE TABLE ABPersonChanges (record INTEGER, type INTEGER, Image INTEGER, ExternalIdentifier TEXT, StoreID INTEGER);
...

My arch-nemesis is there: an EAV (Entity Attribute Value) soft model. ABPerson holds the persons you know, ABMultiValue is where the values are, and the game is to find what references them and what the attribute names are.

Normally, getting EAV values takes 2 joins:
- you get the value attached to the person, and you display the category of the value (see the sketch below).
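
For the record, the by-the-book two-join version would look something like this Python sketch; it assumes (as the schema suggests, but I did not verify) that ABMultiValue.label references ABMultiValueLabel's ROWID:

import sqlite3

conn = sqlite3.connect("AddressBook.sqlitedb")
# Classic EAV read: one join to reach the value, one more to name it.
rows = conn.execute("""
    SELECT p.First, p.Last, l.value AS label, v.value
    FROM ABPerson AS p
    JOIN ABMultiValue AS v ON v.record_id = p.ROWID
    JOIN ABMultiValueLabel AS l ON l.ROWID = v.label
    WHERE v.value IS NOT NULL
""")
for first, last, label, value in rows:
    print(first, last, label, value)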

Well, screw that: a fast scan of ABMultiValueLabel tells me the phone label ids are less than or equal to 5.
Then quick trial and error tells you that the record_id referenced in the EAV table is the person's ROWID.


What did we learn?

SQL is cool: even if sqlite is not mysql/sqlserver/postgresql, the learning curve is close to zero. No surprises, so I am happy with SQL.

Apple uses an MVC pattern for the contacts app:
- the model is the DB tables;
- the controllers are the triggers in the DB;
- the view is the Objective-C code.

KISS, nice, efficient.

I don't like EAV, but it is an easy way to attach n "things" to a contact. Thanks to this, you can have 6, 9, or 666 mails/phones for one person. It is, in my opinion, a textbook case of when EAV should be used. There are a lot of ways to avoid EAV, especially when you have a DB that supports multivalued entries (yes, my dear Postgres, I am looking at you, darling).


I am still pissed off at Apple and Google, because I need days to access my smartphones -which are, to me, normal computers- the way I want: with ssh, sqlite, python, strace...
I have a black box I am supposed to trust, based on the work of individuals who have every incentive to betray me when the stakes get high enough, and I cannot check anything. I hate it: I am a control freak, not a hipster who likes to show off with his costly gadgets. I am utterly disappointed by Android and Apple alike.

I could make wonderful applications if I didn't have the stupid barrier of using their cumbersome SDKs and frameworks. It is "an expert's" realm, where the only expertise lies in the art of making a simple thing (development) quite more complex than needed.

I hate smartphones.

I could not buy a normal phone that only does phone calls and whose battery can last a month. I don't need a computer that I cannot use.



2 seconds of nonsense

- But you are obnoxious!
- Yes, I own it: I am often right. Well, admittedly not often, but far more often than my detractors!
- Unfortunately, you are not a genius.
- No, an artist, and art does not tolerate mediocrity.
- Yes, but you pushed the limits: asking a project manager for documentation!
- Then we are no longer in the domain of art but of trickery, unfortunately. How do you expect to operate a Boeing 747 without a manual? Don't bother, I know your answer. Well, that's what I told the project manager: that it was an over-engineered monstrosity, but that I didn't mind, I just wanted the front door.
- You are insulting. And you judge others.
- Because people judge me on my performance, which I cannot achieve without their help.
- I mean the tone.
- Ah! Because our director calling the competitors spineless wimps is better?
- Yes, but for every mean word he says a hundred nice ones. You are really not constructive.
- Is there a polite way to tell someone you are going to get fired if he doesn't hand over the doc you need to code?
- No, but asking by email? We do agile: you go to people's desks and you talk with them.
- And I take a pencil and write it all down? At one page per hour, given the cost of the pen and of the lost engineering time, I think a doc would have been better, and cheaper, no?
- No, but it is insulting to ask for docs and for APIs that work.
- APIs that I must use for my code, for whose delivery you threaten to fire me because I don't have them?
- You see, you are doing it again; you are insulting, teaching everyone their own job. And never asking your colleagues for help.
- Wut? But I don't ask them anything, because I have the answers. I need the doc.
- You could ask them to help you.
- To go ask for the doc?
- Yes. For example.
- I told you I probably have Asperger's syndrome, and I have trouble knowing whether you are joking.
- Well, no. You disturb people and prevent them from working.
- By asking them for the things they delivered, which are not available? And if I don't deliver, I get fired!
- You see, you are using that filthy language of denunciation again. By the way, what makes you think you have Asperger's?
- Oh! Small difficulties understanding people, and knowing when humour starts and stops. It quickly seemed to me that it looked like that; then I looked at what it described, and at the people I knew who resembled it, and they had all stopped being happy. And in fact I realized those idiots had just decided humour was absent all the time, whereas it's the opposite: they just had to lay out the game matrix:

                      Says he's an Aspie   ~Says he's an Aspie

Is an Aspie           unhappy              maybe

Is ~an Aspie          unhappy              maybe

It is obvious you would have to be crazy to take the risk of being unhappy.
- Are you making fun of me?
- I take no risk; it's mathematical, my plan is perfect.
- You are insufferable.
- But I am right more often than you are.
- You are going to get fired if you keep this up!
- A clause? Maybe to allow me to stay? I like this company: between the massages, the basketball and the coffee breaks, I feel good. The salary is 10% below the city average, but I like it here.
- Could you maybe deliver the login function you have been promising us for 6 months?
- When?
- Within a week!
- If I deliver within a week, we agree that it means I could have been seriously bullshitting you.
- Hum, well, yes: if you deliver in one week something promised in 3 months when, according to us, you have clearly done fuck all, yes.
- So you are firing me.
- Yes.
- Oki. By the way, could you tell me why I have trouble keeping my jobs?