Nail programming : reinventing make in bash : you show your code, but if you don't tell how to build it, you are a scammer

I always had a beef with so-called open source projects that ship their code but not their tooling for building it.

You get the code, but nothing is said about building the initial database, generating the doc, or how to fetch/build assets.
And that's exactly what I did in my previous blog posts about building a chronosociogram. But I have an excuse : it's not ethical open source, it's wtf-public-license open source : MY FUN FIRST, and no unpaid work hours for the benefit of parasites.

However, you can be as rebel as you want, you still need to build the code. But I don't build tools upfront : I let them appear from « scratch programming ».

digression : the 50 shades of « hand programming »

I belong to the sect of the methodology « la rache ». « Programmer à la rache » (close to rush) is the main methodology applied in France. Translated in the US as « le » rush. Rush programming is an art that deserves a french article to distinguish it from the more common gruik programming (aka programming with your feet). Within this category there are the engineers who love « assisted generative programming », helped by IDEs, « frameworks » or now the almighty Artificial Intelligence, and the rebels who prefer « hand crafted code ».

I live in Toulouse where rain comes in one flavor : raining or not. I come from the Vexin where rain comes as pluviote, crachin, drache, bruine, averse, giboulées ... Having shades of perception for a simple thing is useful when you don't want to be drenched by drache or overwhelmed by a lack of defensive measures. And sometimes, even in « la rache », we need to do things the right way to not drown ourselves in the complexity of our coding.

First is POGNE programming

It's how prototyping begins. With a clear view and a firm grasp of less than 100 sloc in a single NICE monolith.

Second is NAIL programming

Well, on the path to delivery you encounter difficulties you had not expected ... And instead of fetching your knife from the garage to scratch the spot off the ground, you use your nails. Your code base gets disgusting, but YOU HAVE TOOLS to ease your job. But it hurts a little.

Last is mimine programming

Mimine is the gentle hand that comes to the rescue and makes you spiral into building simple tooling to make your code manageable again.
As a craftsman, you clear your workshop, sharpen your tools, clear the air and make your place ready to begin another day of POGNE programming.

There is l'art (the academic way of building code) and la manière : your own personal sensibility of « it works for me »®© packed in a few lines of code, so that you have time to go to savate boxing training (breaking knees the french elegant way). La « manière » is a pillar of « la rache » : don't go climb the learning curve of yet another tool you hate when you can brutally hack one.

Why do you always need a makefile ?

Lol, if your code does one thing simply (like listing) I'm not sure you need it. :D

I needed a makefile because I introduced a 2-pass build to ease the life of my Core i3 with 4 GB of RAM, which ... as a factor of serendipity, helps building nice tools.

Let me explain : using a dot file as a template is less expensive than REBUILDING the same dot file on each iteration... But since f-strings/regexps are not my favourite in python, I used perl to build the dot template generated from python and reinject it into the python script. An « n » stage build requires assets (artefacts), and sometimes your history forgets how you did it, and so do you. So you need to create a makefile.

Good makefiles tell you what they are doing, not only so you understand when they fail, but also to help you build an intuition of where time is spent. It is as much an informative tool as a structuring tool.

What you want from a trivial makefile

Helping you when, the day after a long focus on code, your brain has discarded every bit of info that is touchy to remember : like how your excited brain worked. It includes :
  • in which order to do stuff
  • dependencies
  • parameters and API
  • which stages can be resumed after an error in a stage
  • how to not go through a lot of useless stages when you lack time
  • Where you spend the most time when something changes
  • bash completion of course !

When you are an adept of « la rache », global states/variables are handled like poison. At a low dose they help, too many of them kill. In LA RACHE you ALWAYS use global variables to avoid MAGIC NUMBERS, BUT they MUST BE ON TOP of the code, every time.

In LA RACHE we embrace universal key/value passing from perl to python to bash to ffmpeg : for this we use a secret tool : undocumented environment variables, so that we can later choose what to expose.
It is a simple dispatch table, called kind of recursively if you consider stack-based recursion noble enough, and it checks for artefact presence to call dependencies if required.
I am pretty proud of one « racherie » -the 3 perl one-liners there- that actually transforms the dot file into an f-string used as a template by python. As a source of wisdom I dare say : free text written by an algorithm is always easier to parse with a regexp :D
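Here is a minimal sketch of what such a ./make looks like, assuming made-up stage and artifact names (template.fstr, graph.png, OUT_DIR ...) : a dispatch table, env vars with defaults on top as the universal key/value channel, and an artifact check that recurses into a dependency only when needed.

```shell
#!/usr/bin/env bash
# Hypothetical ./make sketch, not the real one: dispatch table + env-var
# globals + artifact-driven dependencies.
set -euo pipefail

: "${OUT_DIR:=out}"           # globals ON TOP, overridable from outside
: "${DOT_ENGINE:=dot}"

log() { printf '>> %s\n' "$*" >&2; }   # say what we are doing

need() {                      # run a stage only if its artifact is missing
    local artifact=$1 stage=$2
    [ -e "$artifact" ] || run "$stage"
}

stage_template() { mkdir -p "$OUT_DIR"; : > "$OUT_DIR/template.fstr"; }

stage_graph() {
    need "$OUT_DIR/template.fstr" template   # dependency by artifact
    log "would render with $DOT_ENGINE here"
    : > "$OUT_DIR/graph.png"
}

run() {                       # the dispatch table itself
    log "stage: $1"
    case "$1" in
        template) stage_template ;;
        graph)    stage_graph ;;
        clean)    rm -rf "$OUT_DIR" ;;
        *)        log "unknown stage: $1"; exit 1 ;;
    esac
}

run "${1:-graph}"
```

`./make graph` resumes where the artifacts left off, `OUT_DIR=elsewhere ./make graph` reroutes everything : two of the wishlist items above, for free.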

Why does the making have more code than the main code ?



For the same reason that in real life you often spend more time cleaning your environment before (and normally after) a task. Stuff a manager never wants you to add to your timesheets because « it is non productive ». Well, here at home, I have no manager to tell me how to spend my time wisely. So ... I do whatever pleases me.

What is the point of all this ?

Practically, the debug messages give me a clear intuition of where my computer spends the most time. An intuition I refined by printing dots at interesting points. Thanks to this, for instance, I noticed that dot and convert were 100% monocore, otherwise I would not have parallelized part of the code according to my number of cores.
Also, having an intuition of where time goes can help you see counter-intuitive results. Like with sfdp. Not super fils de p..., the graphviz engine intended for speed. Except its algorithm scales the layout out to avoid overlaps, efficiently making the convert step waaaay slower. So all in all, sfdp + convert is slower and more prone to OOM than dot + convert. It is something you cannot guess if you mute all feedback. Call it visual profiling :D

And also, putting code in a makefile is a major pain. When making an almost-cascading model of tasks involving few dependencies, coding the logic myself (2 hours) was totally worth the fun of the process.

Having no code review demanding that I begin my prototype with clever CLI libs is also nice. Environment variables have regained traction for argument passing (docker finally got a good side) : I can pass variables without the craziness of getopt syntax to handle, which makes everything easy, including going back and forth between the make and the code.
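The whole trick fits in one line. A sketch, with FPS as a made-up variable name : a value set in the environment crosses the bash → python boundary with zero argument-parsing code on either side.

```shell
#!/usr/bin/env bash
# Env-var passing sketch: no getopt, no argparse, the variable just flows.
set -euo pipefail
FPS=25 python3 -c 'import os; print("fps:", os.environ.get("FPS", "30"))'
# -> fps: 25
```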

Bash completion is fairly easy as a one shot :
complete -W "all still_images muscle backbone movie clean very_clean" ./make
of course I could give make a real completion function, but it is not worth the pain.
Makefiles, and whatever flavour of snake oil you are drinking, are above all about focus : separating the code that is complex and that you want to focus on from the rest. Without this, juggling with your short-term memory while building and debugging becomes hell. Of course, locally I use a version control software à la git, because regressions bite us all hard when they do. Especially the idiot who forgets to use a VCS.

Bref, I don't say makefiles are shitty, I say : I don't compile C here ; my global needs are the same (reproducible builds) but my path is different, since I use bash/perl/python/CLI tools that are better invoked from a shell.

Home is really the place where coding is nice and relaxing. I don't trust pro coders who refuse to touch a keyboard outside work : how can they know their actual beliefs about style, dos and don'ts, languages and frameworks are on par with reality ?
