Using Podman in bridged network mode

9/21/2019

Podman is really great for those of us who don't want the Docker daemon running in the background all the time.

It is mostly compatible with Dockerfiles and Docker CLI syntax (as far as I've read online and noticed, while poking at both of them), but some things are handled differently due to the nature of Podman's daemonless architecture.

I was trying to create a couple of containers with IP addresses so I could test some Ansible scripts. I have to make sure the scripts work for different distributions and different versions of those distributions, and managing virtual machines (or even clusters of those) in VirtualBox can become a bit frustrating.

So for Ansible, I usually need hostnames or IP addresses to define which hosts my playbooks run against. But whenever I ran containers, they didn't have IP addresses:

[user@host ansible]$ podman run -dit --name centos1 centos:7
faf5abe92901c4757982bbcb39f0a89800e7378358fff88908bea09161922282
[user@host ansible]$ podman inspect centos1 | grep -i ipaddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",

Of course, I could open up ports by publishing them with the -p flag, but that won't help me with Ansible.

Google to the rescue??

Googling turned up no helpful advice. I saw that the --net host option is dangerous and should be used with caution, but found nothing on bridged networking as I know it from VirtualBox.

After reading the docs on podman run, I figured it out: if you're running Podman without root, it can only use the network mode slirp4netns. This creates an isolated network stack, so you can connect to the internet from inside your container and bind certain container ports to the user-bindable ports on your host, but nothing more.

To be able to select the bridged network mode, which does exactly what I need, you'll have to run Podman as root. It turns out that bridged mode is the default when running Podman as root.

Now, to further streamline my Ansible testing procedure, I can even specify which IP address should be used by the container:

[user@host ansible]$ sudo podman run -dit --ip 10.88.0.42 --name centos3 centos:7
3b56ae068a628027c7d8815485022f6cb59c7aa5d26e6bf4137961ecb6307952
[user@host ansible]$ sudo podman inspect centos3 | grep -i ipaddress
            "SecondaryIPAddresses": null,
            "IPAddress": "10.88.0.42",

And of course, I can now reach any ports I open in the container via this IP address.

Note, however, that these containers can only be reached from your own machine, via the bridge interface (usually called cni0) created by Podman. To access the containers by IP from other machines in your network, you'd need to bridge them into the 10.88.0.1/16 subnet somehow... But that will have to wait for another TIL on another day...


The HTML Base element

9/20/2019

The HTML <base> element specifies default attributes, such as a base URL and a default target, for all links in the document at once.
Let's take a look at this example:

<head>
  <base target="_blank">
</head>

Now all links will open in a new tab by default.

Additionally, the href attribute can be set within the <base> tag. This sets the base URL to be used throughout the document for relative URL addresses.

For example:

<head>
  <base href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/base">
</head>

The following link:

<body>
  ...
  <a target="_blank" href="#Usage_notes">Usage notes</a>
  ...
</body>

Will actually point to

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/base#Usage_notes

Note:

  • <base> is a void element, so it shouldn't have a closing tag. 🤷
  • There can be only one <base> element in a document.

More info ℹ️


Destructuring assignment features in es6

9/19/2019

Today I learned about the helpful ES6 "destructuring" feature to unpack arrays and objects.
It is a convenient way to extract values into distinct variables.

It is possible to object-destructure arrays:

const { 0: x, 2: y, 3: z } = ['a', 'b', 'c', 'd'];
console.log(x) // 'a'
console.log(z) // 'd'

This works because array indices are properties as well!

Alternatively, array-destructuring can be applied to any value that is iterable, not just to arrays:

// Sets are iterable
const mySet = new Set().add('a').add('b').add('c');
const [first, second] = mySet;
console.log(first) // 'a'
console.log(second) // 'b'

// Strings are iterable
const [a, b] = 'xyz';
console.log(a) // 'x'
console.log(b) // 'y'

Get Pull Request Approval with the GitHub API (v4)

9/18/2019

Natively, the GitHub API does not provide a way to obtain a pull request's approval status. Here's a workaround.

It is necessary to compare the date of the newest commit and the date of last approval, because new commits automatically invalidate any approvals (default behavior, can be configured).

import { graphql } from "@octokit/graphql"
import { Repository, PullRequest } from "./types"

const query = graphql.defaults({
  headers: {
    authorization: `token ${process.env.GITHUB_TOKEN}`,
  },
})

function isApproved(pr: PullRequest): boolean {
  if (!pr.reviews.edges.length) return false

  const latestCommit = new Date(pr.commits.edges[0].node.commit.authoredDate)
  const latestApproval = new Date(pr.reviews.edges[0].node.updatedAt)

  return latestApproval > latestCommit
}

async function getAllApprovedPullRequests(): Promise<PullRequest[] | null> {
  const queryResult: any = await query(`{
      repository(owner: "cybertec-postgresql", name: "today-i-learned-content") {
        pullRequests(last: 25, states: OPEN) {
          edges {
            node {
              title
              number
              reviews(states: APPROVED, last: 1) {
                edges {
                  node {
                    updatedAt
                  }
                }
              }
              commits(last: 1) {
                edges {
                  node {
                    commit {
                      authoredDate
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  `)

  const repo: Repository = queryResult.repository

  if (!repo.pullRequests.edges) return null

  let pullRequests: PullRequest[] = repo.pullRequests.edges
    .map(edge => edge.node)
    .filter(pr => isApproved(pr))

  return pullRequests
}

Pretty CSS Hack to debug layouts

9/17/2019

  1. Create a new bookmark
  2. Add the following code to the bookmark URL:
    javascript: (function() {
	var elements = document.body.getElementsByTagName('*');
	var items = [];
	for (var i = 0; i < elements.length; i++) {
		if (elements[i].innerHTML.indexOf('* { background:#000!important;color:#0f0!important;outline:solid #f00 1px!important; background-color: rgba(255,0,0,.2) !important; }') != -1) {
			items.push(elements[i]);
		}
	}
	if (items.length > 0) {
		for (var i = 0; i < items.length; i++) {
			items[i].innerHTML = '';
		}
	} else {
		document.body.innerHTML +=
			'<style>* { background:#000!important;color:#0f0!important;outline:solid #f00 1px!important; background-color: rgba(255,0,0,.2) !important; }\
            * * { background-color: rgba(0,255,0,.2) !important; }\
            * * * { background-color: rgba(0,0,255,.2) !important; }\
            * * * * { background-color: rgba(255,0,255,.2) !important; }\
            * * * * * { background-color: rgba(0,255,255,.2) !important; }\
            * * * * * * { background-color: rgba(255,255,0,.2) !important; }\
            * * * * * * * { background-color: rgba(255,0,0,.2) !important; }\
            * * * * * * * * { background-color: rgba(0,255,0,.2) !important; }\
            * * * * * * * * * { background-color: rgba(0,0,255,.2) !important; }</style>';
	}
})();

To use it, just navigate to a website and click on the bookmark you defined.

The image below shows this website with the bookmark activated.

Screenshot of Cybertec Layout

You can use it on any page.

Ain't that cool? 😀

P.S.: Tested on Chrome and Firefox.

Check the official post and this Gist for more information.


Typescript json validation with io-ts

9/16/2019

There's a Typescript library called io-ts that helps to strongly type JSON data fetched from the server while providing static Typescript typing at the same time.

Problem

Imagine we have this line that fetches some data from an endpoint:

const employee = await fetchEmployee();

employee will probably have the any type. If we knew the shape of the employee object, we could create a type and cast employee:

type Employee = {
  firstName: string;
  lastName: string;
}

const employee = await fetchEmployee() as Employee;

But now we are assuming the shape of employee, and if it changes in future versions of the backend, it can lead to annoying runtime errors such as accessing properties on undefined objects, which can be hard to track. We could validate it with a lib such as ajv, but we wouldn't be able to have a single source of truth.

Solution

With io-ts we can define a type like this:

import * as t from 'io-ts';

// The runtime type we will use to validate the data fetched from the server
const Employee = t.type({
  firstName: t.string,
  lastName: t.string,
});

// The static type. The above runtime type acts as the single source of truth
type Employee = t.TypeOf<typeof Employee>;

// Now we can do this (in pseudo-code)
const employee = Employee.decode(await fetchEmployee());

console.log(employee.firstName);  // works
console.log(employee.foo);        // typescript compile error

Now employee is correctly typed, and the shape of the data is validated at runtime.

Implementing the pseudo-code

The lib is great, but the documentation found in the repository's README is a little confusing. The easiest way I found to just validate data and throw if the shape is incorrect is like this:

import { getOrElse } from "fp-ts/lib/Either";  // io-ts has fp-ts as a peer dependency
import { failure } from "io-ts/lib/PathReporter";

const toError = (errors: any) => new Error(failure(errors).join('\n'));
const employee = getOrElse(toError)(Employee.decode(await fetchEmployee()));

if (employee instanceof Error) {
  throw employee;
}

console.log('the first name is', employee.firstName)

These steps can easily be extracted into a helper function.
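Such an extraction might look like the sketch below. The name decodeOrThrow is made up for illustration, and the code assumes io-ts with fp-ts as a peer dependency, as in the snippets above:

```typescript
import * as t from "io-ts";
import { getOrElse } from "fp-ts/lib/Either";
import { failure } from "io-ts/lib/PathReporter";

// Hypothetical helper: decode `value` with `codec`, throwing if the shape is wrong
function decodeOrThrow<A, O, I>(codec: t.Type<A, O, I>, value: I): A {
  const toError = (errors: t.Errors) => new Error(failure(errors).join("\n"));
  const result = getOrElse<t.Errors, A | Error>(toError)(codec.decode(value));
  if (result instanceof Error) throw result;
  return result;
}

// Usage (assuming the Employee codec and fetchEmployee from above):
// const employee = decodeOrThrow(Employee, await fetchEmployee());
```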


Practical use of CSS Grid min-content

9/15/2019

There are several use-cases for CSS Grid. In this example, it is used to solve the following requirements:

  • Create a table component with a fixed height
  • A header should always be at the top of the component
  • A footer should always be at the bottom of the component
  • The component should always have the same height, no matter how many rows are displayed
  • Overflow should be scrollable

This can be accomplished with auto and min-content:

<div class="grid">
  <div class="header-filter">
    ...
  </div>
  <div class="table">
    ...
  </div>
  <div class="footer-pagination">
    ...
  </div>
</div>

.grid {
  display: grid;
  grid-template-rows: min-content auto min-content;
  height: 250px;
}

.table {
  overflow: auto;
}

Check out this CodePen for a working example.


Disable preview mode in VS Code

9/14/2019

Screenshot of file opened in normal and in preview-mode

What is the difference between the left and the right file? The right one is opened in preview mode.

Opening another file won't result in a new tab. Instead, the preview tab is reused for the new file, meaning the file currently previewed is closed.

The preview mode is used, for example, when you click on a file in the explorer or open a file through the Quick Open feature (Ctrl + p).

The following settings will disable the preview mode:

"workbench.editor.enablePreview": false,
"workbench.editor.enablePreviewFromQuickOpen": false

For more information on the preview mode, check out the official documentation.


Different color in Firefox and Chromium

9/13/2019

Today I opened a website in Firefox and Chromium simultaneously. I noticed that the same color looks different in the two browsers.

I used the following inline page (a data: URL) to check it out:

data:text/html,
<style>
  body {
    margin: 0;
    padding: 0;
  }

  div {
    width: 100vw;
    height: 100vh;
    display: flex;
    align-items: center;
    justify-content: center;
    background: blue;
    color: white;
    font-size: 20vw;
  }
</style>
<body><div>0000ff</div></body>

Difference between Firefox and Chromium color

Turns out that this is due to the color profile selected by the browser.

You can change the color profile of Chrome / Chromium at chrome://flags/#force-color-profile.

Force Color Profile option

Check here for more information.


PostgreSQL CSV Import: missing data for column "..."

9/12/2019

Problem

The following .sql script will error out prematurely:

create table if not exists foo (
	bar text not null,
	baz text not null
);

copy foo (bar, baz) from stdin (format csv, delimiter ';', header true);
bar;baz
Lorem;ipsum
dolor;sit
amet,;consectetur
\.

Here's the accompanying psql output:

CREATE TABLE
psql:<stdin>:11: ERROR:  missing data for column "baz"
CONTEXT:  COPY foo, line 5: "\."

Solution

Adding a newline after the \. termination sequence fixes the error.


Collapsible Markdown

9/11/2019

When presenting code or logs in Markdown, things tend to get out of hand quickly.
The <details> and <summary> HTML tags can be used to mitigate this by hiding parts of your document behind a collapsible section.
Be aware that Markdown-specific syntax constructs within those HTML tags are only guaranteed to be rendered correctly by CommonMark and/or GFM compliant parsers (for example, the GitHub Markdown parser).

<details>
	<summary>Click to expand this section!</summary>
	<h5>A nice Javascript pitfall!</h5>

	```javascript
	console.log(['1', '7', '11'].map(parseInt));
	```
</details>

This Markdown snippet creates the following result:

Click to expand this section!
A nice Javascript pitfall!
console.log(['1', '7', '11'].map(parseInt));

Infinite rows with CSS Grid

9/9/2019

CSS Grids can be used to lay out your website in columns and rows. But did you know you don't have to specify the number of rows?

You can define the height for each row with grid-auto-rows.

#grid {
  background-color: #1a2b3c;
  display: grid;
  grid-template-columns: repeat(10, 1fr);
  grid-auto-rows: 50px;
}

#item1 {
  background-color: #6699ff;
  grid-column: 1 / 4; /* width: 3fr */
  grid-row: 1 / 5; /* height: 200px */
}

#item2 {
  background-color: #66ffff;
  grid-column: 2 / 7; /* width: 5fr */
  grid-row: 2 / 11; /* height: 450px */
}

Check it out on codepen.

You can also use grid-auto-columns for infinite columns.


Set CSS Grid item width

9/6/2019

In CSS Grid, you specify the size and position of elements with grid-column and grid-row. Normally you specify the position with {start-position} / {end-position}.

But you can also specify the width with span:

#item {
  /* 3 columns wide, 2 rows high */
  grid-column: 1 / span 3;
  grid-row: 1 / span 2;
}

Check out this codepen example.


Using dynamic variables in psql

9/5/2019

When developing in PostgreSQL, you may run into the fringe situation of being unable to use PL/pgSQL and instead having to fall back to plain psql (as was my case when writing tests with pgTap, since PL/pgSQL insists on using PERFORM over SELECT for void-returning functions).

Writing these tests in plain SQL quickly results in bloated code, since you have to reuse certain IDs over and over again, even when using CTEs.
While psql supports variables in the form of \set {name} {value}, these cannot be set dynamically (i.e. from the result of a query).

However, it is possible to abuse runtime parameters (SET {name} TO {value}) for this purpose by making use of PL/pgSQL's EXECUTE, as shown in the following example:

DO $$
BEGIN
	EXECUTE format('SET %I TO %L', 'var.my_test_variable', (SELECT 1));
END $$;

Then, once you have returned to your plain SQL block, you may use SELECT current_setting('var.my_test_variable') to retrieve the value.

If used often, you could even move the EXECUTE block into its own function, receiving both name and value of the runtime parameter, and thus further removing unnecessary boilerplate code.
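Such a function might look like the following sketch. The name set_var and the parameter names are made up for illustration; it simply wraps the EXECUTE trick from above:

```sql
-- Hypothetical helper: store an arbitrary value in a runtime parameter
CREATE FUNCTION set_var(name text, value text) RETURNS void AS $$
BEGIN
	EXECUTE format('SET %I TO %L', name, value);
END $$ LANGUAGE plpgsql;

-- Then, from plain SQL:
SELECT set_var('var.my_test_variable', (SELECT 1)::text);
SELECT current_setting('var.my_test_variable');
```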


Find DHCP server with nmap

9/4/2019

TL;DR

Unknown DHCP Server in your network? sudo nmap --script broadcast-dhcp-discover

Story time

You enter the office like every morning, go upstairs, and suddenly three sales colleagues shout at you: the internet is down.

You sit down next to one of their PCs and start debugging.
You see the PC got an IP address - 192.168.88.54.
Wait a second - our router is configured for the network 192.168.0.0/24!
What's going on here?
You start your own PC - same thing.

First of all, you set static IP addresses for the correct network on all PCs - they can reach the outside world again, and your co-workers can continue working.

Next step: You need to find out where the .88.*-IPs come from.

Thankfully, there is a nice nmap script for that:

$ sudo nmap --script broadcast-dhcp-discover

Pre-scan script results:
| broadcast-dhcp-discover:
|   Response 1 of 1:
|     IP Offered: 192.168.88.133
|     DHCP Message Type: DHCPOFFER
|     Server Identifier: 192.168.0.208
|     IP Address Lease Time: 10m00s
|     Subnet Mask: 255.255.255.0
|     Router: 192.168.88.1

Service Info: Host: the_office; OSs: Linux, RouterOS; Device: router; CPE: cpe:/o:mikrotik:routeros, cpe:/o:linux:linux_kernel

"We don't have any MikroTik products in our office", you think.

You do an image search for MikroTik routers on your phone and wander around in the office to find a similar-looking small box.

Half an hour later, you find it under someone's desk, buried between ethernet cables. You remove it from the network and voilà, everyone gets IP addresses from the correct DHCP server again.

Later you find out that one of your co-workers wanted to add a switch to the network to expand the available ethernet ports. Due to a misunderstanding, they added a router instead.

Thanks to you (and this handy script), the internet is saved and you can finally start your working day.


Specify docker exec user

9/3/2019

With docker exec, you can execute commands inside of running Docker containers. The --user flag allows you to declare the user to use inside the container.

Example:

$ docker run -d postgres
cf4bea1aa03eafc0a4adf49cc1f38e98de66ab586cbf026d369de2d51f83fbc3
$ docker exec -it --user postgres cf4bea1a /bin/bash
postgres@cf4bea1aa03e:/$

Postgres Constraint Naming Convention

9/2/2019

Sometimes it's necessary to manually specify a constraint name, which should then ideally follow some sort of naming convention or pattern.

Postgres already has an implicit naming convention in place, which goes like this:

{tablename}_{columnname(s)}_{suffix}
  • pkey for primary key constraints

    • Single column

      create table foo (
        bar integer primary key
      );
                                  Table "public.foo"
       Column |  Type   | Collation | Nullable |             Default
      --------+---------+-----------+----------+---------------------------------
       bar    | integer |           | not null |
      Indexes:
          "foo_pkey" PRIMARY KEY, btree (bar)
    • Multiple columns

      create table foo (
        bar integer,
        baz integer,
        primary key (bar, baz)
      );
                      Table "public.foo"
       Column |  Type   | Collation | Nullable | Default
      --------+---------+-----------+----------+---------
       bar    | integer |           | not null |
       baz    | integer |           | not null |
      Indexes:
          "foo_pkey" PRIMARY KEY, btree (bar, baz)
  • key for unique constraints

    • Single column

      create table foo (
        bar integer unique
      );
                  Table "public.foo"
       Column |  Type   | Collation | Nullable | Default
      --------+---------+-----------+----------+---------
       bar    | integer |           |          |
      Indexes:
        "foo_bar_key" UNIQUE CONSTRAINT, btree (bar)
    • Multiple columns

      create table foo (
        bar integer,
        baz integer,
        unique (bar, baz)
      );
                      Table "public.foo"
       Column |  Type   | Collation | Nullable | Default
      --------+---------+-----------+----------+---------
       bar    | integer |           |          |
       baz    | integer |           |          |
      Indexes:
          "foo_bar_baz_key" UNIQUE CONSTRAINT, btree (bar, baz)
  • excl for exclusion constraints

    create table foo (
      bar text,
      baz text,
      exclude using gist (bar with =, baz with =)
    );
                 Table "public.foo"
     Column | Type | Collation | Nullable | Default
    --------+------+-----------+----------+---------
     bar    | text |           |          |
     baz    | text |           |          |
    Indexes:
        "foo_bar_baz_excl" EXCLUDE USING gist (bar WITH =, baz WITH =)
  • idx for indices

    Indices can not be created without manually specifying a name.

  • fkey for foreign key constraints

    • Single column

      create table foo (
        bar integer primary key
      );
      create table qux (
        bar integer references foo
      );
                      Table "public.qux"
       Column |  Type   | Collation | Nullable | Default
      --------+---------+-----------+----------+---------
       bar    | integer |           |          |
      Foreign-key constraints:
          "qux_bar_fkey" FOREIGN KEY (bar) REFERENCES foo(bar)
    • Multiple columns

      create table foo (
        bar integer,
        baz integer,
        primary key(bar, baz)
      );
      create table qux (
        bar integer,
        baz integer,
        foreign key(bar, baz) references foo (bar, baz)
      );
                      Table "public.qux"
       Column |  Type   | Collation | Nullable | Default
      --------+---------+-----------+----------+---------
       bar    | integer |           |          |
       baz    | integer |           |          |
      Foreign-key constraints:
          "qux_bar_fkey" FOREIGN KEY (bar, baz) REFERENCES foo(bar, baz)
  • check for check constraints

    • Single column

      create table foo (
        bar integer check (bar > 10)
      );
                      Table "public.foo"
       Column |  Type   | Collation | Nullable | Default
      --------+---------+-----------+----------+---------
       bar    | integer |           |          |
      Check constraints:
          "foo_bar_check" CHECK (id > 10)
    • Multiple columns

      create table foo (
        bar integer,
        baz integer,
        check (bar = baz)
      );
                      Table "public.foo"
       Column |  Type   | Collation | Nullable | Default
      --------+---------+-----------+----------+---------
       bar    | integer |           |          |
       baz    | integer |           |          |
      Check constraints:
          "foo_check" CHECK (bar = baz)
  • seq for sequences

    create table foo (
      id serial
    );
                                Table "public.foo"
     Column |  Type   | Collation | Nullable |             Default
    --------+---------+-----------+----------+---------------------------------
     id     | integer |           | not null | nextval('foo_id_seq'::regclass)

Disable pager for psql

9/1/2019

PostgreSQL's CLI psql offers a myriad of helpful features.

For example, psql detects whenever a large result-set is returned and uses a pager to display the content.

While this is great for viewing your data, it is really inconvenient for automating tasks, as the pager needs user input to be terminated.
So, how can we circumvent / deactivate the pager?

# original, shows the pager
psql -h localhost -U postgres -c "SELECT * FROM pg_class"

# just pipe the output to `cat`
psql -h localhost -U postgres -c "SELECT * FROM pg_class" | cat

# if you are not interested in the output, you can also write to /dev/null
psql -h localhost -U postgres -c "SELECT * FROM pg_class" > /dev/null

# alternatively, you can use the environment variable `PAGER` to choose which pager should be used
PAGER=cat psql -h localhost -U postgres -c "SELECT * FROM pg_class"

# best method: completely turn off the pager
psql -h localhost -U postgres -P pager=off -c "SELECT * FROM pg_class"

Additionally, if you want to disable the pager while in interactive mode, just type \pset pager off.


Typescript ReturnType

8/31/2019

Typescript includes several useful utility types to enhance the type declarations of your code-base.

The ReturnType utility type is one of my favorite ones, as it helps reduce type definition duplication.

Suppose you have the following function definition:

type IsInText = (
  text: string
) => (
  term: string,
  minCount: number,
  maxCount?: number,
  caseSensitive?: boolean
) => boolean

Now we want to write a function allTermsInText that takes the function returned by isInText as an argument. It should be used like this:

allTermsInText(["Typescript", "awesome"], isInText("Typescript is awesome!"))

Here is the definition without the utility type:

type AllTermsInText = (
  terms: string[],
  search: (
    term: string,
    minCount: number,
    maxCount?: number,
    caseSensitive?: boolean
  ) => boolean
) => boolean

And here is the same definition, but using ReturnType for the search parameter:

type AllTermsInText = (
  terms: string[],
  search: ReturnType<IsInText>
) => boolean

Pretty Printing JSON

8/30/2019

JSON is everywhere, but reading nested JSON without proper formatting can be a nightmare.

PostgreSQL

The function jsonb_pretty allows you to pretty print jsonb data.

\pset format unaligned
SELECT jsonb_pretty('{"name": "Lorenz", "team": {"name": "Team #1", "color": "blue"}}'::jsonb);

{
    "name": "Lorenz",
    "team": {
        "name": "Team #1",
        "color": "blue"
    }
}

Javascript

If you work with JSON data in Javascript, you surely know the function JSON.stringify. But did you know it can prettify your JSON as well?

JSON.stringify({"name": "Lorenz", "games_won": 4, "games_lost": 1}, null, 4)
                                      // number of spaces for indentation ^

{
    "name": "Lorenz",
    "games_won": 4,
    "games_lost": 1
}

Python

Python's json module adds functions to work with JSON.

>>> import json
>>> print(json.dumps({"players": [{"name": "Lorenz"}, {"name": "Philip"}]}, indent=4))

{
    "players": [
        {
            "name": "Lorenz"
        },
        {
            "name": "Philip"
        }
    ]
}

Command line using Python

You can also directly run the tool exposed by the json module from the command line:

$ echo '{"name": "Lorenz", "has_eyes": true}' | python3 -m json.tool

{
    "name": "Lorenz",
    "has_eyes": true
}

Delete already merged branches

8/28/2019

The code example below shows how to delete all branches which have already been merged into the current branch:

$ git branch
  feature-1
  feature-2
  feature-3
* master

$ git branch --merged
  feature-1
* master

$ git branch --merged | egrep -v "(^\*|master)"
  feature-1

$ git branch --merged | egrep -v "(^\*|master)" | xargs git branch -d
Deleted branch feature-1 (was 1d7fd54).

Check out this great Stack Overflow answer for more information.
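The whole flow can be reproduced end-to-end in a throwaway repository. The branch names and identities below are made up for illustration:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "init"
git branch -M master
git branch feature-1               # points at master's tip, i.e. already "merged"
git checkout -qb feature-2
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "wip"
git checkout -q master

git branch --merged | egrep -v "(^\*|master)" | xargs git branch -d
git branch                         # feature-1 is gone, feature-2 survives
```

Note that lowercase `git branch -d` refuses to delete unmerged branches, so even if the grep pattern slips, unmerged work stays safe.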