HTTPS/TLS/SSL certificate debugging

Inspect a remote server's TLS certificate with openssl: validity dates, common name (CN), and subject alternative names (DNS entries).


        
            # Certificate valid dates
            openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -dates
            # Certificate CN and DNS info
            openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -text | grep DNS:
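            # Additionally, check whether the certificate expires within the next 30 days (2592000 s); openssl exits non-zero if it does
            openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -checkend 2592000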
        
    

apt-get: missing public key for a Debian repository

W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05


        
            sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B53DC80D13EDEF05
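            # Note: apt-key is deprecated on newer Debian/Ubuntu releases. An alternative
            # (assumption: keyring path and the repository's published key URL) is to store
            # the key in a dedicated keyring referenced via signed-by in your sources list:
            # curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg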
        
    

A swapfile on the btrfs file system to resolve Fedora's OOM SIGKILLs

By default on Fedora, systemd-oomd kills applications when all memory, including swap, is exhausted. This behavior can be mitigated by adding additional swap space.


        
            sudo btrfs subvolume create /.swap
            sudo chattr +C /.swap # disable copy-on-write for files created in this directory; swap files on btrfs must not be copy-on-write

            sudo dd if=/dev/zero of=/.swap/space bs=1M count=8192 status=progress # create an 8GB swap space
            sudo chmod 0600 /.swap/space # restrict access to the swap space file
            sudo mkswap -U clear /.swap/space # initialize swap space
            sudo nano /etc/fstab
                # add the line below to your /etc/fstab
                /.swap/space none swap defaults 0 0

            sudo swapon -a # enable all swap spaces
            swapon # show all enabled swap spaces
            free -ght # show the available swap space
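            # optionally, inspect systemd-oomd's current memory pressure limits and state
            oomctl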
        
    

Setting username/password credentials for Ubuntu cloud images

By default no credentials are set for cloud images like bionic-server-cloudimg-amd64.img or focal-server-cloudimg-amd64.img.


        
            virt-customize -a focal-server-cloudimg-amd64.img --root-password password:this_is-your-password
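            # alternatively, inject an SSH public key instead of setting a password
            # (assumption: a key exists at ~/.ssh/id_rsa.pub)
            # virt-customize -a focal-server-cloudimg-amd64.img --ssh-inject root:file:$HOME/.ssh/id_rsa.pub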

            qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw

            ./cloud-hypervisor/target/release/cloud-hypervisor \
                --kernel ./hypervisor-fw \
                --disk path=focal-server-cloudimg-amd64.raw \
                --cpus boot=4 \
                --memory size=1024M \
                --net "tap=,mac=,ip=,mask=" \
                --rng

            # now log-in as root user using "this_is-your-password" as a password
        
    

Installing wasm-pack Rust package issue

"error: failed to run custom build command for `openssl-sys v0.9.70`"


        
            dnf install perl-FindBin.noarch
            # Also install a few additional Perl modules as suggested by the error message description
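            # Assumption: the perl-core meta package plus the OpenSSL headers cover the
            # remaining modules and libraries the openssl-sys build script asks for
            dnf install perl-core openssl-devel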
        
    

Remove Fedora's "Managed by your organization" in Chrome/Chromium/Brave

"Managed by your organization" in all Chromium based browsers implies all sorts of things that many people would like to evade.


        
            sudo dnf remove fedora-chromium-config
        
    

List/show all available XX options for Java.

To show, among other flags, all available garbage collector options in Java:


        
            java -XX:+PrintFlagsFinal -version
        
    

Java, JDK, GC, JDK15, JDK16, Help, UnlockExperimentalVMOptions, UseShenandoahGC, UseZGC

reason": "index [site-page] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)

You get something like this

{ "error": { "root_cause": [ { "type": "cluster_block_exception", "reason": "index [site-page] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];" } ], "type": "cluster_block_exception", "reason": "index [site-page] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];" }, "status": 429 }

When your storage approaches its capacity limit (around 90-95% disk usage, the flood-stage watermark), Elasticsearch blocks write operations to the index to avoid running out of disk entirely. The solution is to get some extra disk space from your local DevOp of choice; recent Elasticsearch versions lift the block automatically about a minute later, while older versions require removing it manually, as shown below.
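Once disk space is available again, the read-only block can be removed manually (assumption: the cluster is reachable on localhost:9200):

            curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'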


        
        
    

Elasticsearch, Elastic

Podman's default volume mapping is not ready for prime time yet

You get something like this:

ls: cannot open directory '.': Permission denied

{"msg":"exec container process `/srv/./smoke-test.main.kts`: Permission denied","level":"error","time":"2020-04-30T02:52:11.000663814Z"}

...when you run something like this:

podman run -it -v $(pwd):/srv loxal/kotlin-toolbox ./smoke-test.main.kts


        
            podman run -it -v $(pwd):/srv:z loxal/kotlin-toolbox ./smoke-test.main.kts
            # to get the difference between z vs Z, consult the manual below
            # https://github.com/containers/libpod/blob/master/docs/source/markdown/podman-run.1.md
        
    

Docker, Ubuntu, Fedora, DevOps

Podman is the new Docker

Having problems with Fedora 32 and Docker? Well, Docker has problems with the Linux kernel's cgroups v2, which Fedora enables by default. You see messages like docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?. Just use podman, which is almost a drop-in replacement for the docker command.


        
            docker_tag=latest
            podman build --tag loxal/kotlin-toolbox:$docker_tag .
            echo $DOCKER_PASSWORD | podman login --username loxal --password-stdin docker.io
            podman push loxal/kotlin-toolbox:$docker_tag
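            # assumption: the podman-docker package installs a docker-compatible shim,
            # so existing scripts that call `docker` keep working
            sudo dnf install podman-docker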
        
    

Docker, Ubuntu, Kubernetes, DevOps

Update a K8s pod's image at runtime


        
            # K8s-tested stateful node: https://hub.docker.com/r/loxal/stateful-node
            $ kubectl set image statefulset/stateful-node stateful-node=loxal/stateful-node:1.1.10-java14
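            # watch the rolling update until all replicas run the new image
            $ kubectl rollout status statefulset/stateful-node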
        
    

Kubernetes, DevOps

Stop a K8s pod inside a StatefulSet / Deployment via rescaling

There is no concept of "stopping" a pod in Kubernetes, so you need to rescale its replicas instead.


        
            # K8s-tested stateful node: https://hub.docker.com/r/loxal/stateful-node
            $ kubectl scale statefulset/stateful-node --replicas 0 # stop a pod
            $ kubectl scale statefulset/stateful-node --replicas 1 # start a pod
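            $ kubectl get pods -w # watch the pod terminate and come back up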
        
    

K8s, DevOps

Resize / grow / increase an XFS file system to occupy the entire partition


        
            # Grow / resize your XFS file system to occupy the entire (expanded/grown) partition
            $ xfs_growfs /mnt/HC_Volume_0815 # reference the mountpoint and not its device /dev/sdb
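            $ df -h /mnt/HC_Volume_0815 # verify the file system now spans the whole partition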
        
    

Administration, XFS, DevOps

K8s cluster down because of "disk pressure" / low disk space


        
            # When you discover
            $ kubectl get nodes -o json | jq .items[].spec.taints
            [
                {
                    "effect": "NoSchedule",
                    "key": "node.kubernetes.io/disk-pressure",
                    "timeAdded": "2020-04-12T09:44:09Z"
                }
            ]

            # The following won't help here, as it only works when the problem is a temporary one
            $ kubectl taint nodes --all node.kubernetes.io/disk-pressure-

            # You need to free up or increase your disk space
            $ xfs_growfs /mnt/HC_Volume_0815 # reference the mountpoint and not its device /dev/sdb

            # For some surprising reason, removing the disk-pressure taint alone does not stick, so restart the kubelet
            $ systemctl restart kubelet.service
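            # verify that the DiskPressure condition is gone
            $ kubectl describe nodes | grep -i diskpressure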
        
    

Kubernetes, DevOps

Viewing log tail output from a systemd service


        
            sudo journalctl -u ssh -f
            sudo journalctl -u ssh -f -b # log messages from the current boot only
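            sudo journalctl -u ssh -n 100 --no-pager # print the last 100 log lines and exit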
        
    

systemctl, Administration, CLI

Return 404 for a specific page in nginx


        
            location = /swagger-ui.html {
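                # "location =" is an exact match, so only /swagger-ui.html itself returns 404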
                return 404;
            }
        
    

nginx, Configuration

Handle nginx errors via client code through query parameter propagation

...unfortunately you need to provide not only the pathname but the entire URL.


        
            proxy_intercept_errors on;
            error_page 404 500 502 503 /oops.html?status=$status&upstream_status=$upstream_status&server_time=$time_iso8601;
        
    

nginx, Errors, Client

Passing a timestamp in nginx response headers & query parameters

The ISO 8601 time format is both machine- and human-friendly. Do not use millisecond notation for performance reasons; it won't give you any performance advantage whatsoever.


        
            $time_iso8601
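            # e.g. expose it to clients as a response header (hypothetical header name)
            add_header X-Server-Time $time_iso8601 always;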
        
    

nginx, Configuration, Proxy

How to switch the Dockerfile shell to Bash and support the source builtin

docker build runs its RUN commands with /bin/sh, not /bin/bash, which has implications: specifically, the source builtin is not available, which you sometimes need to refresh your env.


        
            # inside your Dockerfile
            SHELL ["/bin/bash", "-c"]
        
    

Docker, Bash, Shell

A Docker container with Java 14, Kotlin 1.3.71, and common utilities


        
            $ docker pull loxal/kotlin-toolbox:1.3
            $ docker run -it loxal/kotlin-toolbox:1.3
            # https://hub.docker.com/r/loxal/kotlin-toolbox
            # common utilities include: ping, nano, curl, zip
        
    

Docker, Kotlin, Java

How to run/schedule "Pending" pods on a master node (which is prevented by default)?


        
            $ kubectl get nodes -o json | jq .items[].spec.taints
            [
              {
                "effect": "NoSchedule",
                "key": "node-role.kubernetes.io/master"
              }
            ]

            $ kubectl taint nodes --all node-role.kubernetes.io/master-
            node/your-master-node-that-runs-service-applications untainted
        
        
    

Node, CLI, K8s

Kotlin Script for shell scripting on steroids


        
        #!/usr/bin/env kotlin

        // Install
        //   sudo snap install kotlin
        // Run
        //  chmod 755 script.main.kts; ./script.main.kts

        @file:Repository("https://dl.bintray.com/jakubriegel/kotlin-shell")
        @file:DependsOn("eu.jrie.jetbrains:kotlin-shell-core:0.2.1")
        @file:DependsOn("org.slf4j:slf4j-simple:1.7.30")
        @file:DependsOn("com.squareup.okhttp3:okhttp:4.4.1")
        @file:CompilerOptions("-Xopt-in=kotlin.RequiresOptIn")

        import eu.jrie.jetbrains.kotlinshell.shell.shell
        import okhttp3.OkHttpClient
        import okhttp3.Request

        @OptIn(kotlinx.coroutines.ExperimentalCoroutinesApi::class)

        println("args.size: ${args.size}")
        if (args.isNotEmpty()) {
        println("args: ${args.joinToString()}")
        }

        val httpClient = OkHttpClient.Builder()
        .followRedirects(false)
        .followSslRedirects(false)
        .build()

        val request = Request.Builder()
        .url("https://example.com")
        .build()
        httpClient.newCall(request).execute().use { response ->
        println(response.body)
        println(response.code)
        }

        shell {
        val toUpper = stringLambda { it.toUpperCase() to "" }
        pipeline { "curl https://example.com".process() pipe "grep example".process() pipe toUpper }
        }


        println(Runtime.getRuntime().maxMemory() / 1024) // maximum heap size in KiB
        println(Runtime.getRuntime().totalMemory() / 1024) // currently allocated heap in KiB
        println(Runtime.getRuntime().availableProcessors()) // available CPU cores
        println(Runtime.getRuntime().freeMemory()) // free heap memory in bytes

        Runtime.getRuntime().exit(-1) // terminate the script with a non-zero exit code
        
        
    

Java, Kotlin, CLI