October 1, 2022
Fedora Upgrades
Following Fedora Docs’ DNF System Upgrade works reasonably well.
Here is how I resolve a few issues I run into every time I upgrade.
“The password you use to log in to your computer no longer matches that of your login keyring.”
With Automatic Login, the error message above appears, e.g. when
using GNOME Tweaks to auto-start Brave.
Work around it by using the Passwords and Keys (Seahorse) app to delete the “Brave Safe Storage” entry.
This will break Brave’s Saved Passwords and Sync; to fix that, go to brave://sync-internals to Disable Sync (Clear Data),
close Brave, and set up Sync again.
September 11, 2022
September 10, 2022
Here is one way to run a CLI process in a new window, make closing
that window kill that process, yet still wait for the user when said process
exits, e.g. so as not to lose a start-up error message:
gnome-terminal -- bash -c 'ls / ; read -p "Press Enter to close..."'
I used this as follows for qemu-system-x86_64 in this script:
export KERNEL
gnome-terminal -- bash -c '\
qemu-system-x86_64 \
(...)
-serial stdio -nographic -display none \
-kernel "$KERNEL" ; \
read -p "QEMU has exited - press any key to close this window..."'
Note the export: it is needed for those environment variables to be visible inside the shell that gnome-terminal starts, where they are expanded into the process’s arguments.
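A minimal way to see why the export is needed (plain bash -c stands in for gnome-terminal -- bash -c here, and /tmp/bzImage is just a placeholder path):

```shell
#!/usr/bin/env bash
unset KERNEL                # start clean
KERNEL=/tmp/bzImage         # shell-local assignment, NOT exported
bash -c 'echo "without export: [$KERNEL]"'   # prints: without export: []
export KERNEL               # now part of the child's environment
bash -c 'echo "with export: [$KERNEL]"'      # prints: with export: [/tmp/bzImage]
```

The single quotes matter: they stop the outer shell from expanding $KERNEL, so it is the inner bash that looks it up in its environment.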
September 9, 2022
Linux Kernel Random Number Entropy
The Linux Kernel uses “entropy” to generate random numbers:
$ cat /proc/sys/kernel/random/entropy_avail
256
$ cat /proc/sys/kernel/random/poolsize
256
$ cat /proc/sys/kernel/random/write_wakeup_threshold
256
This value of 256 seems to be a recent change.
Older blog posts state that 256 bits of available entropy is too low.
I doubt that on modern 2022 kernels, such as a 5.17 from Fedora 34 or a 5.18 in Fedora 36, this is still accurate: the value now remains at 256 permanently, even with keyboard, mouse, and disk events; even a restart does not budge it.
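A quick sketch to confirm this on a given machine. The paths are the standard procfs entries shown above; the last line relies on /dev/random no longer blocking once seeded, which is my understanding of the kernel random-subsystem rework:

```shell
#!/usr/bin/env bash
# Print the three counters; on a reworked kernel they all stay at 256.
for f in entropy_avail poolsize write_wakeup_threshold; do
  printf '%-24s %s\n' "$f:" "$(cat /proc/sys/kernel/random/$f)"
done
# Since the rework, /dev/random no longer blocks once seeded:
head -c 16 /dev/random | od -An -tx1
```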
September 9, 2022
Today while learning Linux I noticed that
my Arch Linux VMs started as fast as always, but it seemed to take longer and longer for sshd to be ready.
The following clarified what was happening:
$ systemd-analyze
Startup finished in 1.270s (kernel) + 42.586s (userspace) = 43.857s
graphical.target reached after 42.585s in userspace.
$ systemd-analyze blame
41.769s pacman-init.service
29.292s reflector-init.service
1.198s systemd-networkd-wait-online.service
601ms dev-vda2.device
393ms ldconfig.service
360ms sshdgenkeys.service
234ms systemd-networkd.service
168ms systemd-tmpfiles-setup.service
155ms systemd-timesyncd.service
145ms systemd-resolved.service
102ms systemd-udev-trigger.service
94ms systemd-logind.service
93ms systemd-udevd.service
82ms systemd-machine-id-commit.service
79ms systemd-journal-catalog-update.service
69ms user@1000.service
54ms systemd-tmpfiles-setup-dev.service
43ms systemd-journald.service
42ms systemd-journal-flush.service
37ms systemd-tmpfiles-clean.service
37ms sys-kernel-tracing.mount
36ms kmod-static-nodes.service
36ms dev-mqueue.mount
36ms modprobe@configfs.service
36ms sys-kernel-debug.mount
36ms modprobe@drm.service
35ms dbus.service
35ms modprobe@fuse.service
35ms dev-hugepages.mount
$ systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
graphical.target @42.585s
└─multi-user.target @42.585s
└─sshd.service @42.585s
└─pacman-init.service @814ms +41.769s
└─basic.target @811ms
└─sockets.target @811ms
└─dbus.socket @810ms
└─sysinit.target @805ms
└─systemd-update-done.service @798ms +5ms
└─ldconfig.service @404ms +393ms
└─local-fs.target @402ms
└─run-user-1000.mount @20.308s
└─local-fs-pre.target @402ms
└─systemd-tmpfiles-setup-dev.service @346ms +54ms
└─systemd-sysusers.service @314ms +30ms
└─systemd-firstboot.service @293ms +20ms
└─systemd-remount-fs.service @258ms +29ms
└─systemd-journald.socket @242ms
└─-.mount @237ms
└─-.slice @237ms
Huh, so critical-chain shows that sshd was blocked by pacman-init; the ordering comes from the explicit Before=sshd.service in pacman-init’s unit file.
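The unit file can be inspected with systemctl cat pacman-init.service; a reconstructed sketch is below. Only the Before= line is the point here, the rest (description, the pacman-key commands) is assumed boilerplate from memory, not verified:

```ini
# pacman-init.service (sketch; check the real file with
# `systemctl cat pacman-init.service`)
[Unit]
Description=Initializes Pacman keyring
Before=sshd.service

[Service]
Type=oneshot
ExecStart=/usr/bin/pacman-key --init
ExecStart=/usr/bin/pacman-key --populate archlinux
RemainAfterExit=yes
```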
August 16, 2022
March 13, 2022
DAppNode
https://dappnode.io
ToDo
- ipfs.dappnode with HTTPS? See https://github.com/dappnode/DAppNode/issues/406, and below.
- Fix http://alice.eth etc., as it is still NOK now that IPFS works, see https://github.com/dappnode/DAppNode/issues/492
- Install http://dappnode.local/#/installer/prysm.dnp.dappnode.eth
- More packages to install from http://dappnode.local/#/installer?
- http://dappnode.local/#/community => https://sourcecred.dappnode.io/#/explorer PAN?
- DAppNode DApp list
- Configure /etc/wireguard/wg0.conf to “route” / “lookup” (?) ONLY .eth and .dappnode domain names through that VPN? Test by shutting down DAppNode.
- (Re-)install and configure http://dappnode.local/#/packages/rotki.dnp.dappnode.eth/info
- git server (local at first, then on IPFS); e.g. on https://github.com/linuxserver?
- Backups, for git server and other, on IPFS
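The wg0.conf routing idea in the ToDo above could look something like this with wg-quick and systemd-resolved; everything here is an assumption to be checked against the actual DAppNode config, including the interface name and the 172.33.1.2 resolver address:

```ini
# /etc/wireguard/wg0.conf - a sketch, all values assumed, not verified
[Interface]
# ...existing PrivateKey / Address / Peer lines...
# Send ONLY *.eth and *.dappnode DNS queries to the VPN's resolver;
# "~" marks routing-only domains in systemd-resolved:
PostUp = resolvectl dns %i 172.33.1.2; resolvectl domain %i '~eth' '~dappnode'
PreDown = resolvectl revert %i
```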
Use
- IPFS
- Ethereum RPC API
- *.eth domain names:
  - alice.eth or freedomain.eth (or e.g. radek.freedomain.eth from this tutorial) are some examples
- These normally do not resolve over traditional root DNS servers
- Brave has built-in ENS resolution by using Infura, instead of decentralized DAppNode
- When connected to DAppNode’s WireGuard VPN, that will resolve both .eth and .dappnode domain names
- nonexistantx.eth should show an error message from DAppNode’s /usr/src/app/webpack:/@dappnode/dappmanager/src/ethForward/resolveDomain.ts: “Decentralized website not found. Make sure the ENS domain exists.”
- ipfs resolve -r /ipfs/: invalid path "/ipfs/": not enough path components means TODO, IDK, fix IPFS first? See https://github.com/dappnode/DAppNode/issues/492
- brave://settings/ipfs
- brave://ipfs-internals/
- If using Brave Local IPFS Node:
- http://127.0.0.1:45005/webui
- brave://settings/ipfs/peers
Manage & Maintenance
Set up
As per the official documentation, and then: