To install voidlinux on a Pi we'll have to do a chroot install. For official documentation on installing from chroot for void see here. We need to install via chroot because the live images are made specifically for 2GB SD cards:
"These images are prepared for 2GB SD cards. Alternatively, use the ROOTFS tarballs if you want to customize the partitions and filesystems."
The installation splits into four rough steps: setting up aarch64 emulation, partitioning the SD card, creating the filesystems and extracting the rootfs, and configuring the system.
Because we're going to be creating an aarch64 system, you'll need some tool that will allow you to run aarch64 binaries from an x86 system. To accomplish this we'll need the binfmt-support and qemu-user-static packages. To install them you can run
# NOTE: You have to install qemu-user-static _second_.
# If you don't, you won't get the files you need in /var/lib/binfmts/
# If you did it in the wrong order you can try
# running xbps-reconfigure -f qemu-user-static
sudo xbps-install binfmt-support qemu-user-static
We'll also need to enable the binfmt-support service. To do this, run
sudo ln -s /etc/sv/binfmt-support /var/service/
Now you're one step away from being able to run aarch64 binaries in the chroot on your x86 system, but we'll get to that later.
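If you want to sanity check this, the service should be running and an aarch64 handler should be registered. The handler name below is an assumption (qemu-aarch64 is what qemu-user-static typically registers):
# The service should report "run"
sudo sv status binfmt-support
# The handler should exist and say "enabled"
cat /proc/sys/fs/binfmt_misc/qemu-aarch64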
This part is tricky because it depends a little on what you want to do. In my case I didn't allocate any swap space and kept the home directory on the root partition, which keeps things pretty simple.
In this case we're going to need two partitions: one 64MiB partition that is marked with the bootable flag and has the vfat type (0b in fdisk), and another that takes up the rest of the SD card with the linux type (83 in fdisk).
To create these partitions with fdisk, run sudo fdisk /dev/sda, where /dev/sda is the path to your disk. The path to your disk can be found by running lsblk before and after plugging in the disk and seeing what shows up. Once fdisk drops you into its REPL you can delete the existing partitions with the d command.
Make a new partition with the n command, make it a primary partition with p, make it partition 1, and leave the first sector blank, which will keep it as the default. For the last sector put +64M, which will give us a 64MiB partition (if you're asked to remove a signature it doesn't matter, because we'll be overwriting that anyway). Use the a command to mark partition 1 bootable, and lastly use the t command to make partition 1 type 0b, which is vfat.
Now the root partition: use n to make a new partition, then leave everything else default. This will consume the rest of the disk. Same as before, if it asks you to remove a signature it doesn't matter, because we'll be overwriting it. To set the type label use the t command and set it to type 83, which is the linux type.
That's all we need to do to set up the partitions. Make sure to save your changes with the w command!
The disk should be correctly partitioned now!
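For reference, here's a sketch of the whole fdisk session described above (prompts vary a bit between fdisk versions, so treat this as a guide rather than something to paste in):
# d                          delete existing partitions (repeat until none are left)
# n, p, 1, <Enter>, +64M     new 64MiB primary boot partition
# a, 1                       mark partition 1 bootable
# t, 1, b                    set partition 1's type to vfat (0b)
# n, p, 2, <Enter>, <Enter>  new partition 2 filling the rest of the disk
# t, 2, 83                   set partition 2's type to linux (83)
# w                          write the changes and exit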
This part is easy. Assuming the device is located at /dev/sda, partition 1 is the boot partition, and partition 2 is the root partition, just run these two commands.
mkfs.fat /dev/sda1 # Create the vfat filesystem on the boot partition
mkfs.ext4 -O '^has_journal' /dev/sda2 # Create the ext4 filesystem on the root partition (^has_journal disables journaling)
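To double check, listing the disk again should now show both filesystems:
lsblk -f /dev/sda # The FSTYPE column should show vfat and ext4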
For this step we'll need both partitions we set up earlier to be mounted. To mount the partitions run
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt
mount /dev/sda2 $MOUNT_PATH # Mount the root partition to the mount point
mkdir -p $MOUNT_PATH/boot # Create a directory named "boot" in the root partition
mount /dev/sda1 $MOUNT_PATH/boot # Mount the boot partition to that boot directory
Now we just need to extract the rootfs into our mount point.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
ROOTFS_TARBALL='/home/me/Downloads/void-rpi3-PLATFORMFS-20210930.tar.xz' # Replace with the path to the tarball you download from https://voidlinux.org/download/
# x - Tells tar to extract
# J - Tells tar to decompress using xz, which is how the rootfs happens to be compressed
# p - Tells tar to preserve the permissions stored in the archive
# f - Tells tar to operate on the file path given as the next argument
# -C - Tells tar where to extract the contents to
tar xJpf $ROOTFS_TARBALL -C $MOUNT_PATH
That's it for this step! You might notice that we didn't explicitly copy anything into the $MOUNT_PATH/boot directory. The rootfs provided by void contains a /boot directory which will get placed into the $MOUNT_PATH/boot directory when we extract the tarball.
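To verify, the boot partition should now contain the Pi firmware and kernel files (I'd expect to see things like bootcode.bin, config.txt, and a kernel image):
ls $MOUNT_PATH/boot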
This step is technically optional. If we just wanted to get a system up and running, we could plug the SD card in right now and it would boot. We wouldn't have any packages (including base-system, which gives us dhcpcd, wpa_supplicant, and other important packages), but it would boot. Additionally, the RaspberryPi (at least mine) doesn't have a hardware clock, so without an ntp package we won't be able to validate certs (because the time will be off), which prevents us from installing packages.
Some of the things we want to configure are most easily done through a chroot. The problem is that the binaries in the rootfs we copied over are aarch64 binaries.
Running aarch64 binaries in the chroot
Because your x86 system cannot run aarch64 binaries, we need to emulate the aarch64 architecture inside the chroot. To accomplish this we copy an x86 binary that can do that emulation for us into the chroot, and then pass all aarch64 binaries through it when we go to run them.
If you've installed the qemu-user-static package you should have a set of qemu-*-static binaries in /bin/. For a RaspberryPi 3, we want qemu-aarch64-static. Copy it into the chroot.
cp /bin/qemu-aarch64-static $MOUNT_PATH/bin/
Now you're ready to run aarch64 binaries in your chroot.
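As a quick sanity check, asking the chroot for its architecture should report aarch64 (this runs the rootfs's aarch64 uname binary through qemu):
chroot $MOUNT_PATH uname -m # Should print: aarch64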
To create a usable system there are a few things we need to set up that are somewhere between recommended and mandatory: the base-system package, ssh access, ntp, dhcpcd, and a non-root user.
Because running commands in the chroot is slightly slower due to the aarch64 emulation, we'll try to set up as much of the rootfs as possible without actually chrooting.
First we should update all the packages that were provided in the rootfs.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
# Run a sync and update with the main machine's xbps pointing at our rootfs
env XBPS_ARCH=aarch64 xbps-install -Su -r $MOUNT_PATH
Just install the base-system package from your machine with the -r flag pointing at the $MOUNT_PATH.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
# Install base-system
env XBPS_ARCH=aarch64 xbps-install -r $MOUNT_PATH base-system
We just need to activate the sshd service in the rootfs.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
ln -s /etc/sv/sshd $MOUNT_PATH/etc/runit/runsvdir/default/
There are two things here that look odd: 1. we're symlinking to our main machine's /etc/sv/sshd directory, and 2. we're placing the symlink in /etc/runit/runsvdir/default/ instead of /var/service like is typical for activating void services.
The first is fine because once we're chroot'ed in, or when the system is running on the Pi, /etc/sv/sshd will point to the Pi's sshd service. As for the second, /var/service doesn't exist until the system is running, and when the system is up /var/service will be a series of symlinks pointing to /etc/runit/runsvdir/default/, so we can just link the sshd service directly to /etc/runit/runsvdir/default/.
For security reasons I recommend disabling password authentication.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/g' $MOUNT_PATH/etc/ssh/sshd_config
sed -i 's/#KbdInteractiveAuthentication yes/KbdInteractiveAuthentication no/g' $MOUNT_PATH/etc/ssh/sshd_config
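You can double check that both options took effect with a quick grep:
grep -E 'PasswordAuthentication|KbdInteractiveAuthentication' $MOUNT_PATH/etc/ssh/sshd_config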
We need an ntp package because the RaspberryPi doesn't have a hardware clock, so when we boot it up the time will be January 1, 1970. That causes certificate validation failures, which prevent us from installing packages, among other things.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
env XBPS_ARCH=aarch64 xbps-install -r $MOUNT_PATH openntpd
ln -s /etc/sv/openntpd $MOUNT_PATH/etc/runit/runsvdir/default/
Same as before, we just install the package with our local xbps package manager pointing at the chroot, and then set up the service to run by linking it to the end of the symlink chain.
The base-system package should have covered the install of dhcpcd, so all we have to do is activate the service. Like before, we'll symlink directly to the end of the symlink chain.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
ln -s /etc/sv/dhcpcd $MOUNT_PATH/etc/runit/runsvdir/default/
This probably depends on your use-case, but having everything running as root is usually bad news, so setting up a non-root user we can ssh in as is probably a smart idea.
This is the first part of the configuration that is truly best done inside the chroot, so make sure you have the filesystem mounted and have copied the qemu-aarch64-static binary into the chroot.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
# After executing this command all subsequent commands will act like
# you're running on the Pi instead of your main machine
chroot $MOUNT_PATH
USERNAME='me' # Replace with your desired username
groupadd -g 1000 $USERNAME # Create our user's group
# Add our user, create their home directory, and add them to the wheel group and our personal group
# Depending on your needs you could additionally add yourself to
# other default groups like: floppy, dialout, audio, video, cdrom, optical
useradd -m -g $USERNAME -G wheel $USERNAME
# Set our password interactively
passwd $USERNAME
sed -i 's/# %wheel ALL=(ALL) ALL/%wheel ALL=(ALL) ALL/g' /etc/sudoers # Allow users in the wheel group sudo access (no $MOUNT_PATH prefix because we're inside the chroot)
At this point the root account's password is still "voidlinux". We wouldn't want our system running with the default root password, so to remove it run
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
chroot $MOUNT_PATH # Run this if you're not in the chroot
passwd --delete root
If you set up ssh access and disabled password authentication, you'll want to add your ssh key to the rootfs.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
USERNAME='me' # Replace with your desired username
mkdir $MOUNT_PATH/home/$USERNAME/.ssh
cat /home/$USERNAME/.ssh/id_rsa.pub > $MOUNT_PATH/home/$USERNAME/.ssh/authorized_keys
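sshd can be picky about ownership and permissions on authorized_keys, so it doesn't hurt to hand these files over to the new user (a sketch, assuming the username in the chroot matches $USERNAME):
chroot $MOUNT_PATH chown -R $USERNAME:$USERNAME /home/$USERNAME/.ssh
chmod 700 $MOUNT_PATH/home/$USERNAME/.ssh
chmod 600 $MOUNT_PATH/home/$USERNAME/.ssh/authorized_keys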
According to the void docs we should remove the base-voidstrap package and reconfigure all packages in the chroot to ensure everything is set up correctly.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
chroot $MOUNT_PATH
xbps-remove -y base-voidstrap
xbps-reconfigure -fa
Now that we're done in the chroot we can delete the qemu-aarch64-static binary that we put in there.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
rm $MOUNT_PATH/bin/qemu-aarch64-static
Make sure to unmount the disk before removing it from your machine because we wrote a lot of data and that data might not be synced until we unmount it.
MOUNT_PATH='/mnt/sdcard' # Replace with any path to an empty directory. By convention it would be in /mnt (same mount path as above)
umount $MOUNT_PATH/boot
umount $MOUNT_PATH
Lastly, with some care, a lot of these steps can be combined. To see what that might look like, check out this repo.
Now you should be able to put the SD card into the Pi, boot it up, and have ssh access!
In my case I wanted to have tiles that could be destroyed after multiple hits.
There are three ways I considered doing this:
1. Don't use a TileMap: just use Nodes with Sprite2Ds attached and have some logic that makes sure they are placed on a grid, as if they were rendered with a TileMap
2. Use the TileMap class and maintain a Dictionary of Vector2 -> <Custom class>
3. Use the TileMap class and maintain a Dictionary of Vector2 -> <Node with a script attached>
Options 2 and 3 are very similar; one might be better than the other depending on the use case.
extends TileMap

export(PackedScene) var iron_ore

# This holds references to the nodes so we
# can access them with TileMap coordinates
var cell_data : Dictionary = {}

# Called when the node enters the scene tree for the first time.
func _ready():
    # Create 10 ores in random locations on the tilemap
    for x in range(10):
        var node = spawn_ore()
        var cell = world_to_map(node.position)
        set_cellv(cell, node.id)
        cell_data[cell] = node

func spawn_ore():
    # This iron_ore Node has no sprites attached to it,
    # it's just a Node that holds a script which contains
    # helper functions
    var node = iron_ore.instance()
    var width = 16
    var height = 16
    var x = randi() % 30
    var y = randi() % 30
    add_child(node)
    node.position = Vector2(x * 16 + width / 2, y * 16 + height / 2)
    return node

# This function deals with the player hitting a tile:
# when a player presses the button to swing their pickaxe
# they call this function with the tilemap coords they're aiming at
func hit_cell(x, y):
    var key = Vector2(x, y)
    # Check if that cell is tracked by us
    if cell_data.has(key):
        # Note: cell_data[key] is a Node
        cell_data[key].health -= 1
        # If the ore is out of health we destroy it
        # and clean it up from our cell_data map
        if cell_data[key].health == 0:
            # Set the tile's sprite to empty
            set_cell(x, y, -1)
            # Destroy the Node
            var drops = cell_data[key].destroy()
            # Get drops from the ore
            for drop in drops:
                add_child(drop)
            # Clean up the cell_data map
            cell_data.erase(key)
        return true
    return false
This is the script attached to the Nodes we reference in the TileMap:
extends Node2D

# The chunk that's dropped after mining this ore
export(PackedScene) var iron_chunk

const id: int = 0
var health: int = 2

func destroy():
    var node = iron_chunk.instance()
    node.position = position
    queue_free()
    return [node]
When the player mines the ore you can see that the nodes in the remote scene view (on the very left) are replaced with an iron chunk.
This is the iron chunk generated from destroy() in iron_ore.gd.
After the player picks up the iron chunk it's gone for good.
So why is this better than ditching the TileMap and using Node2Ds directly?
- The placement logic is handled by the TileMap, which means that our ore can't be placed somewhere it shouldn't be.
- TileMaps tend to be slightly more optimized for rendering. I don't know about Godot specifically, but this probably has some minor performance benefits. Although, this is probably irrelevant for my case.
And why use Nodes instead of a custom class? Because the tiles are backed by actual Node instances, we keep all the flexibility of Nodes. Here's what a class might look like:
class IronOre:
    const id: int = 0
    var health: int = 2
    var iron_chunk: PackedScene

    func destroy():
        var node = iron_chunk.instance()
        node.position = position
        queue_free()
        return [node]

    func _init(chunk):
        iron_chunk = chunk
        # We could remove the need to pass in chunk
        # if we loaded the chunk scene with a hardcoded string:
        # load("res://iron_ore.tscn")
Notice that it's basically the same as iron_ore.gd.
We'd use IronOre.new(iron_chunk) instead of iron_ore.instance() to create it, but that's not necessarily a problem.
Where this does run into issues is with getting the iron_chunk reference.
When using the class we need to load the PackedScene somehow, and this could be done by hardcoding it in, i.e. load("res://iron_ore.tscn"); this would remove the need for the _init(chunk) constructor.
Or we could export a variable in our TileMap which is then passed through when we instantiate the IronOre class, like this:
extends TileMap

# Notice this is iron_chunk (the thing that iron_ore drops), _not_ iron_ore (the thing that a player mines)
export(PackedScene) var iron_chunk

...

func spawn_ore():
    # Pass the iron_chunk PackedScene through
    var node = IronOre.new(iron_chunk)
    var width = 16
    var height = 16
    var x = randi() % 30
    var y = randi() % 30
    ...
This works, but if we need to pass in more PackedScenes to IronOre we'll have to export those through the TileMap.
And if we introduce more types of ore, we'll have to export even more variables through the TileMap.
The worst part of this is that these scenes don't have anything to do with the TileMap.
On the other hand, by having Nodes be the backend we can use the editor to drag-and-drop the correct chunk for each ore scene.
We still have to export a variable in the TileMap for each ore type, but that's it!
There are some trade-offs we make by using this method.
- The Node isn't cleaned up for us, so we have to remember to queue_free it if we remove it.
- The Node has a position, and we have a position which acts as a key for the dictionary. The Node position should never be used, so it doesn't have to be kept in sync, but you need to make sure you never use it.
While writing this I thought it might be possible to get the best of both worlds by using Resources instead of Nodes to hold the state.
I think this might give us the ability to:
1. ...
2. Drag-and-drop the scenes in the editor (like the Node method can do)
3. Avoid extra Nodes in the node tree, which could reduce clutter (like the class solution can do).
I'm not totally sure if 3 is possible, but this seems worth investigating!
These are some helpful tips I found when trying to set up an nfs for persistent volumes on my k8s cluster. Setting up the actual persistent volumes and claims will come later.
Some of the specifics of these tips (package names, directories, etc.) are going to be specific to voidlinux, which is the flavor of linux I'm running my nfs on. There is almost certainly an equivalent in your system, but the name may be different.
- You need the statd, nfs-server, and rpcbind services enabled on the server.
- /etc/exports configures what directories are exported.
- Run exportfs -r to make changes to /etc/exports real.
Actually setting up the nfs is pretty easy.
Just install the nfs-utils package and enable the nfs-server, statd, and rpcbind services.
That's it.
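On void that looks something like this (assuming the service directories are named after the services, which is how the nfs-utils package sets them up):
sudo xbps-install nfs-utils
# Enable the three services the usual runit way
sudo ln -s /etc/sv/nfs-server /var/service/
sudo ln -s /etc/sv/statd /var/service/
sudo ln -s /etc/sv/rpcbind /var/service/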
Now that you have an nfs server you need to configure which directories are available for a client to mount.
This is done through the /etc/exports file.
I found this site to be quite useful in explaining what some of the options in /etc/exports are and what they mean.
Specifically, debugging step 3 (setting the options to (ro,no_root_squash,sync)) was what finally got it working for me when I was receiving mount.nfs: access denied by server while mounting 192.168.0.253:/home/jeff/test.
My /etc/exports file is just one line:
/watermelon-pool 192.168.0.0/24(rw)
- /watermelon-pool is the path to my zfs pool, which is where I store this kind of data.
- 192.168.0.0/24 is the network prefix that my machines are in.
- (rw) allows those machines to read and write to the nfs.
After you make changes to /etc/exports make sure to run exportfs -r.
exportfs -r rereads /etc/exports and exports the directories specified there.
Essentially, you need to run it every time you edit /etc/exports.
For some reason I had issues when not specifying the no_root_squash option for some directories.
I still don't have a good answer for what's up with that, but you can read my (still unanswered) question on unix stack exchange if you want.
This didn't affect my ability to use this nfs server as a place for persistent storage for kubernetes though.
It seemed to be a void specific bug that only affects certain directories (specifically my home directory), but I'm still not sure.
Unsurprisingly, the voidlinux docs on setting up an nfs server on voidlinux were pretty helpful, who knew? There are a few pretty non-obvious steps when setting up an nfs on void. Notably, you have to enable the rpcbind and statd services on the nfs server in addition to the nfs-server service.
Command: showmount -e 192.168.0.253
Received: clnt_create: RPC: Program not registered
Fix: Start the statd service on the server

Command: showmount -e 192.168.0.253
Received: clnt_create: RPC: Unable to receive
Fix: Start the rpcbind service on the server

Command: sudo mount -v -t nfs 192.168.0.253:/home/jeff/test nas/
Received: mount.nfs: mount(2): Connection refused
Fix: Start the rpcbind service on the server

Command: sudo sv restart nfs-server
Received: down: nfs-server: 1s, normally up, want up
Fix: Start the rpcbind and statd services on the server

Command: sudo mount -v -t nfs 192.168.0.253:/home/jeff/test nas/
Received: mount.nfs: mount(2): Permission denied
Fix: Check that the nfs-server service is actually up; sv doesn't make this super clear in my opinion.
For example this means everything is good
> sudo sv restart nfs-server
ok: run: nfs-server: (pid 9446) 1s
while this means everything is broken
> sudo sv restart nfs-server
down: nfs-server: 1s, normally up, want up
Not quite as different as I would like :/
If you find that your nfs-server service isn't running, it might be because you haven't enabled the statd and rpcbind services.
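A quick way to check all three at once:
sudo sv status nfs-server statd rpcbind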
For instance, if you put /home/user * in /etc/exports you can mount /home/user/specific/path (assuming /home/user/specific/path exists on the nfs server) like this:
sudo mount -t nfs4 192.168.0.253:/home/user/specific/path /mnt/mount_point
This is a guide on adding a new raspberry pi node to your k3s managed kubernetes cluster.
1. Unzip the Raspberry Pi OS image: unzip 2020-08-20-raspios-buster-armhf-lite.zip
2. Write the image to the SD card: sudo dd if=/path/to/raspberryPiOS.img of=/dev/sdX bs=4M conv=fsync (where /dev/sdX is the SD card device)
3. Mount the boot partition: sudo mount /dev/sdX1 /mnt/sdcard (/mnt/sdcard can be any empty directory; the boot partition is the first one on the card)
4. Enable ssh by creating an empty file named ssh on the boot partition: sudo touch /mnt/sdcard/ssh
5. Unmount the SD card: sudo umount /mnt/sdcard
6. Put the SD card in the pi, boot it up, and ssh in: ssh pi@raspberrypi (the password is "raspberry")
7. Update and install some tools: sudo apt update && sudo apt upgrade -y && sudo apt install -y vim curl
Although vim isn't strictly necessary and curl is on the image by default, I like vim and we'll use curl later, so better to make sure it's already there.
8. Create your new user: sudo useradd -m -G adm,dialout,cdrom,sudo,audio,video,plugdev,games,users,input,netdev,gpio,i2c,spi jeff
adm,dialout,cdrom,sudo,audio,video,plugdev,games,users,input,netdev,gpio,i2c,spi are groups that you are adding your user to. The only super important one is probably sudo. This is the list that the default pi user starts in, so might as well.
9. Create their .ssh directory so you can get in to your user: sudo -u jeff mkdir /home/jeff/.ssh
We use sudo -u jeff here so that it runs as the jeff user and makes jeff the owner by default.
10. Add your ssh key: sudo -u jeff curl https://github.com/ToxicGLaDOS.keys -o /home/jeff/.ssh/authorized_keys
Here we curl the key down from a github account straight into the authorized_keys file. If your keys aren't on github you might scp them onto the pi.
11. Change the hostname by replacing raspberrypi in the /etc/hosts and /etc/hostname files. This can be done manually or with some handy sed commands:
sudo sed -i s/raspberrypi/myHostname/g /etc/hosts
sudo sed -i s/raspberrypi/myHostname/g /etc/hostname
12. Disable ssh password authentication: open /etc/ssh/sshd_config and edit the line that says #PasswordAuthentication yes so it says PasswordAuthentication no. If this line doesn't exist, add the PasswordAuthentication no line. Or do it with sed (restart the ssh service or reboot for this to take effect):
sudo sed -i s/#PasswordAuthentication\ yes/PasswordAuthentication\ no/g /etc/ssh/sshd_config
13. Give yourself passwordless sudo: echo 'jeff ALL=(ALL) NOPASSWD:ALL' | sudo tee -a /etc/sudoers
This is a little dangerous, because if your account on the machine gets compromised then an attacker could run any program as root :(. Also, if you fail to give yourself passwordless sudo access and restart the pi, you can end up being unable to sudo at all, which means you can't access /etc/sudoers to give yourself sudo access... So you might end up having to re-image the SD card cause you're boned. Not that that has happened to me of course... :(
14. Delete the default pi user: sudo userdel -r pi
15. Install k3s as an agent: curl -sfL https://get.k3s.io | K3S_URL=https://masterNodeHostname:6443 K3S_TOKEN=yourToken sh -
This pulls down a script provided by k3s and runs it, so maybe check to make sure k3s is still up and reputable. Make sure to replace masterNodeHostname and yourToken with your values. masterNodeHostname is the hostname of the master node in your cluster (probably the first one you set up); in my case it's raspberry0. yourToken is an access token used to authenticate to your master node. It can be found on your master node in the /var/lib/rancher/k3s/server/node-token file. Read more at k3s.io.
That's basically it!
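To confirm the new node actually joined, you can check from the master node; it should show up with the hostname you set earlier:
sudo k3s kubectl get nodes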