Before installing Proxmox, I wanted to ensure I was starting fresh, so I needed to wipe the hard drive. To do this, I booted to my Ventoy USB drive, selected my Windows 10 ISO, and once at the setup screen I entered SHIFT + F10 to open the terminal.
I used the diskpart utility by executing these commands:
diskpart
list disk # Identify disk you need
select disk 0 # Disk that contains Windows
clean
create partition primary
format fs=ntfs
exit
wpeutil reboot
Formatting is not strictly necessary here, but it does make the old data harder to recover if that matters to you.
On my Ventoy drive I had the Proxmox 9.0 ISO, so I booted to this. Right away I got the error No support for hardware-accelerated KVM virtualization detected. Check BIOS settings for Intel VT / AMD-V / SVM. I went back into my BIOS settings, enabled virtualization, and the installation then proceeded normally.
The initial installation was simple. I selected the target hard disk, set the location/time zone, password and email, and network info, then waited for the installation to complete.
Immediately after the first boot, I got the error KERNEL PANIC! Please reboot your computer. VFS: Unable to mount root fs on unknown-block(0,0). I couldn't find anything online that helped in my situation. It seemed to be an issue with GRUB; the suggested fixes were to modify /etc/default/grub.d/grub.cfg and delete rdinit=/vtoy/vtoy, but GRUB did not offer any commands for editing a file.
Eventually, I redownloaded Proxmox (this time version 8.4) and flashed it to its own flash drive. I don't think the version mattered; I believe it was Ventoy that was breaking the install. Once I booted from the new flash drive, the install completed successfully and I was able to boot into Proxmox.
When I booted into Proxmox for the first time, it displayed a message to access the management console at the IP address I configured during installation, on port 8006. This didn't work, and when I looked at my DHCP server I saw a BAD ADDRESS error. After checking my router's connected devices, I found that the address I had chosen was already taken.
To start using the terminal, enter root for the login and then use the password set during installation.
To change the IP address, I simply needed to change it here:
nano /etc/network/interfaces
After changing the address, networking needed to be restarted:
systemctl restart networking
I found that the main screen still showed the old IP address, so I then changed it in /etc/hosts.
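For reference, the relevant part of /etc/network/interfaces looks roughly like this (the interface name and addresses are examples, not my actual values):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.50/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Only the address line needs to change; the bridge settings come from the installer.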
After all that, I still couldn't connect and nothing showed up in DHCP or my router. When executing ip a, I saw that all interfaces were DOWN. After attempting various methods of starting an interface, I found that Proxmox does not directly support Wi-Fi.
The obvious fix was to plug the machine directly into my router, but that still didn't work. Long story short: my router had a bad port. Once I moved the cable to a different port, I got link lights, Proxmox showed the link as up, and I was able to access the web interface.
I began by logging in with root and the password I set during installation. The first change I wanted to make was to prevent my machine from sleeping when I closed the laptop lid. To do this:
nano /etc/systemd/logind.conf
Next, I just had to make this change:
### From This:
#HandleLidSwitch=suspend
### To This:
HandleLidSwitch=ignore
Then execute:
systemctl restart systemd-logind
For my first VM, I chose an Alpine Linux ISO since it is lightweight. This will be the base image for running Docker. I downloaded the ISO and used scp to copy the file to Proxmox in /var/lib/vz/template/iso/.
I used all of the default settings when creating the VM. The only thing I changed was setting CPU cores to 2 instead of 1. When I went through the Alpine setup, I found that it wasn't prompting me for a disk correctly. Once the setup completed, I had to run it again so that it would prompt me to select a disk and choose how to use it. After that, I powered off the VM, removed the CD/DVD drive with the ISO, and started the VM again.
It's important to remove the ISO, otherwise it will try to boot from it which will take you back through the setup.
After setup, I began to install Docker. I had the same issues as the first time I did this and the solution was the same. Check out Alpine Linux for details.
The first container I wanted to create was for Rustdesk. I copied the docker-compose.yml from my current Rustdesk project and tried to start the containers, but got the error no matching manifest for linux/386 in the manifest list entries. It turns out I had accidentally installed the 32-bit version of Alpine instead of 64-bit. So I reinstalled with the correct version, repeated the same steps, and successfully started my containers.
I tested that Rustdesk was working by configuring 2 clients with the new IP/Relay server and key. Success!
The next goal was to migrate my wiki from my MacBook to Proxmox. First, I made sure I had directories created for both the server and nginx, then I was able to copy all files over.
I used rsync, which transfers over SSH, to send files from my MacBook to my Proxmox VM. First, I had to install rsync in the VM:
doas apk add rsync
Then, on my MacBook, to copy the nginx directory over:
rsync -avzP --exclude '.git' --exclude 'node_modules' /path/to/nginx/directory remoteuser@remoteIP:/path/to/remote/directory
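A quick way to sanity-check the exclude rules before the real transfer is rsync's -n (--dry-run) flag. This self-contained sketch uses two local temp directories in place of the laptop and the VM, so the paths and file names here are stand-ins:

```shell
# Stand-ins for the source project and the remote destination
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/node_modules"
echo 'server {}' > "$SRC/site.conf"
touch "$SRC/node_modules/junk.js"

# Preview what would be copied; -n means nothing is transferred yet
rsync -avzPn --exclude 'node_modules' "$SRC/" "$DST/"

# Drop -n for the real copy (a trailing slash on SRC copies its contents,
# not the directory itself)
rsync -avzP --exclude 'node_modules' "$SRC/" "$DST/"
```

The excluded node_modules directory never lands at the destination, while everything else does.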
After verifying on the VM that all files transferred successfully, I created the network for my services:
docker network create nginx_internal || true
Before I could migrate my database, I needed to create a mysqldump which would be used to restore to the new database. I first stopped my containers so that the database would not change during the migration. Then I created the dump file:
docker compose -f /path/to/docker-compose.yml exec -T db \
sh -c 'mysqldump --databases wiki -uroot -p"$(cat /run/secrets/mysql_root_password)"' > wiki.sql
Now the dump file can be copied to the VM:
scp wiki.sql remoteuser@remoteIP:/path/to/remote/directory
Now that the dump file is prepared, the new database can be restored. The db container must first be started:
docker compose up -d db
Then it can be restored:
cat wiki.sql | docker compose exec -T db sh -c 'mysql -uroot -p"$(cat /run/secrets/mysql_root_password)"'
This may present an error similar to Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'.
To get around this, add -h 127.0.0.1 to the command. (See below).
cat wiki.sql | docker compose exec -T db sh -c 'mysql -uroot -p"$(cat /run/secrets/mysql_root_password)" -h 127.0.0.1'
Before a new certificate could be issued, I needed to delete the old one in /projectdirectory/nginx/certs/live. I deleted the entire wiki.tybax.com folder.
At this point, I also needed to move /projectdirectory/nginx/conf.d/emsdemo.tybax.com out of the conf.d folder. When I left it in place, nginx would not start properly because it could not reach the emsdemo service (which wasn't set up yet).
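Those two cleanup steps can be sketched as plain shell. A throwaway temp directory stands in for the real project root here, since my actual paths differ:

```shell
# Build a stand-in tree mirroring /projectdirectory/nginx
PROJECT=$(mktemp -d)
mkdir -p "$PROJECT/nginx/certs/live/wiki.tybax.com" "$PROJECT/nginx/conf.d"
touch "$PROJECT/nginx/conf.d/emsdemo.tybax.com"

# 1. Remove the stale certificate folder so a fresh one can be issued
rm -rf "$PROJECT/nginx/certs/live/wiki.tybax.com"

# 2. Park the emsdemo config outside conf.d so nginx stops loading it
mkdir -p "$PROJECT/nginx/disabled"
mv "$PROJECT/nginx/conf.d/emsdemo.tybax.com" "$PROJECT/nginx/disabled/"
```

Moving the config aside rather than deleting it makes it easy to restore once the emsdemo service is migrated.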
All that was left was to modify the docker-compose.yml by uncommenting the certbot-init service and changing the domain to wiki.tybax.com. Then everything could be started:
# In the /server directory
docker compose up -d
# In the nginx directory
docker compose up -d nginx
docker compose up certbot-init
Instead of starting everything in the nginx docker-compose.yml at once, I broke it up like this and left off the -d for certbot-init because I wanted to watch the logs live to ensure the certificate was issued successfully. This is not required; I could have just run docker compose up -d.
Once the certificate was issued, there were just a few things to clean up:
ipconfig /flushdns on internal clients
After completing those steps, I tested accessing the wiki from both an external network and my internal network.
For the most part, migrating the emsdemo project to Proxmox was the same as the wiki. The difference was in the database table names.
When I tried to start my containers, emsdemo would not stay running, and the logs showed Different lower_case_table_names settings for server ('0') and data dictionary ('2'). This is because on my MacBook lower_case_table_names was set to 2 (the macOS default, case-insensitive), while on the new Linux host it defaults to 0 (case-sensitive).
To fix this, I needed to create a new directory in my project folder that contained a config file.
mkdir mysql-conf
nano mysql-conf/my.cnf
The contents of the my.cnf file are:
[mysqld]
lower_case_table_names=2
Then, in my docker-compose.yml I bound this new volume in the emsdemoDB service:
### Existing Code ###
    volumes:
      - ./data:/var/lib/mysql
      - ./mysql-conf:/etc/mysql/conf.d
### Existing Code ###
Ultimately, this solved the first issue but presented another. The database started, but the emsdemo container would not stay running; the logs showed SQLAlchemy errors such as Table 'emsdemov2.Settings' doesn't exist. This told me my app was referencing Settings, but in my database the table was named settings.
The solution was straightforward: I went inside the emsdemoDB container to change the table names so they matched what I used in models.py. The commands below access the container, log in to MySQL, switch to my database, and rename a table; I repeated the RENAME TABLE command for each table.
docker exec -it <container-name> bash
mysql -p
USE emsdemov2;
SHOW tables;
RENAME TABLE settings TO Settings;
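RENAME TABLE also accepts several renames in one statement, so the remaining tables can be fixed in one go. Every table name below besides settings is hypothetical, standing in for whatever models.py actually defines:

```sql
RENAME TABLE settings  TO Settings,
             users     TO Users,
             schedules TO Schedules;
```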
Once the tables were renamed, I was successfully able to start the containers and access the site.