Recently migrated my homelab from ESXi to Proxmox VE, and while things mostly went smoothly, one node would randomly hang (mostly during large transfers or under sustained network load, but sometimes while idling…). I plugged in a monitor to check the logs and… confirmed the cause with a quick dmesg:
e1000e: Detected Hardware Unit Hang
Turns out this has been a known issue for years with Intel e1000e NICs (like I217-LM, I219-V, 82574L, etc.). These “aging” chips choke when offload features are enabled under modern workloads.
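The widely shared workaround is to turn off the offending offload features with ethtool. A minimal sketch, assuming the affected interface is eno1 (check yours with ip link):

ethtool -K eno1 tso off gso off
# to make it persistent on Proxmox/Debian, add a post-up hook to the
# interface stanza in /etc/network/interfaces:
#   post-up /usr/sbin/ethtool -K eno1 tso off gso off

Disabling TSO/GSO shifts segmentation work back to the CPU, which costs a little throughput but sidesteps the buggy offload path.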
Just discovered something pretty cool while working on email workflows for BookSea that doesn’t seem to be widely documented anywhere: you can use “+” signs in test email addresses!
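Gmail and many other providers treat everything after the “+” as a tag: the mail is delivered to the base mailbox, but the full address stays unique from the application’s point of view. A made-up example (addresses are placeholders):

romain+signup@example.com
romain+checkout@example.com
romain+user42@example.com

All three land in the romain@example.com inbox, yet register as three distinct accounts in the app under test.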
This is an unsupported modification and may void your warranty. Proceed at your own risk. Configuration WILL revert after DSM updates.
But why?
Synology reserves eSATA ports for their own brand external expansion units, and DSM explicitly prevents drives connected through those ports from being used as SSD caches. The setting responsible for this behavior is called esataportcfg, found in the system configuration files.
The esataportcfg setting tells DSM which physical SATA ports should be treated as eSATA ports — usually for external expansion units like the DX517.
It’s written in hexadecimal (e.g. 0x4) but actually represents a bitmask: a binary number where each bit corresponds to a SATA port on the motherboard.
0x4 is hexadecimal for binary 0100. Reading the bits from right to left (bit 0 is the rightmost), this means:
Port 0: 0 = Not eSATA
Port 1: 0 = Not eSATA
Port 2: 1 = eSATA
Port 3: 0 = Not eSATA
DSM will treat only port 2 (sdc) as eSATA, and ignore it for caching, system volumes, and other features limited to “internal” drives.
This is useful if you want to explicitly allow or deny eSATA functionality for certain ports — for example, if you’re using a third-party eSATA dock or expansion device and want DSM to handle it differently. (I have NOT tried that yet!)
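For illustration, a few values and which ports they would mark as eSATA (worked examples based on the bitmask logic above, not taken from Synology documentation):

esataportcfg="0x0"  →  binary 0000  →  no ports reserved as eSATA
esataportcfg="0x1"  →  binary 0001  →  port 0 only
esataportcfg="0x4"  →  binary 0100  →  port 2 only
esataportcfg="0xC"  →  binary 1100  →  ports 2 and 3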
The “hack”
Enable SSH in DSM: Control Panel → Terminal & SNMP → Enable SSH service
SSH into your NAS: ssh romain@synology
Edit the config file: sudo vi /etc.defaults/synoinfo.conf. If it exists, also edit /etc/synoinfo.conf
Find the line esataportcfg="0x4" and change it to esataportcfg="0x0" (the exact value may differ on your model), or use the one-liner sketched after these steps.
Unmount ISCSI & Reboot your NAS: sudo reboot
After Reboot: Enable SSD Cache
Once DSM is back online:
Go to Storage Manager → SSD Cache
Select your connected SSD (formerly on the eSATA port)
Create a read or read-write cache as desired
DSM should now accept the drive as a valid caching candidate.
Notes
This trick works best on models with physically exposed eSATA ports not already assigned to expansion bays.
DSM updates may overwrite synoinfo.conf. Consider making a backup.
This workaround does not make sense if you have M.2 slots — use those instead for best performance.
While troubleshooting a malfunctioning radar system, I wanted to inspect its firmware for diagnostic tools (couldn’t find any). The firmware was stored in a Btrfs image, which isn’t straightforward to handle on macOS without spending some quality time with FUSE.
To “extract” its contents, I instead used a Docker container with the necessary tools, since it’s quite easy to do on plain Linux. Here’s how.
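A minimal sketch of that approach (the image name firmware.img and the output directory are placeholders; it assumes the Docker VM’s kernel has Btrfs support and that a privileged container is acceptable):

docker run --rm -it --privileged -v "$PWD":/work ubuntu bash
# then, inside the container:
apt-get update && apt-get install -y btrfs-progs
mkdir -p /mnt/fw /work/extracted
mount -o loop,ro /work/firmware.img /mnt/fw   # read-only loop mount of the image
cp -a /mnt/fw/. /work/extracted/              # copy the contents back to the host-mounted dir
umount /mnt/fw

The extracted files end up in ./extracted on the macOS side thanks to the bind mount.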
Out of curiosity I’m testing Cursor AI (and I’m actually enjoying it more than I should), but I quickly noticed my laptop’s battery draining at an alarming rate – faster than playing Factorio. After some investigation, the root cause was clear: the Import Cost VS Code extension was triggering excessive CPU usage through repeated recalculations (for no reason?!).
Edit: “no reason” might be an overstatement. It’s a TypeScript project and I was working on components that get imported everywhere, so I can see why the sizes needed to be recalculated, but it’s not a BIG project; it still doesn’t make sense to eat all the CPU.
Here’s a straightforward docker-compose setup for running WordPress locally on my M1 MacBook, with persistent data, a mapped plugins directory, and custom PHP upload settings. I use it as a dev environment, but it should work for production with minor changes.
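A minimal sketch of what that compose file could look like (service names, the exposed port, and the passwords are illustrative, not the exact file from my setup):

services:
  db:
    image: mariadb:10.11
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: changeme          # placeholder
      MYSQL_ROOT_PASSWORD: changeme     # placeholder
    volumes:
      - db_data:/var/lib/mysql          # persistent database

  wordpress:
    image: wordpress:latest
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme   # must match MYSQL_PASSWORD above
    volumes:
      - wp_data:/var/www/html                               # persistent WordPress files
      - ./plugins:/var/www/html/wp-content/plugins          # mapped plugins directory
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini # custom PHP upload settings

volumes:
  db_data:
  wp_data:

The uploads.ini file would just carry the PHP overrides, e.g. upload_max_filesize = 64M and post_max_size = 64M. I use mariadb here because its official image runs natively on Apple Silicon.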
Boat wiring isn’t just about hooking up positive and negative leads; it’s about ensuring every connection can withstand a harsh marine environment without turning into a fire hazard. Even the smallest wiring mistake—like a subpar crimp or an undersized wire—can lead to localized overheating. Over time, these issues can degrade further until they become a serious safety risk. In this article, we’ll walk through common pitfalls and best practices to keep your boat’s electrical system safe and reliable.