
[SailfishX] [Gemini] The rootfs is too small

asked 2019-02-28 13:36:57 +0300

jolladiho

updated 2019-11-24 16:41:50 +0300

The root file system is smaller than 2.5 GB. After a fresh flash, roughly 1 GB is free with no applications installed. That may cause problems with later updates. Please fix it soon. This issue has been known since SailfishX for the Xperia X. Time to make it better!

It can be bigger, because it does not have to fit into the Android system partition. The LVM goes into the Linux partition, and on a Gemini PDA we can get around 60 GB for it. There is enough space left even if the rootfs grows to 6 GB.

By the way: you can make it bigger on the Xperia X and XA2 as well, because there it does not have to fit in the Android system partition either; it resides in the userdata partition.

edit 190307 - What to fix in the boot image?

Change the value of the variable 'LVM_ROOT_SIZE' inside the file '/etc/sysconfig/init' of the ramdisk. That's all.

Content of the original file:

# Common settings for normal and recovery init.

# Amount of space to keep unallocated for refilling root or home later on.
LVM_RESERVED_MB=0

# Default size for root LV
LVM_ROOT_SIZE=2500

Suggested new content:

# Common settings for normal and recovery init.

# Amount of space to keep unallocated for refilling root or home later on.
LVM_RESERVED_MB=0

# Default size for root LV
LVM_ROOT_SIZE=4000
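
After flashing an image built with such a modified ramdisk, the result can be checked on the device. A quick sanity check (the volume group name sailfish matches the output shown in the answer below):

# size of the logical volumes, including root
lvs sailfish
# free space on the root filesystem
df -h /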

edit 190801: Q: Jolla, can you fix it some day? Please. It is still not fixed in 3.1.0.11.

edit 191124: Still no answer from Jolla, still no fix in 3.2.0.12, and it is not mentioned in the release notes.


Comments


Don't know if it is applicable, but I did this on my X Compact, and it worked to resize the system partition to 5 GB - https://together.jolla.com/question/156279/installing-system-updates-fails-when-there-is-not-enough-space-in-system-data-partition/?answer=156670#post-id-156670

Levone1 ( 2019-03-03 18:26:51 +0300 )

It is, but it should be fixed in the boot image by Jolla, because it is not applicable for every user.

jolladiho ( 2019-03-07 10:07:01 +0300 )

@jolladiho: Do you know what the variable LVM_RESERVED_MB exactly means? Is it possible to reserve space for another logical volume with this variable?

This is required to install Debian in parallel to SailfishX (on the Jolla-delivered Sailfish for the Gemini PDA) without resizing (shrinking) the /home partition. Debian could only live on an additional logical volume, because it is not possible to reserve space otherwise (as an extra partition) beside the SailfishX LVM.

If this is possible, how can I recreate the image after changing the variable to e.g. 10480 (10 GB)? I'm not sure that it is possible to modify the value in the image :-(, since it changes the script length ...

Using the community port there is no LVM on SailfishOS, so this is not necessary.

Many thanks

Gabriel

gabs5807 ( 2019-08-02 20:16:45 +0300 )

Using this variable should reserve space in the LVM at the first boot after a fresh flash. It should be possible to use this space later for a new volume or to grow existing ones. I did not test it; I shrank /dev/sailfish/home to create the Gemian volume.
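
To illustrate the idea (a sketch only, untested; the volume group name sailfish is taken from the answer below, and the volume name debian and the 10 GiB size are just example values): once there is free space in the volume group, a new logical volume for Debian can be created from it with the standard LVM tools.

vgs                                   # how much space is free in the volume group
lvcreate -L 10G -n debian sailfish    # carve a new 10 GiB logical volume out of the free space
mkfs.ext4 /dev/sailfish/debian        # create a filesystem on it for the Debian rootfs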

To make changes you have to unpack the initramfs (not as root, and always using the option not to use absolute file paths!). You find the initramfs inside hybris-boot.img (unmake it), or you can install the package droid-hal-geminipda-kernel and find it together with the kernel (named zImage inside the boot.img) in the /boot folder. After making your edits, repack the initramfs and make a boot.img (mkimage with the parameters for the Gemini PDA) containing the initramfs and the kernel. For me the best way to find the right parameters for the Gemini PDA was looking at the LineageOS 14 GitHub from deadman. Search on xda-developers.com for it; you can also find some howtos there on unpacking and repacking boot images. Have fun. ;-) A rough sketch of these steps follows below.
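
Outline of the unpack/repack procedure on a Linux host, as a sketch only (the file name initramfs.gz and the working directory are assumptions, and the final boot image step still needs the device-specific tool and parameters mentioned above):

# unpack the gzipped cpio ramdisk into a working directory (not as root,
# and with relative paths so nothing is written outside of it)
mkdir ramdisk && cd ramdisk
gzip -dc ../initramfs.gz | cpio -idmv --no-absolute-filenames

# edit etc/sysconfig/init here and set LVM_ROOT_SIZE=4000

# repack the ramdisk in newc format, again with relative paths
find . | cpio -o -H newc | gzip -9 > ../initramfs-new.gz
cd ..

# finally rebuild the boot image from zImage and initramfs-new.gz with the
# Gemini PDA parameters (base address, offsets, cmdline) discussed in the
# comments below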

jolladiho ( 2019-08-05 17:52:25 +0300 )

Hello @jolladiho, it did not work :-(. I tried to rebuild the sailfishos_boot.img with the (modified) bootimg_tools_7.8.13.zip from https://forum.xda-developers.com/showthread.php?t=2319018.

The first modification was to strip the leading '/' from the cpio archive. The second was to change owner and group to root:root when creating the new cpio archive.

Flashing the recreated image results in a boot loop (with a black screen) :-(.

There are two things which could be the problem:

First, the files in the newly created cpio archive do not have a leading '/' (it seems cpio has no option to prepend one while creating an archive). Second, the base address shown during extraction is 0x43ffff00.

split_boot sailfishos_boot.img
Page size: 2048 (0x00000800)
Kernel size: 8347927 (0x007f6117)
Ramdisk size: 4042838 (0x003db056)
Second size: 0 (0x00000000)
Board name: 
Command line: 'bootopt=64S3,32N2,64N2'
Base address: (0x43ffff00)

Trying to repack with this value and extracting again shows 0x44000000. It seems it is not possible to get the same base address again :-(.

The boot_info script from the same bootimg_tools archive shows:

boot_info sailfishos_boot.img
PAGE SIZE: 2048
BASE ADDRESS: 0x40080000
RAMDISK ADDRESS: 0x45000000
CMDLINE: 'bootopt=64S3,32N2,64N2'

But that base address did not work either.
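
For what it is worth, the AOSP mkbootimg tool lets you pass the base address and offsets explicitly instead of relying on the repack script. A sketch only, untested on the Gemini PDA; the ramdisk offset below is merely back-calculated from the addresses printed above (0x45000000 - 0x40080000) and may not reproduce the original header:

mkbootimg --kernel zImage \
          --ramdisk initramfs-new.gz \
          --base 0x40080000 \
          --ramdisk_offset 0x04f80000 \
          --pagesize 2048 \
          --cmdline 'bootopt=64S3,32N2,64N2' \
          -o sailfishos_boot_new.img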

Did you get a working boot image with these tools?

Many thanks

Gabs5807

gabs5807 ( 2019-08-22 00:22:51 +0300 )

1 Answer


answered 2019-03-03 16:22:15 +0300

This post is a community wiki; anyone with karma >75 is welcome to improve it.

updated 2019-03-03 16:31:00 +0300

pcfe

While I also found the extremely small / annoying, thanks to LVM this is a two-minute fix.

NOTE: never shrink any filesystem if you do not have a known, restorable backup!

  1. enable dev mode
  2. enable remote login, noting the password for nemo
  3. devel-su
  4. drop your ssh pubkey in /root/.ssh/authorized_keys
  5. ssh in directly as root (we will be killing the user session in a moment)
  6. stop the user session: [root@Sailfish ~]# systemctl stop user@100000.service
  7. shrink your /home to 32 GiB with [root@Sailfish ~]# lvresize -L 32G --resizefs --autobackup y sailfish/home (this step includes unmounting /home; for that to be possible we had to stop the user session)
  8. cleanly reboot your Sailfish X device with [root@Sailfish ~]# systemctl reboot

Do note, at this point your additional free space is not allocated anywhere. I personally like it this way because then I can grow any LV on the fly, without any interruption while the system is running. Only the shrink requires unmounting the ext4 filesystem. Growing happens on-line.

Here's an example of growing / on-line by 1 GiB (you can ignore the read errors as long as they are not on /dev/mmcblk0p29; Jolla, you may want to consider blacklisting mmcblk0rpmb):

[root@Sailfish ~]# vgs
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 0: Input/output error
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 4128768: Input/output error
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 4186112: Input/output error
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 4096: Input/output error
  VG       #PV #LV #SN Attr   VSize   VFree  
  sailfish   1   2   0 wz--n- <57.52g <23.08g
[root@Sailfish ~]# lvs
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 0: Input/output error
  LV   VG       Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home sailfish -wi-ao---- 32.00g                                                    
  root sailfish -wi-ao----  2.44g                                                    
[root@Sailfish ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                2.4G  1.3G  1.1G  56% /
/dev/sailfish/root    2.4G  1.3G  1.1G  56% /
/dev/sailfish/home     32G   57M   32G   1% /home
devtmpfs              1.9G  496K  1.9G   1% /dev
tmpfs                 1.9G  112K  1.9G   1% /dev/shm
tmpfs                 1.9G   16M  1.9G   1% /run
tmpfs                 1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs                 1.9G  8.0K  1.9G   1% /tmp
tmpfs                 1.9G     0  1.9G   0% /mnt
tmpfs                 1.9G     0  1.9G   0% /storage
/dev/mmcblk0p8        3.9M   68K  3.7M   2% /protect_f
/dev/mmcblk0p9        8.3M   60K  8.0M   1% /protect_s
/dev/block/platform/mtk-msdc.0/by-name/nvdata
                       28M  7.7M   20M  29% /nvdata
/dev/block/platform/mtk-msdc.0/by-name/nvcfg
                      3.9M   44K  3.7M   2% /nvcfg
tmpfs                 383M  576K  382M   1% /run/user/100000
tmpfs                 383M     0  383M   0% /run/user/0
[root@Sailfish ~]# lvextend -L +1G --resizefs sailfish/root
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 0: Input/output error
  Size of logical volume sailfish/root changed from 2.44 GiB (625 extents) to 3.44 GiB (881 extents).
  Logical volume sailfish/root successfully resized.
resize2fs 1.43.1 (08-Jun-2016)
Filesystem at /dev/mapper/sailfish-root is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/sailfish-root is now 902144 (4k) blocks long.

[root@Sailfish ~]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/sailfish/root    3.4G  1.3G  2.1G  39% /
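
The same on-line growing works for /home if you later want to hand some of the unallocated space back to it, for example (a sketch; +10G is just an example size):

[root@Sailfish ~]# lvextend -L +10G --resizefs sailfish/home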

Comments


Thanks, I know it already; I did it on the Xperia X. Not for every user ...

It is time to fix it in the boot image. It is only one variable in a script.

jolladiho ( 2019-03-04 19:10:44 +0300 )

@jolladiho Maybe a good topic for the community meeting?

MartinK ( 2019-03-05 03:41:52 +0300 )

Nice, but too complicated and risky for an average user like me...

nerip ( 2019-03-12 14:06:55 +0300 )

@jolladiho do you happen to know a URL to a git (or other VCS) repository containing that script? I'd be happy to make a pull/merge request but I have no idea where the repo is.

pcfe ( 2019-03-26 02:17:25 +0300 )

@pcfe I do not know where it is. Maybe Jolla can help to find it soonTM. ;-)

jolladiho ( 2019-03-26 13:26:40 +0300 )