Resolved Synology RAID Expansion Not Using New Drive Space

itsthenewdc

macrumors member
Original poster
Jul 10, 2008
91
116
Orlando, FL
So I recently purchased a Synology DS517 to expand our DS1817+. I did everything properly on the Synology side, and it shows the increased space of ~90 TB, up from the old ~50 TB. The problem now lies on the macOS side of things, because it's only showing the old 50 TB size. Here's the terminal output of diskutil list:

Code:
/dev/disk6 (external):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                         96.0 TB    disk6
   1:                        EFI EFI                     209.7 MB   disk6s1
   2:                 Apple_APFS Container disk7         53.7 TB    disk6s2

/dev/disk7 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +53.7 TB    disk7
                                Physical Store disk6s2
   1:                APFS Volume CC-SYN1                 52.6 TB    disk7s1
In Disk Utility, when clicking on Partition, it shows a pie chart of CC-SYN1 at 53.7 TB and Free Space at 42.2 TB. When I select the Free Space and click "-", the size row shows the new space of 96 TB. But when I click Apply, it fails with: The new size must be different than the existing size.

I've tried Googling for a way to get that extra space recognized in the APFS container, but nothing has worked. Short of formatting, I'm not sure what else to try, and formatting isn't a realistic option either, because I don't have a spare 54 TB of hard drive space lying around doing nothing to be able to do a backup and restore. Does anybody have experience with this and a way to expand the recognized space without losing data?
 

dsemf

macrumors 6502
Jul 26, 2014
330
67
Based on your diskutil output, you should be able to use the resizeContainer option.

The code below shows a test I did using a 32GB SD card. I set it up with a 24GB container.
Code:
/dev/disk2 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *32.1 GB    disk2
   1:                        EFI EFI                     209.7 MB   disk2s1
   2:                 Apple_APFS Container disk3         24.0 GB    disk2s2

/dev/disk3 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +24.0 GB    disk3
                                 Physical Store disk2s2
   1:                APFS Volume APFSDemo                1.1 MB     disk3s1
As you can see, there is 8GB unused. The resize command shown below uses the 0 (zero) size option, which means grow the container up to the next partition or to the end of the disk.
Code:
dmba:jmri xxx$ diskutil apfs resizeContainer disk3 0
Started APFS operation
Aligning grow delta to 7,850,455,040 bytes and targeting a new physical store size of 31,850,455,040 bytes
Determined the maximum size for the targeted physical store of this APFS Container to be 31,849,426,944 bytes
Resizing APFS Container designated by APFS Container Reference disk3
The specific APFS Physical Store being resized is disk2s2
Verifying storage system
Performing fsck_apfs -n -x -S /dev/disk2s2
Checking the container superblock
Checking the space manager
Checking the space manager free queue trees
Checking the object map
Checking volume
Checking the APFS volume superblock
The volume APFSDemo was formatted by diskmanagementd (945.220.38) and last modified by apfs_kext (945.220.38)
Checking the object map
Checking the snapshot metadata tree
Checking the snapshot metadata
Checking the extent ref tree
Checking the fsroot tree
Verifying allocated space
warning: Overallocation Detected on Main device: (121504+1) bitmap address (7cb8)
...
warning: Overallocation Detected on Main device: (262723+3) bitmap address (7cea)
Performing deferred repairs
The volume /dev/disk2s2 appears to be OK
Storage system check exit code is 0
Growing APFS Physical Store disk2s2 from 24,000,000,000 to 31,850,455,040 bytes
Modifying partition map
Growing APFS data structures
Finished APFS operation
I don't know what the Overallocation warnings are, but they did not cause a failure. The new diskutil list is below.
Code:
/dev/disk2 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *32.1 GB    disk2
   1:                        EFI EFI                     209.7 MB   disk2s1
   2:                 Apple_APFS Container disk3         31.9 GB    disk2s2

/dev/disk3 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +31.9 GB    disk3
                                 Physical Store disk2s2
   1:                APFS Volume APFSDemo                1.1 MB     disk3s1
You do need to substitute the correct values. Also make sure that you have a good backup.

This procedure worked on a 32GB SD card, but I make no guarantees for a 96TB RAID system.
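For anyone double-checking the transcript, the numbers line up: diskutil reports sizes in bytes, and the "grow delta" is just the new physical store size minus the old one. A quick sanity check (values copied from the log above):

```python
# Values from the resize log above; diskutil reports sizes in bytes.
old_size = 24_000_000_000    # original 24 GB container
new_size = 31_850_455_040    # "targeting a new physical store size of ..."
delta = new_size - old_size
print(delta)                 # 7850455040 -> the "grow delta" in the log
```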

The command below will show the possibilities for a resize.
Code:
diskutil apfs resizeContainer disk2s2 limits
DS
 

itsthenewdc

Based on your diskutil output, you should be able to use the resizeContainer option. [...]
Thanks for the response.
Doing the following..
Code:
diskutil apfs resizeContainer disk7 0
..it reported the following error..
Code:
Error: -69743: The new size must be different than the existing size
When I do the limits command, I get..
Code:
Resize limits for APFS Physical Store partition disk6s2:
  Current Physical Store partition size on map:   53.7 TB (53733052342272 Bytes)
  Minimum (constrained by file/snapshot usage):   53.7 TB (53733052317696 Bytes)
  Recommended minimum (if used with macOS):       53.7 TB (53733052342272 Bytes)
  Maximum (constrained by partition map space):   96.0 TB (95959022284800 Bytes)
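Worth noting what those limits numbers say: the minimum equals the current size (so APFS sees nothing to shrink), while the maximum matches the ~96 TB partition map, leaving ~42 TB of headroom the container should be able to grow into. Converting the byte counts to decimal TB as a sanity check:

```python
# Byte counts from the "limits" output above, converted to decimal TB
current = 53_733_052_342_272
maximum = 95_959_022_284_800
print(round(current / 1e12, 1))              # 53.7 TB today
print(round(maximum / 1e12, 1))              # 96.0 TB ceiling
print(round((maximum - current) / 1e12, 1))  # 42.2 TB of headroom
```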
 

maverick808

macrumors 65816
Jun 30, 2004
1,153
134
Scotland
If you are using an SHR then the unit probably just created another RAID volume. So over the disks you might have your original 50TB RAID and then a new 40TB RAID. The Synology would make that look like a single 90TB unit, but really it would be two.

You can check what the RAID setup is on the Synology by ssh'ing in to it and doing 'cat /proc/mdstat' or 'fdisk -l'.

Also, how are you connecting this to your Mac? I didn't think a network mounted SMB share would appear in the diskutil list... are you mounting as AFP?
 

itsthenewdc

If you are using an SHR then the unit probably just created another RAID volume. [...] Also, how are you connecting this to your Mac?
It's all one volume. Storage Manager indicates:
https://imgur.com/P38B2I4

I'm connecting to the RAID by iSCSI. Here's what shows in Disk Utility when connected:
https://imgur.com/Vd9H17c

While in "Partition" mode in Disk Utility, I also can't drag/resize the Free Space area, which I think I'm supposed to be able to do but for some reason cannot:
https://imgur.com/rqbC3pK

I also noticed, when using Paragon's Hard Disk Manager, that the RAID row for some reason doesn't stretch all the way to the right showing the space used like the other drives, as seen here:
https://imgur.com/4Rf5kHe

Something internally knows it has the space, thinks it's using it (hence why resize command won't work), but for some reason isn't actually using it.
 

dsemf

Since limits maximum shows 96TB, you might be able to use an actual value instead of using the zero option.

ds
 

itsthenewdc

Since limits maximum shows 96TB, you might be able to use an actual value instead of using the zero option.

ds
Do I type in the full amount (95959022284800) or do I have to do the difference from current and max?
Code:
diskutil apfs resizeContainer disk6s2 95959022284800
 

dsemf

Do I type in the full amount (95959022284800) or do I have to do the difference from current and max?
Code:
diskutil apfs resizeContainer disk6s2 95959022284800
The man page indicates that the value is the new size.

DS
 

itsthenewdc

Since limits maximum shows 96TB, you might be able to use an actual value instead of using the zero option.

ds
When I run the resizeContainer command with that full value, it comes back with the following error message:
Started APFS operation
Aligning grow delta to 42,225,969,942,528 bytes and targeting a new physical store size of 95,959,022,284,800 bytes
Determined the maximum size for the targeted physical store of this APFS Container to be 95,959,022,284,800 bytes
Error: -69519: The target disk is too small for this operation, or a gap is required in your partition map which is missing or too small, which is often caused by an attempt to grow a partition beyond the beginning of another partition or beyond the end of partition map usable space
 

itsthenewdc

The man page indicates that the value is the new size.

DS
Came across this post - https://apple.stackexchange.com/questions/312182/extend-main-apfs-partition-fails-with-target-disk-is-too-small-for-this-operati?rq=1 - that talks about something that seemed on the right track. I notice the region after partition index 2 in the GPT table is pretty big but is blank in the "contents" column. Since I have little knowledge of this stuff, I'm not sure if this is helpful for troubleshooting or not.

Code:
         start          size  index  contents
             0             1         PMBR
             1             1         Pri GPT header
             2            32         Pri GPT table
            34             6         
            40        409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
        409640  104947367856      2  GPT part - 7C3457EF-0000-11AA-AA11-00306543ECAC
  104947777496   82472599559         
  187420377055            32         Sec GPT table
  187420377087             1         Sec GPT header
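The GPT table above is actually consistent with the limits output earlier, assuming the usual 512-byte sectors: partition index 2 works out to exactly the current 53.7 TB container, and the blank region after it is the ~42 TB the container should be able to grow into.

```python
SECTOR = 512  # assuming standard 512-byte GPT sectors
part2_bytes = 104_947_367_856 * SECTOR  # index-2 partition from the table
gap_bytes = 82_472_599_559 * SECTOR     # blank region after it
print(part2_bytes)                 # 53733052342272 -> the current container size
print(round(gap_bytes / 1e12, 1))  # ~42.2 TB of unallocated space
```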
My only other idea at this time is try 95TB.

DS
Trying that, it returns:
Code:
The size (90TB) must not be greater than the partition map allocatable size
 

dsemf

I have verified that everything works on my test SD card, so it should work for you. However, there is a bit of a size difference between 32G and 96T. Might be time for a support call to Apple.

DS
 

maverick808

It's all one volume. Storage Manager indicates:
https://imgur.com/P38B2I4
That representation of the storage in the web GUI is a gross simplification. Every Synology has an absolute minimum of two RAIDs, because it keeps a RAID 1 across all disks that it uses as a boot volume in case any individual disk dies. It uses Linux mdadm to create software RAIDs, so it wouldn't work unless it could boot off a single disk to get started.

The only way to know for sure what the RAID volumes are on the Synology is with the command line tools. My own unit shows one volume in that GUI, but when I check at the command line I see four. Please check yours, as I am almost certain it will have multiple RAIDs now. Given that it's an SHR, that would be the natural way for it to expand the volume: by creating a second RAID.

I'm connecting to the RAID by iSCSI. Here's what shows in Disk Utility when connected:
https://imgur.com/Vd9H17c
I strongly caution you not to use macOS's disk tools to make any changes to your Synology drives. Quite simply, they likely don't understand the layout of the drives and could easily make incorrect changes that result in the loss of your volume. I would only use the Synology itself to attempt any changes. I mean, it's a btrfs volume and Disk Utility is trying to do APFS operations on it! In a way you are very lucky it has refused to make changes, because if it had gone ahead and made any, I imagine you would have lost the volume.

While in "Partition" mode in Disk Utility, I also can't drag/resize the Free Space area, which I think I'm supposed to be able to do but for some reason cannot:
https://imgur.com/rqbC3pK
It's because that first RAID is 54TB and the unit has created a second RAID with the new space. Check by ssh'ing into the Synology and doing 'cat /proc/mdstat'. I'm certain you'll see more than one listed, I bet you see 3 or 4 separate RAIDs.
 

itsthenewdc

It's because that first RAID is 54TB and the unit has created a second RAID with the new space. Check by ssh'ing into the Synology and doing 'cat /proc/mdstat'. I'm certain you'll see more than one listed, I bet you see 3 or 4 separate RAIDs.
When I run that command, this is what I get..
Code:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda5[0] sdja5[8] sdjb5[9] sdjc5[10] sdjd5[11] sdje5[12] sdh5[7] sdg5[6] sdf5[5] sde5[4] sdd5[3] sdc5[2] sdb5[1]
      93710345472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/13] [UUUUUUUUUUUUU]
      
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4] sdf2[5] sdg2[6] sdh2[7]
      2097088 blocks [8/8] [UUUUUUUU]
      
md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3] sde1[4] sdf1[5] sdg1[6] sdh1[7]
      2490176 blocks [8/8] [UUUUUUUU]
      
unused devices: <none>
I don't quite understand it, but it seems md2 is my RAID. It's RAID 5, the number of blocks seems to indicate the amount of storage on it, and [13/13] would represent the 13 drives across the two units. Synology itself doesn't seem to have any tools that deal with anything past volume creation; even Synology support said I had to partition it in macOS to see the new space. Does this information help troubleshoot the issue at all?
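For what it's worth, the md2 block count lines up with the ~96 TB figure the Mac sees: /proc/mdstat reports counts in 1 KiB blocks, so converting:

```python
# /proc/mdstat block counts are 1 KiB blocks
blocks = 93_710_345_472              # md2 from the output above
size_bytes = blocks * 1024
print(round(size_bytes / 1e12, 2))   # ~95.96 decimal TB -> the ~96 TB array
```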
 

maverick808

When I run that command, this is what I get.. [...]
I stand corrected. md0 and md1 are boot/core RAIDs and are small. That md2 one at the top is indeed one big RAID. So it didn't create a new RAID; it just expanded your existing one. It looks like it's configured the way you want.

What if you do 'df -h' at that command line? That will list your mounts, and the second column will show the total size of each volume. Does that show 90+TB as the total size?
 

itsthenewdc

What if you do 'df -h' at that command? That will list your mounts and the second column will show the total size of the drive. Does that show 90+TB as being the total size?
It shows 84T, which is the size Synology's calculator keeps reporting. When connected via macOS, the size is always calculated as more. But either way, it is showing the new/expanded size.

Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        2.3G  980M  1.3G  45% /
none            3.9G     0  3.9G   0% /dev
/tmp            3.9G  1.4M  3.9G   1% /tmp
/run            3.9G  4.4M  3.9G   1% /run
/dev/shm        3.9G  4.0K  3.9G   1% /dev/shm
none            4.0K     0  4.0K   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
/dev/vg1000/lv   84T   49T   36T  58% /volume1
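Part of the "84 vs 90+" confusion here is likely just units: Linux df -h prints binary prefixes (1T = 2^40 bytes), while macOS's diskutil counts decimal TB (10^12 bytes). Using the md2 size from earlier in the thread:

```python
# md2 size from /proc/mdstat (1 KiB blocks), expressed both ways
array_bytes = 93_710_345_472 * 1024
print(round(array_bytes / 10**12, 1))  # 96.0 "TB" as diskutil counts it
print(round(array_bytes / 2**40, 1))   # 87.3 "T" as df -h counts it
# df's 84T for /volume1 is the LVM volume, after LVM/filesystem overhead
```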
 

maverick808

Well this has me stumped. If it shows up right on the Synology command line then I would guess that it is actually set up perfectly fine now and for some reason macOS is reporting the size incorrectly.
How about if you mount the volume with SMB or AFP? If you do that and then do 'df -h' on your Mac, what does the size show as then? I'm thinking perhaps it is the iSCSI mounting that might be causing it, so it would be worth checking that to rule it out.
 

itsthenewdc

Well this has me stumped. If it shows up right on the Synology command line then I would guess that it is actually set up perfectly fine now and for some reason macOS is reporting the size incorrectly.
Yeah, it saddens me that all the solutions people say are supposed to work aren't working for some reason. I guess I'm left with having to buy another full RAID system to perform a backup, format, restore, and then return the RAID (I hate being THAT guy).
 

maverick808

Yeah, that might be the way to go I'm afraid. Sorry to hear that this hasn't worked out.

By the way, if you ever do this again, I'd recommend setting up the expansion disks as a separate RAID, as it's less risky in terms of potential data loss. Having a RAID spread over both the internal disks and the external disks greatly increases risk, because all it takes is for the external unit to have its cable disconnected or be powered down and you could potentially lose the entire RAID. RAID 5 can withstand only a single disk failure, and the external unit being disconnected is obviously more than a single disk being lost. So it's safer to have the external be a completely separate RAID, its own thing (when first setting up the expansion, pick "create new volume" rather than "expand the current").
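To put rough numbers on that reasoning: with RAID 5, usable space is (n - 1) drives and exactly one member may fail. Assuming 8 TB drives (hypothetical, but consistent with the ~96 TB array in this thread), losing the 5-bay expansion unit drops five members at once:

```python
n_drives, drive_tb = 13, 8       # hypothetical 8 TB drives across both units
usable_tb = (n_drives - 1) * drive_tb
print(usable_tb)                 # 96 TB usable in a 13-drive RAID 5

tolerated_failures = 1           # RAID 5 survives exactly one failed member
lost_if_unplugged = 5            # the 5-bay expansion holds 5 of the 13 drives
print(lost_if_unplugged > tolerated_failures)  # True -> array goes offline
```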
 

itsthenewdc

By the way, if you do this again ever I'd recommend setting up the expansion disks as a separate RAID [...]
I get the safety aspect of that, but I really liked the idea of expanding (it's why I bought this setup in the first place) so I didn't have to worry about which client project is on which drive, running out of ports, etc. I do film production, and 4-8K footage takes up quite a bit of space. Some client folders can be TBs in size on their own, and I don't want to have to move them back and forth between drives as they grow and drive space runs out.
 

maverick808

macrumors 65816
Jun 30, 2004
1,153
134
Scotland
I get the safety aspect of that, but I really liked the idea of expanding [...]
Totally get that, as long as you have backups that's a good way to work.