
Situation:

My main machine is a 2021 MacBook, obviously running APFS.

I have a small Windows SFF machine on the network that I want to use maybe 75% of the time as a NAS/Plex server and to quickly back up files, possibly using Rclone or something else.

Now, I know of course that Samba (SMB) is supposed to move files seamlessly between operating systems when you transfer over network shares, but I have a few concerns:

1. Is SMB efficient? Should I be backing up via another method to a little SFF machine?
2. Obviously Windows 11 runs NTFS. SMB can write to it, but I worry that there may be minor file-naming issues that don't translate correctly between APFS and NTFS (for example, leading periods in filenames, slashes, etc.). I've heard of others having issues. I'm considering dual-booting with Linux and running a file system that might be more compatible with APFS, like exFAT or ext4.

I know Synology and others typically run Btrfs or similar, but I'm not sure whether they use SMB for writes on the local network.

I don't own a Mac mini or similar, so APFS on the remote system is not an option.
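
One quick way to test concern #2 for yourself, once the share is mounted (the path here is a placeholder): create a few files with names that are legal on APFS but questionable on NTFS, copy them over, and see what survives. Note that macOS's SMB client may quietly remap some illegal NTFS characters rather than failing outright, which is exactly the kind of subtle behavior worth checking.

```
touch ".leading-dot" "trailing-dot." "colon:name" "question?mark"
cp ".leading-dot" "trailing-dot." "colon:name" "question?mark" /Volumes/share/
ls -la /Volumes/share/   # watch for failures or silently mangled names
```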
 
Try making a new sparse bundle disk image on the remote machine. Make sure it's in APFS format. When you want to back up from the MacBook, mount that remote image and back up to it.

With a disk image on the remote machine, it won't matter what its actual native disk format is. It could even be FAT32, and the disk image will still have the APFS format.

You can also enable encryption on the disk image, and it will still work regardless of the remote machine's native file system.
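
For anyone who prefers Terminal, the same image can be created with hdiutil. A minimal sketch, assuming the Windows share is already mounted at /Volumes/backup (the name, size, and path are placeholders):

```
hdiutil create -type SPARSEBUNDLE -fs APFS -size 100g \
    -volname "MacBackup" /Volumes/backup/MacBackup.sparsebundle
```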
 

Is this the correct workflow?
1. Make the Windows folder shareable over the network via SMB
2. Use macOS Disk Utility to create a sparse bundle image over the network (Disk Utility > File > New Blank Image)
3. From the laptop, mount that image over the network?
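
A rough CLI equivalent of steps 1-3, for reference (server, share, and image names are placeholders; step 1 still happens on the Windows side, and creating the image itself is sketched above):

```
open 'smb://windows-box/share'                         # mounts the share under /Volumes/share
hdiutil attach /Volumes/share/MacBackup.sparsebundle   # step 3: mount the image
hdiutil detach /Volumes/MacBackup                      # when the backup is done
```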

My other main concern is making sure that files would be readable by Plex and the Amazon Fire TV connecting to that SFF. I think running APFS might hose that?
 
Start by connecting to the remote Windows machine from the Mac.

Then launch Disk Utility.

In Disk Utility, create a new disk image, and navigate to the connected Windows machine.

One of the options in the creation dialog should be for a sparse bundle disk image. Choose that.

Set the format of the disk image to APFS. If it wants a size, pick a size suitable to your Windows machine's capacity.

A sparse bundle only takes space as needed. So if you create it with a max size of 100 GB, it won't actually need that much space until you have 100 GB of data on it.
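
One caveat worth knowing: a sparse bundle grows on demand but doesn't shrink automatically when you delete files from the mounted image. If that matters, hdiutil can reclaim the space. A sketch, with a placeholder path (the image must not be mounted when you compact it):

```
hdiutil compact /Volumes/share/MacBackup.sparsebundle   # add -batteryallowed on a laptop running on battery
du -sh /Volumes/share/MacBackup.sparsebundle            # on-disk size after compacting
```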


 
Right, so I think this works for backing up files from my MacBook, but if I also have that little SFF connected to a TV to use as an HTPC:

1. Windows 11 on that machine won't be able to play back those files, since it can't mount the image, right?
2. My Fire TV 4K won't be able to mount or read that image, right?

I'm pretty sure this works well as a remote backup/Time Machine-type solution, but not for more general file access.
 
Correct on all counts.

The reason it won't work for "general file access" is that the disk image contains an actual APFS file system. If machine X (where X = Windows, Linux, whatever) can't read APFS, then it has no hope of reading the files on the APFS disk image.
 
Yeah, probably not a solution I'll go with then. I think I'll either just do a network share with Windows+NTFS, or maybe let a Linux distro serve exFAT in the same sharing scenario.
 
I'm running Time Machine on my MBP to an SMB network share served by my Linux box. I created a dedicated SMB share just for Time Machine. If you select that disk when adding a Time Machine drive, the Mac will automatically create a sparsebundle image on the remote drive. You don't have to do anything manually; the Mac will just do it.
On the Linux side you will see a directory containing the sparsebundle image. You won't be able to see the Mac files within the disk image, only the underlying files that make up the image.
There's some magic to the settings needed in Samba and Avahi on the Linux side to get the SMB service to show up on the Mac as a supported drive (something like the snippet below). But once that is done, the Samba mount is seamless on the Mac, and it seems fast enough to me. In my case, the Linux side is serving a directory on a RAID-5 hard-disk volume.
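
The "magic" is roughly the vfs_fruit settings in smb.conf. A sketch of the relevant pieces (share name, path, and user are placeholders; you'd also want Avahi advertising the share so it shows up automatically in the Finder sidebar):

```
[global]
   # Apple SMB2 extensions; needed for Time Machine over Samba (4.8+)
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:model = MacSamba

[timemachine]
   path = /srv/timemachine
   valid users = youruser
   read only = no
   fruit:time machine = yes
   fruit:time machine max size = 1T
```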

This is different from sharing files, though. If you want to share files between your Mac and other devices on the network, you just map the share drive and start copying files to it. It doesn't matter what filesystem is used on the network server side; SMB is the protocol the network clients use to access the files. If you did it to a Linux SMB share, you'd be using some native Linux filesystem on the server side (ext4, etc.).
 
Is your goal backup AND general file sharing?
Correct. The "general file sharing" goal is to have the media files (H.265 video) available via Plex, via direct access from the Fire TV 4K stick, or via playback in VLC right on the device connected to a TV.
 
A couple of things I want to chime in on here:

A file system is not the same as a protocol. APFS/exFAT/NTFS/ext4 are all local file systems. SMB, NFS, and FTP are all remote filesystems that employ a network protocol. As a result, SMB does not write locally.

And there are two important parts of a file that each filesystem cares about: data and metadata. Data gets no translation of any sort between filesystems; it is just written as blocks. The metadata is usually where things get messed up.

There are three places where metadata can get messed up. First is the translation from the local filesystem (APFS in your case) to the remote filesystem (SMB). Second is when the data is transferred between the two computers over SMB; this is the most common place for errors. Third is when the server translates from SMB to its local filesystem (NTFS or ext4 or whatever, depending on your server OS).

Here's why step two is the primary issue. Most OS developers can easily manage the testing of what runs locally on the machine, so the translation of metadata from the local FS to the remote FS is usually pretty clean; all the code is in their control. When communicating with another OS, you have to trust that the other OS is respecting the protocol specification (SMB in this case), and even when both sides are doing that, two people might read the spec differently and implement it differently. So there can be hiccups.
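
A concrete way to see whether metadata survives the round trip (paths are placeholders): tag a file with an extended attribute, copy it to the share, and check whether the attribute is still attached on the other side.

```
touch /tmp/probe.txt
xattr -w com.example.test "hello" /tmp/probe.txt   # attach a throwaway xattr
cp /tmp/probe.txt /Volumes/share/                  # macOS cp copies xattrs by default
xattr -l /Volumes/share/probe.txt                  # empty output = metadata was dropped
```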

So what's the best bet? Use the company that wrote the standard. For NFS that was Sun (back in the day). Same for FTP. SFTP came from the OSSF (mostly). AFP came from Apple (and is now gone), and SMB from Microsoft. The Microsoft SMB server is the gold standard for interoperability because MS wrote the spec; it's the platform folks test against.

Regarding the translation of each OS to and from the remote filesystem protocol: use the most widely used local file system on that machine, for both the best performance and the best compatibility. That means ext4 on Linux and NTFS on Windows. These will be the most tested.

Regarding the performance of SMB: the latest version (SMB 3) is very efficient over high-bandwidth/low-latency networks, reasonably efficient over high-bandwidth/high-latency networks, and pretty poor in the worst-case scenario of low bandwidth and high latency (its exponential backoff is too aggressive, in my experience). In practice, that means you're going to do best over a wire and second best over a Wi-Fi network.
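
If you want numbers for your own network, a crude sequential test through the mounted share (path is a placeholder) will show the wire-vs-Wi-Fi difference quickly:

```
dd if=/dev/zero of=/Volumes/share/test.bin bs=1m count=1024   # ~1 GB write
dd if=/Volumes/share/test.bin of=/dev/null bs=1m              # read it back
rm /Volumes/share/test.bin
```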
 
Thanks for the detailed reply. I think I'll stick with SMB3 to Windows NTFS for now, then. I think it'll be reasonably safe, since I don't think I'm doing anything wild with metadata or file naming.
 
I'll give you a bit of an inside scoop from a decade ago. I worked at MS on the NFS team (did you know MS had an NFS client?), and we shared a lot of plumbing with the SMB team. We pushed hard on the SMB team to start testing with Linux boxes because the compatibility mattered (NFS has always pushed compatibility). Eventually they added Macs too (which is part of the reason Apple dropped AFP). Once the EU forced the spec to be open, everyone was able to implement it, good things happened on both sides, and we ended up with a solid, well-supported protocol. NFSv4 was a competitor, but SMB was already so widespread that NFSv4 kinda fell by the wayside.

As a side effect, us NFS folks got to give the SMB folks crap for all the bugs compat testing turned up. There was always a fun rivalry there, something to the effect of "our protocol is better than yours," but I guess the SMB folks came out on top in the end 🤣
 
In general, then, would you say SMBv3 is the way to go for how I'm using it? Or would you use anything else for backing up from macOS to a Windows OS?

For example, I'm curious how Synology and other NASes connect on local networks to back up. Are they also using SMB to copy, or something proprietary with direct socket connections/WebDAV/etc.?
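
For what it's worth, since Rclone came up at the start of the thread: once the share is mounted, a scripted backup can be as simple as syncing to it. A minimal sketch (both paths are placeholders):

```
rclone sync ~/Documents /Volumes/share/backup/Documents --progress
```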
 

Which partition type should I use for the sparse bundle image?
CD/DVD
Single partition - GUID partition map < I think this one?
Single partition - Apple partition map
Single partition - Master boot record partition map
No partition map


And which read/write options?
sparse bundle disk image < this one I think? It just sounds more efficient
sparse disk image
read/write disk image
DVD/CD master

I think I have a good plan here to split the drive:
1) An Apple encrypted sparse bundle disk image for my backups and sensitive stuff that doesn't need media playback
2) A regular NTFS file store for movies, for playback
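
Part 1 of that plan from Terminal, for reference. A sketch with placeholder size, name, and path; -stdinpass reads the passphrase from stdin instead of popping a dialog:

```
hdiutil create -type SPARSEBUNDLE -fs APFS -size 200g \
    -encryption AES-256 -stdinpass \
    -volname "Private" /Volumes/share/Private.sparsebundle
```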
 
If you're going with @chown33's idea for the backups, then yes, you've picked the correct choices. At least, those are what I would use for this application. The others should work, but (a) Apple seems to like GUID, and (b) I've had excellent, reliable results with sparse bundle disk images -- they seem well supported, and they won't use unnecessary space.
 
So, dumb question: is there any reason I need a partition map at all in this use case? Or would "no partition map" make sense?
 
If you make a disk with no partition map, you can have no partitions. If you have no partitions, you can't create a volume. If you can't create a volume, you have nowhere to store files. You need that last one for a backup 😁.

No partition map gives you what's called a "raw" disk. Raw disks aren't very useful for most people; the places they come in handy are when setting up a RAID or handling transient streaming data. GUID Partition Table/Map (commonly called GPT) is the standard used by all major OSes these days. APM is Apple's old partitioning scheme, and MBR is Microsoft's.

It's crazy, but between disk partitioning and USB-C the world of technology might be starting to agree on some things.
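
The same choices appear in hdiutil's -layout flag if you script the image creation. A sketch; the layout names here (GPTSPUD for a single partition with a GUID map, NONE for no partition map) are as I recall them from `hdiutil create -help`, so check them on your macOS version:

```
hdiutil create -type SPARSEBUNDLE -fs APFS -size 100g \
    -layout GPTSPUD -volname "MacBackup" MacBackup.sparsebundle
```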
 
Thanks, that was my rough understanding too. I learned the difference between a sparse disk image and a sparse bundle disk image!
 
No partition map gives you what's called a "raw" disk. Raw disks aren't very useful for most people; the places they come in handy are when setting up a RAID or handling transient streaming data.
Thanks, I learned something!
It's crazy, but between disk partitioning and USB-C the world of technology might be starting to agree on some things.
We can hope!
 