Those non-geeks are the same people who would buy 8/256 and then lean on swap because they like to leave tabs and programs open. Unlike Windows, macOS doesn't quit an app just because you closed its last window.
This is the problem with people taking one benchmark, essentially a Blackmagic throughput test, and extrapolating all kinds of real-world usage impacts from it.
The difference in throughput for massive sequential reads is roughly 1.5GB/s versus 3GB/s. There's less of a difference for writes, which happen in the background anyway for swapping. That means that if changing tabs requires reading a contiguous 1GB back from swap, it'll take about 0.3 seconds longer (1GB at 1.5GB/s is ~0.67s; at 3GB/s it's ~0.33s).
Let's count to 0.3 together, shall we? Ready? One, two... dang, let's try that again...
But how much is actually being swapped because you have a tab or unused application open? If I look at my Activity Monitor, I see a rare few processes that might have as much as 300MB swapped out. So it would take 0.1 seconds longer to read that back from swap.
Except that's not how virtual memory works on macOS-- long before the system starts paging to disk, it starts compressing memory, and only when it has to does it page the compressed memory to disk. When I look at the compressed VM size in Activity Monitor, the biggest I'm seeing is on the order of 70-80MB. So about a 25 millisecond difference.
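To put rough numbers on all three cases (the only measured inputs here are the two sequential-read figures; the sizes are just my Activity Monitor eyeballing):

```python
# Extra time to read a blob back from swap on the slower drive,
# using the sequential-read figures above. Sizes are Activity
# Monitor eyeballing, not measurements.

SLOW_GBPS = 1.5   # 256GB SSD, sequential read
FAST_GBPS = 3.0   # larger SSD, sequential read

def extra_seconds(size_gb):
    """Added read time on the slower drive for a contiguous read."""
    return size_gb / SLOW_GBPS - size_gb / FAST_GBPS

for label, size_gb in [("1GB contiguous read", 1.0),
                       ("300MB swapped-out app", 0.3),
                       ("75MB of compressed VM", 0.075)]:
    print(f"{label}: +{extra_seconds(size_gb) * 1000:.0f}ms")
# 1GB contiguous read: +333ms
# 300MB swapped-out app: +100ms
# 75MB of compressed VM: +25ms
```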
For reference, on a 60Hz display a single frame lasts about 16.7 milliseconds. How often are your non-geek users switching tabs and apps such that they'd even notice the added disk access time slowing down their day?
Of course, the VM doesn't page out the entire memory space at once; it does it 16KB at a time, based on its estimate of which pages are least likely to be accessed soon. So how much of the above analysis is even relevant? What do we know about small random accesses, which are dominated by latency rather than throughput? Nothing that I'm aware of.
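Just to illustrate why that matters, here's the same 75MB of compressed VM read back at page granularity with an assumed per-access latency-- the 100µs figure is an invented round number, not a measurement of either drive:

```python
# Illustrative only: if swap-ins happen 16KB at a time, per-access
# latency dominates. The 100µs latency is an invented round number,
# not a measurement of either drive.

PAGE = 16 * 1024
COMPRESSED_VM = 75 * 1024 * 1024       # ~75MB, from Activity Monitor
LATENCY = 100e-6                       # assumed seconds per random 16KB read

pages = COMPRESSED_VM // PAGE          # 4800 page-ins
latency_cost = pages * LATENCY         # identical on both drives if latency matches
transfer_slow = COMPRESSED_VM / 1.5e9  # pure transfer time at 1.5GB/s
transfer_fast = COMPRESSED_VM / 3.0e9  # pure transfer time at 3GB/s

print(f"latency cost:  {latency_cost * 1000:.0f}ms")    # 480ms either way
print(f"transfer slow: {transfer_slow * 1000:.0f}ms")   # 52ms
print(f"transfer fast: {transfer_fast * 1000:.0f}ms")   # 26ms
```

If the two drives have similar random-read latency, that latency cost dwarfs the throughput gap entirely-- but we simply don't have that number for these drives.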
And that's the point. Using a throughput benchmark to make generalizations about the impact on Granny's Chrome performance when she forgot to force-quit Pages is senseless.
The challenge for swap isn't too many browser tabs or unused applications that you haven't closed-- that's an access rate measured in human time. The challenge for swap is when a single application is processing a single dataset too large for memory and needs to constantly thrash to disk to access all of it-- this is an access rate measured in CPU time.
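A sketch of that contrast, with made-up but plausible inputs (the per-page cost and the touch rate are assumptions, not measurements):

```python
# Two regimes of swap pressure, with made-up but plausible inputs.

PAGE = 16 * 1024
LATENCY = 100e-6   # assumed seconds per page-in, as above

# Human-time regime: a 300MB app gets swapped back in once a minute.
granny_pages_per_s = (300 * 1024 * 1024 / PAGE) / 60   # ~320 page-ins/s
granny_stall = granny_pages_per_s * LATENCY             # fraction of wall time

# CPU-time regime: one app randomly walking a dataset twice the size
# of RAM, so ~half its touches fault; assume 1M page touches/s demanded.
thrash_pages_per_s = 1_000_000 * 0.5
thrash_stall = thrash_pages_per_s * LATENCY

print(f"granny: {granny_stall:.1%} of wall time waiting on swap")  # 3.2%
print(f"thrash: {thrash_stall:.0%} demanded -> purely disk-bound") # 5000%
```

In the first regime swap is effectively invisible; in the second, the demanded fault rate is orders of magnitude beyond what any SSD can service, and the machine is disk-bound no matter which drive it has.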
So let's not go too crazy over the implications of the slower throughput of the 256GB SSD. It's a pretty complex interaction of factors that drive performance-- throughput, latency, access patterns, free space in the flash, erase times, and user workflow. And the tradeoffs are different for what you'd expect a 256GB drive and a 2TB drive to be used for.