As we have been saying, any benchmark that pushes thin-and-light machines to their thermal limits will land in the same ballpark. I'm genuinely surprised that intelligent adults can't figure this out, even people who claim to be computer experts (or rather, clickbait masters).
But if your machine is not hitting its thermal limits, because you're running apps that are actually suited to thin-and-light computers, then the results look like this:
Thin-and-light machines are not meant for hours-long, high-power CG and 4K/8K rendering. Nobody does that on them in the professional fields.
I guess the question then becomes how many of these short-run (~4-5 minute) tasks you end up running in a day, since longer-running tasks or heavy multi-tasking will presumably end up being a wash: they soak up the CPU and/or GPU, leading to similar performance on the 2.6 vs. the 2.9 (according to all the benchmarks and real-world tests I've seen so far).
Let's say you run 6-8 of these 4-5 minute tasks per day and save 11-13 seconds on each one. Over an 8-hour day, that's only about 1-2 minutes saved. (I'm specifically talking about 2.6 GHz vs. 2.9 GHz.) I don't do photo editing for a living, so I can't really speak to whether 6-8 such tasks per day is a good estimate, or whether essentially 100% of your tasks fall into this category, in which case you'd literally be shaving roughly 4-5% of the time off your entire work day. If the latter, then maybe the i9 makes sense: it's like giving yourself a 4-5% raise, if you do fixed-bid projects.
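To make that arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. The task counts, task length, and per-task savings are just the hypothetical numbers from the paragraph above, not measurements from any of these machines.

```python
# Back-of-the-envelope: how much time does the faster chip actually save per day?
# All inputs are hypothetical numbers from the discussion above, not measurements.

WORKDAY_HOURS = 8
SECONDS_SAVED_PER_TASK = 12        # midpoint of the 11-13 s range
TASK_LENGTH_SECONDS = 4.5 * 60     # a ~4-5 minute task

def daily_savings(tasks_per_day: float) -> float:
    """Total seconds saved per day for a given number of short tasks."""
    return tasks_per_day * SECONDS_SAVED_PER_TASK

# Scenario 1: only a handful of these tasks per day (6-8).
few = daily_savings(7)
print(f"~7 tasks/day: {few / 60:.1f} minutes saved")              # ~1.4 min

# Scenario 2: the whole day is nothing but these tasks.
tasks_all_day = WORKDAY_HOURS * 3600 / TASK_LENGTH_SECONDS        # ~107 tasks
all_day = daily_savings(tasks_all_day)
print(f"all-day tasks: {all_day / 60:.1f} minutes saved "
      f"({all_day / (WORKDAY_HOURS * 3600):.1%} of the workday)")  # ~4-5%
```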
My workflow tends to be ~80% editing C# code plus running scenes in the Unity Editor, with only the occasional build out to an Xcode project and then from there to the iPhone. Given that, in everything I've seen so far, the i7 2.6 GHz performs similarly to the 2.9 GHz in CPU+GPU-intensive tasks and in Xcode compile tasks, the i9 didn't seem worth it.
Still awaiting other developer-oriented testing to solidify my thinking, but those types of tests are unfortunately few and far between. In the meantime, I bit the bullet and ordered the i7 2.6 GHz model, and spent the savings on a good TB3 dock.
I have not tested C1 yet, but going from my top-spec 2017 to my new i9 2018, I am seeing significant improvements across the board. To set the stage: I have been doing nothing else for a living but high-end commercial advertising and editorial photography for some 30 years. On the digital end, I work with high-res Nikon and Hasselblad files, often stitched for large displays.
I did one simple test to decide whether I would keep the new 2018 over the 2017: converting 100 high-ISO Nikon D850 raw files from NEF to high-quality JPEG in LR CC, with normal adjustments for white balance, saturation, sharpness, etc. The results, in minutes:seconds, are as follows:
2017 15" i7 3.1ghz/16gb ram/560 4gb gpu/ 2tbSSD, 6:15
2018 15" i9 2.9ghz/32gb ram/560 4gb gpu/ 2tbSSD, 4:20
iMac Pro 10 core 3.0ghz/128gb ram/16gb gpu/ 2tbSSD, 1:45
As you can see, the 2018 is a significant improvement over the 2017 in export times from RAWs to client deliverables, and it now makes a perfect backup and mobile pairing for the powerhouse iMac Pro. I personally don't care what some YouTube wannabe says: clock speed and cores rule the day on exports, and RAM and GPU rule the day for everything else.
The main focus of this thread is on the 2.2 vs 2.6 vs 2.9 models, though. If the i7 2.6 GHz did that conversion in, say, 4:33, would it be worth it to you to spend the extra $300 to save 13 additional seconds? How many 13-second savings would you get in your typical day?
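To put rough numbers on that question, here's a small sketch using the export times posted above (6:15 and 4:20) alongside the hypothetical 4:33 figure for the i7 2.6 GHz; the batches-per-day value is likewise an assumption to be replaced with your own workload.

```python
# Rough comparison of the posted LR CC export times (100 D850 NEFs -> JPEG).
# The 4:33 time for the i7 2.6 GHz is the hypothetical from the question above,
# not a measurement; batches_per_day is also just an assumption.

def to_seconds(mmss: str) -> int:
    """Convert an 'm:ss' string to total seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

times = {
    "2017 i7 3.1 GHz": to_seconds("6:15"),
    "2018 i9 2.9 GHz": to_seconds("4:20"),
    "2018 i7 2.6 GHz (hypothetical)": to_seconds("4:33"),
}

baseline = times["2018 i7 2.6 GHz (hypothetical)"]
for name, t in times.items():
    print(f"{name}: {t} s ({baseline / t:.2f}x relative to the hypothetical i7 2.6)")

# Marginal value of the i9 over the hypothetical i7 2.6:
saved_per_batch = baseline - times["2018 i9 2.9 GHz"]   # 13 s per 100-file export
batches_per_day = 5                                      # assumption -- plug in your own
print(f"{saved_per_batch} s per batch x {batches_per_day} batches/day = "
      f"{saved_per_batch * batches_per_day / 60:.1f} min/day for the extra $300")
```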