

Some performance observations re: Fortnite Battle Royale on macOS and Win10
#1
Posted 16 January 2018 - 08:11 AM
We have Google Fiber gigabit internet, which is hard-wired straight into my Gigabit-equipped iMac.
The reason I tried out the Windows version is that my initial experience running the game in macOS was so negative that it had me concerned that maybe my Mac was too long in the tooth to play it well. What I found out instead was a little different.
Here are my observations after playing Fortnite on both OSes:
1. The loading screens and transitions are much smoother and more reliable in Windows. In High Sierra, I experience random errors (mostly "failed to join game" pop-ups). Sometimes I would have to quit out of Fortnite and then out of the Epic Launcher, both of which are laborious and slow. At one point, while we were waiting to join a game in High Sierra, my son asked me if my Mac had frozen.
2. After playing around with the settings to achieve the best results, I learned that the game will only play at the lowest settings at 640x480 in High Sierra, with a frame rate that varies wildly between 5 and 60 FPS, even on loading screens. On Windows, I can run at 720p with all settings on lowest (except for draw distance, which I max out, for obvious reasons). These settings give me a frame rate of between 40 and 60 FPS. Even with the in-game frame cap turned up to 120 FPS, we noticed no difference in measured frame rate.
3. Once in-game, the differences between the two OSes mostly melted away, giving an enjoyable playing experience. The only hitch I encountered was a slight stutter when scoping in with the game's sniper rifle, an action which was smooth in Windows.
4. Regarding menu performance under High Sierra: one thing I quickly learned not to do is to bring up the menu to change graphics settings once in-game. Doing so introduced the same transition slowness I noticed in #1 above. Sometimes the game would freeze up entirely and I was forced to command-tab out of it and restart my Mac.
5. Please note that I am not shocked that a Mac of this vintage has problems playing the game. I am also fully aware that the game is in Early Access, which means it's not fully optimized. What I hope to do is present my son's and my experiences playing the game. The differences between the performance in Windows 10 and macOS High Sierra are disappointing, and I can only hope that further Metal development brings performance gains more on par with Windows 10.
#2
Posted 16 January 2018 - 02:16 PM
Irishman, on 16 January 2018 - 08:11 AM, said:
The differences between the performance in Windows 10 and macOS High Sierra are disappointing, and I can only hope that further Metal development brings performance gains more on par with Windows 10.
While there is life there is hope! But life is short... NVIDIA drivers are unfortunately bad on macOS. I think it's a bit better with AMD, but Windows still has better performance.
#3
Posted 16 January 2018 - 09:22 PM
Irishman, on 16 January 2018 - 08:11 AM, said:
5. Please note that I am not shocked that a Mac of this vintage has problems playing the game. I am also fully aware that the game is in Early Access, which means it's not fully optimized. What I hope to do is present my son's and my experiences playing the game. The differences between the performance in Windows 10 and macOS High Sierra are disappointing, and I can only hope that further Metal development brings performance gains more on par with Windows 10.
The lack of VRAM on that GPU is really going to hurt on macOS: we use more VRAM on macOS, so we'll be paging a lot more between CPU & GPU, which is bad. That's a necessary evil to avoid other inefficiencies caused by trying to map a D3D11-oriented engine on to the Metal API.
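To make the paging point concrete, here is a minimal Swift/Metal sketch (illustrative only, not engine code; the sizes are invented). A GPU-private resource occupies VRAM and has to be filled by an explicit blit from CPU-visible memory; when VRAM is oversubscribed, the driver ends up shuffling those private resources back and forth, which is the paging cost described above.

```swift
import Metal

// Minimal sketch: CPU-visible vs GPU-private storage in Metal.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else { fatalError("No Metal device") }

let length = 4 * 1024 * 1024 // 4 MB of example data (size invented)

// .shared memory is visible to the CPU; .private memory is GPU-only
// (VRAM on a discrete GPU) and is where render resources want to live.
let staging = device.makeBuffer(length: length, options: .storageModeShared)!
let gpuOnly = device.makeBuffer(length: length, options: .storageModePrivate)!

// Filling a private resource takes an explicit CPU -> GPU copy. The more
// VRAM a game needs, the more of this traffic (and driver paging) occurs.
let cmd = queue.makeCommandBuffer()!
let blit = cmd.makeBlitCommandEncoder()!
blit.copy(from: staging, sourceOffset: 0, to: gpuOnly, destinationOffset: 0, size: length)
blit.endEncoding()
cmd.commit()
```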
Irishman, on 16 January 2018 - 08:11 AM, said:
1. The loading screens and transitions are much smoother and more reliable in Windows. In High Sierra, I experience random errors (mostly "failed to join game" pop-ups).
Behind the loading screen on macOS it will be doing a heck of a lot more shader compilation than it has to under Windows, and unfortunately that is currently a highly serial, single-threaded process.
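As a rough illustration of where that loading-screen time goes (a sketch, not the engine's actual code), each pipeline-state creation in Metal is a full shader compile, and a plain loop over them is exactly the serial, single-threaded pattern described above. Metal does offer a completion-handler variant that lets compiles overlap:

```swift
import Metal

// Blocking pattern: each makeRenderPipelineState call is a full compile,
// so a loop like this serialises all of the loading-screen work.
func warmPipelines(device: MTLDevice, descriptors: [MTLRenderPipelineDescriptor]) {
    for desc in descriptors {
        _ = try? device.makeRenderPipelineState(descriptor: desc)
    }
}

// Asynchronous variant: the driver can run these compiles concurrently,
// one way an engine could overlap the work instead of serialising it.
func warmPipelinesAsync(device: MTLDevice, descriptors: [MTLRenderPipelineDescriptor]) {
    for desc in descriptors {
        device.makeRenderPipelineState(descriptor: desc) { state, error in
            // Hand `state` to the engine's pipeline cache here (not shown).
        }
    }
}
```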
Irishman, on 16 January 2018 - 08:11 AM, said:
2. ...the game will only play at the lowest settings at 640x480 in High Sierra, with a frame rate that varies wildly between 5 and 60 FPS, even on loading screens.
The varying frame-rate will stabilise somewhat if you play on the same build for long enough, as the local shader cache builds up entries. The price is longer load times, of course. Metal (like both D3D12 & Vulkan) exposes the developer to the reality of how shaders are actually compiled for GPUs and expects developers to optimise around this. Unfortunately most engines are built on the abstraction provided by D3D11, which inherited the D3D9 model of separate shaders with near-zero runtime compilation cost, achieved by the driver vendors investing the man-hour equivalent of tens of millions of dollars to aggressively optimise their runtime shader compilers and make their D3D drivers fully asynchronous (so games don't block when calling D3D) via substantial multi-threading. Often they optimised their shader compilers to generate code that is *trivial* to patch when render-state (like render-target or texture formats) changes, or even outright replaced the game's shaders with their own "specially optimised" versions. The new APIs force/encourage driver vendors to optimise their shader compilers to generate "perfect" GPU shader code, even if it makes shader compilation much slower, as that is now the game developer's problem and not directly the vendor's. That is not to say the vendors don't care - merely that the APIs send you down a particular implementation route.
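A minimal sketch of that caching behaviour (the key here is deliberately simplified; a real engine keys on far more render state than this): the first request for a given state pays the full compile, and later requests are cheap, which is why the frame rate settles as the cache fills.

```swift
import Metal

// Hypothetical pipeline cache keyed on the state that affects compilation.
struct PipelineKey: Hashable {
    let vertexFunction: String
    let fragmentFunction: String
    let colorFormat: MTLPixelFormat
}

final class PipelineCache {
    private var cache: [PipelineKey: MTLRenderPipelineState] = [:]
    private let device: MTLDevice
    private let library: MTLLibrary

    init(device: MTLDevice, library: MTLLibrary) {
        self.device = device
        self.library = library
    }

    func pipeline(for key: PipelineKey) throws -> MTLRenderPipelineState {
        if let hit = cache[key] { return hit } // warm cache: no compile cost

        let desc = MTLRenderPipelineDescriptor()
        desc.vertexFunction = library.makeFunction(name: key.vertexFunction)
        desc.fragmentFunction = library.makeFunction(name: key.fragmentFunction)
        desc.colorAttachments[0].pixelFormat = key.colorFormat

        // Cold cache: this is the expensive compile that causes hitches.
        let state = try device.makeRenderPipelineState(descriptor: desc)
        cache[key] = state
        return state
    }
}
```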
Another cause of fluctuating frame-rates is that D3D11's GPU resource management is also heavily abstracted away from the developer, with the vendor able to do a lot of under-the-hood optimisations as they control the implementation. Metal puts all of that on the game developer, a bit like Vulkan (though the API & semantics are quite different), and right now we aren't as efficient at allocating resources as D3D, which can cause hitches on the CPU.
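For a flavour of the allocation difference (a sketch with invented sizes, not how any particular engine does it): Metal's MTLHeap lets a developer pay for one large driver allocation up front and sub-allocate resources out of it, which is roughly the kind of pooling a D3D11 driver would do for you behind the scenes.

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

// One up-front driver allocation: a 64 MB GPU-private slab (size invented).
let heapDesc = MTLHeapDescriptor()
heapDesc.size = 64 * 1024 * 1024
heapDesc.storageMode = .private
guard let heap = device.makeHeap(descriptor: heapDesc) else { fatalError("Heap creation failed") }

// Sub-allocating from the heap avoids a round trip through the driver's
// allocator for every buffer, unlike creating each resource individually.
let vertexBuffer = heap.makeBuffer(length: 1 << 20, options: .storageModePrivate)
let indexBuffer  = heap.makeBuffer(length: 1 << 18, options: .storageModePrivate)
```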
Irishman, on 16 January 2018 - 08:11 AM, said:
3. The only hitch I encountered was a slight stutter when scoping in with the game's sniper rifle, an action which was smooth in Windows.
That'll be shader compilation or resource allocation, which is more expensive on Metal than D3D. If the game plays well in-game on macOS then that's really the important part.
Irishman, on 16 January 2018 - 08:11 AM, said:
4. ...one thing I quickly learned not to do is to bring up the menu to change graphics settings once in-game.
Well, yep, that'll be the engine recompiling all the shaders & shader-pipelines.

#4
Posted 17 January 2018 - 01:16 AM
#5
Posted 17 January 2018 - 12:39 PM
jeannot, on 17 January 2018 - 01:16 AM, said:
It is a bit more complicated than that. You can optimise D3D12 by changing D3D12-specific code, which won't benefit Metal, or by changing the common, higher-level code, which might. If we change high-level code to benefit D3D12 or Vulkan it could help Metal, but it very much depends on what we are changing and how. What I will say is that performance will continue to improve incrementally in future releases, but I can't promise how that will translate to individual games, nor will I promise any radical improvements.
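A toy sketch of the layering being described (all names here are hypothetical, not UE4's real types): high-level code talks to a backend interface, so an optimisation above that interface can help every API, while work inside one backend helps only that API.

```swift
// Hypothetical rendering-backend interface; names are invented for illustration.
protocol RenderBackend {
    func createPipeline(shader: String) -> Int   // returns an opaque handle
    func draw(pipeline: Int, vertexCount: Int)
}

final class MetalBackend: RenderBackend {
    func createPipeline(shader: String) -> Int { /* Metal-specific work */ return 0 }
    func draw(pipeline: Int, vertexCount: Int) { /* encode Metal commands */ }
}

final class D3D12Backend: RenderBackend {
    func createPipeline(shader: String) -> Int { /* D3D12-specific work */ return 0 }
    func draw(pipeline: Int, vertexCount: Int) { /* record D3D12 commands */ }
}

// Changes in high-level code like this (e.g. issuing fewer draws) can benefit
// every backend; changes inside MetalBackend benefit Metal alone.
func renderScene(using backend: RenderBackend) {
    let pipeline = backend.createPipeline(shader: "BasePass")
    backend.draw(pipeline: pipeline, vertexCount: 3)
}
```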
#6
Posted 19 January 2018 - 12:15 PM
marksatt, on 17 January 2018 - 12:39 PM, said:
Thanks, Mark, for putting our experience into some context other than "Macs suck for gaming! What did you expect?"

#7
Posted 14 April 2018 - 02:25 PM
marksatt, on 16 January 2018 - 09:22 PM, said:
The lack of VRAM on that GPU is really going to hurt on macOS: we use more VRAM on macOS, so we'll be paging a lot more between CPU & GPU, which is bad. That's a necessary evil to avoid other inefficiencies caused by trying to map a D3D11-oriented engine on to the Metal API.
Behind the loading screen on macOS it will be doing a heck of a lot more shader compilation than it has to under Windows, and unfortunately that is currently a highly serial, single-threaded process.
The varying frame-rate will stabilise somewhat if you play on the same build for long enough, as the local shader cache builds up entries. The price is longer load times, of course. Metal (like both D3D12 & Vulkan) exposes the developer to the reality of how shaders are actually compiled for GPUs and expects developers to optimise around this. Unfortunately most engines are built on the abstraction provided by D3D11, which inherited the D3D9 model of separate shaders with near-zero runtime compilation cost, achieved by the driver vendors investing the man-hour equivalent of tens of millions of dollars to aggressively optimise their runtime shader compilers and make their D3D drivers fully asynchronous (so games don't block when calling D3D) via substantial multi-threading. Often they optimised their shader compilers to generate code that is *trivial* to patch when render-state (like render-target or texture formats) changes, or even outright replaced the game's shaders with their own "specially optimised" versions. The new APIs force/encourage driver vendors to optimise their shader compilers to generate "perfect" GPU shader code, even if it makes shader compilation much slower, as that is now the game developer's problem and not directly the vendor's. That is not to say the vendors don't care - merely that the APIs send you down a particular implementation route.
Another cause of fluctuating frame-rates is that D3D11's GPU resource management is also heavily abstracted away from the developer, with the vendor able to do a lot of under-the-hood optimisations as they control the implementation. Metal puts all of that on the game developer, a bit like Vulkan (though the API & semantics are quite different), and right now we aren't as efficient at allocating resources as D3D, which can cause hitches on the CPU.
That'll be shader compilation or resource allocation, which is more expensive on Metal than D3D. If the game plays well in-game on macOS then that's really the important part.
Well, yep, that'll be the engine recompiling all the shaders & shader-pipelines.

Interesting read, thanks for sharing those insights!
I've noticed with my MBP's Iris Plus 650, Fortnite in general achieves pretty decent frame rates (mostly between 40 and 70 FPS) under macOS, except for severe frame drops every few minutes, which others seem to have also experienced on Macs with Intel GPUs.
Is there something specific to Intel's GPU architecture that might cause this, or could it be a driver issue?
MacBook Pro (13-inch, Late 2016, 4 TBT3) — 3.1 GHz i5-6287U | Iris 550 | 16 GB RAM | 512 GB Flash | Space Gray
iMac (27-inch, Late 2012) — 3.4 GHz i7-3770 | GTX 680MX | 32 GB RAM | 1 TB Fusion Drive
iMac (Flat Panel) — 700 MHz PPC 7450 (G4) | GeForce2 MX | 640 MB RAM | 40 GB HDD
iMac (5 Flavors) — 333 MHz PPC 750 (G3) | ATI Rage Pro | 512 MB RAM | 6 GB HDD | Tangerine
iPhone X — 256 GB | Silver
iPhone 5s — 32 GB | Space Gray
iPad Pro (10.5-inch) — 256 GB | Space Gray
iPad Air 2 — 64 GB | Space Gray
iPad — 16 GB
iPod touch (3rd generation) — 32 GB
iPod touch — 8 GB
Apple Watch (1st generation) — 42 mm | Stainless Steel (Yes, there's games for watchOS)
#8
Posted 30 November 2018 - 07:59 AM
After all the build-up about Mojave, I was hoping to see the performance delta between Win10 and macOS continue to shrink, and hopefully see something more competitive. Not to be, apparently. And that's frustrating. The gap is worsening for me: I can play at 40-60 FPS in Win10 at 1080p, minimum settings, while I struggle to get 2-20 FPS in Mojave at the lowest resolution I can crank it down to, minimum settings.
What the hell!?!?