A friend of mine has actually been in contact with one of the Warhorse guys responsible for the global illumination regarding async. As he (the WH guy) explains it,
"It was never properly finished. It's slower on some graphics cards. And forced update doesn't work, so you would occasionally see un-updated light after a time skip or a cutscene. As [async compute enabled] is, the queuing of blocks to update is very aggressive, it basically 100% loads the asynchronous compute pipeline. Ideally there would be some budgeting. There's always work to be done because it always continuously updates all voxel blocks. There's still a chance it will be finished and enabled by default."
In other words, this game is very lighting-update intensive and all voxels are constantly invalidated, so when async compute is enabled for global illumination, the render pipeline gets saturated by voxel updates. Async compute is buggy because its implementation is unfinished; until that's addressed, it's going to spice up your GPU. If I had to guess, the oversaturation of work is also what causes the strobing. If they do end up finishing it, it'll work fine with Enhanced GI.
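To make the "budgeting" the dev mentions a bit more concrete: right now every dirty voxel block apparently gets resubmitted to the async compute queue every frame, whereas a budgeted scheduler would cap how many blocks it dispatches per frame and carry the rest over. Here's a minimal sketch of that idea; the names (VoxelBlock, VoxelUpdateScheduler, the per-frame cap) are all made up for illustration and are not CryEngine's actual API.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Hypothetical voxel GI block handle (not CryEngine's real data structure).
struct VoxelBlock {
    uint32_t id;
};

// Unbudgeted behaviour (what the quote describes): every dirty block is
// dispatched on the async compute queue every frame, so the queue is always full.
// Budgeted behaviour (the "ideally there would be some budgeting" part):
// cap blocks-per-frame and let the leftovers wait for the next frame.
class VoxelUpdateScheduler {
public:
    explicit VoxelUpdateScheduler(size_t maxBlocksPerFrame)
        : m_budget(maxBlocksPerFrame) {}

    // Called whenever lighting in a block is invalidated.
    void MarkDirty(const VoxelBlock& block) { m_pending.push_back(block); }

    // Returns at most m_budget blocks to dispatch this frame.
    std::vector<VoxelBlock> CollectFrameWork() {
        std::vector<VoxelBlock> work;
        while (!m_pending.empty() && work.size() < m_budget) {
            work.push_back(m_pending.front());
            m_pending.pop_front();  // remaining blocks roll over to later frames
        }
        return work;
    }

private:
    size_t                 m_budget;
    std::deque<VoxelBlock> m_pending;
};
```

The only point is that leftover blocks wait a frame instead of hammering the compute queue; whatever Warhorse actually ships would obviously look different (priorities, distance-based culling, etc.).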
Utilization is basically the ratio of clock cycles doing stuff to clock cycles sitting idle; it's not necessarily related to how complex the work being done in those cycles is. 100% utilization of easier work = less power usage. 100% utilization of harder work = more power usage. "Harder work" doesn't even necessarily mean worse frametimes, either; that comes down to which units on the GPU the work lands on. Thus, async compute doing harder work that's better suited to the hardware = higher fps and higher power usage at the same utilization. I am not an engineer, that's just the basics as I understand them, and I have no idea what work is the "bottleneck" here.

If I had to guess why the game generally uses less power, I'd expect it's because CryEngine mostly uses classical rasterization techniques, while newer titles lean on newer hardware features. That's from my ass, but it sounds right.
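To put the hand-waving into rough formulas (these are the standard textbook approximations, nothing measured on any particular GPU):

$$\text{utilization} \approx \frac{\text{busy cycles}}{\text{total cycles}}, \qquad P_{\text{dynamic}} \approx \alpha \, C \, V^{2} f$$

Here α is the switching activity factor (roughly how much of the silicon actually toggles each cycle, i.e. how "hard" the work is), C is effective capacitance, V is core voltage, and f is clock frequency. Utilization only sees busy vs. idle cycles, while power also scales with α (plus whatever V and f the boost algorithm picks), so saturated-but-simple work and saturated-but-heavy work can both read 100% while drawing very different wattage.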