OP was active yesterday, so I'm assuming they are aware of the posts in this thread.
Anyway, I got around to doing some updated tests on my 5820K/X99 SOC Champion/NH-D15S setup, with the hottest test I know of for this platform:
22C ambients; Fractal Design Define R5 with three Arctic P14s at full ~1700 rpm as intakes, with clean filters; no case exhaust fans; NH-D15S with NT-H1 TIM and two Thermalright TY-143s at full ~2500 rpm on the CPU during this test. All power saving features disabled.
1GHz OC on the core (3.3 to 4.3GHz), 1.1GHz OC on the uncore (3GHz to 4.1GHz), and 16GiB (4x4GiB single-rank Micron) DDR4-2667 12-11-12-27-T1 with extremely tight secondary/tertiary timings. 1.91v input (medium LLC, 500kHz VRM PWM), 1.225v core, 1.2v ring, +100mv VCCSA, 1.06v VCCIO.
OS is Server 2016 (effectively identical to Windows 10 1607 LTSB), fully updated and well stripped down.
This is no longer my primary system (no personal information on it), so I've got it running the last pre-mitigation microcode for Haswell-E (revision 3A). The most recent microcode, with all available patches for Spectre- and Meltdown-related exploits, requires about 20mV more vcore for the same clocks and pushes temperatures to throttle points in AVX2 LINPACK.
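For anyone wanting to confirm which microcode revision they're actually running: on Windows a tool like HWiNFO will report it, while on Linux it shows up in /proc/cpuinfo. As a rough sketch (the sample text below is a hypothetical excerpt, not output from my machine), you could pull it out like this:

```python
import re

def microcode_revision(cpuinfo_text: str):
    """Extract the microcode revision from /proc/cpuinfo-style text.

    Returns the revision as an int, or None if no 'microcode' line is found.
    """
    m = re.search(r"^microcode\s*:\s*(0x[0-9a-fA-F]+)",
                  cpuinfo_text, re.MULTILINE)
    return int(m.group(1), 16) if m else None

# Hypothetical excerpt of what /proc/cpuinfo might show on a Haswell-E box:
sample = """\
model name : Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz
microcode  : 0x3a
"""

rev = microcode_revision(sample)
print(f"microcode revision: {rev:#x}" if rev is not None else "not found")
```

On a real Linux box you'd feed it `open("/proc/cpuinfo").read()` instead of the sample string.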
Real-world loads are obviously considerably cooler; even with the stock fan on the D15 and the case fans at very quiet levels, almost anything short of OCCT, Prime95, LINPACK, or AIDA64 FPU stays at very comfortable temperatures at 4.3GHz core.
YMMV: some chips will run hotter, some cooler, and ambient temperature and airflow obviously play a big part. My particular 5820K sample is a relatively low-voltage, high-leakage part that runs a bit warmer than most volt for volt, but needs less vcore for most OCs.