Unfortunately, due to the complexity and specialized nature of AVX-512, such optimizations are typically reserved for performance-critical applications and require expertise in low-level programming and processor microarchitecture.
Whoever wrote this article is just misleading everyone.
First of all, they did this for other, similar instruction sets before, so this is nothing special. Second, they measure the speedup against a baseline implementation that uses no SIMD optimizations at all.
They did the same in the past for AVX2, which is 67x faster in the very test where AVX-512 got the 94x speedup. So it's not 94x faster now, it's about 1.4x faster than the previous iteration using the older AVX2 instruction set. And it's barely more than twice as fast as the SSE3 implementation (40x faster than the slow version), an instruction set from 20 years ago…
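To make that concrete, here's a quick sketch of the arithmetic, using only the speedup figures quoted above (all measured against the same unoptimized baseline, which is the assumption that makes the division valid):

```python
# Speedups over the same unoptimized scalar baseline, as quoted above.
baseline_speedup = {
    "SSE3": 40.0,
    "AVX2": 67.0,
    "AVX-512": 94.0,
}

# Relative speedup of AVX-512 over each earlier instruction set:
# since all figures share the same baseline, just divide them.
for isa, speedup in baseline_speedup.items():
    relative = baseline_speedup["AVX-512"] / speedup
    print(f"AVX-512 vs {isa}: {relative:.2f}x")

# Prints:
# AVX-512 vs SSE3: 2.35x
# AVX-512 vs AVX2: 1.40x
# AVX-512 vs AVX-512: 1.00x
```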
So yeah, it's awesome that they did the same solid work for AVX-512, but the 94x boost is just plain bullshit… it's really sad that great work then gets worded in such a misleading way to form clickbait, rather than a properly informative article…