Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 9th August 2018, 14:12   #21  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,049
these liquid nitrogen garbage overclocks are just a waste of time. totally not practical, just people having fun wasting power.

the stock speeds are impressive.
Old 9th August 2018, 15:33   #22  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,463
The liquid nitrogen overclocking is very impressive to watch; it's a show of its own.

Also, it's a kind of extreme overclocking meant to reach the limits of the architecture and remove any thermal performance limitations.

Intel started this war by water-cooling their CPU with a huge cooler (forbidden in many countries) in order to break a record in Cinebench, and AMD just replied.

With their new Cooler Master Wraith air cooler they can push 32C/64T at 4.0GHz all-core, which is a huge achievement at 250W.

The rest is just for the performance crown.

And don't forget TR2 is a real CPU that you can order right now if you want.

Intel's CPU looks like an illusion...
__________________
Win 10 x64 (17763.55) - Core i3-4170/ iGPU HD 4400 (v.5058)
HEVC decoding benchmarks
H.264 DXVA Benchmarks for all
Old 9th August 2018, 23:38   #23  |  Link
FranceBB
Broadcast Encoder
 
FranceBB's Avatar
 
Join Date: Nov 2013
Location: Germany
Posts: 393
Preface:


Intel actually has about 98% of the server market share, mostly because Xeon CPUs have been fast and reliable for quite some time.
Besides, even though AMD released its "new" server products, Intel Xeon CPUs are still faster, mostly because instructions, memory handling and load distribution are implemented in a way that handles certain types of loads faster, in both multithreaded and single-threaded environments.
For instance, afaik the fastest AMD CPU is the Epyc 7601, which has 32 cores, 64 threads, a maximum frequency of 3.2 GHz, 64 MB of L3 cache and instructions up to AVX2.
The Intel opponent is the Xeon Platinum 8176, which has 28 cores, 56 threads, a maximum frequency of 3.8 GHz, 38.5 MB of L3 cache and instructions up to AVX-512.
Please note that the lack of AVX-512 instructions in AMD CPUs can also play an important role in Intel's favor for certain types of calculations.
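As a side note (my illustration, not part of the spec sheets above): whether a CPU exposes AVX-512 can be checked from software. On Linux the kernel prints a space-separated feature list in the "flags" line of /proc/cpuinfo, and the AVX-512 Foundation subset shows up as the avx512f token. A minimal sketch, using abbreviated, hypothetical flag strings:

```python
def has_avx512f(flags_line):
    """Return True if the AVX-512 Foundation flag appears in a
    space-separated CPU flags string (the 'flags' line format
    used by /proc/cpuinfo on Linux)."""
    return "avx512f" in flags_line.split()

# Abbreviated, hypothetical flag strings for illustration only:
xeon_flags = "fpu sse sse2 avx avx2 avx512f avx512dq avx512cd"
epyc_flags = "fpu sse sse2 avx avx2 sha_ni"

print(has_avx512f(xeon_flags))  # True
print(has_avx512f(epyc_flags))  # False
```

Comparing whole tokens via split() avoids false positives from longer flags such as avx512dq.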


Demonstration:


In order to demonstrate that Intel is faster than AMD, I report results from Cinebench, the famous benchmarking program that many people use (including Linus Sebastian), and from other benchmarking tools:

Cinebench R11.5, 64bit (Single-Core):



As you can see, single-thread calculations show that the Intel Xeon is way faster than the AMD Epyc when it comes to using cores individually.
This is kinda expected because of the way the CPU is implemented, but you'll be surprised to find out what the multi-thread result shows.

Cinebench R11.5, 64bit (Multi-Core):



Despite having less shared L3 cache, fewer cores and fewer threads, the Intel Xeon is still slightly faster than the AMD Epyc.
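The shape of that result can be reproduced with crude arithmetic: treat aggregate throughput as cores x clock x per-core efficiency. The sketch below uses the maximum frequencies quoted above as a stand-in for sustained all-core clocks and assumes equal per-core IPC (both simplifications, so this is the shape of the argument, not a Cinebench prediction):

```python
def throughput_proxy(cores, clock_ghz, ipc_rel=1.0):
    """Crude aggregate-throughput proxy: cores x clock x relative IPC.
    Ignores all-core turbo limits, memory and cache effects."""
    return cores * clock_ghz * ipc_rel

xeon_8176 = throughput_proxy(28, 3.8)  # about 106.4
epyc_7601 = throughput_proxy(32, 3.2)  # about 102.4

print(xeon_8176 > epyc_7601)  # True: fewer but faster cores can win
```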

Passmark CPU Mark:



As you can see, even PassMark shows that the Intel Xeon is faster than the AMD Epyc in multi-threaded calculations.

Geekbench 3, 64bit (Single-Core):



Once again, AMD is noticeably slower than Intel on single-core calculations.

Geekbench 3, 64bit (Multi-Core):



And Geekbench shows that Intel is slightly faster than AMD in multi-threaded calculations as well.


Final Conclusion:


In other words, even though Intel CPUs are kinda overpriced, they perform better than the AMD ones, due to their internal design, their implementation, and their support for the AVX-512 instruction set, which AMD CPUs lack.
Intel has always provided good, fast and reliable hardware for servers and workstations; that's why pretty much every company picks Intel CPUs for its workloads.
Xeon CPUs don't come cheap, but you get what you pay for.
Unfortunately, though, the consumer side doesn't reflect the server-side scenario.
Intel CPUs for consumers (i3, i5, i7) are not as good as the Xeon ones: they didn't have many cores in the past and they have never been cheap.
This is mostly because Intel didn't really have any competitor for years; that's why we came to think of an i3 as 2C/4T, an i5 as 4C/4T and an i7 as 4C/8T.
AMD CPUs haven't quite managed to match the performance of Intel CPUs since the days of the dual core.


A bit of history:


Back when CPUs were single-core, AMD had a particular way of handling calculations and memory management in their chips, and they managed to get more calculations per cycle than Intel CPUs.
Getting more calculations per cycle won AMD a significant part of the market, 'cause many people decided to get an AMD CPU.
As a result, to keep pace with AMD, Intel increased the frequency of their CPUs, but this led to an increase in voltages as well, which led to an increase in temperatures, 'cause air coolers couldn't quite keep them cool.
On the other hand, AMD delivered the same performance with lower frequency, lower voltage and fine temperatures.
They also started naming their CPUs with the Intel-equivalent frequency number; for instance, an AMD Athlon 3200+ at 2.2GHz was "the same" as an Intel processor with the same characteristics running at 3.2GHz.
Back in the day (2003-2004 if I recall correctly), a $200 Athlon 64 3200+ chip performed pretty much the same as an $800 Pentium 4 3.2 GHz, but it was cheaper and consumed less energy, so it was a "go for it".
Once again, this was because AMD managed to get more calculations done per CPU cycle.
When Intel understood that raising the frequency wasn't the right answer, they came up with the "multi-core" idea and released their first multi-core CPU.
At the very beginning it was slower than a normal single-core CPU in common daily scenarios, 'cause programs weren't aware of the second core: they used Core0 only, while Core1 sat there idle, doing pretty much nothing.
Anyway, it turned out to be good for doing many tasks at the same time, since users could divide the workload between cores, and it somehow found its way to the public.
Eventually, programmers started implementing programs in a different way to make use of the additional core, and this gave a great speed boost.
In the meantime, AMD released other single-core CPUs, like the AMD Athlon 3700+, which were fine except that once developers made their programs aware of the second core, AMD single-core CPUs couldn't cope with the Intel ones.
This forced AMD to make its own dual-core CPU, but because of the way AMD CPUs were implemented, they didn't work well in a dual-core scenario.
The peculiar design that granted them success in the single-core era, allowing more calculations per cycle, somehow worked against them when they had to implement dual core.
Later on, Intel continued developing their CPUs, quad-core parts came out (the Core 2 Quad), and this led them to push their multi-core architecture further and further, implementing multi-threading and naming their CPUs i3, i5, i7.
AMD tried to fight Intel in many ways, with one failure after another: the AMD Phenom II X6 (a 6-core, 6-thread CPU with lower performance than an Intel i5 4C/4T), the famous AMD Bulldozer FX that handled multithreading very badly and didn't divide the workload well, delivering really bad performance in pretty much every common scenario, and so on.
Enthusiasts were more and more prone to buy Intel, and the company raised prices without introducing many features in its consumer CPUs.
They did develop other features, but they deliberately didn't release them, 'cause there was no reason to: they had the whole market to themselves, so they just focused on the enterprise side, which demanded faster and better CPUs.
After many failed attempts to catch Intel, and after disappointing enthusiast consumers year after year, AMD decided to dedicate their new CPUs to the entry-level consumer market, putting a decent GPU inside processors they called APUs.
APUs found a positive response among customers who didn't want to spend much and couldn't afford an Intel CPU plus a dedicated GPU.


Nowadays:


Nowadays AMD has finally dropped their "peculiar" way of making CPUs in favor of an Intel-like implementation, featuring something that recalls Intel's multi-threading; that's how Ryzen and Epyc came about.
However, this whole way of making CPUs is new to AMD engineers, while Intel engineers have done it for years and years, so it's "normal" that Intel CPUs are more fine-tuned than AMD's. Still, this is good for the market: now that Intel finally has a rival, it's forced to either release new products or lower its prices.


My "story" as consumer:
I ran Intel CPUs from the 90s until the new millennium, when I moved to AMD. I was a satisfied AMD customer for four years, buying one CPU after another and enjoying their great single-core successes, but after 2004 I got more annoyed year after year, especially after I bought the Phenom II X6 (6C/6T) thinking it was going to compete with the Intel i7 CPUs, only to end up encoding slower than people with an i5. After that, I moved to Intel; I've been an Intel customer ever since, and I'm not planning to go back to AMD anytime soon.
__________________
Broadcast Encoder
Old 10th August 2018, 08:03   #24  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,463
Quote:
Originally Posted by FranceBB View Post
...After that, I moved to Intel; I've been an Intel customer ever since and I'm not planning to go back to AMD anytime soon.
It seems you moved to Intel for good; I think you could actually work for them, in a way.

Where to start?

Intel has 98% of the server market, down from 99+% last year.

EPYC managed to gain almost 2%, which is not that big, but even Intel's (former) CEO said they are struggling to keep AMD below 15% - 20%, which is close to the biggest share AMD ever had in the Opteron days - 25%.

Intel's 28-core processor costs $10,000, while AMD's 32-core processor costs around $4,000.

AMD supports 2 TB RAM per socket versus 768 GB RAM for Intel.

The memory advantage is huge for AMD.

Also, Intel supports 48 PCIe lanes in a single socket versus 128 PCIe lanes for AMD.

The PCIe lanes advantage is huge for AMD.

The SPECint and SPECfp benchmarks, which are more important for the server market than Cinebench and PassMark (!), once again favor AMD.

The second half of 2018 will bring AMD to a 5% server-market share by the end of the year.

The server market is a difficult market and needs time to validate and trust new platforms.

Due to Intel's broken 10nm process, which will probably not be fixed even in late 2019, AMD will have no rival in the server market for 2019, when it releases its new EPYC 2 server CPU in Q1 at 7nm, with 48C/96T initially and 64C/128T later.

It's a 28-core vs 64-core battle, with no luck for Intel.

AMD, as Intel's former CEO has already said, will reach more than 20% of the server market by the end of 2019, because big companies already have 7nm EPYC 2 samples in their hands and can see the difference in speed compared to the ancient, slower 14nm Xeons.

And the price is always much better for AMD.

Now, regarding AVX-512: there is simply no market for these instructions yet.

Intel and other reviewers are struggling to find benchmarks and real apps that leverage AVX-512, with no luck besides one or maybe two.

You seem to skip the HEDT (high-end desktop) category, which is our topic here, for a reason.

On Monday, August 13th, AMD will officially release Threadripper 2 (the Threadripper 2000 series) with 32C/64T and 4.0GHz all-core speed using air cooling.

Intel simply doesn't have an answer in this category, because its best CPU is just an 18C/36T part, and it will stay this way till the end of the year and probably next year too.

On August 31st, AMD will release a new 16C/32T CPU, and in October 12C/24T and 24C/48T models.

These AMD Threadripper processors simply have no rival from Intel and will dominate the HEDT category.

And of course AMD has the PCIe lanes advantage for Threadripper, exactly like EPYC vs Xeon.

Finally, regarding mainstream desktop CPUs, AMD has the core-count advantage of 8C/16T vs 6C/12T for Intel, which can only win at 1080p gaming by about 6% (which doesn't really matter) and in some lightly threaded apps.

All multithread-aware apps favor AMD, which is the trend nowadays, and of course 1440p and 4K gaming are exactly the same between Intel and AMD.

Next year Ryzen 2 is coming, based on the same Zen 2 architecture at 7nm as EPYC 2, and rumors already talk about 12 or even 16 cores for mainstream desktop.

I see no good future for Intel, at least for 2019, and most people seem to agree.
Old 10th August 2018, 08:47   #25  |  Link
Blue_MiSfit
Derek Prestegard IRL
 
Blue_MiSfit's Avatar
 
Join Date: Nov 2003
Location: Los Angeles
Posts: 5,409
I'm super impressed with the new 32 core Threadripper systems.

I don't do a ton of encoding / highly threaded loads at home, but if I did I'd be looking at a Threadripper of some sort!

As it stands my most performance sensitive application is Lightroom, mainly when working with high megapixel (often stitched) images from my Nikon D810 DSLR. Lightroom heavily favors single threaded performance, especially from Intel, so I have a watercooled i7-7700k.

That being said, the 8-core boost frequency on the new Threadripper is pretty cool: when you're in "game mode" it clocks 8 cores up quite high, so you get good single-threaded performance. Regardless, I bet the Intel offerings will still be faster for Lightroom.

I absolutely love how AMD has been coming up with such innovative, disruptive products, though. I definitely can see them taking a nice chunk of the cloud server market with Epyc, and maybe a decent chunk of the HEDT market with Threadripper.
Old 10th August 2018, 11:49   #26  |  Link
Atak_Snajpera
RipBot264 author
 
Atak_Snajpera's Avatar
 
Join Date: May 2006
Location: Poland
Posts: 6,630
Let's be honest: even Intel knows that they won't be able to compete with the upcoming 7nm Epycs (64C/128T).
Interesting read
https://semiaccurate.com/2018/08/07/...-they-know-it/

Quote:
In other words, even though Intel CPUs are kinda overpriced, they perform better than the AMD ones, due to their internal design, their implementation and also because of AVX-512 instructions set support that lacks in AMD CPUs.
Forget about AVX-512, because it overheats the CPU like crazy!
https://networkbuilders.intel.com/do...nsions-512.pdf

Last edited by Atak_Snajpera; 10th August 2018 at 11:55.
Old 10th August 2018, 11:50   #27  |  Link
StvG
Registered User
 
Join Date: Jul 2018
Posts: 17
Quote:
Originally Posted by FranceBB View Post
... Nowadays, AMD finally dropped their "peculiar" way of making CPUs in favor of an Intel-like implementation, featuring what recalls the Intel multi-threading;...
Both companies have different implementations, though: Zen is CCX-based, Intel is monolithic.
With a CCX-like approach, Intel could make something Zen-like (up to 40 cores?).
Old 10th August 2018, 12:05   #28  |  Link
Groucho2004
 
Groucho2004's Avatar
 
Join Date: Mar 2006
Posts: 3,896
Quote:
Originally Posted by Blue_MiSfit View Post
As it stands my most performance sensitive application is Lightroom, mainly when working with high megapixel (often stitched) images from my Nikon D810 DSLR. Lightroom heavily favors single threaded performance, especially from Intel, so I have a watercooled i7-7700k.
Odd. I use Adobe Camera Raw (through Photoshop) for my Nikon raw files which is basically the same as Lightroom and it clearly uses all 4 cores of my i5 2500K, whether I make adjustments or convert a bunch of raw files to tif/jpeg.

Last edited by Groucho2004; 10th August 2018 at 12:11.
Old 12th August 2018, 12:48   #29  |  Link
ShogoXT
Registered User
 
Join Date: Dec 2011
Posts: 89
I always thought AVX-512 was the future, but not so much anymore. Fab density keeps increasing, which forces ever more cooling. There is no way Intel can keep pushing x86, high clocks and complex instruction sets in a world that is moving toward parallelization.
Old 12th August 2018, 12:57   #30  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,419
It's the same as it was with AVX2: the initial implementation produces too much heat; the next ones will be refined, especially with a die shrink, and it'll be much more valuable.
AVX512 is also not only about 512-bit instructions; it adds a whole lot of useful features to 128-bit and 256-bit instructions that should allow faster and smarter code in the future, without the heat penalty.

But it'll take some more time for AVX512 support to become more widespread and less heat-constrained, and for software to learn how to use it properly without incurring any of the penalties, before it becomes effective.
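The per-lane masking is one example of those features: a masked AVX-512 operation updates only the lanes whose mask bit is set and leaves the rest of the destination untouched (merge masking). A small Python emulation of the semantics (a sketch of the concept, not real intrinsics):

```python
def masked_add(dst, a, b, mask):
    """Emulate AVX-512 merge-masked vector addition: lanes with a
    set mask bit receive a[i] + b[i]; the others keep dst[i]."""
    return [x + y if m else d for d, x, y, m in zip(dst, a, b, mask)]

old = [0, 0, 0, 0]
print(masked_add(old, [1, 2, 3, 4], [10, 20, 30, 40], [1, 0, 1, 0]))
# [11, 0, 33, 0]
```

In real code this lets a loop handle conditional updates without branches, which is part of why the new encodings help even at 128/256-bit widths.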
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 12th August 2018 at 13:12.
Old 12th August 2018, 13:40   #31  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,463
AVX2 had no such problems in its first implementation, in Haswell.

The turbo clocks for AVX2 were lower than for any other mode, but this hasn't changed since then.
It's not that it was improved.

The AVX512 market is largely covered by other, more parallel hardware like GPUs.

Intel added 4 special AI instructions, called DL Boost, to the next-generation Cascade Lake server CPUs as a superset of AVX512, but we all know that AI and ML/DL is a market for GPUs.

The GP part of GPGPU is becoming more and more general, and this trend doesn't look like it's going to stop, for HPC and all the other areas where AVX512 could be useful.
Old 12th August 2018, 22:10   #32  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,049
is there even a heat problem with AVX512?

seriously, does it even matter that the CPU is using a lower clock as long as the program runs faster? so be it...
getting a cooler that will keep it working even with AVX512 at full boost clocks is not hard either, and trivial with delidding.

and if there is any real heat problem with Intel right now, it's their thermal "paste".
Old 12th August 2018, 22:34   #33  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,419
Quote:
Originally Posted by NikosD View Post
AVX2 had no such problems in the first implementation of Haswell.

The turbo modes of AVX2 were lower than any other mode, but this hasn't changed since then.
It's not that it was improved.
Actually, it was improved. In Haswell the entire CPU (i.e. all cores) would clock down if any core used AVX2. Since Broadwell this is no longer the case, and only the core actually running AVX2 clocks down, if needed.
This resulted in a steep performance penalty when using AVX2 on Haswell, which made people equally hesitant at first. But now AVX2 is quite useful in a multitude of applications, including video.
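The trade-off behind those clock offsets is simple arithmetic: wider vectors pay off only when the per-cycle gain outweighs the frequency drop. A sketch with hypothetical numbers (3.8 GHz non-AVX clock, 3.0 GHz under AVX-512 load, 2x per-cycle work; none of these are measured values):

```python
def effective_throughput(clock_ghz, work_per_cycle):
    """Useful work per second: clock times per-cycle throughput."""
    return clock_ghz * work_per_cycle

# Hypothetical clocks and per-cycle gain, for illustration only:
avx2   = effective_throughput(3.8, 1.0)
avx512 = effective_throughput(3.0, 2.0)

# AVX-512 wins here because 2x per-cycle work beats the clock drop;
# the break-even per-cycle speedup is 3.8 / 3.0 (about 1.27x).
print(avx512 > avx2)  # True
```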

Broadwell was also the first Intel CPU made on Intel's 14nm process, which helped efficiency and helped to reduce the power/heat requirements of the AVX2 units.

The really dense SIMD compute areas are where process improvements give the most benefit, since they produce the most heat in a small area.

I fully expect those clock offsets to improve with Ice Lake, whenever that comes out. Maybe by then developers will also have figured out how to properly utilize it.
Old 12th August 2018, 23:24   #34  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,463
But if your code doesn't run on all cores, how are you going to gain performance?

Multithreading is necessary for any modern code, especially for HPC.

So, once again, the clock will go down for AVXx.

And Broadwell is practically a nonexistent CPU on the desktop.

Anyway, I think GPUs are nowadays more suitable for the kind of calculations AVX512 targets, and the Top500 supercomputer list shows exactly this.
Old 13th August 2018, 00:47   #35  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,049
that's the point: it's supposed to be faster when correctly used, even with the lower clocks. there is lots and lots of stuff that can't be parallelized.
why even waste your time creating a 32-core CPU when a 1080 has 2560 "cores", for the same reason, and much more?
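That "can't be parallelized" ceiling is usually quantified with Amdahl's law: if only a fraction p of the run time scales with cores, the speedup on n cores is capped at 1 / ((1 - p) + p/n). A quick sketch (the 95% figure is an arbitrary example):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: best-case speedup on n cores when only a
    fraction p of the run time benefits from more cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel workload gets nowhere near 32x on 32 cores:
print(round(amdahl_speedup(0.95, 32), 1))  # 12.5
```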

AVX512 is used in x265. do GPUs do this better?

where does this totally blind assumption come from that this is something GPUs will be better at?

i wouldn't be shocked if Zen 2 supported it...
Old 13th August 2018, 11:30   #36  |  Link
Atak_Snajpera
RipBot264 author
 
Atak_Snajpera's Avatar
 
Join Date: May 2006
Location: Poland
Posts: 6,630
Quote:
Its the same as it was with AVX2, the initial implementation produces too much heat, the next ones will be refined, especially with a die shrink, and it'll be much more valuable.
after 2020, maybe

Quote:
i wouldn't be shocked if zen2 will support it..
If we are lucky, Zen 2 will have 4xFMAC128 instead of 2xFMAC128. This means that AVX-512 would require 2 cycles, like on the Skylake-X 7800X.

Quote:
seriously does it even matter that the CPU is using a lower clock as long as the program runs faster? so be it...
getting a cooler that will keep it working even with AV512 and full boost clocks so not a hard too and trivial with deliding.
The issue is that it may not run faster. Even Intel, in its own document, does not recommend using AVX-512 in x265.

They had to disable 24 and 20 cores respectively in order to show some gains. Yeah, that makes sense: you buy a $10k CPU and use 4 or 8 cores for video encoding.

Last edited by Atak_Snajpera; 13th August 2018 at 11:36.
Old 13th August 2018, 12:18   #37  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,463
@huhn

Oh man.

How many unreasonable claims in one post.

AVX512 is a specialized instruction set used in very, very rare cases, especially if it is to be used efficiently.

For x265, even Intel suggests enabling AVX512 only under certain circumstances; otherwise it's not worth it, or even worse, it's slower than AVX2.

AVX512 is just a special case of Intel's propaganda, due to the fact that it can't produce more general-purpose cores.

So, since it can't win the match fairly in general-purpose silicon, it has started making useless things like AVX512 look important and necessary.

Unfortunately, this kind of propaganda works, as we see people in the x265 thread buying Intel CPUs to use AVX512 with x265!

Poor guys...

The reason AMD delivered many cores is that there are obviously many cases where general-purpose computing can be parallelized and run on many threads.

But obviously this is not the case for AVX512.

Of course AMD will provide support for these kinds of instruction sets, as always.

But it is more important and necessary to implement fast SSEx and AVX/AVX2 than AVX512.

Simple as that.
Old 13th August 2018, 12:21   #38  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,049
so we just ignore the benefits on the 10-core CPU?

and again, you can work around the clock speed with a simple BIOS setting and proper cooling.

Quote:
From Figures 1 and 2 we can make the following inferences: For desktop and workstation SKUs (like the Intel Core i9-7900X processor that we tested), Intel AVX-512 kernels can be enabled for all encoder configurations, because the reduction in CPU clock frequency is rather low. For server SKUs (like the Intel Xeon Platinum 8180 processor on which we tested), the frequency dip is higher and increases with more cores being active. Therefore, Intel AVX-512 should only be enabled when the amount of computation per pixel is high, because only then is the clock-cycle benefit able to balance out the frequency penalty and result in performance gains for the encoder. Specifically, we recommend enabling Intel AVX-512 only when encoding 4K content using the slower or veryslow preset in the main10 profile. We do not recommend enabling Intel AVX-512 kernels for other settings (resolutions, profiles, or presets), because unexpected inversions with respect to using the Intel AVX2 kernels may result.

Conclusion
This paper described our experience with accelerating x265 and open-source HEVC encoders with Intel AVX-512 instructions. From our experience we recommend that for workstation and client CPUs that have Intel AVX-512 instructions, the kernels may be used across all profiles of x265. However, for server-grade CPUs that have the Intel AVX-512 instructions, this acceleration should be used only for certain profiles of x265 that focus on encoding high resolution video (4K and higher) in the main10 profile using the slower or veryslow presets, due to the impact that the Intel AVX-512 instructions have on clock frequency. For other profiles on server CPUs with Intel AVX-512 instructions, enabling these kernels is not recommended.
so does this read like a "not recommended" to you?
Old 13th August 2018, 12:31   #39  |  Link
Atak_Snajpera
RipBot264 author
 
Atak_Snajpera's Avatar
 
Join Date: May 2006
Location: Poland
Posts: 6,630
Quote:
so we just ignore the benefits on the 10 core CPU?
Yes, because we are now living in a 16-32 core HEDT world. These days a 10-core HEDT CPU for $1k is sooo last age. Ryzen 3700 will have more cores than that old 7900X.

Last edited by Atak_Snajpera; 13th August 2018 at 12:35.
Old 13th August 2018, 12:42   #40  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,049
@NikosD

yes, AVX512 is not that important right now, just like AVX2 wasn't important when it was released.

if you just read why AVX512 loses speed on the 8180, you would instantly see that AVX512 could be used with as many cores as are present; the problem is just a design flaw from intel.

but calling it propaganda is so much easier...