Gaming PC: i7-6700K vs i7-4790 (non-K) vs i7-4790K, which should I pick?

Which is better, the i7-6700K or the 4790K?
I want to build my own PC. I'm aiming for a GTX 970 and was thinking of an i7-6700K for the CPU, but most of the recommendations I've seen say to get the 4790. I'm basically a hardware newbie, so I'd appreciate any advice from you veterans. I mainly want to play some single-player games.
Sorted by votes
I'm running a 4790K + Leadtek 970. Supposedly the 6th-gen i7 improves less than 5% on the previous generation, and held back by its lower clock speed it can even lose to the 4790K at times. The 4790K turbos to 4.4GHz, after all. And it's cheaper.
If you don't care about value for money, the 6700K is of course the best, but a K-series CPU needs a matching motherboard, so all in, the budget easily goes up by 2000+. Talking configurations without considering your needs is meaningless. The reason the 4790 is so common is that it already handles most of today's mainstream games.
On Taobao the 4790K currently costs slightly more than the 6700K.
The 6700K's strength is that Skylake is the new platform with DDR4 memory support; that's Skylake's selling point. Performance-wise it's only a bit better than the 4790K, though I hear it's very strong in some professional software, maybe thanks to new instruction sets. The 6700K also seems to overclock very high. So if you have the money, go new; "buy new, not old" is the rule, but Skylake motherboards and memory are expensive.
I've also been thinking of getting a 6700K lately.
Not sure how good the 970 is.
OP, have you built it yet?
All I can say is the 6700K is better for professional software.
Your question
i7-4790K vs i7-5820K vs i7-6700K (Skylake)
Multi-core CPUs have been around for 10+ years and still only a minority of software makes significant use of more than two cores. It is unlikely typical games will grow beyond requiring quad cores before the i7-5820 becomes obsolete, so I would not bother losing sleep over that.
DDR4 is still considerably more expensive than DDR3 and based on how little performance scaling there is from increasing clocks on DDR3, there is little reason to believe DDR4's bandwidth will produce significant performance gains except while using the IGP, so I would not lose sleep over that either.
The i5-2500k is still a very viable gaming CPU by today's standards even at stock clocks, so whichever i7 you buy today will likely remain viable for 5-7 years.
Reply to InvalidError
I see games using quad core CPU's now.
People rightfully think multicore programming is hard, it is.
The hard part is creating the first and 2nd thread, after that creating more threads is simple.
Once a game (or any program) can run efficiently on a dual core its significantly easier to add more threads.
However I don't see the industry pushing this yet.
They rightfully notice that 6 and 8 core Intel CPUs are still out of the reach of most gamers and will instead program for four cores or less for the foreseeable future.
Nearly no one would play a game that required an i7-5820K to run well, simply because the costs are still very high compared to quad core CPUs.
Like InvalidError said, the i7-5820k will be obsolete before games recommend 6/8 core CPU's.
I would wait for Skylake just so the prices on the current generation drop.
It's only 1 to 3 months before Broadwell comes out and 4 to 6 months for Skylake.
As an added bonus we will know more about AMD Zen by the time Skylake hatches.
Reply to JQB45
I wouldn't look too much into "future proofing" your new system as most people don't keep their system much more than 3-5 years.
Build the best PC you can for RIGHT NOW!
For me, that's an easy choice, i7-4790K with DDR3 memory.
Like you said, 4 faster cores will generally outperform a slower 6+ core CPU in most games now.
Plus most games are GPU limited, so take the money you would save on a Haswell i7 and put it into better GPU(s) and/or monitor(s).
Reply to gerr
JQB45 said:I see games using quad core CPU's now.
People rightfully think multicore programming is hard, it is.
The hard part is creating the first and 2nd thread, after that creating more threads is simple.
I wouldn't say that.
The first threads you create in a typical application are there to delegate tasks so the UI thread doesn't stall, but most of those threads do relatively little work. When you have time-consuming tasks, you run into the additional hurdle of figuring out how to factor them into efficient multi-threaded problems, and many algorithms do not scale well beyond a handful of threads. Quite a bit of research and many a PhD thesis have gone into finding more threading-friendly alternatives to common constructs.
Much of the time, implementing threading beyond running unrelated or only loosely related tasks in parallel is simply not worth the trouble.
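That "loosely related tasks" shape is the easy case. A minimal Go sketch of it, assuming nothing beyond the standard library (the function name `parallelSum` and the chunking scheme are mine, just for illustration): each chunk is independent, so the fork/join needs no locking at all.

```go
package main

import "fmt"

// parallelSum splits [0, n) into `workers` chunks, sums each chunk in its own
// goroutine, and joins the partial results over a channel. The chunks share no
// data, so no locking is needed.
func parallelSum(n, workers int) int {
	out := make(chan int, workers)
	chunk := n / workers
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if w == workers-1 {
			hi = n // last worker takes any remainder
		}
		go func(lo, hi int) {
			s := 0
			for i := lo; i < hi; i++ {
				s += i
			}
			out <- s // send the partial sum; no shared state touched
		}(lo, hi)
	}
	total := 0
	for w := 0; w < workers; w++ {
		total += <-out // join: arrival order doesn't matter for a sum
	}
	return total
}

func main() {
	fmt.Println(parallelSum(1000, 4)) // 0+1+...+999 = 499500
}
```

Problems with data dependencies between chunks don't decompose this cleanly, which is exactly the hurdle described above.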
Reply to InvalidError
Exactly. Depending on the programming language, you can spam threads with one or two lines of source code. Does that make sense? Sometimes, for unrelated tasks that need to run in the background - usually not.
The challenge is finding ways to create threads with as equal a load as possible. In games this is often hard due to constantly changing circumstances and the lack of predictability, which creates a need for deterministic instruction execution. Your CPU can't first calculate where you'll step in 10 seconds and then where you step now; that doesn't make sense, and awkward position changes could occur even if movement were predictable.
Therefore, you try to offload as many separate tasks from the main thread as possible. Be it a kind of trading post, an ability-changing system, the user interface, calculation of damage and health changes after an attack, mana/energy control, etc. It's very common for games to have 50+ threads working. Those, however, all take very little CPU time, so the challenge is finding ways to split up the CPU-heavy tasks such as physics calculations. There you'll run into problems: for example, if you split X-, Y- and Z-axis movement into separate threads and the Y position gets calculated before X/Z, you'll end up in the air or inside the ground on your next step. So you need to coordinate the threads to still get the behavior you want, which means that unless they always take the exact same time to finish, a fast one might end up waiting on a slow one, which shrinks the performance increase you get from having two or three cores running at the same time instead of one.
It's hard finding things you can parallelize efficiently without much need to schedule them, because once scheduling is involved you run into unwanted scenarios (bugs) far more often than when dealing with a single thread.
You might actually run out of tasks that can be offloaded before your game makes efficient use of X cores. I'd even argue creating the first set of offloaded instructions is easier than the following ones, because you'll eventually have trouble finding something where controlling it in separate instances is worth it performance-wise and doesn't add too much extra code.
By the way, Go code for incrementing a number 1,000,000 times using 10 goroutines:
package main

import "fmt"

func main() {
    var counter int
    for i := 0; i < 10; i++ {
        go incr(&counter)
    }
    fmt.Println(counter)
}

func incr(a *int) {
    for l := 0; l < 100000; l++ {
        *a++
    }
}
The result is going to be around 10 though, not the expected 1e6. That's because the main thread finishes before the others do. Now, you can add a channel to wait for all goroutines to finish:
package main

import "fmt"

var done = make(chan bool)

func main() {
    var counter int
    for i := 0; i < 10; i++ {
        go incr(&counter)
    }
    for i := 0; i < 10; i++ {
        <-done // wait for each goroutine to signal completion
    }
    fmt.Println(counter)
}

func incr(a *int) {
    for l := 0; l < 100000; l++ {
        *a++
    }
    done <- true
}
...And the result still isn't going to be 1,000,000: while one goroutine reads the value and calculates, another might read the same value, do the same increment and write the same result back.
You'll need to schedule the threads by using semaphores or locks or channels or whatever (or in this example simply use an atomic increment).
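For completeness, a hedged sketch of that last suggestion in the same Go style (the helper name `atomicCount` and the worker counts are mine, not from the post above): `sync/atomic` makes each increment an indivisible read-modify-write, and a `sync.WaitGroup` replaces the hand-rolled channel join.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// atomicCount launches `workers` goroutines, each adding `perWorker` to a
// shared counter with atomic increments, and waits for all of them to finish.
func atomicCount(workers, perWorker int) int64 {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for l := 0; l < perWorker; l++ {
				atomic.AddInt64(&counter, 1) // indivisible: no lost updates
			}
		}()
	}
	wg.Wait() // block until every goroutine has called Done
	return counter
}

func main() {
	fmt.Println(atomicCount(10, 100000)) // prints 1000000
}
```

With the atomic add, no interleaving can drop an update, so the count finally comes out exact.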
Reply to DubbleClick
You lost me there.
Thanks for the answers, by the way. But don't you think the i7-5820K is more future proof (can keep you running longer)? First, you're already updated to DDR4, so you won't be waiting for Skylake and you won't end up watching games and apps take advantage of DDR4 while you're stuck with older DDR3. Second, maybe games will start using 6 cores in the near future. AMD hexa and octa cores have been around for pretty long now, so it's not that new. Games that take advantage of more cores might not be that far off.
By the way, is there ANY game that will have trouble running on a monster like the i7-5820K? Can anyone see any difference between that and the older i7-4790K? They both do great with a powerful GPU, so why not take the newer one and skip upgrading to Skylake? Many are tired of waiting. I just built an i7-5820K with a GTX 980 in PCHound for around $1700. I know it's still expensive, but it's affordable. What do you guys think? How about those benchmarks showing the 5820K beating the 6700K in GTA V and Crysis 3 and such?
Reply to DukiNuki
DukiNuki said: don't you think the i7-5820K is more future proof (can keep you running longer)? First, you're already updated to DDR4, so you won't be waiting for Skylake and you won't see games and apps taking advantage of DDR4 while you're stuck with your older DDR3.
By the time either of those two things become a significant concern, the i7-5960X and DDR4-3200 will likely be obsolete anyway.
It boils down to buying a $2000 system today that might last seven years, or building a $1000 system today that will last four or five and another $1000 PC at that point that will exceed the original $2000 PC's specs and last another four or five. The best overall bang-per-buck is building two systems.
After three or four years, you will start itching to upgrade the system anyway due to all the updated IOs, regardless of how "future-proof" your processing power was.
Reply to InvalidError
DukiNuki said: games and apps taking advantage of DDR4 while you're stuck with your older DDR3.
There is no speed advantage of DDR4 over DDR3. It has higher theoretical bandwidth, but you're never going to use more than a fraction of that anyway.
DukiNuki said:
Second, maybe games will start using 6 cores in the near future.
Quad core CPUs have been in the mainstream lineups of Intel and AMD for roughly 10 years. Most games still don't make efficient use of 4 cores (tl;dr of my last post: harder to code, more vulnerable to bugs, no reward for the devs) and I don't see that changing any time soon. If you want an "upgrade" for gaming, you'll be looking at a CPU with better single-core performance than the i7-4790K, which does already exist (the i7-6700K) but won't be available for purchase for the next few months.
InvalidError said:
After three or four years, you will start itching to upgrade the system anyway due to all the updated IOs, regardless of how "future-proof" your processing power was.
Oh, so much this.
Reply to DubbleClick
"By the time either of those two things become a significant concern, the i7-5960X and DDR4-3200 will likely be obsolete anyway"
Well, when that time comes, I'll already have fast DDR4 and a 6-core i7 CPU (meaning I'll already be up to date with that time's standards), no matter if the i7-5960X is outdated by then. Am I right? People are still using, and still satisfied with, their old i5-2500Ks with high-end VGAs. Well, I can wait for Skylake or even Cannonlake, or just go for the cheaper i7-4790K or the more expensive i7-5820K. Bottom line, I don't think they'll make a noticeable difference in gaming as long as you have a great GPU and your CPU isn't bottlenecking it.
CPUs hardly ever bring big performance gains. They don't work like graphics cards, so I guess "outdated" doesn't quite apply to CPUs; it's just the number that changes, plus some new features.
Reply to DukiNuki
It sounds like you made up your mind before even posting, so just get what you want as long as it fits within your budget.
Reply to gerr
No, I have not. That's why I came here. I just don't get why nobody is voting for the i7-5820K. I'm still waiting for Skylake. Sorry if I sound like a jerk who's not listening. By the way, I'll probably go for the i7-4790 because the more I read, the more I get that it's a better deal than the i7-5820K. Thanks anyway.
Reply to DukiNuki
Because games don't put a continuous, well-distributed load across more than 4 cores, for the reasons I mentioned in previous posts, and therefore scale better with faster frequency than with more cores. They don't benefit from DDR4 RAM either, so with the i7-5820K you're wasting big money for no performance gain.
Reply to DubbleClick
DukiNuki said: just don't get why nobody is voting for the i7-5820K
Because for most gaming situations, six cores, HT and DDR4 yield little to no extra performance gain at a significantly higher cost compared to an i5 system. For most gamers, the i7-5820k with its expensive support components are simply a waste of money with little foreseeable benefits.
Many/most gamers who buy an i7-5820K also use their computers for large video processing, 3D rendering, compiling, simulation, etc. jobs and need the extra cores for that. They buy the i7-5820K/5960X to run workstation-style jobs and make their workstation double as their gaming rig, or vice versa. Either way, they have immediate significant uses for the extra cores.
And you also have the few who want the 40-lane chips (5930K/5960X) to run 8x/8x/8x/8x CF/SLI in their extreme gaming rig.
Reply to InvalidError
Got it, thanks a lot. A few days ago I read about Crysis taking advantage of HT, and there was a 20 FPS difference between HT on and off, so I thought maybe a few CPU-heavy games like Arma 3 would take advantage of it. But it's better not to waste my money on it and get 4 stronger cores. Sorry if I was rude or idiotic in any way.
Reply to DukiNuki
InvalidError said: DukiNuki said: just don't get why nobody is voting for the i7-5820K
Because for most gaming situations, six cores, HT and DDR4 yield little to no extra performance gain at a significantly higher cost compared to an i5 system. For most gamers, the i7-5820k with its expensive support components are simply a waste of money with little foreseeable benefits.
Many/most gamers who buy an i7-5820K also use their computers for large video processing, 3D rendering, compiling, simulation, etc. jobs and need the extra cores for that. They buy the i7-5820K/5960X to run workstation-style jobs and make their workstation double as their gaming rig, or vice versa. Either way, they have immediate significant uses for the extra cores.
And you also have the few who want the 40-lane chips (5930K/5960X) to run 8x/8x/8x/8x CF/SLI in their extreme gaming rig.
I bought a 5930K exactly for this purpose: video rendering. Tests show the full six cores in use (around 80% on each), even when using the 290X.
OpenCL encoding as well. This proves a good place for the six and eight core processing units to do what they are built to do.
However, to stay on topic, I must add that I have kept my 2600K build in case I ever decide to play games again in the future. I would expect the 2600K at 4.5GHz to perform better than the 5930K at stock at the present time.
When I render video, the instructions of my task are set; I don't change anything, I just let the process run. That would seem far easier to spread across cores than a continuous set of instruction changes during gameplay. I say that in simpler terms, of course; how it actually works has been covered pretty well by another poster above.
Reply to H11poc
I bought the 5820, however gaming was my second need, video editing (which does benefit from multiple cores) was my priority.
I went with the X99 platform because of the additional SATA ports and the M.2 and eSATA options, since I wanted to run a variety of SSDs.
Eventually settled on an Intel 750 PCIe and a couple of Samsung EVOs, and I have the option of adding an M.2 or eSATA drive.
And I have played every game thus far with no issues.
Reply to jddem
Seconds after I read the review on the 6700k, I bought a ASRock x99 ITX, Crucial 32GB (2 x 16GB) DDR4 ECC, SAMSUNG SM951 M.2, and a Xeon E5-1650 v3.
(Already have a decent 680 GTX)
The 1650 v3 (like a 5930K with ECC) has better multithreaded performance than a 6700K AND has lower load temps, so it seemed like a no-brainer if I could afford it, even though there are more cost-effective solutions. I will use the new rig for real work and not just games and web surfing. The 6700K temps may go down with maturity, but I don't think by much.
I spend every day reading these kinds of articles, so I can't see myself with a new 4-core CPU after 7 years of using a q9550.
With 3.8 Ghz turbo speed, it should work well with games and should show bigger gains with the more multithreaded DX12.
Just my 2 cents worth.
Reply to enewmen
InvalidError said:Multi-core CPUs have been around for 10+ years and still only a minority of software makes significant use of more than two cores. It is unlikely typical games will grow beyond requiring quad cores before the i7-5820 becomes obsolete, so I would not bother losing sleep over that.
DDR4 is still considerably more expensive than DDR3 and based on how little performance scaling there is from increasing clocks on DDR3, there is little reason to believe DDR4's bandwidth will produce significant performance gains except while using the IGP, so I would not lose sleep over that either.
The i5-2500k is still a very viable gaming CPU by today's standards even at stock clocks, so whichever i7 you buy today will likely remain viable for 5-7 years.
I would add a small note for choosing the CPU in this case: the main difference with the 6700K is that it uses a different chipset, which has quite a few changes in terms of PCIe lanes. That gives a lot better performance when you want to use an M.2 drive, SLI/CF, USB 3.1 and similar situations where PCIe lanes are involved.
Reply to darkmeiun
The 4790 should work in 1151? The few boards I've been looking at for DDR4 all claim they support "Core i7 / i5 / i3 / Pentium / Celeron (LGA1150)", so I could stick with the 4790K and still get the DDR4 advantage.
Reply to GObonzo
GObonzo said:4790 should work in 1151?
Just because the sockets look nearly identical except for one extra pin does not make them compatible. Power delivery to the CPU on Skylake is completely different from Haswell, since Skylake moved the CPU core voltage regulator back to the motherboard.
Reply to InvalidError
InvalidError said:GObonzo said:4790 should work in 1151?
No. Just because the sockets look nearly identical except for one extra pin does not make them compatible. Power delivery to the CPU on Skylake is completely different from Haswell, since Skylake moved the CPU core voltage regulator back to the motherboard.
I didn't write that they looked identical. I quoted the motherboard's specification that it supports: Core i7 / i5 / i3 / Pentium / Celeron (LGA1150). Since it specifically states it supports the 1150 version of the Celeron, that should imply it supports the other 1150 "i" series listed.
Reply to GObonzo
GObonzo said: I didn't write that they looked identical. I quoted the motherboard's specification that it supports: Core i7 / i5 / i3 / Pentium / Celeron (LGA1150).
If you see an LGA1151 motherboard advertised as compatible with LGA1150, that would be because someone copy-pasted the web page or database entry for the motherboard and forgot to update the CPU list with LGA1151.
Reply to InvalidError
InvalidError said: GObonzo said: I didn't write that they looked identical. I quoted the motherboard's specification that it supports: Core i7 / i5 / i3 / Pentium / Celeron (LGA1150).
If you see an LGA1151 motherboard advertised as compatible with LGA1150, that would be because someone copy-pasted the web page or database entry for the motherboard and forgot to update the CPU list with LGA1151.
Multiple listings from Newegg. The first spec paste earlier was from MSI's website showing 1150 compatibility. Most of the boards I'm looking at have changed their Celeron support to 1151 only, but the others only state i3, i5, i7, with no 6th-gen limitation stated.
Reply to GObonzo
GObonzo said: Multiple listings from Newegg. The first spec paste earlier was from MSI's website showing 1150 compatibility. Most of the boards I'm looking at have changed their Celeron support to 1151 only, but the others only state i3, i5, i7, with no 6th-gen limitation stated.
They did not "change their Celeron support to LGA1151". The "(LGA1151)" at the end of the CPU support list is meant to apply to the whole list, and it is completely redundant since 6th-gen-only compatibility (the only CPUs you can get for LGA1151) is already a given due to the socket's pin count.
A copy-paste error/oversight does not magically make LGA1150 compatible with LGA1151.
Reply to InvalidError
InvalidError said: A copy-paste error/oversight does not magically make LGA1150 compatible with LGA1151.
You're such a genius; no one could ever be as intelligent as you. Of course everyone else in the world would just buy a board and count individual pins before reading spec sheets.
Reply to GObonzo
My 5820k is OC'd to ~ 4.6 GHz from the stock 3.3.
This is a stable OC using the Asus software and X99 Deluxe board.
I am a beginner and do not mess with settings on my own.
The DDR4 Memory was not significantly more expensive than DDR3.
I am using a reference 980Ti.
I run Ark Survival single player with all settings maxed out and get 45 FPS.
It is nice to finally see what games are supposed to look like!
Reply to jmcneal
The very best you could do is look at the specs for the CPU and not the board. e.g. Intel Core i5-4690K Devil's Canyon Quad-Core 3.5GHz LGA 1150; Intel Core i7-5930K Haswell-E 6-Core 3.5GHz LGA 2011-v3; Intel Core i5-6600K 6M Skylake Quad-Core 3.5GHz LGA 1151. LGA stands for Land Grid Array. If you buy an 1150 board, it WILL NOT fit a 2011-v3 chip, etc. Each has a different pin count. You came on here for help, and you're getting snippy.
Reply to GaryO63
It depends what games you will be playing. Some games, like the Arma franchise, really only use 1 core, so you would want a CPU with great single-core performance. But I would go for parts that will benefit you in the long run, so you don't have to upgrade again.
Reply to Jesse_bisbee
Thinking about OP's choices also made me lose much sleep over many months. Then I thought "to hell with this" and bought the 4790K. I run it with all SpeedStep and power saving features off, constantly at 4.4GHz, paired with a GTX 980 Ti and a cheap 16 gigs of DDR3 RAM. Someone on this thread said that by the time games start using 6 cores, this Haswell-E will be obsolete, like the AMD 8-core chips... I agree.
Reply to Mugglensu1984
Ya, I basically did the same... i7-4790K @ 4.4GHz, 16GB DDR3, 980 Ti.
Should be plenty powerful for several years.
Heck, I know some people with high-end Sandy or Ivy bridge CPU systems and all they did was upgrade their GPU's and are gaming fine at high levels.
Reply to gerr
If I had a Devil's Canyon I wouldn't even blink at Skylake.
From a price standpoint, the best value i7 ain't the Skylake.
If your heart is set on a 5820K, go for it; with DirectX 12 around the corner, CPU single-core overhead ain't gonna be worth worrying about.
Hmmm, a lot of old games still use DirectX 9, but hey, old games will be pwned by a 5820K.
Reply to LoneGun
In my opinion you should buy the Skylake i7, because it has 8 threads, which makes it act like an 8-core processor, and it has the most powerful per-core performance of all three. I think it's generally the best choice for gaming now; it will last until the end of the PS5.
Reply to Kritonios
Or you could wait for Skylake-E. I'm rocking an old i5-2500K @ 4.8GHz at 120Hz 1080p and not planning to go 4K just yet, so I don't really see the point of the latest Skylake; it's barely quicker than a Devil's Canyon, which is quite deflating.
Reply to LoneGun
I went through the same thought process a couple of weeks ago (4790K vs 5820K vs 6700K) and the key factors that helped in the decision were:
a) Where is each option in its Product Life Cycle (End/Mid/Start)
b) What are current pricing impacts of these Product Life Cycles
c) Where are the "supporting technologies" of each option
d) What are the differences between these options in terms of # of PCIe lanes, RAM channels, etc.
e) How the above map to a usage that is mainly non-gaming (video conversion/data management/etc)
f) How the above map to a planned system life of ~3-4 years minimum
First to be eliminated was the 4790K based on "end-of-life" of both CPU and socket-set (and associated PCH).
Next was the 6700K, based on it being too new (artificially inflated pricing, virtually no "real difference" from the 4790K, potential for initial teething problems).
So, wound up with the 5820K and - after 2 weeks of living with it - no regrets.
Basis of plus factors:
a) LGA2011-v3 likely to be around for 18-24 months
b) DDR4 RAM already (despite slight cost premium)
c) 28-lane PCIe 3.0 architecture vs 16-lane for both 4790K and 6700K (additional lanes on 6700K via DMI 3.0 to PCH)
d) More focus on current "real world" needs (eg SATA ports via PCH, USB 3.0, etc)
e) Virtually no wastage on "solutions seeking a problem" (such as USB 3.1)
As I needed at least 10 full-speed SATA 6G ports and had no need for USB 3.1 (what currently available devices are there that need/support USB 3.1?), the 28-lane 5820K fitted like a glove (1 x 16x for the GPU, with 1 x 8x and 1 x 4x for future use).
I'm still trying to establish whether these "spare" PCIe 3.0 lanes can be used for:
a) 8x RAID card
b) 4x PCIe 3.0 SSD (eg Intel 750)
Current config:
Lian-Li PC-Z70 full tower with 9x 3.5" hotswap HDD bays
Asrock Fatal1ty X99X Killer board
Core i7-5820K
Corsair H80i GT liquid cooler
4 x 8GB Corsair Vengeance LPX DDR4-2133 RAM
1 x EVGA GTX970
Corsair AX860 PSU
Samsung 850PRO 512GB SSD
6 x 6TB plus 3 x 3TB HDDs
Sunix PCIe 2.0 (1x) 2-port SATA controller
DVD-RW plus BD-DW optical drives
Windows 8.1 Pro-64
So far, everything I need works and works very well.
My $0.02 worth...
Reply to DevillEars
I'm not here to say when or to what extent 4 vs >4 cores will matter (also 4 vs >4 threads, I suppose).
I'm just posting to offset the constant stream of some people saying 2 cores is enough. Don't listen to them! I think they are talking more from theory than from experience with a variety of demanding games and tasks.
You will hear more people like me say that anything less than 4 cores is a really bad choice in general. I've used both, and for gaming (for example BF4), general use, and multitasking, there's no contest. Never get 2 cores. But again, that doesn't mean you need more than 4, but who knows when you will?
Reply to eltouristo
eltouristo said: I'm not here to say when or to what extent 4 vs >4 cores will matter (also 4 vs >4 threads, I suppose).
I'm just posting to offset the constant stream of some people saying 2 cores is enough. Don't listen to them! I think they are talking more from theory than from experience with a variety of demanding games and tasks.
You will hear more people like me say that anything less than 4 cores is a really bad choice in general. I've used both, and for gaming (for example BF4), general use, and multitasking, there's no contest. Never get 2 cores. But again, that doesn't mean you need more than 4, but who knows when you will?
2 cores is plenty if you are running something like SteamOS, games only and nothing else; then they are right. However, most people will have a few small things running in the background, or maybe even a browser/music player. In any case, I would rather have a 4-core CPU than a dual core.
Reply to Iamsoda
If you only have a single gpu, an Intel quad core would be a great solution. If you have more than one, and it's a 'higher end' gpu, then a 6 core isn't a bad idea.
Reply to Nathan White
I think the main thing that makes the 6700K worth buying over the 4790K is that it is more than likely going to be a cooler running CPU, and 14nm is a nice upgrade. The main benefit of the cooler temps is that overclocking will be much better, easier and more stable, since it doesn't need to run at high temps and/or start throttling as you push clock speeds further.
Reply to PS3toPC
Had the same dilemma. Went for the 5820K and a PCIe SSD. It overclocks way better than the 6700K or even Devil's Canyon, due to the solder on the die, and the SSD makes it staggeringly quick. If you game and just game, the 6700K will do you fine, but since the 5820K will game at the same rate and destroy it in multithreaded workloads, it seems like a simple choice to me. 8 Pack shares my opinion, so I'm happy.
Reply to Jimbo79
Might I just point out that there seems to be a lot of discussion regarding concurrent programming and threads with the idea that creating threads somehow means you're using more cores on a CPU.
Threads != cores. While you may know this already, allocating CPU resources (cores) is entirely the operating system's job. Just because a programmer creates 1000 threads does not mean all 4 CPU cores will be used. From what I understand (as a 3rd-year computer science student), the application programmer has no control over which core a thread will run on; even if 1000 threads are created, there is no guarantee that multiple cores will be used.
Additionally much of the concurrency is taken away from a standard game developer and depends on the game engine they use - much safer this way.
Reply to Leigh_1
Leigh_1 said: from what I understand (as a 3rd-year computer science student), the application programmer has no control over which core a thread will run on; even if 1000 threads are created, there is no guarantee that multiple cores will be used.
If at least four independent threads doing non-trivial work meet scheduling/wake-up conditions before any of the others goes back to sleep on its own or hits a lock, it is extremely unlikely the OS will let cores or hardware threads go to waste when there is plenty of work for them to do unless it is in some power-saving/throttling state that forbids the OS from waking up and scheduling the other cores.
If you have 1000 threads but they are all waiting on other threads' locks or otherwise are non-schedulable most of the time, then I think there is something horribly wrong with your programming model. In that case, you are right, creating threads just for the heck of creating threads might not improve concurrency by a significant amount.
If you run a typical game like Tomb Raider, you see one thread hogging the equivalent of a whole core continuously, a secondary thread using about half a core, and a bunch of other minor threads collectively using roughly the equivalent of an extra half core, so all the processing should theoretically fit on two cores. But some of those extra threads run concurrently with others, and packing everything onto only two cores may increase latency, since one or more of the running threads, such as the main thread of most games, needs to be preempted to give the others a fair share of CPU time. So you need a 2C4T or better CPU to mitigate the context switches.
On top of that, you have to add all of the OS' background processes as additional workload and sources of induced context switches.
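The "1000 threads waiting on other threads' locks" case above can be sketched in Go (the function name `serializedWork` is illustrative, not from any post): every goroutine contends for one mutex, so however many cores the OS has available, at most one goroutine makes progress at a time, and the extra cores go to waste on this workload.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// serializedWork spawns n goroutines that all contend for a single mutex.
// Only one can hold the lock at a time, so the workload runs essentially
// sequentially: lots of threads, but no extra usable concurrency.
func serializedWork(n int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	shared := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			shared++ // critical section: at most one goroutine in here at once
			mu.Unlock()
		}()
	}
	wg.Wait()
	return shared
}

func main() {
	fmt.Println("hardware threads:", runtime.NumCPU())
	fmt.Println("result:", serializedWork(1000)) // correct, but not parallel
}
```

The result is correct, but the scheduler can never keep more than one of these goroutines running at once, which is exactly the "something horribly wrong with your programming model" scenario.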
Reply to InvalidError
I have the 5930k and I see 6 core usage when I'm on the Oculus Rift DK2 on Project Cars.
Other games like ArmA 3 also take 6 core usage.
I can see 6 core usage because in Intel Extreme Tuning Utility it logs what has happened to up to the past hour.
When I'm in Windows it's 1 core sometimes it goes 3-4 for some reason as I have other things running.
When VR hits the market the more cores the better.
It's more than just games that need the cores but the OS programs as well.
I tried an i5 after having been on i7 for a long time.
I went i5-4690K after the 3770K to save money, but damn, it was bad when multitasking in the background.
Games would stutter at times.
It would start lagging later.
It probably made sense to get an i3-i5 back in Windows 7, but Windows 10 takes advantage of everything it can get.
If you look at task manager for a while you would see many processes running and turning off.
Especially with high-end SSDs that saturate the PC quickly, those bursts of sapped performance show up as a stutter or a lag.
Weird thing is I still get it at times but not as much.
5930k, NVME 400GB Intel, 980 Ti, Creative ZxR, X99 EVGA, 32GB DDR4
I'm just worried for all the people going VR next year: it will hit big, and people will run into slow performance at the CPU level, not the GPU.
Six-core usage in virtual reality makes sense; many processes have to stay in sync, like the camera tracking sensor, mouse, keyboard, etc.
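The per-core saturation described above is easy to reproduce: give each logical core its own CPU-bound worker process and a per-core monitor (Task Manager, Intel XTU) will show every core busy while it runs. A minimal Python sketch, with a made-up `burn` workload standing in for real game or OS work:

```python
import multiprocessing as mp
import os

def burn(n):
    # CPU-bound busy work; one of these per process can peg one core.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    # One worker per logical core: while this runs, a per-core monitor
    # shows all cores loaded, just like a busy game plus background tasks.
    with mp.Pool(processes=cores) as pool:
        results = pool.map(burn, [200_000] * cores)
    print(len(results) == cores)  # True -- one result per worker
```

The point of the sketch: once every core already has a runnable task, any extra work (an SSD burst, a background update) has to time-slice with something, which is exactly when stutter appears on a CPU with fewer cores.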
Reply to AntDX316
Windows 10 brings DirectX 12 which has features specifically geared toward multi-threaded rendering.
Until DirectX 12, you couldn't gain much by adding cores because the main hog, rendering, was almost always single-threaded; DirectX 11 and earlier gave you little benefit from running multiple threads in the rendering pipeline.
I'm not saying 4 vs 6 cores is going to see much difference, though.
But you might eventually start seeing 6 cores bump FPS numbers by some noticeable percentage over 4 cores.
But you'll be fine with 4 cores for another 2 years at least.
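The DirectX 12 point above is about a specific pattern: each thread records its own command list independently, then one thread submits them all in order. Here is a toy Python model of that pattern (the names `record_chunk` and `submit` are illustrative, not the D3D12 API, where the analogues are per-thread command lists handed to a single command queue):

```python
from concurrent.futures import ThreadPoolExecutor

def record_chunk(chunk_id, draw_calls):
    # Each thread builds its own command list -- no shared state,
    # which is what makes the recording phase parallel-friendly.
    return [f"draw:{chunk_id}:{i}" for i in range(draw_calls)]

def submit(command_lists):
    # Submission stays ordered and single-threaded, like queueing
    # the recorded command lists to the GPU in one place.
    frame = []
    for commands in command_lists:
        frame.extend(commands)
    return frame

with ThreadPoolExecutor(max_workers=4) as pool:
    # Four threads record four command lists concurrently;
    # map() preserves chunk order for the submit step.
    lists = list(pool.map(record_chunk, range(4), [3] * 4))

frame = submit(lists)
print(len(frame))  # 12 -- recorded in parallel, submitted in order
print(frame[0])    # draw:0:0
```

The recording phase scales with core count while the final submit stays deterministic, which is why this style of API can turn extra cores into extra FPS where older single-threaded rendering paths could not.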
Reply to Cowtung
Cowtung said:Windows 10 brings DirectX 12 which has features specifically geared toward multi-threaded rendering.
Until DirectX 12, you couldn't gain much by adding cores because the main hog, rendering, was almost always single-threaded, because DirectX 11 and prior didn't give you much for running multiple threads in the rendering pipeline.
I'm not saying 4 vs 6 cores is going to see much difference, though.
But you might eventually start seeing 6 cores bump FPS numbers by some noticeable percentage over 4 cores.
But you'll be fine with 4 cores for another 2 years at least.
But the price difference between a 5820K and a 6700K is only a couple of dollars.
Get the 6700K if you are going to use the onboard GPU.
Otherwise get the 5820K.
Reply to AntDX316
AntDX316 said:Cowtung said:Windows 10 brings DirectX 12 which has features specifically geared toward multi-threaded rendering.
Until DirectX 12, you couldn't gain much by adding cores because the main hog, rendering, was almost always single-threaded, because DirectX 11 and prior didn't give you much for running multiple threads in the rendering pipeline.
I'm not saying 4 vs 6 cores is going to see much difference, though.
But you might eventually start seeing 6 cores bump FPS numbers by some noticeable percentage over 4 cores.
But you'll be fine with 4 cores for another 2 years at least.
but the cost of a 5820k vs 6700k is like a couple dollars
You get 6700k if you are going to use the onboard GPU.
Otherwise get 5820k
Games have yet to support DirectX 12, so right now the 6700K will do better in almost all games; it's the higher frequency that pushes it past the 5820K for gaming. If games running DirectX 9 and 11(.1) behaved like DirectX 12, the otherwise-idle cores would push the 5820K past the 6700K, as most would expect.
Reply to PS3toPC
Cowtung said:Windows 10 brings DirectX 12 which has features specifically geared toward multi-threaded rendering.
Until DirectX 12, you couldn't gain much by adding cores because the main hog, rendering, was almost always single-threaded, because DirectX 11 and prior didn't give you much for running multiple threads in the rendering pipeline.
I'm not saying 4 vs 6 cores is going to see much difference, though.
But you might eventually start seeing 6 cores bump FPS numbers by some noticeable percentage over 4 cores.
But you'll be fine with 4 cores for another 2 years at least.
thumbed up by accident.
More cores are better, especially when they're from Intel.
It makes navigating while doing other things a lot smoother.
When you peg 100% CPU with fewer cores you will know what I'm talking about.
If you go i5 with no Hyper-Threading you will know what I'm talking about.
With Windows 10, and new developers taking advantage of multiple cores as with VR, you will know what I'm talking about.
Well, you probably won't, because you'll accept lagging every now and then and think it's fine; but the CPU log would show 100% usage, and six cores give enough headroom to play without lag.
Reply to AntDX316
PS3toPC said:AntDX316 said:Cowtung said:Windows 10 brings DirectX 12 which has features specifically geared toward multi-threaded rendering.
Until DirectX 12, you couldn't gain much by adding cores because the main hog, rendering, was almost always single-threaded, because DirectX 11 and prior didn't give you much for running multiple threads in the rendering pipeline.
I'm not saying 4 vs 6 cores is going to see much difference, though.
But you might eventually start seeing 6 cores bump FPS numbers by some noticeable percentage over 4 cores.
But you'll be fine with 4 cores for another 2 years at least.
but the cost of a 5820k vs 6700k is like a couple dollars
You get 6700k if you are going to use the onboard GPU.
Otherwise get 5820k
Games have yet to support DirectX12.... So right now, 6700k will do better for almost all games... Its the higher frequency that pushes it past the 5820k for gaming. If games running DirectX9 and 11(.1) acted like DirectX12 then the unused cores would push the 5820k beyond the 6700k like most would expect.
Getting more cores isn't only about DX12.
It's about not lagging when you have multiple devices that need real-time CPU attention, like the Oculus Rift coming next year.
Reply to AntDX316
So, in the end, since Skylake and Haswell-E are very similar in price, which do you go for if you mostly game, with web browsing and basic word processing, etc.? I occasionally encode, but not enough to call it a main reason to pick a CPU. From what I've read it seems Haswell-E is the way to go over Skylake: similar performance, similar price, more cores?
Reply to xaephod
xaephod said:So, in the end since the Skylake and Haswell-E are very similar in price, which do you go for if you mostly Game with web browsing and basic word processing, etc. I occasionally encode, but not enough to call it a main reason to get a CPU.
Unless you do some seriously CPU-intensive stuff, Haswell-E and the much more expensive motherboards that go with it are a waste of money in most cases, and so are the Haswell and Skylake i7s, so you can save another ~$100 on the CPU by getting an i5 instead.
I got an i5-3470 for my current PC and I doubt I will get the itch to upgrade before the five-year mark.
Reply to InvalidError
Copyright © 2016 Purch Group, Inc. All Rights Reserved. Tom's Hardware Guide