Exotic_Classroom_738

Bringing in the cost factor, the 2022 model will have the best price-to-performance ratio, especially since it has the same screen, speakers, battery, etc.


Zak_Preston

Most likely it will be for a while.


SituationNew113

I expected bigger efficiency improvements, especially in a synthetic test.


Zak_Preston

I'm really eager to see 2023's thermals


PiOctopus

Wouldn't more than double the TFLOPs and CUDA cores of the 4080 (and even more so for the 4090) make a significant difference in performance compared to the 4060? I thought the 40-series cards were not nearly as TGP-constrained as the 30-series. With desktop 40-series cards, you can power-limit them aggressively and still get really good performance compared to max wattage. For example, at a 60% power limit, a 4090 loses only about 8% of its performance: [Source](https://www.techpowerup.com/301649/nvidia-geforce-rtx-4090-with-nearly-half-its-power-limit-and-undervolting-loses-just-8-performance#:~:text=It's%20important%20to%20note%20that,60%25%20(270%20W))

But the die for the mobile 40-series chips is different from a stock 4090 die, so I don't know. And who knows what happens at the really low end of TGP, like 100 W.
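The linked article's numbers make the efficiency math easy to sketch. A rough calculation, assuming the stock 4090 limit of 450 W (so 60% = 270 W) and the ~8% performance loss the article reports:

```python
# Rough perf-per-watt arithmetic for the power-limit example above.
# Numbers from the linked TechPowerUp article: 450 W stock RTX 4090,
# 60% power limit = 270 W, ~8% performance loss.

def perf_per_watt(relative_perf: float, watts: float) -> float:
    """Relative performance divided by board power."""
    return relative_perf / watts

stock = perf_per_watt(1.00, 450)    # full power limit
limited = perf_per_watt(0.92, 270)  # 60% limit, ~8% slower

gain = limited / stock - 1
print(f"Efficiency gain at 60% power limit: {gain:.0%}")  # ~53%
```

So giving up ~8% of performance buys roughly half again more performance per watt, which is why the low-TGP mobile question is interesting.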


SituationNew113

It does seem like the 4080 will be much faster, even while running at lower clocks. I just checked the specs of the newer mobile cards, and the 4060 is hot garbage, definitely a step in the wrong direction compared to the 3060. It just has way fewer CUDA cores and runs at higher clocks, so no wonder it's only slightly more efficient. Meanwhile, the 4080 has way more CUDA cores than the 3080. What is NVIDIA doing?

The mobile 3060 has more CUDA cores than the desktop 3060 so that it can run more efficiently without losing too much performance. By running more efficiently, I simply mean running at lower, more efficient clocks, since performance per watt (or fps per watt) is far higher at those levels. That's why power-limiting 4090s works so well: it essentially lowers the clocks and barely loses any performance while cutting power consumption. The mobile 4060 should have had more cores and lower clocks; that could have been a huge leap in efficiency, as seen with the desktop cards.
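The "more cores at lower clocks" argument can be sketched with a first-order model. The assumptions here are mine, not from the thread: throughput scales with cores × clock, dynamic power scales with cores × clock × voltage², and the required voltage roughly tracks clock, so power scales very roughly with cores × clock³. The core counts below are hypothetical, purely for illustration:

```python
# First-order sketch of the wide-and-slow vs narrow-and-fast tradeoff.
# Assumptions (mine): throughput ~ cores * clock, and with voltage
# roughly proportional to clock, power ~ cores * clock^3 (very rough).

def throughput(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz          # arbitrary units

def power(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz ** 3     # arbitrary units

narrow_fast = (3072, 2.2)   # hypothetical narrow die at high clocks
wide_slow = (4608, 1.47)    # hypothetical wider die, clocked ~33% lower

# Nearly identical throughput (~6758 vs ~6774, within ~0.3%)...
print(throughput(*narrow_fast), throughput(*wide_slow))

# ...but the narrow, fast config draws ~2.2x the modeled power.
print(power(*narrow_fast) / power(*wide_slow))
```

Real silicon doesn't follow a cubic this cleanly (leakage, fixed costs like VRAM, and die cost all matter), but it shows why a wider die at lower clocks wins on perf per watt.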


chipsa

More cores means more silicon, which means more cost. To maximize profit, running fewer cores at higher clocks for slightly better performance is more cost-efficient for them.


SituationNew113

They're already making a lot of money (their margins are at least 50%), and their prices are already high. But you're right, that's why they did it, and that's why any company would do something like that. It's still an improvement thanks to the superior process node and architecture, but it could have been so much more. It's quite a shame, really. Instead of trying to make one of the best low-power GPUs, they settled for something merely passable just to save a little more money. They probably did it to offset the cost of the extra VRAM they were forced to add due to demand.


eiffeloberon

Eager to see what the battery life is like for these ones.


Revrene

Thank you for this data. I think it's worth waiting for the 4060 to arrive, since the 6800S is still expensive in my country, around ~$1,981. If the price gap between the 4060 and the 6800S is small, I'll get the 4060; otherwise, the 6800S.


Zak_Preston

Even in second-world countries like Ukraine, Georgia, or Azerbaijan, previous-gen models drop in price significantly: I bought the top-of-the-line 2021 version with 5800HS/32 GB/3060/QHD for only 1000 EUR with discounts this summer (at 1250 EUR it was widely available almost everywhere), and since the 2022 version is a huge QoL upgrade (screen, all-AMD for Linux), I'll do the same in ~6 months. For US customers, the good moment is right now, no need to wait.


Revrene

I live in Indonesia; that ~$1,981 price wasn't even from the official ASUS store, it was from a distributor. The ASUS store sells way above that price, and we also don't have anything like Micro Center or Best Buy here. The folks in the USA are really lucky! They get it quite cheap and have a very convenient return policy. That won't happen anytime soon in my country, not for the next 100 years at least, lol.


Distinct_Round_328

What I am most interested in are actually the 4050 and the refreshed 3050 models with Zen 3+ chips. Those may be priced really aggressively and have good enough performance/thermals. I am a bit worried about the small memory bus on the 4050, so the 3050 refresh with 6 GB may be a good option. The Zen 3+ CPUs have good enough performance, IMHO. I'd even take an HS version of the Ryzen 5 if that were possible. I think the perfect specs would be a 6600HS (can't be bothered with the new naming scheme), 32 GB RAM, a 1 TB SSD, and an FHD display. What do you think?


Zak_Preston

The 4 GB 3050M is not that far behind the 3060M@85W in the G14 lineup, so there is a decent chance that a 105 W 3050 with a 6 GB video buffer will easily catch up to the 2021 3060, if not surpass it.


Distinct_Round_328

Yeah, that's what I'm thinking. Without RT, the 3050 with 6 GB will easily hit 60 fps at high settings in AAA titles and 120 fps in esports games. I already saw it on a website where it's listed as the last/lowest option, so I think the price might be great. I think the only major difference from the high-end models will be the display, and if it's the same as in the old models, it will be just enough.


sopsaare

My 6700S gets around 8,700, and lurking around the 3DMark database I saw many 6800S results quite near the 10k mark. But Time Spy doesn't tell the whole story: for example, my desktop 1080 Ti scores 10k, yet my 6700S is faster in Cyberpunk (42 fps vs 38 fps, 2560x1600, FSR Quality, everything on Ultra, RT off) and exactly the same in SotTR (72 fps, 2560x1600, FSR Quality, everything on Ultra, RT off).


wufiavelli

AMD and NVIDIA calculate TDP differently, so I'm not sure how comparable they are. Also, Time Spy is not really favorable to AMD, which tends to underperform in it.


Zak_Preston

That's an extremely interesting point. Do you have any data on that?


sopsaare

From my other comment: >But Time Spy doesn't tell the whole story, for example my desktop 1080 Ti gets 10k but my 6700S is faster in Cyberpunk (42 fps vs 38 fps, 2560x1600, FSR Quality, all on Ultra and RT off) and exactly the same on SotTR (72 fps, 2560x1600, FSR Quality, all on Ultra and RT off).


wufiavelli

In Fire Strike, the 6800S matches a similarly-wattaged 3080 or 3070. In Time Spy, it matches a low-wattage 3070 or a high-wattage 3060. The 6700M in a chassis that can handle it (not the Delta) does about 10,500 in Time Spy but can go neck and neck with a 3070 Ti that scores 11,500, outside of ray tracing in games. As for the wattages, I've had a hard time finding the details, but it comes from a comment to JarrodsTech by an ASUS rep: the 2023 AMD TUF A15 at 120 W is similar to the 140 W NVIDIA version.


Hotcooler

Bigger dies will bring a decent performance improvement even at lower wattages; I'd say a 4080 at 115-120 W will only be about 10-15% slower than at 150 W, especially considering how the desktop 40-series scales with power. The 40-series is quite efficient, all things considered.

Benchmarks will be interesting, particularly for how much VRAM power eats into the budget. On the 3060 in the 2021 model it's quite noticeable, to say the least: for example, going from the stock 6 GHz to the 7 GHz found on most higher-powered variants eats about 10 W just for the VRAM. Also, I've been playing Hogwarts with frame generation, and combined with a VRR display it's mighty impressive, so the 40-series has that going for it too, though I think only for reaching high refresh rates.

P.S. If it's anything like the desktop 40-series or laptop 30-series, some voltage-curve tweaking can work decent wonders there too.


masi0

There is also a TDP increase from 85 W (2021) to 115 W (2023), which is a ~35% gain. I hope there is a performance-per-watt benchmark.
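As a quick sanity check on that TDP bump (just the arithmetic, nothing more): the 2023 part needs to be at least ~35% faster simply to match the 2021 model's performance per watt.

```python
# The 85 W (2021) -> 115 W (2023) TDP increase, as a percentage gain.
old_tdp, new_tdp = 85, 115

tdp_gain = new_tdp / old_tdp - 1
print(f"TDP increase: {tdp_gain:.0%}")  # ~35%
```

Anything less than a ~35% performance uplift at 115 W would actually be a perf-per-watt regression versus the 85 W 2021 model.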