Doing it ARM DynamIQ style ........
Qualcomm uses ARM DynamIQ tech to put it all into the chip core itself, with dedicated video and I/O handled per core, and with massive (but physically tiny) redundancy taking care of any local loading and I/O issues. A planning council for this chip was convened months ago, all implementation details were worked out by the group, and
THE OPERATING SYSTEM SOFTWARE was refined in advance for this arrangement.
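For the curious, here is a minimal Python sketch of how a DynamIQ-aware Linux/Android kernel exposes that pre-planned arrangement to software. It assumes an arm64 kernel that publishes cpu_capacity; exact sysfs paths vary by kernel version, so treat this as illustrative only:

    # Minimal sketch: how the OS "sees" a DynamIQ big.LITTLE layout.
    # Assumes an arm64 Linux kernel that exposes cpu_capacity in sysfs.
    import glob, os

    cpus = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*"),
                  key=lambda p: int(p.rsplit("cpu", 1)[1]))
    for cpu in cpus:
        cap_file = os.path.join(cpu, "cpu_capacity")
        if os.path.exists(cap_file):
            with open(cap_file) as f:
                cap = f.read().strip()
            # Big cores report a higher relative capacity than LITTLE cores,
            # which is how the energy-aware scheduler steers heavy threads.
            print(f"{os.path.basename(cpu)}: relative capacity {cap}")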
vs ============================================================
Doing it all Intel old school kludgy "after the fact" style .........
(3.3 TFLOPs of "duplicative useless" computing)
Copy the entire CPU chipset over twice, one complete Intel CPU and one complete AMD CPU, then mate the two dissimilar CPUs (each complete with its own on-board graphics) up to a single very large, powerful AMD Graphics Core Array that is already an integrated part of the AMD chipset.
Totally ignore the weak-arsed Intel graphics unit built into the Intel system, as you don't plan to use it (but you still have to pay for it).
Now pay for all the required I/O support pieces for the multiple redundant Intel and AMD CPU sets, and then figure out AFTER THE FACT how to get them to work together on the same daughterboards and motherboards ......
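To put a number on that "duplicative useless" computing: peak FP32 throughput is just shader ALUs x 2 ops per cycle (FMA) x clock. The per-part figures below are illustrative guesses, not datasheet values, but they show roughly where a ~3.3 TFLOPs figure comes from and what the ignored Intel graphics block still costs you:

    # Back-of-envelope peak FP32 math: cores x 2 ops/cycle (FMA) x GHz.
    # The core counts and clocks below are illustrative guesses only.
    def peak_tflops(shader_cores, ghz, ops_per_cycle=2):
        return shader_cores * ghz * ops_per_cycle / 1000.0

    amd_gca    = peak_tflops(1536, 1.07)  # the big AMD Graphics Core Array you actually use
    intel_igpu = peak_tflops(192, 1.10)   # the built-in Intel graphics you ignore
    print(f"used: {amd_gca:.1f} TFLOPs, idle but paid for: {intel_igpu:.2f} TFLOPs")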
Think you got some hot, bubbly compatibility issues mebbe roiling up out of this unplanned mess ???? (Microsoft never planned up front to support this 2-CPU kludge, btw.) Think somebody is gonna wanna get paid for EACH ONE OF THE 3 complete processor sets ???? (And then get paid for all them required hook-up bits and pieces on the motherboard, too.)
Think on it a bit ---- Qualcomm has a whole range of ARM-based stuff they can do now, going up from a phone level, to a Chromebook / laptop level, on up to a PC level, ranging on up to a super-server level. Stuff with cost savings running up to 10x better and speed improvements of 2-3x (soon ramping up to 20x as the AI tricks mature day by day by day).
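Taking those numbers at face value (they are this post's claims, not measurements), the combined effect is easy to work out:

    # Combine the claimed cost and speed advantages (the post's own numbers,
    # taken at face value) into a single performance-per-dollar figure.
    cost_ratio = 1 / 10          # Qualcomm part cost vs the Intel equivalent
    for speedup in (2, 3, 20):   # today's 2-3x, plus the claimed AI-driven 20x
        print(f"{speedup}x speed at {cost_ratio:.0%} of the cost = "
              f"{speedup / cost_ratio:.0f}x performance per dollar")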
To Intel, this is pure competitive hell come to Earth to visit them. Time for Intel to pull out the bag of dirty tricks and try a few of those nasty tricks out on Qualcomm and Samsung to find some "distraction factor" to slow the bad boys down..... Realizing that the existing Samsung and Qualcomm pair-up (foundry and chip designer) is now able to attack Intel successfully at a very fundamental level, understand that the recent Broadcom takeover bid may simply be a disruptive corporate attack on Qualcomm orchestrated by some other, hidden left-hand Intel partners.
If so, look to see actions taken by the existing "anti-Intel" standards consortium to actively promote some more (somewhat less complex) versions of the DynamIQ server chipset, mounting Mali G72 graphics (or possibly even an AMD Graphics Array, as AMD is a part of the group already). And remember, AMD is still smarting from that latest bowel movement Intel took all over their head last week.
And even if Qualcomm did get itself sidelined by a Broadcom takeover bid, Samsung by itself currently both designs and builds its own chipsets anyway. As do MediaTek and Huawei .....
Plus, if ARM designed itself up a turnkey, reference-style PC desktop chipset, anyone could build a PC-level ARM chipset using TSMC as the low-cost foundry just by building out the ARM reference design. The only thing required would be a somewhat stronger Mali graphics GPU block design, one that is very tightly integrated with a much larger, stronger AI graphics block array. This is the path I predict will take place once Mickey finishes the Windows software.
AMD has "gone ahead" and shown us how to do it, and AMD has a much larger stronger AI graphics block array than Nvidia's current "CPU less" version does.
But make no mistake,
Nvidia has also shown us that the old style Intel type CPU portion is far less important than the many many shader style cores of a massively complex GPU based processor as both Nvidia and Apple both now use their GPU style shader cores to do all normal math calculations, etc. etc. CISC CPU's will become less and less important over time.
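A quick sketch of what "normal math on shader cores" looks like in practice, assuming the CuPy library and a CUDA-capable GPU are on hand (Apple's Metal compute path makes the same point on their hardware):

    # Ordinary math running across thousands of GPU shader cores, not the CPU.
    # Assumes CuPy and a CUDA-capable GPU are available.
    import cupy as cp

    a = cp.random.rand(1_000_000)
    b = cp.random.rand(1_000_000)
    result = cp.sqrt(a * a + b * b)  # each element handled by a shader lane
    print(float(result.sum()))       # pull a single scalar back to the CPU side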
And remember that AI itself runs off those GPU-style shader core arrays in their massive, massive numbers ....
Self-driving cars currently do ALL their calculations off the large arrays of graphics shader cores that they contain ..... So, what we use for math calculations and actually call the "computer" is busy changing up on us as we speak.....
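And those "AI calculations" are, at bottom, the same shader-core math: one dense neural-net layer is just a big matrix multiply plus a cheap nonlinearity. A hedged sketch (the layer sizes here are invented for illustration):

    # One dense neural-net layer on the GPU: a matrix multiply plus ReLU.
    # Assumes CuPy; the input/layer sizes are made up for illustration.
    import cupy as cp

    frame_features = cp.random.rand(1, 4096)     # e.g. features from one camera frame
    layer_weights  = cp.random.rand(4096, 1024)  # one layer's learned weights
    activations = cp.maximum(frame_features @ layer_weights, 0)  # matmul + ReLU
    print(activations.shape)  # (1, 1024), computed entirely on shader cores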