Xbox One Secret Sauce™ Hardware

  • mistercteam
    started a topic Xbox One Secret Sauce™ Hardware

    Xbox One Secret Sauce™ Hardware

    XBOX Hotchip August 2013:
    http://www.hotchips.org/wp-content/u...0130826gnn.pdf

    XBOX XDK NOV 2014 CHM file :
    https://mega.co.nz/#!gRFXjbKT!y964qf...veQI2is-0mh9fY

    XBOX ONE John Sell IEEE April 2014
    https://mega.co.nz/#!VNV2AAIB!Opv06_...K6S9PV8SgHHNq8
    http://www.computer.org/csdl/mags/mi...t/06756701.pdf

    Xbox One Architect Interview
    http://www.eurogamer.net/articles/di...-one-interview

    Altera on the XB1 SoC: said to be die-to-die
    https://www.altera.com/solutions/tec...onference.html

    XBOX ONE, ISCA 2014
    "Keynote I (Marquette I/IV): Insight into the MICROSOFT XBOX ONE Technology
    Dr. Ilan Spillinger, Corporate Vice President, Technology and Silicon, Microsoft"
    http://cag.engr.uconn.edu/isca2014/program.html

    Usenix Advanced Computing Systems Association (June 2014)
    Possible Futures, Category processor
    Xbox One: Next Gen Game Processor
    Microsoft: John Sell
    AMD: Sebastien Nussbaum (Trinity/APU Architect)
    http://vcew.org/CE-Vail-2014-Program.pdf
    http://u64.imgup.net/022233473e76.jpg


    Xbox One architecture Panel transcript (May 2013)
    https://forum.beyond3d.com/threads/x...nscript.54525/


    Kryptos (high-performance APU) fact:
    it means there is another SoC besides the main SoC
    ====================================
    1. AMD slides showed that Jaguar is not categorized as high performance

    http://g72.imgup.net/36ae6.jpg
    http://w51.imgup.net/AMD_2013-25631.jpg
    http://j58.imgup.net/mobile0e0b.jpg

    2. A LinkedIn profile showed Kryptos as a high-performance APU
    https://pbs.twimg.com/media/CHmwVguUwAAOQao.jpg

    3. More proof that Kryptos is for X1, but is not the main SoC
    http://goo.gl/O0fq9N
    http://goo.gl/W5cFuJ


    MrX LiveJournal reference to it (OBAN/Kryptos SoC):
    ===========================================
    (will add later)....
    Last edited by mistercteam; 07-04-2015, 09:54 AM.

  • mistercteam
    commented on a reply
    X2 or XSX has an L4 cache, basically a custom HBM2-like memory used as cache

  • mistercteam
    replied



  • OrionWolf
    replied
    Hey MrC, if I could pick your brain a bit: the rumor is that rDNA2 is going to use HBM2E, and one of the manufacturers of HBM2E is Samsung.

    https://semiengineering.com/hbm2e-th...-evolutionary/

    "Samsung is positioning HBM2E for the next-gen datacenter running HPC, AI/ML, and graphics workloads. By using four HBM2E stacks with a processor that has a 4096-bit memory interface, such as a GPU or FPGA, developers can get 64 GB of memory with a 1.64 TB/s peak bandwidth—something especially needed in analytics, AI, and ML"
    What if the MS partnership allows them a better price on HBM2E, which is still a very costly memory compared to GDDR6? By doing that they could not only have a more powerful GPU but also a much faster transfer rate. The part about AI/ML intrigues me the most. What do you think they should go with in terms of memory solution, HBM or GDDR?
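
    To sanity-check the quoted figures, here is a minimal sketch in Python. The 3.2 Gb/s-per-pin data rate and 16 GB-per-stack capacity are my assumptions based on Samsung's published HBM2E (Flashbolt) specs, not numbers from the article itself:

    ```python
    # Sanity check of the quoted HBM2E figures.
    # Assumed: 3.2 Gb/s per pin and 16 GB per stack (Samsung Flashbolt specs).
    stacks = 4
    bus_bits = 4096              # 4 stacks x 1024-bit interface each
    pin_rate_gbps = 3.2          # assumed data rate per pin

    peak_bw_gbs = bus_bits / 8 * pin_rate_gbps   # GB/s across the whole bus
    capacity_gb = stacks * 16

    print(f"{peak_bw_gbs / 1000:.2f} TB/s, {capacity_gb} GB")
    # -> 1.64 TB/s, 64 GB, matching the semiengineering quote
    ```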



  • OrionWolf
    commented on a reply
    Is this the reason why Scarlett devkits aren't a thing yet? You could have been right all along, MrC: Sony might be going with 7nm because they didn't want to risk 5nm not being available, or pay for it.

  • OrionWolf
    replied
    AMD ‘Zen 4’ 5nm Products Will Launch In 2021, 5nm Yield Has Already Crossed 7nm

    AMD has been on a red-hot streak lately and it looks like it can do no wrong. If this report from China Times is to be believed (and it is usually a reliable source) then TSMC's 5nm testing is going very well and the first three customers have already been locked in, including AMD. According to the schedule obtained by China Times, AMD's 5nm products will be landing in early 2021, with mass production for 5nm scheduled in 2020.
    AMD among first three customers to grab TSMC 5nm production capacity, NVIDIA missing from the picture

    What is really amazing to hear in the report is that TSMC's 5nm yield has already crossed 7nm - which is quite the feat. This would mean that TSMC's 5nm will become viable sooner than expected and the transition from 7nm to 5nm can begin in earnest as well. The three customers that will be able to grab the first wave of production capacity are Apple, HiSilicon and AMD. While it is not surprising to see Apple get the first bite, it is interesting to see NVIDIA missing from this list - as I would have assumed they would be first in line to grab onto a process advantage (although this might be a questionable assumption considering they have yet to launch 7nm GPUs).
    AMD Ryzen 4000 CPUs With Zen 3 & Ryzen APUs ‘Renoir’ With Zen 2 Now Supported By AIDA64




    As per my understanding, the reason NVIDIA has yet to jump to 7nm is yield issues. If TSMC has managed to get 5nm yield higher than 7nm already, then I can only assume this statistic will get better by 2021. If NVIDIA is no longer first in line for the process, it might be at a disadvantage if AMD decides to revive its GPU side of things and make a comeback - which is expected to happen with the launch of "big Navi".
    On the other hand, the pressure has just increased even further for Intel - which is struggling to push 10nm out and aims to achieve 7nm by 2021. For the layman: Intel's 7nm is roughly equal to TSMC's 5nm and is based on EUV, so it should be easier to execute than 10nm (counter-intuitively), which is not based on EUV. If Intel can get to 7nm by 2021, then at the very least it will be on an equal footing with TSMC. Any other scenario would mean Intel losing even more market share and the stock price taking a big hit.
    TSMC's 5nm process has crossed 50% yield according to the report (which is what the yield for 7nm supposedly is right now) and monthly production capacity has been increased from 50,000 units to 70,000, with 80,000 on the horizon. The new 5nm process is 1.8 times as dense as the older 7nm one (offering even more scalability for AMD's MCM philosophy) and can increase clock speeds by 15%. This means a CPU and GPU currently netting 4.4 GHz and 1700 MHz respectively would be able to hit the 5.0 GHz and 1955 MHz marks quite easily.
    It honestly feels like AMD's luck is nowhere near running out, and the company's current stock price can only move higher. With TSMC holding the process lead over the industry, AMD's award-winning Zen designs in place, and prices cheaper than anyone else's, the one complaint a consumer can make right now is that the company should give a little more attention to the GPU side as well. Everything else, as they say, is gold.

    https://wccftech.com/amd-zen-4-5-nm-launching-2021/
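
    A quick back-of-the-envelope check of the claimed 15% clock uplift, using the baseline clocks quoted in the article (a minimal sketch, nothing more):

    ```python
    # Check the article's 15% clock-uplift arithmetic.
    uplift = 1.15
    cpu_ghz, gpu_mhz = 4.4, 1700   # baseline clocks quoted above

    print(f"CPU: {cpu_ghz * uplift:.2f} GHz")  # -> 5.06 GHz (the "5.0 GHz mark")
    print(f"GPU: {gpu_mhz * uplift:.0f} MHz")  # -> 1955 MHz, as stated
    ```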



  • mistercteam
    replied
    A 4:2:4 metal-layer stack is only used for a passive interposer + die = 2.5D. The interposer can also be active (an active logic block), so an active interposer + thin die = 3D W2W / die-to-die. PS4 is 3:2:3. A rough encoding of that taxonomy is sketched below.
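
    A minimal sketch restating that classification as a rule; the encoding is my own, purely illustrative, and not from any vendor document:

    ```python
    # Hypothetical encoding of the 2.5D-vs-3D packaging taxonomy above.
    def stack_type(interposer_active: bool, die_thinned: bool) -> str:
        if not interposer_active:
            return "2.5D: passive interposer + full-thickness die"
        if die_thinned:
            return "3D: active interposer + thin die (W2W / die-to-die)"
        return "active interposer, unthinned die (not covered above)"

    print(stack_type(False, False))  # 2.5D case
    print(stack_type(True, True))    # 3D W2W / die-to-die case
    ```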




  • mistercteam
    replied
    W2W (wafer-to-wafer) from old data:

    thin one = TSMC = main SOC = CPU complex
    thick one = GloFo = GPU complex




  • mistercteam
    commented on a reply
    sorry, a bit late to reply

  • mistercteam
    commented on a reply
    wait ...

  • Misterx
    replied
    great topic MrC!

    Can you point to a post or photo with the W2W evidence?



  • OrionWolf
    commented on a reply
    Not to make a new post, but the new rumor is that the PS5 is going to be using a hybrid Zen 2, aka Zen 2+ ... lmao, can't wait for the hypocrites on Era to say how the PS5 is going to be so much better thanks to that, all the while they were saying that Zen 3 in Scarlett would make a minimal performance difference over Zen 2.

  • OrionWolf
    replied


    So even Coretex is talking about 1 GB of level 4 cache directly on chip! But he's talking about it as a design for 2023! And yet we've had the leaked Scarlett info since January!!!



  • OrionWolf
    commented on a reply
    And yet everyone was dismissing Arcturus as nothing special or noteworthy ... now we see that Arcturus is a 2020 GPU; not only a 2020 GPU, but a 7nm+ GPU! What if MS had an AI/ML accelerator (Arcturus) + GPU (RDNA2)?

    Edit: I'm not saying a full-fledged AI accelerator (Arcturus) + GPU, but considering that AI/ML are becoming more of a necessity, especially when it comes to gaming (and game development!), I would think putting in a dedicated chip that can also handle various other tasks would be a good idea, no?
    Last edited by OrionWolf; 10-12-2019, 07:13 AM.

  • OrionWolf
    replied
    AMD Arcturus Next-Gen GPU Support Added To HWiNFO, Could Be Featured In Radeon Instinct AI/HPC Lineup As Early As 2020

    AMD's Navi GPU architecture might be powering their current gaming graphics cards, but the red team might also be working on a separate line of GPUs for the AI and HPC markets, one that would replace the Vega-based Radeon Instinct lineup in 2020.

    AMD Arcturus GPU Listed For Support, Possible Next-Gen Radeon Instinct 'MI100' HPC / AI Accelerator With Launch in 2020

    The AMD Arcturus GPU is the one we are talking about and it has shown up in leaks quite a few times. The Arcturus GPU first appeared back in 2018 through Linux (Phoronix Forums) and was later confirmed by an AMD employee that they will be using designated codenames for the chip itself rather than using family-codenames that might expose the product/marketing name. AMD Arcturus GPU will be the first to fall in that line but at the time, other details were not mentioned.




    Yesterday, HWiNFO added preliminary support for AMD Arcturus in its latest v6.13-3945 BETA release. In addition, Komachi_Ensaka revealed a series of chips that are marked under AMD's AI family, and Arcturus seems to be one of them. The AI family also includes other GPUs such as Vega 10, Vega 12 and Vega 20. Judging by the AI name, the list might be referring to Radeon Instinct GPU accelerators, as all three Vega GPUs listed have been featured inside Radeon Instinct graphics cards for HPC and artificial intelligence. Arcturus also falls in the same list, and looking at how the chips are listed in top-down order, the ones at the bottom of each family are also the latest additions to that lineup.

    LLVM 9.0 also featured support for Arcturus, which is rumored to be branded under the GFX9 (GFX908) family, i.e. Vega parts; but considering this would be a new launch, something needs to change unless AMD is planning a simple rebrand of its top Vega Radeon Instinct accelerator in 2020. The same Linux patch (LLVM 9.0) also listed three SKUs and mentioned GL/XL brandings. So for instance, AMD has Vega 10, which is featured on the Radeon RX Vega 64 and RX Vega 56 graphics cards. Both use the same GPU but different variants with different core configs: the Vega 64 features the XT variant and the Vega 56 the XL.



    We have only seen a few details, which are speculation at best, such as the GPU cache info that is part of the Virtual CRAT (vCRAT) size. The GPU cache correlates with the CU count. In the case of the AMD Arcturus GPU, the cache size has been increased and so has the CU count, from 64 to 128. That is twice as many CUs as Vega 10, which would give us 8192 stream processors if AMD is using 64 stream processors per CU like its current and modern-day GPU designs (the quick check below spells out that math). Now, AMD already released its Radeon Pro Vega II workstation series cards, but remember, they are aimed at the workstation market and are based on Vega 20, a chip that is clearly mentioned in the list above. It is not related to Arcturus, which is a separate part/chip aimed at a different market: AI.
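
    A trivial check of that stream-processor math; the 64-SPs-per-CU figure is the assumption the article itself makes:

    ```python
    # Stream processors = CUs x SPs per CU (64 per CU, as assumed above).
    cus = 128
    sp_per_cu = 64
    print(cus * sp_per_cu)  # -> 8192, double Vega 10's 4096
    ```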


    We know that AMD has rDNA 1 GPUs planned for the next few quarters before moving to rDNA 2, which is basically Navi 2 if rumors are accurate. Navi is, from the ground up, a gaming-oriented architecture first, and its deployment in consoles and gaming graphics cards is evidence of that. That is why AMD still uses Vega to power its HPC / AI products: it was designed with HPC / AI in mind.



    So if AMD wants to continue with the Vega architecture as its top design for HPC / AI accelerators, then Arcturus could be a fully custom-designed chip for the Radeon Instinct lineup. Over at his blog, Komachi also posts more about the AMD Radeon Instinct MI100, which he believes is based on the Arcturus chip. AMD's current flagship Radeon Instinct accelerator is the MI60, which is based on the 7nm Vega 20 GPU. One thing we may see happen is that, since Vega saw a refresh from 14nm down to 7nm, Arcturus may bring a third refresh down to 7nm+, which is something AMD is eyeing for 2020. Some of the new features listed for the new MI100 Radeon Instinct accelerator include the following:



    That's all the detail we have on AMD's Arcturus chip. When it launches is something we will only know when it's officially revealed by AMD, and given the specs, don't expect it to be introduced as a consumer gaming card, as Navi has taken up that task.



  • mistercteam
    replied
    interesting ... thanks for the slide 1 image from
    @DrkFX
    the image from the Scarlett video here clearly shows a component which is PoP (package-on-package); slides 2 and 3 are examples of PoP in use. BTW, HoloLens 1 and 2 are also PoP




  • mistercteam
    commented on a reply
    indeed Orion

  • OrionWolf
    commented on a reply
    There is more and more credence to the possibility of 4T/core in Zen 3 ... which makes the initial claim of Xbox Scarlett having 3T/core more and more plausible; not an assured thing, but considering the corroboration of the info we got, it doesn't seem impossible at the moment. And look at everyone on Era dismissing the idea of more threads having any kind of impact on games/game development.

  • OrionWolf
    replied
    Rumor : AMD Zen 3 Architecture to Support up to 4 Threads Per Core With SMT4 Feature

    While AMD is still hard at work tweaking and refining its flagship 16-core Ryzen 9 3950X to get it ready for prime time in two months' time, the company has reportedly completed the design of its next-generation Zen 3 core microarchitecture, and it packs one hell of a surprise.

    Rumor has it, that AMD’s next generation CPU microarchitecture is going to have a brand new feature called SMT4, and as the name implies it’s a simultaneous multi-threading feature. Whilst the company’s Zen 2 core does improve on the SMT capability of the original Zen architecture, which was the company’s first ever design to feature SMT, Zen 3 is said to make a giant leap forward by doubling the execution thread count per core from two to four.




    AMD won't be the first company ever to do this kind of thing: some iterations of IBM's Power architecture support up to eight execution threads per core. However, if the rumors are to be believed, AMD will be the first ever to introduce an x86 microarchitecture capable of executing more than two threads per core.

    As CPU microarchitectures become more complex and cores get bigger and bigger, some parts of the pipeline go underutilized; inefficiencies are bound to show up, and existing ones can become more pronounced. This is why SMT makes sense to begin with: letting the design exploit those inefficiencies and underutilized resources by executing additional threads per core may be the right solution, if done right and not at the expense of single-threaded performance.

    If what the rumor alleges ever comes to fruition, SMT4 could be a game changer for the server market. Until then however, take it with a grain of salt.


    https://wccftech.com/rumor-amd-zen-3...-smt4-feature/
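
    For scale, a minimal sketch of what SMT4 would mean for total hardware threads; the core counts here are illustrative, not from the rumor:

    ```python
    # Hardware threads = cores x SMT ways.
    for cores in (8, 16):
        for smt_ways in (2, 4):
            print(f"{cores} cores x SMT{smt_ways} = {cores * smt_ways} threads")
    # e.g. a 16-core SMT4 part would expose 64 hardware threads to the OS.
    ```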



  • mistercteam
    replied
    trying to post something after a long time:

    PS5 = custom chip (APU)
    X2 = custom processor. Now you know why?
    Simple: AMD or MS could not lie.







  • Direct X-Static
    commented on a reply
    "Some networks do not depend on the level of precision that FP32 offers, so by doing math in FP16, we can process around twice the amount of data in the same amount of time. Since models benefit from this data format, the official release of WinML will support floating point 16 (FP16), which improves performance drastically. We see an 8x speed up using FP16 metacommands in a highly demanding DNN model on the GPU. This model went from static to real-time due to our collaboration with NVIDIA and the power of D3D12 metacommands used in DirectML. "

    Yeah, ML hardware is almost definitely on the table for Scarlett. They've gone real-time, and the performance gains from ML are more than obvious from the pictures of the car shown above.
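
    A minimal illustration of why FP16 roughly doubles throughput for bandwidth-bound models; numpy is used just for the byte-size comparison, and the 2x figure is the one the quote itself gives:

    ```python
    import numpy as np

    # FP16 elements are half the size of FP32, so the same memory
    # bandwidth moves roughly twice as many values per second.
    n = 1_000_000
    weights_fp32 = np.random.rand(n).astype(np.float32)
    weights_fp16 = weights_fp32.astype(np.float16)  # lossy cast: less precision

    print(weights_fp32.nbytes)  # 4,000,000 bytes
    print(weights_fp16.nbytes)  # 2,000,000 bytes -> ~2x data per transfer
    ```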

  • mistercteam
    replied
    And now, #megabooom for something that mocked X1.
    Let's compare the X1 Command Processor block vs the PS4 CP block, from its own leaked doc.
    This time, no lies; the forums keep misleading, let them. After all, when the nobodies approve it, then it is done ....
    X1 has 2x everything compared to PS4 from a CP point of view






  • mistercteam
    replied
    #thetruthyoucantdeny #ps4_X1 #boom

    PS4 vs X1 general high-level GPU diagram; in the X1 diagram, each block in the center is a smile block
    hint: check the PS4 diagram's dataflow
    People easily downplay X1 because there is no GCN high-level diagram to compare with. AMD is also in shy mode





  • mistercteam
    commented on a reply
    it will be there

  • OrionWolf
    commented on a reply
    Tell me how MS doesn't see the benefits of ML, or how they're not invested in gaming! That talk about a specialized chip for AI/ML seems more and more feasible to me. Also, isn't it possible for them to use their Azure AI to test out future HW and the games that will run on it? What if it allowed them to come up with HW that could be beyond 2020 (if not 2021)?