
Xbox One Secret Sauce™ Hardware


  • mistercteam
    started a topic Xbox One Secret Sauce™ Hardware


    XBOX Hotchip August 2013:

XBOX XDK Nov 2014 CHM file: !gRFXjbKT!y964qf...veQI2is-0mh9fY

XBOX ONE John Sell IEEE April 2014: !VNV2AAIB!Opv06_...K6S9PV8SgHHNq8

    Xbox One Architect Interview

    Altera, XB1 SOC, said, die to die

    XBOX ONE, ISCA 2014
    "Keynote I (Marquette I/IV): Insight into the MICROSOFT XBOX ONE Technology
    Dr. Ilan Spillinger, Corporate Vice President, Technology and Silicon, Microsoft"

    Usenix Advanced Computing Systems Association (June 2014)
    Possible Futures, Category processor
    Xbox One: Next Gen Game Processor
    Microsoft: John Sell
    AMD: Sebastien Nussbaum (Trinity/APU Architect)

    Xbox One architecture Panel transcript (May 2013)

Kryptos (high-performance APU) facts, which suggest there is another SoC besides the main SoC:
1. An AMD slide showed that AMD Jaguar is not categorized as high performance

2. LinkedIn profiles list Kryptos as a high-performance APU

3. More proof Kryptos is for X1, but that it is not the main SoC

MrX LiveJournal reference to it (OBAN/Kryptos SoC):
(will add later)....
    Last edited by mistercteam; 07-04-2015, 09:54 AM.

  • mistercteam
commented on a reply
PS5 BW is 400-500 GB/s; plus, RT on Navi is not the same as dedicated RT based on Arcturus etc.

  • OrionWolf
commented on a reply
    Hmm, interesting, but according to nvidia: "Continuing on single GPU considerations - nearly all ray tracing applications are single precision, so only the GPU's single precision speed is relevant. Ray tracing also tends to be high latency, so the GPU's memory bandwidth has minimal impact on ray tracing performance. Error correction is also not generally relevant for ray tracing, and can be turned off to regain some GPU memory.

    For most people, the GPU's memory size rivals its performance traits, as most ray tracing applications require the entire scene (geometry + texture maps + acceleration structure) to fit within the GPU's memory. Exceeding memory by one byte will usually either prevent the rendering or cause a fallback to far slower CPU processing (if the renderer has a fallback). Some more recent renderers (like that introduced in After Effects CS6) will page to system RAM, at a reduced performance that's still far better than CPU alone. Regardless of the fallback, staying within GPU memory is required for the best performance, with many artists choosing to get the largest memory cards they can obtain to maximize their chance of staying fast. Adding additional GPUs (be they on the same card or in other slots) doesn't increase the memory available for rendering, as the same data set must be hosted on each GPU."

    So I'm kinda wondering if Sony is going with 8GB of HBM2 (maybe Samsung Aquabolt HBM2) to do their RT?

    Also from Wccftech: "This means that a solution based on a 384-bit interface and surrounded by 12 DRAM dies could feature up to 24 GB of VRAM while a 256-bit solution can house up to 16 GB of VRAM. That’s twice the VRAM capacity as current generation cards. While VRAM is one thing, the maximum bandwidth output on a 384-bit card can reach a blistering fast 864 GB/s while the 256-bit solution can reach a stunning 576 GB/s transfer rate."

    16 and 24GBs ... where did I hear that before. =)
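Those Wccftech figures check out with the usual GDDR arithmetic. A back-of-envelope sketch, assuming 18 Gbps GDDR6 (the per-pin rate implied by the quoted numbers):

```python
# Peak theoretical bandwidth for a GDDR bus:
# (bus width in bits / 8) * per-pin data rate in Gbps = GB/s.
# 18 Gbps GDDR6 is an assumption inferred from the quoted figures.

def gddr_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * pin_rate_gbps

print(gddr_bandwidth_gbps(384, 18.0))  # 864.0 GB/s, the quoted 384-bit figure
print(gddr_bandwidth_gbps(256, 18.0))  # 576.0 GB/s, the quoted 256-bit figure
```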

  • mistercteam
commented on a reply
Each RCC chiplet gets 640 GB/s from its own base die; remember, ray tracing needs lots of BW, but MS will only talk about the Arden/Anaconda main GPU

If you see the slide, you see 1x TF; the base TF is from Lockhart, and each chiplet mimics Lockhart but uses Arcturus

  • OrionWolf
commented on a reply
MrC, do you think it's possible that there's no listing for Argalus info on the PC_ID list because MS was still designing it when the list came out? I mean, that's one way to actually come out with better specs or go "beyond" next gen. What if they started with Lockhart, but bided their time with Anaconda until the tech they wanted/needed was actually available or possible to fab?

What if Lockhart and PS5 are closely matched in specs and price, but Anaconda, depending on how much later it started its design phase, could be whatever comes after Navi, because MS wanted to make sure they kept the promise of the power advantage? Having two SKUs allows them quite a lot; the question is only how powerful Lockhart is.

If it's near the PS5 whole-system-wise, then Anaconda is going to be quite a different thing.

Another two things, if I may:

1) What is the "Persephone" code name referring to exactly? Is it another name for Argalus, or is it something totally different?
2) If they go with HBM for L4, what do you think of HBM3 instead of HBM2? I think it's around 700 GB/s, but if MS aims for the best possible, even up to 1 TB/s is not out of the realm of possibility; I mean, low latency + high BW would be ideal for cache, no? Is there anything else that could serve as even faster cache with lower latency?

I'm kinda starting to understand the "Arcturus" leak more now.
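For what it's worth, a quick sanity check on those HBM bandwidth guesses. A sketch assuming the standard 1024-bit HBM2 stack interface and Samsung Aquabolt's 2.4 Gbps pin rate; the stack counts are hypothetical, not confirmed console specs:

```python
# Each HBM2 stack has a 1024-bit interface, so per-stack bandwidth is
# 1024 / 8 * per-pin rate in Gbps. Aquabolt runs at 2.4 Gbps/pin.
# Stack counts below are hypothetical examples, not leaked specs.

def hbm_bandwidth_gbps(stacks: int, pin_rate_gbps: float) -> float:
    """Peak theoretical HBM bandwidth in GB/s across all stacks."""
    return stacks * 1024 / 8 * pin_rate_gbps

print(hbm_bandwidth_gbps(1, 2.4))  # ~307 GB/s for a single Aquabolt stack
print(hbm_bandwidth_gbps(2, 2.4))  # ~614 GB/s with two stacks
print(hbm_bandwidth_gbps(3, 2.4))  # ~922 GB/s: three stacks approach 1 TB/s
```

So the 700 GB/s to 1 TB/s range quoted above would take two to three HBM2 stacks at Aquabolt speeds.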

  • mistercteam

Seems my dig on Arteris as the interconnect fabric for Xbox One is on the correct track; for X2, MS probably uses AMD's own Infinity Fabric, but the X1 is Arteris FlexNoC (network-on-chip) based.
1. Old slide
2. New evidence from LinkedIn


  • OrionWolf
commented on a reply
    I took my time and properly watched the video.

    So the Arcturus moniker is not for the arch, but a specific GPU?!

Adored, for whatever reason (hopefully I'm not reading too much into this), suggests that it could be the XB2 GPU. Also, Navi is heavily implied to still be monolithic and the last GCN-based arch with a CU limit of 64 ... are we possibly going beyond that, or could they use 2 GPUs and use CUs from both?

Like Navi 12 with 48 CUs x 2, which would give 96, or even 40 CUs x 2 for 80; that's twice the amount of what the X1 has! In TF, let's say 1.4 GHz based on current GCN: with 40 CUs we're looking at 7.16 TFLOPs, and if you use another GPU you get 14+ TFLOPs in total ... But would that be possible and less costly than going with whatever is beyond Navi and a single-GPU design? I mean, NTB is not going to be used on CPUs; why would you need two CPUs in a console?
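Those TF figures follow from the standard GCN math. A sketch; the 40 CU / 1.4 GHz configuration is the hypothetical from above, and the per-CU constants are the usual GCN values (64 shaders per CU, 2 FP32 ops per clock via FMA):

```python
# GCN FP32 throughput: TFLOPS = CUs * 64 shaders * 2 ops/clock * GHz / 1000.
# The 40 CU / 1.4 GHz config is speculative; 40 CU / 1.172 GHz is the
# actual Xbox One X, included for reference.

def gcn_tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS for a GCN GPU."""
    return cus * 64 * 2 * clock_ghz / 1000

print(gcn_tflops(40, 1.4))    # 7.168 TF, the ~7.16 figure above
print(gcn_tflops(80, 1.4))    # 14.336 TF if a second GPU doubles the CUs
print(gcn_tflops(40, 1.172))  # ~6.0 TF, the real Xbox One X, for comparison
```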

GPUs, on the other hand, or "crossfire on a single package," is another thing. And it works in MS's favor if they "basically" used Lockhart as a base model and "just" put another GPU into it ... that's a very simplistic way of looking at it, I know, but they would save on yields and still be able to produce a very powerful system with Anaconda. That is, if they don't go with whatever comes next.

Lol, I didn't even know about NTB, and it actually made sense to me that AMD would pursue a better-yields strategy, i.e. an MCM/chiplet (are they interchangeable?) design, which also gives you better perf/watt.

In regards to HBM2, I was curious about that due to latency "issues" with GDDR; you get a lot less latency with HBM, but I guess it's still quite cost prohibitive, so a large L4 ... isn't that exorbitantly big tho? Albeit that leak with the L4 doesn't seem that crazy of an idea anymore ...

Also, isn't Arden = Lockhart; Argalus = Anaconda? So is the inclusion of NTB inside Lockhart actually a sign that they're designing the SoC in such a way that any LH can be used as a basis for Anaconda? Or am I completely misreading things?
    Last edited by OrionWolf; 04-20-2019, 11:00 PM.

  • OrionWolf
commented on a reply
Also, MS was the first to introduce liquid cooling in the smartphone industry. SMARTPHONE! Let alone a console. They are probably trying to increase either the clock speed or the CU count, all dependent on the cooling and the power draw. And let's consider this: the supposed Navi 10 with 56 CUs draws 180W of power; the RX 580 that the XB1X is supposedly based on has 36 CUs (the XB1X has 40 CUs!) and a power draw of 185W on average, but it can easily go beyond 200W when gaming, while the XB1X with the GPU, CPU, mobo etc. draws only 172W while playing Gears 4! Tell me how there's no benefit or extraordinary accomplishment in the custom design of the XB1X!
    Last edited by OrionWolf; 04-20-2019, 10:48 AM.

  • OrionWolf
This is what I've been talking about previously: these are huge investments, so plans for CPU/GPU etc. architectures and future tech are made 10 years in advance! They're not going to share everything, but if the PS5 design started 4 years ago and it includes Navi and Zen 2, well before Zen was even a thing, you think they couldn't go with something beyond that? I mean, per Brad Sams, this time around MS has been a lot more hands-on with the development; I'm guessing instead of paying crazy amounts of money for super-advanced tech directly from AMD they're making their own in collaboration with AMD. I mean, even Sony with the PS5: when do you think they got the chips based on Zen 2 and Navi? In 2015-2016? How do you think their internal studios designed their next-gen games and engines when neither Zen, Navi, nor anything about their characteristics was known? AMD patented a lot of stuff; Navi is a far different approach than Polaris, and so is Zen.

They share info between each other, info that will not get to the public. I know it sounds (well, sounded) like a crazy person rambling, but if the PS5 has been in development for 5 years, when is it that they decided to go with Navi and Zen instead of Vega and Zen (for example), when Navi is still to be properly announced?


  • OrionWolf
commented on a reply
Yeah, Navi is supposed to be a radically different design ... I mean, if they went with MCM (or whatever they're going to call it on the GPU side of things) as they're doing with Zen 2, offering very low prices for better perf and a lot more cores than the competition (I would like to see Intel offer a 16c/32t CPU for under $1000!). Also, Adored has been speculating for a while that AMD was going to ditch the monolithic design approach for the "chiplet" design, as it guarantees better yields, lower prices and more perf. I mean, the rumor going around is that they're going to offer a 2080 competitor for $400 ... how believable that is I don't know, but if they even come close to that, then we're bound to see some real difference in the games. This gen is going to be a huge perf jump in comparison, not just thanks to the HW, but the software, especially the utilization of AI to help devs do far more work with fewer bugs than ever before.

  • T10GoneDev
    commented on 's reply
    Swizzle wizzle, still stuck at 900p either way.