Xbox One Secret Sauce™ Hardware


  • Recent AdoredTV YouTube video on Navi 10/20:
    https://www.youtube.com/watch?v=7mJC...ature=youtu.be

    A chiplet design needs NTB (a non-transparent bridge, the PCIe feature that lets two separate devices map each other's memory). Look at the PCI IDs of the leaked PS5 vs. the X2:

    Arden is X2
    Ariel is PS5
    NTB is present on X2

    Plus the leaker of the early X2 info on Reddit also described what he saw physically.



    • OrionWolf commented:
      Yeah, Navi is supposed to be a radically different design... I mean, if they went with MCM (or whatever they're going to call it on the GPU side of things) as they're doing with Zen 2, they could offer very low prices for better perf and a lot more cores than the competition (I would like to see Intel offer a 16c/32t CPU for under $1000!). Also, Adored has been speculating for a while that AMD was going to ditch the monolithic design approach for the "chiplet" design, as it guarantees better yields, lower prices and more perf. I mean, the rumor going around is that they're going to offer a 2080 competitor for $400... how believable that is I don't know, but if they even come close to that, then we're bound to see some real difference in the games. This gen is going to be a huge perf jump in comparison. Not just thanks to the HW, but the software, especially the use of AI to help devs do far more work with fewer bugs than ever before.

    • OrionWolf commented:
      I took my time and properly watched the video.

      So the Arcturus moniker is not for the arch, but a specific GPU?!

      Adored, for whatever reason (hopefully I'm not reading too much into this), suggests that it could be the XB2 GPU. Also, Navi is heavily implied to still be monolithic and the last GCN-based arch with a CU limit of 64... are we possibly going beyond that, or could they use two GPUs and use the CUs from both?

      Like Navi 12 with 48 CUs x 2, which would give 96, or even 40 CUs x 2 for 80; that's twice the amount of what the X1 has! In TF, let's say 1.4GHz based on current GCN: with 40 CUs we're looking at 7.16 TFLOPS, and if you use another GPU you get 14+ TFLOPS in total (quick math below)... But would that be possible, and less costly, than going with whatever is beyond Navi and a single-GPU design? I mean, NTB is not going to be used on CPUs; why would you need two CPUs in a console?
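      For what it's worth, those figures line up with the standard GCN arithmetic (64 shaders per CU, 2 FP32 ops per clock via fused multiply-add); a quick sketch:

      ```python
      # Peak FP32 throughput for a GCN-style GPU:
      # CUs * 64 shaders * 2 ops/clock (FMA) * clock (GHz) -> TFLOPS.
      def gcn_tflops(cus, clock_ghz):
          return cus * 64 * 2 * clock_ghz / 1000

      print(gcn_tflops(40, 1.4))      # 7.168 -> the ~7.16 TFLOPS quoted above
      print(2 * gcn_tflops(40, 1.4))  # 14.336 -> "14+ TFLOPS" with a second GPU
      ```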

      GPUs, on the other hand, or "crossfire on a single package", are another thing. And it works in MS's favor if they "basically" used Lockhart as a base model and "just" put another GPU into it... that's a very simplistic way of looking at it, I know, but they would save on yields and still be able to produce a very powerful system with Anaconda. That is, if they don't go with whatever comes next.

      Lol, I didn't even know about NTB, and it actually made sense to me that AMD would pursue a better-yields strategy, i.e. an MCM/chiplet design (are the terms interchangeable?), which also gives you better perf/watt.

      In regards to HBM2, I was curious about that due to the latency "issues" with GDDR; you get a lot less latency with HBM, but I guess it's still quite cost prohibitive. So a large L4... isn't that exorbitantly big, though? Albeit, that leak with the L4 doesn't seem like that crazy of an idea anymore...

      Also, isn't Arden = Lockhart and Argalus = Anaconda? So is the inclusion of NTB inside Lockhart actually a sign that they're designing the SoC in such a way that any LH can be used as a basis for Anaconda? Or am I completely misreading things?
      Last edited by OrionWolf; 04-20-2019, 11:00 PM.

  • To give clarity:

    Chas Boyd, DX12 architect, in a 2015 slide:
    https://www.slideshare.net/mistercteam/3-boyd-direct3d12-1 …

    At the time, people mocked the X1 and didn't believe it had the 12_1 features; they thought, how come?

    So they attacked, saying swizzle was standard. Then, boom, we got confirmation from AMD that those features are new, starting with Vega.



    • T10GoneDev commented:
      Swizzle wizzle, still stuck at 900p either way.

  • This is what I've been talking about previously: these are huge investments, so plans for CPU/GPU architectures and future tech are made up to 10 years in advance! They're not going to share everything, but if the PS5 design started 4 years ago and it includes Navi and Zen 2, well before Zen was even a thing, you think they couldn't go with something beyond that? I mean, per Brad Sams, this time around MS has been a lot more hands-on with the development; I'm guessing that instead of paying crazy amounts of money for super advanced tech directly from AMD, they're making their own in collaboration with AMD. I mean, even Sony with the PS5: when do you think they got the chips based on Zen 2 and Navi? In 2015-2016? How do you think their internal studios designed their next-gen games and engines when neither Zen nor Navi nor anything about their characteristics was known? AMD patented a lot of stuff; Navi is a far different approach than Polaris, and so is Zen.

    They share info with each other, info that will not get to the public. I know it sounds (well, sounded) like a crazy person rambling, but if the PS5 has been in development for 5 years, when did they decide to go with Navi and Zen instead of Vega and Zen (for example), when Navi is still yet to be properly announced?


    https://wccftech.com/amd-gpu-apu-roa...5-2020-emerge/




    • Seems my dig on Arteris was on the right track: the Xbox One interconnect is fabric-based. For the X2, MS probably uses AMD's own Infinity Fabric, but on the X1 it is Arteris FlexNoC (network-on-chip) based.
      Slides:
      1. old slide
      2. new evidence from LinkedIn
      linkedin->bob-wang-4087183622



      • OrionWolf commented:
        MrC, do you think it's possible that there's no listing for Argalus in the PCI ID list because MS was still designing it when the list came out? I mean, that's one way to actually come out with better specs, or "beyond" next gen. What if they started with Lockhart, but bided their time with Anaconda until the tech they wanted/needed was actually available or possible to fab?

        What if Lockhart and the PS5 are closely matched in specs and price, but Anaconda, depending on how much later its design phase started, could be whatever comes after Navi, because MS wanted to make sure they kept the promise of the power advantage? Having two SKUs allows them quite a lot; the question is only how powerful Lockhart is.

        If it's near the PS5 whole-system-wise, then Anaconda is going to be quite a different thing.

        Another two things if I may:

        1) What is the "Persephone" code name referring to exactly? Is it another name for Argalus, or is it something totally different?
        2) If they go with HBM for the L4, what do you think of HBM3 instead of HBM2? I think it's around 700GB/s, but if MS aims for the best possible, even up to 1TB/s is not out of the realm of possibility; I mean, low latency + high BW would be ideal for a cache, no? Is there anything else that could serve as an even faster cache with lower latency? (Some rough bandwidth math at the end of this comment.)

        I'm kinda starting to understand the "Arcturus" leak more now.
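        As for how those bandwidth figures come about: HBM scales with stack count and pin rate, and a rough sketch follows. The per-pin rates are published HBM2/Aquabolt numbers; the stack counts are just for illustration, since HBM3 wasn't finalized at the time.

        ```python
        # Each HBM stack has a 1024-bit interface, so peak bandwidth per stack
        # = 1024 bits * pin rate (Gbps) / 8 bits-per-byte, in GB/s.
        def hbm_stack_bw(pin_gbps, bus_bits=1024):
            return bus_bits * pin_gbps / 8

        hbm2 = hbm_stack_bw(2.0)      # 256.0 GB/s (first-gen HBM2)
        aquabolt = hbm_stack_bw(2.4)  # 307.2 GB/s (Samsung Aquabolt HBM2)

        # Two Aquabolt stacks give ~614 GB/s; four stacks pass 1 TB/s.
        print(2 * aquabolt, 4 * aquabolt)  # 614.4 1228.8
        ```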

      • mistercteam commented:
        Each RCC chiplet gets 640GB/s from its own base die; remember, ray tracing needs lots of BW. But MS will only talk about the Arden/Anaconda main GPU.

        If you look at the slide, you see 1x TF: the base TF is from Lockhart, and each chiplet mimics Lockhart but uses Arcturus.
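        Taking those claims at face value, here's a toy sketch of how the totals would scale with chiplet count; the Lockhart baseline TF is a placeholder, not a confirmed figure:

        ```python
        # Hypothetical scaling: each chiplet mimics the Lockhart "1x" baseline
        # and brings 640 GB/s from its own base die (per the claim above).
        BASE_TF = 4.0         # placeholder Lockhart TF, purely illustrative
        BW_PER_CHIPLET = 640  # GB/s per RCC chiplet, as claimed above

        for n in (1, 2, 4):
            print(f"{n} chiplet(s): {n * BASE_TF:.0f} TF, {n * BW_PER_CHIPLET} GB/s")
        ```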

      • OrionWolf commented:
        Hmm, interesting, but according to NVIDIA: "Continuing on single GPU considerations - nearly all ray tracing applications are single precision, so only the GPU's single precision speed is relevant. Ray tracing also tends to be high latency, so the GPU's memory bandwidth has minimal impact on ray tracing performance. Error correction is also not generally relevant for ray tracing, and can be turned off to regain some GPU memory.

        For most people, the GPU's memory size rivals its performance traits, as most ray tracing applications require the entire scene (geometry + texture maps + acceleration structure) to fit within the GPU's memory. Exceeding memory by one byte will usually either prevent the rendering or cause a fallback to far slower CPU processing (if the renderer has a fallback). Some more recent renderers (like that introduced in After Effects CS6) will page to system RAM, at a reduced performance that's still far better than CPU alone. Regardless of the fallback, staying within GPU memory is required for the best performance, with many artists choosing to get the largest memory cards they can obtain to maximize their chance of staying fast. Adding additional GPUs (be they on the same card or in other slots) doesn't increase the memory available for rendering, as the same data set must be hosted on each GPU."

        So I'm kinda wondering if Sony is going with 8GB of HBM2 (maybe Samsung Aquabolt HBM2) to do their RT?

        Also from Wccftech: "This means that a solution based on a 384-bit interface and surrounded by 12 DRAM dies could feature up to 24 GB of VRAM while a 256-bit solution can house up to 16 GB of VRAM. That’s twice the VRAM capacity as current generation cards. While VRAM is one thing, the maximum bandwidth output on a 384-bit card can reach a blistering fast 864 GB/s while the 256-bit solution can reach a stunning 576 GB/s transfer rate."

        16 and 24GBs ... where did I hear that before. =)
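        Those Wccftech figures are straight bus-width math; a quick check, assuming 18 Gbps pins and 16 Gb (2 GB) dies, one per 32-bit channel:

        ```python
        # GDDR6 back-of-envelope: bandwidth = bus width / 8 * per-pin rate,
        # capacity = one 2 GB (16 Gb) die per 32-bit channel.
        def gddr6_bw(bus_bits, pin_gbps=18.0):
            return bus_bits / 8 * pin_gbps  # GB/s

        def gddr6_capacity(bus_bits, die_gb=2):
            return bus_bits // 32 * die_gb  # GB

        print(gddr6_bw(384), gddr6_capacity(384))  # 864.0 GB/s, 24 GB
        print(gddr6_bw(256), gddr6_capacity(256))  # 576.0 GB/s, 16 GB
        ```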

      • mistercteam commented:
        The PS5's BW is 400-500GB/s. Plus, RT on Navi is not the same as the dedicated RT based on Arcturus, etc.