Xbox One Secret Sauce™ Hardware


  • #34
    Originally posted by revben View Post
    HALO 5 DYNAMIC RESOLUTION

    It seems that Halo 5 does have dynamic resolution. However, there are some things I have noticed.

    First, the GPU IS NOT BEING TAXED during Warzone multiplayer gameplay. The most the GPU reaches during gameplay is 25%.

    You will also notice that sometimes the GPU PERCENTAGE is high while the resolution is low, and then the resolution is high while the GPU PERCENTAGE is low. You would think this does not make any sense; after all, the resolution should be high when the GPU % is high.
    However, resolution is determined more by the bandwidth available to the game. If bandwidth is not the bottleneck, the GPU then becomes the bottleneck. The GPU is not the bottleneck in the Xbox One, despite what people think.
    The bottleneck has been bandwidth. DDR3 does not have enough bandwidth to consistently drive 1080p. However, the Xbox One does have the ESRAM. If most of the render target is placed in it, then the game will be at 1080p.
    If you look carefully at the videos, you will notice only one thing determines the resolution, and that is REN. Notice that the lower REN is, the higher the resolution. LOOK AT THE VIDEO AND YOU WILL SEE. My theory is that REN represents the amount of the render target in DDR3: the less in DDR3, meaning more of the render target in ESRAM, the better the resolution. Therefore, the only thing needed to improve resolution is better use of the ESRAM, as the GPU can handle 4K resolution with ease.
    That is why at GDC, when the PIX tooling for the ESRAM was improved, performance improved by 15%, and that was without tiled resources and DX12; imagine when those come on stream. Plus MAYBE HBM?

    Next, we can see that the Jaguar cores in the Xbox can only feed the GPU up to 25% during gameplay. That is why it was said that DX12 will make it seem as if the Xbox One has gotten an extra GPU. This is because DX12 will increase CPU performance by 100-300%. At that point the GPU can maybe reach up to 50% use during gameplay, therefore more particles, effects, etc.

    The next point is that the Xbox One has reduced performance during cutscenes, and at that point the GPU can reach up to 60%. Weird! This can be seen in games such as the Tomb Raider remaster, The Witcher 3, FIFA, Halo 5, etc. Most games perform worse in cutscenes. Therefore, in the future, when gameplay reaches 4K 60fps, cutscenes will still be at 1080p 30fps. That means developers should start using real-time cutscenes.
    I can't find any report of it having a dynamic resolution. We already know multiplayer was said to be 1080p 60fps, and we assumed single player was 1080p 30fps, but no word on that.
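    The bandwidth argument quoted above can be put into rough numbers. A back-of-the-envelope sketch: the DDR3/ESRAM bandwidth figures are the public spec values, but the per-frame cost model below is a made-up simplification, not anything measured.

```python
# Rough bandwidth budget for the quoted ESRAM argument.
# Xbox One: DDR3 ~68 GB/s, ESRAM ~109 GB/s (public spec figures).
# "passes_per_frame" is a made-up simplification of how often a
# render target gets read/written during a frame.

def rt_bandwidth_gbs(width, height, bytes_per_pixel, fps, passes_per_frame):
    """GB/s needed to touch one render target `passes_per_frame` times a frame."""
    bytes_per_frame = width * height * bytes_per_pixel * passes_per_frame
    return bytes_per_frame * fps / 1e9

# A 1080p RGBA8 target touched ~20 times per frame at 60 fps:
need = rt_bandwidth_gbs(1920, 1080, 4, 60, 20)
print(f"needed: {need:.1f} GB/s")   # ~10 GB/s for one target; several
                                    # targets plus texture traffic eat
                                    # into DDR3's ~68 GB/s quickly
```

    The point of the sketch is only that a few full-rate render targets plus texture reads can plausibly saturate DDR3, which is why moving targets into ESRAM matters in this theory.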

    Comment


    • #35
      Originally posted by Shonuff View Post

      I can't find any report of it having a dynamic resolution. We already know multiplayer was said to be 1080p 60fps, and we assumed single player was 1080p 30fps, but no word on that.
      I do not believe that either; I am making reference to the tech. No one knows.

      Comment


      • #36
        Yup, and they also confirmed that the build is using assets that are over 10 weeks old, and pretty much all we have seen is placeholders and unfinished assets of H5.

        I'm guessing they are using DX11+ to get a locked 60fps for now, and when the polishing stage comes they will have a whole bunch of extra resources to spend on the visuals.

        Also, the pics from NeoGAF revealed that the GPU thread was averaging 25% usage at 33ms (30fps). That wouldn't make any sense, as the beta was running at 60fps. Unless GAF just decided to record the numbers whenever it was at its worst.

        Also, if 1 thread uses 25%, do 2 threads use 50%? They are nowhere near utilizing the X1 to its potential, which more or less points to a massive upgrade in visuals before the launch of Halo 5. It just doesn't make any sense that they are only 4 months away from launch and it's so horribly optimized, not to mention H2A has better graphics and it was still running the old H2 engine in the background at all times. Polishing comes last with MS, gameplay first.
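        The 25%-at-33ms numbers above imply a simple frame-budget calculation; a sketch, assuming the counter really is a percentage of frame time (which is exactly what this thread is unsure about):

```python
# If the GPU-thread counter is "% of the frame budget used", the implied
# GPU busy time and headroom fall out of simple arithmetic. This assumes
# the counter is a percentage of frame time, which is the open question.

def gpu_headroom(frame_ms, usage_pct):
    """Return (busy_ms, idle_ms) for one frame."""
    busy = frame_ms * usage_pct / 100.0
    return busy, frame_ms - busy

busy, idle = gpu_headroom(33.3, 25.0)   # 30 fps frame, 25% reported usage
print(f"GPU busy {busy:.1f} ms, idle {idle:.1f} ms")  # busy ~8.3 ms,
                                                      # i.e. fast enough
                                                      # for well over 60fps
```

    Under that reading, 8.3 ms of GPU work would comfortably fit a 16.6 ms (60fps) frame, which is why the 30fps capture looks inconsistent with the 60fps beta.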
        Last edited by qO-Lantern-Op; 06-21-2015, 09:15 PM.

        Comment


        • #37
          Maybe Halo 5 was designed with a DX12-ready launch in mind (this is why we have dynamic resolution right now).

          Is Windows 10 on Xbox One planned to launch before Halo 5? If yes, then that's it, I think.

          I think that because I remember Phil Spencer said on Twitter that the first DX12 games on Xbox One are coming this holiday 2015, and I don't think Fable Legends is the only one for this year.
          Last edited by kipotan; 06-21-2015, 10:23 PM.

          Comment


          • #38
            Originally posted by revben View Post
            HALO 5 DYNAMIC RESOLUTION

            It seems that Halo 5 does have dynamic resolution. However, there are some things I have noticed.

            First, the GPU IS NOT BEING TAXED during Warzone multiplayer gameplay. The most the GPU reaches during gameplay is 25%.

            You will also notice that sometimes the GPU PERCENTAGE is high while the resolution is low, and then the resolution is high while the GPU PERCENTAGE is low. You would think this does not make any sense; after all, the resolution should be high when the GPU % is high.
            However, resolution is determined more by the bandwidth available to the game. If bandwidth is not the bottleneck, the GPU then becomes the bottleneck. The GPU is not the bottleneck in the Xbox One, despite what people think.
            The bottleneck has been bandwidth. DDR3 does not have enough bandwidth to consistently drive 1080p. However, the Xbox One does have the ESRAM. If most of the render target is placed in it, then the game will be at 1080p.
            If you look carefully at the videos, you will notice only one thing determines the resolution, and that is REN. Notice that the lower REN is, the higher the resolution. LOOK AT THE VIDEO AND YOU WILL SEE. My theory is that REN represents the amount of the render target in DDR3: the less in DDR3, meaning more of the render target in ESRAM, the better the resolution. Therefore, the only thing needed to improve resolution is better use of the ESRAM, as the GPU can handle 4K resolution with ease.
            That is why at GDC, when the PIX tooling for the ESRAM was improved, performance improved by 15%, and that was without tiled resources and DX12; imagine when those come on stream. Plus MAYBE HBM?

            Next, we can see that the Jaguar cores in the Xbox can only feed the GPU up to 25% during gameplay. That is why it was said that DX12 will make it seem as if the Xbox One has gotten an extra GPU. This is because DX12 will increase CPU performance by 100-300%. At that point the GPU can maybe reach up to 50% use during gameplay, therefore more particles, effects, etc.

            The next point is that the Xbox One has reduced performance during cutscenes, and at that point the GPU can reach up to 60%. Weird! This can be seen in games such as the Tomb Raider remaster, The Witcher 3, FIFA, Halo 5, etc. Most games perform worse in cutscenes. Therefore, in the future, when gameplay reaches 4K 60fps, cutscenes will still be at 1080p 30fps. That means developers should start using real-time cutscenes.
            I hear what you are saying, and I'm just saying I'm not 100% convinced either way yet.

            We know this game has been in development for 3 years, so them using FULL DX12 is sort of out of the question.

            However, like you said, the lower the REN the higher the resolution. BUT also remember that IF, and I'm saying IF, the GPU number is measured in 'how long it takes the GPU to finish the frame', then it would ALWAYS go up when the resolution goes DOWN (which, from what I have seen, is what happens every time).

            Because the way I see it, if the GPU is going to take longer to finish the frame, then it will scale back the resolution to try and help the GPU finish the frame quicker.
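            The feedback loop described above can be sketched as a generic dynamic-resolution controller. This is not 343's actual code; the budget, thresholds, and step size are made-up illustration values.

```python
# Generic dynamic-resolution feedback loop: if the last frame ran over
# budget, drop the render scale; if there is comfortable headroom, raise
# it back. All constants are made-up illustration values.

def next_render_scale(scale, gpu_ms, budget_ms=16.6, step=0.05,
                      lo=0.5, hi=1.0):
    if gpu_ms > budget_ms * 0.95:        # over (or nearly over) budget
        scale -= step
    elif gpu_ms < budget_ms * 0.80:      # comfortable headroom
        scale += step
    return min(hi, max(lo, scale))

scale = 1.0
for gpu_ms in [14.0, 17.5, 18.0, 15.0, 12.0]:   # simulated GPU frame times
    scale = next_render_scale(scale, gpu_ms)
print(f"final render scale: {scale:.2f}")        # final render scale: 0.95
```

            Note how this loop reproduces the observation in the thread: a high GPU frame time in one frame causes a lower resolution in the next, so high "GPU number" and low resolution appear together.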

            Do I hope it is a GPU percentage? OF COURSE. I'm just trying to do the math and make sure we are not jumping to a conclusion that others will later laugh at us for. I do numbers for a living and am just trying to analyze them.

            Keep up the good work.

            G
            Last edited by G-Force; 06-22-2015, 12:44 AM.

            Comment


            • #39
              Originally posted by mistercteam View Post
              For me, I already explained that the CPU-like core for the future GPU is the HP-APU.
              It is part of what used to be called OBAN.
              That is certainly why it is low yield;
              the main SOC would not get low yield.

              The gfx core consists of:
              4 CP = future HP-APU, also connected to 768 scalars, like the Mike Mantor patent or journal paper
              6 CU groups = front end + middle + back end (all CU/SIMD based); read up on Intel's DX12 ROVs, they will basically be the basis for a programmable backend

              It is also why, in 2012, when the Denver project was announced by Nvidia, they talked about a CPU-like core in the gfx core (they use ARM).

              It is also why, in my previous slide, I showed that DX12 needs a streamlined process before the CP streams to the vector block.
              It means the CP will become a CPU-like core, as DX12 hinted that they need to enable more CPU threads, since the CPU will take care of the front end.
              It means in the future, even though the CP is a gfx-core function, internally it could be a simpler CPU-like core.

              All of the above is also the same as Nvidia's Echelon:
              CPU-like core + SIMD.

              In simple terms, for the X1:
              CPU-like core (branch + scalar) + 768 scalar floats (serial + control)
              SIMD processor unit = parallel (DLP) = 6 CU groups

              Plus, pay attention to John Sell's latest conference, titled "Xbox One Next Gen Game Processor", with AMD's Sebastien.
              As we know, AMD's Sebastien is the Trinity SOC architect,
              plus we know Trinity, for me, is the base concept for all future APUs.
              Trinity has 6 CUs;
              PIM is also per 6 CUs or 12 CUs,
              just like the X1's 12 CUs per group,
              but the X1 uses a very low clock for the SIMDs.

              Will talk about and put up my slide from the previous discussion later.
              A lot of what I see is a first implementation of HSA DSPs. There are plenty of DSPs on processors, but none that were fully integrated to be used as needed: large data sets being transformed without noticeable lag. Everyone knows how GPUs are wider than CPUs; DSPs are incredibly wide. A 360 processor PIM could be replicated in the One's DSPs.

              Comment


              • #40
                Originally posted by Jedidiah73 View Post
                Why don't we tweet one of the developers and just ask? Just ask what that number is...
                We did, several times, but Holmes hasn't returned any answer.
                I believe there are some surprises about H5;
                as a matter of fact, the Halo 5 image resolution on xbox.com remains very high,
                while TR and Forza 6 remain the same as their native resolution.

                But maybe let others tweet Holmes; maybe they'll get some answer.

                Comment


                • #41
                  Originally posted by mistercteam View Post
                  We did, several times, but Holmes hasn't returned any answer.
                  I believe there are some surprises about H5;
                  as a matter of fact, the Halo 5 image resolution on xbox.com remains very high,
                  while TR and Forza 6 remain the same as their native resolution.

                  But maybe let others tweet Holmes; maybe they'll get some answer.
                  Regardless of all of this, I do have to say: with what we saw the 360 do with 512 megabytes, now we have 8 gigs. Yes, it's shared, but if W10 is so great, I bet it doesn't take up much memory either. That was one of the saving factors in why we saw such awesome graphics on the 360 later: eventually the UI was more efficient with less memory. If any part of DX12 is true, coupled with W10, I just don't see how we won't see a performance boost in games at least 3 times what they are right now.

                  Forget PS4. I'm no tech guy, but I don't see this massive advantage in the games the PS4 is supposed to have, and yes, I own one. I dunno, good effort though.

                  I still think Square and Crystal Dynamics are being jerks about the exclusive deal. They are totally feeding PlayStation fanboys without straight up telling them. It's like, at this point, does it matter if it's exclusive? I really do wish Xbox took TR away from them completely; it would have been a true blow, as much as they gripe about it. Can't wait though.

                  On a side note, anyone picking up Batman this week? I have it pre-downloaded.
                  Last edited by Jedidiah73; 06-22-2015, 12:36 PM.

                  Comment


                  • #42
                    Starting tomorrow I will add XDK proof, linked also to page 1 in this thread.
                    The XDK thread showed:
                    2 L2 systems
                    GDS-to-GDS transfer (on current AMD GCN this means 2 physical chips or CrossFire, but on the X1 it is in the logic design)
                    TCP = L1
                    TCC = L2
                    16 TCC = 16 VA
                    and others.
                    This is directly from the XDK, all of it, which nobody outside the MrX Journal dared to post previously (as it means they have to eat crow).


                    Comment


                    • #43
                      I'd like to focus on how BC was achieved. Even DF said "technological miracle".
                      Do you think there is a sort of PPC->x86 port made by MS before releasing the game, or is it just virtualization with an interpreter that converts PPC instructions to x86 ones on the fly?!
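                      For reference, the "on the fly" option is usually called emulation or dynamic binary translation. A toy sketch of the interpreter approach, using a made-up two-instruction PPC-like ISA (nothing here reflects Microsoft's actual emulator):

```python
# Toy interpreter for a made-up PPC-like ISA, illustrating the
# "convert on the fly" option: each guest instruction is decoded and
# executed against emulated registers. Production emulators go further
# and translate hot blocks of guest code into native x86 for speed.

def run(program, regs=None):
    regs = regs or [0] * 8          # 8 emulated guest registers
    for op, *args in program:
        if op == "li":              # li rD, imm   -> load immediate
            regs[args[0]] = args[1]
        elif op == "add":           # add rD, rA, rB
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return regs

regs = run([("li", 0, 2), ("li", 1, 3), ("add", 2, 0, 1)])
print(regs[2])   # 5
```

                      The practical difference between the two options in the post: ahead-of-time porting recompiles once before release, while a translator like the sketch above (plus a native-code cache) does the conversion at run time.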

                      Comment


                      • #44
                        Originally posted by Seiya di Pegaso View Post
                        I'd like to focus on how BC was achieved. Even DF said "technological miracle".
                        Do you think there is a sort of PPC->x86 port made by MS before releasing the game, or is it just virtualization with an interpreter that converts PPC instructions to x86 ones on the fly?!

                        Could be that some of those 50 processors/MCUs were put in to do hardware porting from PPC to x86/x64 code.

                        Comment
