SkylightActive

About

Username
SkylightActive
Joined
Visits
4
Last Active
Roles
member
Points
27
Badges
0
Posts
9
  • Apple Silicon Mac mini dev kit looks like a desktop iPad Pro

    The GPU side of things
    Companies usually license each other's IP. Apple licensed Imagination's GPU designs for years, until terminating the agreement in 2017. Even back then, Apple clearly believed it could design better GPUs itself, without needing PowerVR. If you think about it, that's pretty bold: a company that doesn't do GPUs taking GPU chip design entirely in-house. The fruits of that decision are evident in their year-over-year GPU performance gains.

    Funny enough, they recently reactivated a license agreement with ImgTec. That new agreement is reported to cover ray-tracing chip IP "optimized for mobile." ImgTec's approach to ray tracing is to deliver the capability using as little power as possible, which aligns with Apple's move to ARM for the Mac: performance per watt.

    Nvidia and AMD both have roadmaps for beefy GPUs with ray tracing: Ampere and RDNA2. Both lean on tech like DLSS, which uses AI to make ray calculations more efficient without having to keep adding hardware to scale them. Max power for max performance. Apple has to either compete with that or simply license it and add it to their chip, but looking at how things are going, I don't think that's going to happen. Apple doesn't want power-hungry chips; it's going to be a different approach. x86 CPUs and GPUs are power-driven; Apple's ARM aims for more performance at less power. I foresee Apple taking the GPU fully in-house. They have clearly been focused on machine learning, and I think they have something like DLSS that we don't know about (we kind of got a hint in a WWDC19 developer video where they talk about ray tracing in the Metal API). If ray tracing and brute-force GPU power are the name of the game in 2020, then Apple has a lot to do to match or exceed what's about to come from AMD and Nvidia. I foresee Apple taking the same path as the Xbox Series X CPU/GPU combo solution for the bigger MacBook Pro. I don't know much about chip fabrication or its engineering limits, but placing, say, 40-50 GPU compute units onto the same die isn't too far-fetched. If Microsoft and Sony are using custom APU chips with setups like that, I don't see why Apple won't; it's just that it will be their own architecture instead of, say, AMD's. Forget Nvidia. That relationship is long gone.

    Either way, Apple knows that the 16" MacBook Pro has to perform. Giving customers an underpar GPU solution is a death sentence I don't think they are willing to risk. The pattern here is that Apple has realized they can't keep pissing prosumers off: the 15" MacBook Pro's thermal throttling, the keyboard, the screen, etc. I think they know what's at stake; the birth of the 16" MacBook Pro is proof of that. When you ask customers to join you in a complete brain transplant, you need something to back it up, particularly with the pros. They are hard to convince, so Apple has to deliver. This mysterious chip is so tightly guarded that we're all scrambling for any juicy detail possible. I for one believe in Srouji and am positive about what he will bring to the table.

    What could happen is something that may be too out there (my opinion). Apple could increase the SoC die size and house all the guts (from their WWDC 2020 slide) in one package: massive CPU, GPU, and memory. Doing so frees up a lot of space on the logic board for other things, like... cooling. If ARM is so thermally efficient, then fitting everything inside the SoC while retaining the fans and cooling could let the Mac laptop really go beyond: a laptop with the power of a desktop, a sales pitch they could really gloat about. We saw some evidence of this in the 50-watt envelope of the AMD Radeon 5600M GPU in the 16" MacBook Pro: max out the compute units, but keep everything cool. And it performs excellently from what I can see. The fact that Apple asked AMD to custom-build that chip gives a glimpse of what's about to come, I believe. Nvidia and AMD have to put the GPU on a separate die, simply because they don't make the CPUs themselves (apart from AMD's APUs), or because Intel blocked them.

    Apple stands in a different position, able to bridge what AMD and Nvidia couldn't. We have always held the belief that a dedicated GPU will beat an integrated GPU. But what if we are witnessing an industry transition, where the many components spread across the entire motherboard are replaced by a single large chip? You remove latency. The first person who thought of putting everything on silicon probably had the same idea, and people probably thought he was nuts. In the grand scheme of things, I believe this is going to change how motherboards are designed and built. SoCs have existed for some time, but pushing them to the forefront of the mainstream market is rather exciting. These are all speculations and personal predictions, obviously, but it is worth thinking about.
  • Apple highlighting App Store benefits to customers & developers in new promotional push

    I agree with Apple on this. Allowing alternate stores means going outside Apple's app-approval vetting, forced by Congress. I highly disagree with this, because Congress is trying to make Apple like Android or Windows. I chose Apple because I trust them to protect me and my family from cybersecurity threats posed by malicious apps and plugins. I control every credit card purchase on all my nephews' devices; I authorized them with Apple. The closed-gate approach has worked well for users since the App Store's inception. Opening that gate will lead to security issues Apple can no longer guard against. This change (if it happens) will get reflected in the user terms of agreement, like voiding the warranty, etc. Apple provides warranty on hardware and software; their complete control is what allows them to do that. Without that control, they cannot guarantee it. To me, that's not what I opted into.

    I understand there's a choice issue here. Many would want iOS to be an open system, and an alternate store would not compromise Apple's existing payment system, since you could opt not to use it. That argument is true. What I fear, though, is that Congress would go beyond that and force Apple to allow sideloading, etc. What happens if there is a security breach in an alternate storefront or an unvetted third-party app? Malicious actors could exploit a loophole in an operating system forced open by Congress for the sake of having it OPEN. Whereas before, Apple controlled everything and would be the one to quickly identify the problem and release a patch as soon as possible. In this scenario, who would be responsible? Do I trust that a third-party store would be as responsible and quick as Apple? I don't think so. But I trust Apple. I think that's the fundamental thing here: trust. It doesn't come easy; it takes years to earn from users.

    Apple is not a perfect company. But I do believe this particular issue has security consequences that come with being open. How open this can be without compromising security is something senators will have to be careful with. It can really backfire.
  • Apple's Federighi and Joswiak discuss Apple silicon, iOS 14, Big Sur and more

    I’m gonna go out on a limb here, but I think this move is a good choice. I truly believe something quite astounding is about to be released, albeit with some trade-offs. In the long term, I think it’s good. I’m just a Mac user who needs Final Cut, After Effects, Cinema 4D, and all of Adobe to work.

  • What the Apple Silicon M1 means for the future of Apple's Macs

    I said this a while ago, right after WWDC 2020: this would be a big industry shift, for the better, not the doomsday scenario for the Mac that many anticipated. What people need to wrap their heads around with Apple Silicon going forward is that efficiency means better performance.

    Take a basic example: an office with staff sorting mail for delivery. With a limited number of people, you can only do so much at a given time. So instead of simply throwing more people at it to increase speed (which drives cost up, and is what Intel does), you eliminate unnecessary jobs and streamline the process so the same people achieve more. Then, if that’s not enough, you add a few more people to improve the overall speed of sorting. This is efficiency: you can do more if you’re efficient.

    We have to move away from judging all this as a direct comparison with vendors who just quote gigahertz. Big gigahertz numbers are useless unless they can be fully exploited. Apple doesn’t need to build the highest-clocked CPU on the planet if they can already achieve better real-world performance; that beats having high clocks just so you can say you have the highest number. It’s a shift to “what are people really using it for? They need faster codecs for video editing, right? How about the digital stabilization tool in post that needs to be faster? OK, we’ll throw in ML acceleration so our devs can use that to speed up the process.” I think this is the right way forward: custom solutions for specific tasks, so those tasks run faster without sacrificing low power consumption and HEAT. Less heat means they can sustain higher speeds for longer. The iPhone and iPad have always had less RAM and lower clock speeds than their competitors; Apple doesn’t even put those numbers in their slides. Yet they always outperform the competition in real-world usage. Like Apple vs. Samsung and Qualcomm in mobile devices, this will be Apple vs. Intel. We just have to learn to decipher comparisons in a different way.

    A system on a chip is a very innovative shift for microprocessors. However, the lack of info so far on how the M1 manages RAM between the GPU and CPU is a concern of mine. How does the operating system prioritize which is more essential? What happens when the RAM is full? Will it use the SSD as virtual memory? Will apps need to be written, or rewritten, to target both CPU and GPU at the same time to get the most benefit from the efficiency and speed of the silicon, where previously an app might have been 85% on the CPU and 15% on the GPU?
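    To picture why the unified-memory question above matters, here's a toy sketch in Python. This is not Apple's Metal API and the function names are made up; it only models the difference between a discrete GPU (which needs a full copy of the data into separate VRAM) and unified memory (where CPU and GPU address the same bytes):

    ```python
    # Toy model (not Apple's actual API) of discrete vs. unified GPU memory.

    def discrete_gpu_upload(system_ram: bytearray) -> bytearray:
        """Discrete-GPU path: an extra full copy of the data into VRAM."""
        vram = bytearray(system_ram)  # the copy unified memory avoids
        return vram

    def unified_memory_view(system_ram: bytearray) -> memoryview:
        """Unified-memory path: the 'GPU' sees the same bytes, zero copies."""
        return memoryview(system_ram)

    frame = bytearray(b"\x00" * 8)

    # Discrete: CPU edits made after the upload are invisible to the VRAM copy.
    vram = discrete_gpu_upload(frame)
    frame[0] = 0xFF
    assert vram[0] == 0x00

    # Unified: the GPU-side view observes the CPU's write immediately.
    shared = unified_memory_view(frame)
    frame[1] = 0xAA
    assert shared[1] == 0xAA
    ```

    In this toy model, the open question from the post is essentially who arbitrates the single shared pool when both sides want more of it.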

    These are some of the questions I wish I had more info on. I’m not too worried about the future GPUs. Having said that, GPU performance is very important to me, probably because of how the software I use behaves: not yet optimized, not factoring in the unified-memory approach, and therefore demanding a beefier GPU than it really needs. The other point that wasn’t mentioned is ray tracing. I think Apple needs to address this in their upper-tier SoC. This graphics technology goes well beyond gaming; if Apple wants to remain relevant in this arena, they need to address it.

    Apple isn’t showing the SoC for the 16” MacBook Pro at this event because they are giving themselves time to release it, probably wanting to wait and see Ampere and RDNA2, but most likely just not wanting to release everything at once. It is 2020, after all. But to say they don’t have anything to fight those beefier GPUs would be unjust to Apple. Something’s cooking in their labs; we just haven’t had the luxury of seeing it, apart from Srouji. Ampere and RDNA2 are the benchmark performance they need to match or exceed. If the Air can gain such a drastic performance upgrade, I can’t wait to see the silicon that goes into the new 16” MacBook Pro in 2021.
