dope_ahmine

About

Username: dope_ahmine
Joined:
Visits: 52
Last Active:
Roles: member
Points: 725
Badges: 0
Posts: 270
  • iPad mini 7 two-month review: Now I get it

    Anybody tried to use this with Fortnite (native install, not streaming)?
  • M4 MacBook Pro display uses quantum dot film for more vibrant color & motion performance

    … nanocrystals made from semiconductor material called quantum dots …

    A quantum dot is not itself a semiconductor material but a nanocrystal (i.e., a specially shaped particle), typically made from semiconductor materials such as CdSe or InP.
  • M4 Mac mini review roundup: Pint-sized powerhouse that won't break the bank

    blastdoor said:
    The Tech Crunch quotes seem oddball to me. 

    I guess if you landed on Earth in a pod yesterday you might not have a monitor, keyboard, and mouse handy. But I'll bet a very large share of adults over the age of, say, 30 have a monitor, keyboard, and mouse already. That's not a small market.
    I will do something entirely different though: I will connect the mini to my iPad.

    Sidecar over Wi-Fi is already quite decent, but with a USB-C cable you get yourself a really nice monitor + keyboard + touch input. And there are plenty of iPad users out there who would love this setup.
  • Generation gaps: How much faster Apple Silicon gets with each release

    chasm said:
    netrox said:
    Exactly why do we need to keep adding more CPU cores when most creative oriented applications would benefit from having more GPU cores? 
    Not that I’m the last word on this topic, but to put this VERY simply CPUs do math and GPUs take that math and manipulate pixels. Graphics are created through math, so more CPUs enable GPUs to do their job better.

    More GPUs are needed when you have really really large screens/more screens. More CPUs are needed when you need more graphics.
    Sorry, but that is wrong. GPUs excel at doing math at high memory bandwidths... but they basically need to be able to do the math in parallel, and the application has to be written specifically to use the GPU. CPUs are the default place for code to run, and are generally better at doing complex logic with lots of decisions, randomly chasing through memory for data, and doing less "orderly" computations.

    To leverage multiple CPUs, the application has to be written to do that, and it isn't the default. Code typically starts its existence on a single CPU; then the programmer improves it to take advantage of multiple CPUs; then they might improve it further to either use the CPU's SIMD or matrix hardware, or rewrite critical pieces to run on the GPU. These days it is also quite common for application programmers to use libraries (often Apple's) which leverage multiple cores, SIMD, matrix hardware, and GPUs.

    Creative-oriented applications are often graphics- or audio-heavy, and those workloads can usually take advantage of all this potential hardware parallelism as long as they are optimized to do so (and the good ones are).
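    To make that progression concrete, here is a minimal sketch (in Python, with names of my own choosing, nothing from this thread): the same computation written for one CPU core, explicitly split across several cores, and handed to a vectorized library whose kernels can use the CPU's SIMD units.

    ```python
    # A sketch of the optimization path described above: the same work on one
    # core, explicitly split across several cores, and vectorized.
    import math
    from multiprocessing import Pool

    import numpy as np

    def work(chunk):
        # Plain loop: the default, single-CPU starting point for most code.
        return [math.sqrt(x) * 2.0 for x in chunk]

    def multi_core(values, workers=4):
        # The programmer has to split the work across CPU cores explicitly;
        # it does not happen by default.
        size = (len(values) + workers - 1) // workers
        chunks = [values[i:i + size] for i in range(0, len(values), size)]
        with Pool(workers) as pool:
            return [x for chunk in pool.map(work, chunks) for x in chunk]

    def vectorized(values):
        # Hand the whole array to a library (NumPy here) whose kernels can
        # use the CPU's SIMD units; porting to the GPU would be the next step.
        return np.sqrt(np.asarray(values, dtype=np.float64)) * 2.0

    if __name__ == "__main__":  # guard required for multiprocessing on macOS
        data = list(range(100_000))
        assert np.allclose(work(data[:8]), multi_core(data[:8], workers=2))
        assert np.allclose(work(data[:8]), vectorized(data[:8]))
    ```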

    The question of CPUs vs GPUs on the SoC is a complex one. Many applications don't use the GPU at all except for the UI (which needs hardly any GPU), but are optimized for multiple CPUs... adding more GPU for those applications gets you nothing. Even GPU-heavy applications can also benefit from more CPUs, in some cases. Ultimately though, GPUs tend to be memory-bandwidth limited, so scaling up the GPU beyond what the memory bandwidth can support gains very little.
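    The bandwidth point lends itself to a quick roofline-style estimate. The figures below are illustrative only, not the specs of any real chip:

    ```python
    # Roofline sketch with made-up numbers: once a kernel is bandwidth-bound,
    # adding compute (more GPU cores) barely moves attainable throughput.
    peak_flops = 4.0e12   # hypothetical GPU peak: 4 TFLOP/s
    bandwidth = 100e9     # hypothetical memory bandwidth: 100 GB/s
    intensity = 2 / 8     # streaming kernel: 2 FLOPs per 8 bytes read

    attainable = min(peak_flops, bandwidth * intensity)
    print(f"{attainable / 1e9:.0f} GFLOP/s attainable "
          f"of {peak_flops / 1e12:.0f} TFLOP/s peak")
    # -> 25 GFLOP/s: doubling peak_flops changes nothing here, while
    #    doubling the memory bandwidth would double the attainable rate.
    ```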
    Glad you stepped in here and corrected the previous two comments, @programmer.


    On top of that, GPUs today are crucial for AI and creative tasks because of their ability to handle tons of calculations in parallel, which is exactly what’s needed for training neural networks and other data-heavy workloads. Modern GPUs even come with specialized hardware, like NVIDIA’s Tensor Cores, purpose-built for the matrix math involved in deep learning. This kind of hardware boost is why GPUs are so valuable for things like image recognition or natural language processing—they’re not just for “manipulating pixels”.
    CPUs and GPUs actually complement each other in AI. CPUs handle tasks with lots of decision-making or data management, while GPUs jump in to power through the raw computation. It’s not just about one or the other; the best results come from using both for what each does best.
    As for energy efficiency, GPUs perform many tasks at a much lower power cost than CPUs, which is huge for AI developers who need high-speed processing without the power drain (or cost) that would come from only using CPUs.
    And on top of all that, new architectures are even starting to blend CPU and GPU functions—like Apple’s M-series chips, which let both CPU and GPU access the same memory to cut down on data transfer times and save power. Plus, with popular frameworks like PyTorch and TensorFlow (and platforms like CUDA underneath them), it’s easier than ever to optimize code to leverage GPUs, so more developers can get the speed and efficiency benefits without diving deep into complex GPU programming.
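    As a minimal illustration of that last point, here is what leveraging the GPU can look like in PyTorch on an M-series Mac ("mps" is PyTorch's backend for the Apple Silicon GPU); the device choice is the only line that changes:

    ```python
    import torch

    # Use the Apple Silicon GPU via PyTorch's "mps" backend when available,
    # otherwise fall back to the CPU; the rest of the code stays identical.
    device = "mps" if torch.backends.mps.is_available() else "cpu"

    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b  # the matmul runs on whichever device holds the tensors
    print(f"matmul ran on: {c.device}")
    ```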
  • Is Apple finally serious about gaming after its latest push?

    Apple beyond PCs in gaming??? Yeah, right