I would be surprised if the MFi program even considered charging speeds for wired connections. Of course Apple is going to use USB-PD to negotiate charging.
For MFi, they care a lot more about manufacturing standards (the label ensures a minimum level of functionality) and security implications. I suspect the phones are only going to have a USB 2 channel (look at the USB-C charging cable for their laptops), but maybe a limited version of Thunderbolt. After all, they have their own Thunderbolt controller core and their own IOMMU to lock it down. Maybe the phones get a two-lane controller instead of the full four-lane controller the tablets, laptops, and desktops get. Then, to use Apple's extensions to the Thunderbolt standard, you have to be MFi-certified. Think of how the Apple Watch charger and the iPhone's MagSafe both involve proprietary extensions of Qi.
Xed said:
tht said:
I'm curious why Apple didn't (well, is rumored not to have) put the computing chip bits in with the battery pack too. It would have made the goggles lighter, slimmer, and thermally cooler. The only reason I can think of is that it's pretty difficult to push >5K at 120 Hz through a wire, and this would have two such streams.
It may have to have a fan to cool all the chips and to cool your face.
Our field of vision is about 165º per eye horizontally (with ~120º binocular overlap) and 130º vertically. 20/20 (6/6) visual acuity is defined as the ability to see the separation of two lines placed one arcminute apart. There are 60 arcminutes per degree, so that comes out to 9,900 horizontal pixels by 7,800 pixels per eye just to keep up with 20/20 vision. Visual acuities up to 1.5x sharper are pretty common.
Let's target 20/20 (6/6) vision and a 4320p (7680x4320, erroneously marketed as "8K") screen per eye. That would cover 7680/60 => 128º horizontally by 4320/60 => 72º vertically, which excludes basically all of your peripheral vision. Both of my eyes are sharper than 20/20. My stronger eye is 20/12 (6/3.8), or about 1.6x sharper than 20/20. To keep up with that over the same area, I would need a 12288x6912 display.
Targeting 20/40 (6/12) and a 5K (5120x2880) screen per eye yields 171º by 96º. This would cover peripheral vision passably (though still not well), but the sharpness would be on the level of the pre-Retina iPhone screens.
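The arcminute arithmetic in the last few paragraphs condenses to a few lines. A quick sketch (`required_pixels` is a made-up helper for this post; it assumes pixels are spread evenly across the field of view):

```python
# 20/20 acuity resolves 1 arcminute; an acuity ratio r > 1 means proportionally
# sharper vision (e.g. the post's 20/12 eye, treated as ~1.6x).
ARCMIN_PER_DEG = 60

def required_pixels(fov_h_deg, fov_v_deg, acuity_ratio=1.0):
    """Pixels needed so each pixel subtends (1 / acuity_ratio) arcminute."""
    ppd = ARCMIN_PER_DEG * acuity_ratio  # pixels per degree
    return round(fov_h_deg * ppd), round(fov_v_deg * ppd)

print(required_pixels(165, 130))      # full monocular FOV at 20/20 -> (9900, 7800)
print(required_pixels(128, 72, 1.6))  # a 1.6x-sharper eye over the "8K" panel's FOV -> (12288, 6912)
```

Running the same helper in reverse (dividing a panel's pixel count by the pixels-per-degree figure) gives the 128º x 72º and 171º x 96º coverage numbers above.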
blastdoor said:
zimmie said:
anonconformist said:
blastdoor said:
DAalseth said:
longfang said:
DAalseth said:
I have a feeling that Apple will introduce an M-Series Mac Pro, but keep the Intel version around. Also consider that getting rid of Intel would vastly simplify software development efforts.
That said, I’m not a computer engineer so it may be a crazy idea;
Would it be feasible to use two tiers of RAM? The high-speed RAM built into the chip, and then a TB or more of comparatively slow conventional RAM in sticks on the motherboard like it has now? It would have to keep track of what needed to be kept in the high-speed on-chip space and what could be parked on the sticks. It would work like virtual memory does now, but paging to the slower RAM instead of to the SSD.
Not sure how feasible this is but it was a crazy idea that just crossed my mind.
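The two-tier idea above amounts to LRU page migration between a small fast pool and a large slow pool. A toy sketch (purely illustrative; the class and policy are invented for this post, not how any real OS manages tiered memory):

```python
from collections import OrderedDict

class TwoTierMemory:
    """Toy model of a small fast tier backed by a large slow tier.
    Accessing a page promotes it to the fast tier, demoting the
    least-recently-used page (like swap, but RAM-to-RAM)."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()  # page -> data, in LRU order
        self.slow = {}
        self.fast_capacity = fast_capacity

    def access(self, page):
        if page in self.fast:
            self.fast.move_to_end(page)       # hit: refresh LRU position
            return "fast"
        self.fast[page] = self.slow.pop(page, None)  # miss: promote
        if len(self.fast) > self.fast_capacity:
            old, old_data = self.fast.popitem(last=False)
            self.slow[old] = old_data         # demote the coldest page
        return "slow"

mem = TwoTierMemory(fast_capacity=2)
print([mem.access(p) for p in ["a", "b", "a", "c", "b"]])
# -> ['slow', 'slow', 'fast', 'slow', 'slow']
```

The reply below is right that this is structurally the same bookkeeping as swap; the only difference is where the demoted pages land.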
This is literally talking about using some high-performance SSDs as dedicated swap devices. No more, no less.
The headache would be random access, just like with swap. NVMe SSDs can reach DDR2 levels of throughput, but with much worse latency for random operations. If you mix reads and writes, performance gets ridiculously bad: give a 70/30 R/W workload to a flash-based SSD and its performance drops to around 15% of its high-queue-depth peak read or peak write.

Needing more than 1TB of RAM is way outside my experience. For folks who have such experience: does your work involve a lot of random reads and writes across that 1TB, or do you think you're really swapping multi-GB chunks in and out?

InspiredCode said:
The consensus opinion seems to be that Apple will add AMD GPU support to Apple Silicon, since they need some sort of GPU expansion. I'm having some trouble believing that, though. If anything, at this point a shift back to Nvidia would make more sense, because those cards are preferred by scientific and high-end rendering users...
adam venier said:My dream version of the Mac Pro: Modular Design.
Create a series of bricks having the same length and width (a la Mac Studio, OWC Ministack STX, etc.) but with different heights. Allow these to be vertically stacked to customize the server of your dreams. I would then create:
- CPU - essentially an upgraded Mac Studio
- RAID array module based on M.2 SSDs
- RAID array module based on HDDs
- PCI expansion modules (for graphics cards, scientific packages, etc.)
- Power backup module (lithium battery for space)
Were Apple to go this route, I believe they could capture the majority of revenue associated with server hardware.

dinoone said:
I propose an alternative interpretation of the news above: a proper bus mainboard holding extension boards, each extension board including both RAM and Apple Silicon processors. In this way, each extra extension board (RAM + M2) would simultaneously extend both the amount of RAM and the number of processors. This would be compatible with the wording "lack of user-upgradable memory".