zimmie

About

Username: zimmie
Joined:
Visits: 169
Last Active:
Roles: member
Points: 2,737
Badges: 1
Posts: 651
  • iPhone will catch a sales block in EU countries if Apple limits USB-C

    I would be surprised if the MFi program even considered charging speeds for wired connections. Of course Apple is going to use USB-PD to negotiate charging.

    For MFi, they care a lot more about manufacturing standards (the label ensures a minimum level of functionality) and security implications. I kind of suspect the phones are only going to have a USB 2 channel (look at the USB-C charging cable for their laptops), but maybe a limited version of Thunderbolt. After all, they have their own Thunderbolt controller core and their own IOMMU to lock it down. Maybe the phones get a two-lane controller instead of the full four-lane controller the tablets, laptops, and desktops get. Then to use Apple's extensions to the Thunderbolt standard, you would have to be MFi-certified. Think of how the Apple Watch charger and the phone's MagSafe both involve proprietary extensions of Qi.
  • Apple's MR headset to use magnetically-attached tethered battery

    Xed said:
    tht said:
    I'm curious why Apple didn't, well, is rumored not to have, put the computing chip bits in with the battery pack too. It would have made the goggles lighter, slimmer, and thermally cooler. The only reason I can think of is that it's pretty difficult to push >5K at 120 Hz through a wire, and this would have two.

    It may have to have a fan to cool all the chips and to cool your face.
    Why does it need 5K (14.7M pixels) if the screen is an inch from your eye?
    That's when you need pixel density the most.

    Our field of vision is about 165º per eye horizontally (with ~120º of binocular overlap) and 130º vertically. 20/20 (6/6) visual acuity is defined as the ability to see the separation of two lines placed one arcminute apart. There are 60 arcminutes per degree, so that comes out to 9,900 by 7,800 pixels per eye just to keep up with 20/20 vision. Visual acuities up to 1.5x sharper than that are pretty common.

    Let's target 20/20 (6/6) vision and a 4320p (7680x4320, erroneously marketed as "8K") screen per eye. That would be able to cover 7680/60 => 128º horizontally by 4320/60 => 72º vertically, which excludes basically all of your peripheral vision. Both of my eyes are sharper than 20/20. My stronger eye is 20/12 (6/3.8), or 1.6x sharper than 20/20. To keep up with that over the same area, I would need a 12288x6912 display per eye.

    Targeting 20/40 (6/12) and a 5K (5120x2880) screen per eye yields 171º by 96º. This would cover peripheral vision passably (though still not well), but the sharpness would be on the level of the pre-Retina iPhone screens.
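
    A quick Python sketch of the arithmetic above, assuming a flat mapping of one pixel per arcminute of visual angle (it ignores lens distortion and foveated rendering, and the field-of-view and acuity figures are just the ones used in this post):

    ```python
    # One pixel per arcminute of visual angle; no lens distortion, no foveation.
    ARCMIN_PER_DEGREE = 60

    def pixels_needed(fov_degrees: float, acuity_ratio: float = 1.0) -> int:
        """Pixels across a field of view for a given acuity.
        acuity_ratio is relative to 20/20: 20/12 is ~1.6x, 20/40 is 0.5x."""
        return round(fov_degrees * ARCMIN_PER_DEGREE * acuity_ratio)

    # Full per-eye field of view at 20/20 (~165º x 130º)
    print(pixels_needed(165), "x", pixels_needed(130))                 # 9900 x 7800

    # Degrees covered by a 7680x4320 panel at 20/20
    print(7680 / ARCMIN_PER_DEGREE, "x", 4320 / ARCMIN_PER_DEGREE)     # 128.0 x 72.0

    # Same angular area for a 20/12 eye (~1.6x sharper than 20/20)
    print(pixels_needed(128, 1.6), "x", pixels_needed(72, 1.6))        # 12288 x 6912

    # Degrees covered by a 5K (5120x2880) panel at 20/40 (0.5x acuity)
    print(5120 / (ARCMIN_PER_DEGREE * 0.5), "x",
          2880 / (ARCMIN_PER_DEGREE * 0.5))                            # ~170.7 x 96.0
    ```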
  • Tim Cook salary to drop 40%, at his request

    DT36MT said:
    amar99 said:
    How will he survive on such a low salary? What a guy.
    Would you do it if you were in his place, just because someone or some group of people said you should?
    If I were in such a position, I would have my salary lowered without somebody else pushing for it, because I can't imagine ever needing more than about $100M in assets. That level of wealth is self-sustaining for even an unreasonably extravagant lifestyle. It's enough for separate his-and-hers Ferraris for every day of the month, with plenty left over to still earn about two million dollars of income per year on bank interest. What would you ever spend it on?
  • Apple's muted 2023 hardware launches to include Mac Pro with fixed memory

    blastdoor said:
    zimmie said:
    blastdoor said:
    DAalseth said:
    longfang said:
    DAalseth said:
    I have a feeling that Apple will introduce an M-Series Mac Pro, but keep the Intel version around.
    Apple historically doesn’t really do the hang-on-to-the-past thing.

    Also consider that getting rid of Intel would vastly simplify software development efforts. 
    True, but there are Mac Pro users who need massive amounts of RAM. Amounts that Apple Silicon just doesn’t support.

    That said, I’m not a computer engineer, so it may be a crazy idea:
    Would it be feasible to use two tiers of RAM? The high-speed RAM built into the chip, and then a TB or more of comparatively slow conventional RAM in sticks on the motherboard like it has now? It would have to keep track of what needed to be kept in the extra-high-speed on-chip space and what could be parked on the sticks. It would work like virtual memory does now, but not to the SSD.
    Not sure how feasible this is, but it was a crazy idea that just crossed my mind.
    I think that’s highly feasible, and the only question is whether it’s necessary. Another alternative is very fast PCIe 5 SSD swap (virtual memory). Apparently a PCIe 5 SSD could hit up to 14 GB/sec in bandwidth (https://arstechnica.com/gadgets/2022/02/pcie-5-0-ssds-promising-up-to-14gb-s-of-bandwidth-will-be-ready-in-2024/). That’s similar to dual-channel DDR2 SDRAM.
    That’s interesting throughput, but SSDs (unless using Optane) have a rather finite number of write cycles, so that’s bonkers as a solution. I’ve purposely provisioned all my machines with enough RAM, keeping that in proper perspective and context, because I don’t replace machines super often. Swap files in a normal usage scenario are one thing, but treating an SSD with finite write cycles as RAM is a whole other story.
    Flash write durability hasn’t been a serious limit on SSD lifespan in close to two decades. Controller firmware bugs take out multiple orders of magnitude more drives than the flash wearing out does.

    This is literally talking about using some high-performance SSDs as dedicated swap devices. No more, no less.

    The headache would be random access, just like with swap. NVMe SSDs can get DDR2 levels of data throughput, but with much worse latency for random operations. If you mix reads and writes, performance gets ridiculously bad: give a 70/30 read/write workload to a flash-based SSD and its performance drops to around 15% of its high-queue-depth peak read or peak write.
    The only time I recall needing more than 256GB of RAM, I doubt that a lot of small random writes to swap were needed. Instead, it really was a case of swapping big chunks, and I could see the SSD read/write statistics in Glances go through the roof (Linux Threadripper system).

    To need more than 1TB of RAM is way outside my experience. For folks who have such experience: does your work involve a lot of random reads and writes to that 1TB, or do you think you’re really swapping in and out multi-GB chunks?
    I mostly need lots of RAM for running lots of virtual machines. When you have a parent scheduler swapping stuff out, and multiple guest schedulers working with what they think is RAM and swapping stuff out on their own, you can wind up with extremely unpredictable memory access latencies in software in the guests.
    The consensus opinion seems to be that Apple will add AMD GPU support to Apple Silicon since they need some sort of GPU expansion. I'm having some trouble believing that, though. If anything, at this point a shift back to Nvidia would make more sense because those cards are preferred by scientific and high-end rendering users...
    Not happening. Nvidia won't give anybody else sufficient documentation and access to write drivers for their GPUs, and Apple won't give anybody else the ability to write kernel code for their platforms. This is what caused the split in the first place. Apple wants control of all of the code which can cause a kernel panic, because the users who see one will always blame them. Remember that presentation where they said something like 90% of all Safari crashes were caused by "plugins" and everybody knew they meant "Flash"? This is that, but for kernel code.
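
    To make the quoted point about mixed random reads and writes concrete, here is a rough, hypothetical Python sketch of that kind of test: random 4 KiB operations against a scratch file, pure reads versus a 70/30 read/write mix. It runs through the OS page cache, so the numbers only illustrate the access pattern rather than raw device behavior (a real measurement needs direct I/O, e.g. fio with direct=1), and the file name and sizes are arbitrary:

    ```python
    # Hypothetical sketch: random 4 KiB I/O against a scratch file, pure reads
    # vs. a 70/30 read/write mix. Goes through the page cache, so the results
    # only illustrate the access pattern, not the raw device.
    import os
    import random
    import time

    PATH = "scratch.bin"      # arbitrary scratch file
    FILE_SIZE = 1 << 30       # 1 GiB
    BLOCK = 4096              # 4 KiB per operation
    OPS = 50_000

    def run(read_fraction: float) -> float:
        """Return throughput in MB/s for the given fraction of reads."""
        fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
        os.ftruncate(fd, FILE_SIZE)
        buf = os.urandom(BLOCK)
        blocks = FILE_SIZE // BLOCK
        start = time.perf_counter()
        for _ in range(OPS):
            offset = random.randrange(blocks) * BLOCK
            if random.random() < read_fraction:
                os.pread(fd, BLOCK, offset)
            else:
                os.pwrite(fd, buf, offset)
        os.fsync(fd)
        os.close(fd)
        elapsed = time.perf_counter() - start
        return OPS * BLOCK / elapsed / 1e6

    print(f"pure random reads:    {run(1.0):8.1f} MB/s")
    print(f"70/30 read/write mix: {run(0.7):8.1f} MB/s")
    ```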
  • Apple's muted 2023 hardware launches to include Mac Pro with fixed memory

    My dream version of the Mac Pro: Modular Design.

    Create a series of bricks having the same length and width (a la Mac Studio, OWC Ministack STX, etc.) but with different heights.  Allow these to be vertically stacked to customize the server of your dreams.  I would then create:
    • CPU - essentially an upgraded Mac Studio
    • RAID array module based on M.2 SSDs
    • RAID array module based on HDDs
    • PCI expansion modules (for graphics cards, scientific packages, etc.)
    • Power backup module (lithium battery for space)
    The Mac Pro 'special sauce' would be the integration between all the components. I'd make the footprint larger than the Mac Studio's to support full-length 312mm PCIe cards and appropriate cooling fans. Extra credit for allowing multiple CPU modules to work together.

    Were Apple to go this route, I believe they could capture the majority of revenue associated with server hardware.
    dinoone said:
    I propose an alternative interpretation of the news above: a proper bus mainboard holding extension boards, with each extension board including both RAM and Apple Silicon processors.
    In this way, each extra extension board (RAM+M2) would simultaneously extend both the amount of RAM and the number of processors.
    This would be compatible with the wording "lack of user-upgradable memory".
    Apple tried that with the Jonathan in 1985. There are a lot of reasons it didn't end up happening: some business, but quite a few technical as well.