zimmie

About

Username
zimmie
Joined
Visits
172
Last Active
Roles
member
Points
2,737
Badges
1
Posts
651
  • Apple TV+ production of 'Metropolis' has shut down permanently

    I was really looking forward to this one, but the strike is more important, without a doubt. Oh well. We can always hope they pick it up again in a few years.
  • Apple's muted 2023 hardware launches to include Mac Pro with fixed memory

    blastdoor said:
    zimmie said:
    blastdoor said:
    DAalseth said:
    longfang said:
    DAalseth said:
    I have a feeling that Apple will introduce an M-Series Mac Pro, but keep the Intel version around.
    Apple historically doesn’t really do the hang-on-to-the-past thing.

    Also consider that getting rid of Intel would vastly simplify software development efforts. 
    True, but there are Mac Pro users who need massive amounts of RAM. Amounts that Apple Silicon just doesn’t support.

    That said, I’m not a computer engineer, so it may be a crazy idea:
    Would it be feasible to use two tiers of RAM? The high-speed RAM built into the chip, and then a TB or more of comparatively slow conventional RAM in sticks on the motherboard, like it has now? The system would have to keep track of what needed to be kept in the high-speed on-chip space and what could be parked on the sticks. It would work like virtual memory does now, but not to the SSD.
    Not sure how feasible this is, but it was a crazy idea that just crossed my mind.
    I think that’s highly feasible, and the only question is whether it’s necessary. Another alternative is very fast PCIe 5 SSD swap (virtual memory). Apparently a PCIe 5 SSD could hit up to 14 GB/s of bandwidth (https://arstechnica.com/gadgets/2022/02/pcie-5-0-ssds-promising-up-to-14gb-s-of-bandwidth-will-be-ready-in-2024/). That’s similar to dual-channel DDR2 SDRAM.
    That’s interesting throughput, but SSDs (unless using Optane) have a rather finite number of write cycles, so that’s bonkers as a solution. I’ve purposely provisioned all my machines with enough RAM with that in mind, because I don’t replace machines very often. Swap files in a normal usage scenario are one thing, but treating SSDs with finite write cycles as RAM is a whole other story.
    Flash write durability hasn’t been a serious limit on SSD lifespan for close to two decades. Controller firmware bugs take out multiple orders of magnitude more drives than the flash wearing out does.

    This is literally talking about using some high-performance SSDs as dedicated swap devices. No more, no less.

    The headache would be random access, just like with swap. NVMe SSDs can get DDR2 levels of data throughput, but much worse latency for random operations. If you mix reads and writes, performance gets ridiculously bad. Give a 70/30 R/W workload to a flash-based SSD and its performance drops to around 15% of high-queue-depth peak read or peak write.
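
    To put rough numbers on that, here's a quick back-of-the-envelope sketch in Swift. The 14 GB/s peak and the ~15% mixed-workload figure are just the assumptions from this thread, not measurements of any particular drive:

    ```swift
    // Back-of-the-envelope estimate of an SSD used as a dedicated swap device.
    // Both figures are assumptions taken from this thread, not benchmarks:
    // ~14 GB/s peak for a PCIe 5 SSD, ~15% of peak under a 70/30 R/W mix.
    let peakGBps = 14.0
    let mixedWorkloadFraction = 0.15

    let effectiveGBps = peakGBps * mixedWorkloadFraction
    print("Effective swap bandwidth under a 70/30 mix: \(effectiveGBps) GB/s")
    // ~2.1 GB/s, and that's before counting random-access latency,
    // which is orders of magnitude worse than DRAM's.
    ```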
    The only time I recall being in a situation where I needed more than 256GB of RAM, I doubt that a lot of small random writes to swap were needed. Instead, it really was a case of swapping big chunks, and I could see the SSD read/write statistics in Glances go through the roof (Linux Threadripper system).

    To need more than 1TB of RAM is way outside my experience. For folks who have such experience: does your work involve a lot of random reads and writes to that 1TB, or do you think you’re really swapping in and out multi-GB chunks?
    I mostly need lots of RAM for running lots of virtual machines. When you have a parent scheduler swapping stuff out, and multiple guest schedulers working with what they think is RAM and swapping stuff out on their own, you can wind up with extremely unpredictable memory access latencies in software in the guests.
    The consensus opinion seems to be that Apple will add AMD GPU support to Apple Silicon, since they need some sort of GPU expansion. I'm having some trouble believing that, though. If anything, at this point a shift back to Nvidia would make more sense, because those cards are preferred by scientific and high-end rendering users...
    Not happening. Nvidia won't give anybody else sufficient documentation and access to write drivers for their GPUs, and Apple won't give anybody else the ability to write kernel code for their platforms. This is what caused the split in the first place. Apple wants control of all of the code which can cause a kernel panic, because the users who see one will always blame them. Remember that presentation where they said something like 90% of all Safari crashes were caused by "plugins" and everybody knew they meant "Flash"? This is that, but for kernel code.
  • Apple Vision Pro firmware hints at three distinct battery models

    mayfly said:
    zimmie said:
    mayfly said:
    "Apple said at WWDC that the battery is not casually removable from the headset. There is a USB-C port on the battery for charging and directly powering the Apple Vision Pro."

    Just my uninformed opinion, but I think Apple missed an opportunity by attaching the battery this way. Better to put two USB-C ports on the headset, so users could just swap battery packs without it powering down. Weight gain would be marginal, and it would be easy to add a latch to prevent accidental detachment. Unless it's possible to attach a backup battery to the USB port on the battery. At this time, there's been no mention of that from Apple.
    USB power delivery is extremely janky. It isn’t suitable for anything without a built-in battery. Devices can abruptly lose their internal state for any number of reasons. The only possible safe response to this failure is to drop to the lowest supported voltage and current, which definitely isn’t enough to keep the Vision Pro running. It also flatly does not support seamless handoff between two power sources.

    They could have done it with a sizable internal battery and USB power delivery, but then all this upcoming legislation mandating easily replaceable batteries would bite them. They would probably also need to change away from glass for the outer face to hit their target weight. Apple uses plastics for AirPods, the Magic Mouse, and all of their keycaps, but they haven’t done plastic in front of a display in a long time.
    I have two Leviton USB-C outlets with power delivery in my house. There is none of this "jank" of which you speak, and they charge every USB-C device we own (2 iPads, an M1 MacBook Air, and an M2 MacBook Pro), and fast. Yes, there is a port on the VP battery, and I mentioned it may be possible to attach a second battery or charger to it, but that would violate every page of the Apple Design Book. I'd guess the real reason there's no USB port on the VP is that Apple doesn't want anyone attaching any third party hardware to it. And there's plenty of historical precedent to back that up!
    They absolutely do have the jankiness I mentioned; you just don't notice it. iPads and MacBooks can tolerate the sudden drops to 5 V @ 100 mA because their internal batteries provide enough power to keep going. It only takes half a second or so to renegotiate the higher delivery, so even 1% charge is enough.
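
    For a sense of scale, here's a hedged sketch in Swift; the 15 W draw and the 0.5 s renegotiation time are my assumptions, not Apple's numbers:

    ```swift
    // How much stored energy does it take to bridge a USB-PD renegotiation?
    // Assumed numbers (mine, not Apple's): ~15 W draw, ~0.5 s renegotiation.
    let drawWatts = 15.0
    let renegotiationSeconds = 0.5

    let bridgeJoules = drawWatts * renegotiationSeconds   // 7.5 J
    let onePercentOfTenWhBattery = 0.01 * 10.0 * 3600.0   // 360 J
    print("Bridging needs \(bridgeJoules) J; 1% of a 10 Wh battery holds \(onePercentOfTenWhBattery) J")
    // Even 1% charge covers dozens of renegotiations, which is why devices
    // with internal batteries never notice the momentary drop to 5 V.
    ```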

    Without an internal battery, Vision Pro would just suddenly lose power. While that would be bad for any device, it is absolutely not tolerable for something covering your entire visual field. Further, the battery would need to be able to provide fairly high current. A capacitor bank could be used, but it would take up a lot of volume, increasing the lever effect of the weight of other components (like the exterior glass) which have to be pushed further out from your face.

    Part of the reason USB-PD is so bad is that it's built on USB 2, which is a nightmare of a protocol. Talk to anybody who has ever had to implement USB 2 at an electrical level. They will absolutely agree that USB-PD can't ever be remotely reliable enough for a battery-to-load connection.
  • Vision Pro prescription lenses to start at $300, guesses Gurman

    dewme said:
    I wonder if they’ll offer standard non-prescription reading-glasses-style magnifier lenses? Nothing fancy, just 1.5x - 3.5x magnifiers with high-quality lenses. A large number of 40-ish and older folks have presbyopia and, later in life or due to other vision complications, far-field-corrected monovision as a result of cataract surgery. Cataract surgery is one of the most common outpatient surgeries performed in the US, if not the single most common.
    Magnifiers would make the problem worse. These are among the sharpest head-mounted displays ever announced, but they're still 20/40 sharpness at best.

    Instead of trying to magnify an already-grainy screen, use Dynamic Type to adjust the size of text.
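
    For what it's worth, opting into this is nearly free in SwiftUI. A minimal sketch (the view is hypothetical; the text-style API is the real one):

    ```swift
    import SwiftUI

    // Minimal sketch: the built-in text styles (.body, .title, etc.) track
    // the user's Dynamic Type setting automatically, so text gets larger
    // without magnifying already-rendered pixels.
    struct ReadableLabel: View {
        var body: some View {
            Text("Text that scales with the system setting")
                .font(.body) // follows Dynamic Type by default
        }
    }
    ```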
  • Apple's muted 2023 hardware launches to include Mac Pro with fixed memory

    My dream version of the Mac Pro: Modular Design.

    Create a series of bricks having the same length and width (a la Mac Studio, OWC Ministack STX, etc.) but with different heights.  Allow these to be vertically stacked to customize the server of your dreams.  I would then create:
    • CPU - essentially an upgraded Mac Studio
    • RAID array module based on M.2 SSDs
    • RAID array module based on HDDs
    • PCI expansion modules (for graphics cards, scientific packages, etc.)
    • Power backup module (lithium battery for space)
    The Mac Pro 'special sauce' would be the integration between all the components.  I'd make the footprint larger than the Mac Studio to support 312 mm PCIe cards and appropriate cooling fans.  Extra credit for allowing multiple CPU modules to work together.

    Were Apple to go this route, I believe they could capture the majority of revenue associated with server hardware.
    dinoone said:
    I propose an alternative interpretation of the news above: a proper bus mainboard holding extension boards, each extension board including both RAM and Apple Silicon processors.
    In this way, each extra extension board (RAM+M2) would simultaneously extend both the amount of RAM and the number of processors.
    This would be compatible with the wording "lack of user-upgradable memory".
    Apple tried that with the Jonathan in 1985. There are a lot of reasons that didn't end up happening. Some business, but quite a few technical as well.
  • VMware Fusion 13 adds Windows 11 virtualization for Apple Silicon Macs

    Still no macOS virtualization on Windows or Linux hosts, regardless of whether the host is old Intel or ARM. Go figure.
    Sure there is. Install Windows or Linux on your Mac, install VMware Workstation on it, then set up a macOS VM. Works fine, at least on Intel machines.
    maciekskontakt said:
    As I said only as host - macOS is not a client and never was. ... Otherwise, you are nobody in IT especially in virtualization space where banks use it extensively. I am chief architect at one of NYC banks.

    ... I was using virtualization products then on Mac OSX and not only Fusion or Parallels. Still only host system not capable to be client.
    That's a funny misuse of terminology for someone who claims to be so deep in IT. The proper term is guest, not client. And as I mentioned above, VMware virtualization products have historically been perfectly happy to run macOS guests, as long as it's on Apple-branded hardware. Admittedly, I haven't tried this myself since somewhere in the Workstation 12s, so it may have been removed.

    As for my credentials, I have been network lead, infrastructure lead, enterprise architecture lead, enterprise reliability lead, and a few other positions at various companies. Now focusing more on real engineering, which has been a nice change of pace. Every company I've worked or consulted for in the last 15 years has offered employees a choice of workstation, including Macs. That includes several financial companies.
  • Post-apocalyptic drama 'Silo' to debut May 5 on Apple TV+

    hmlongco said:
    Getting rather tired of the endless series of post-apocalyptic, dystopian movies and shows.

    How about a series where people actually thrive and succeed?

    How about a series like The Martian, where ingenuity counts and is showcased?

    How about a modern Star Trek (or some such) that shows the future in a positive light?
    The Orville is relentlessly positive about the future. It had a rough start, but got pretty good for the second half of its first season and its second season. Some decisions in the third season disappointed me, but it was decent overall. Star Trek: Lower Decks is great, and Strange New Worlds had a fantastic first season.

    Agreed, it might be nice to see some hopeful sci-fi from Apple as well.
  • Apple scales back plans for 'Extreme' Apple Silicon Mac Pro

    danvm said:
    thadec said:
    michelb76 said:
    Gurman is wrong. 

    Apple didn’t delay the Mac Pro two plus years in order to give us a Mac Studio with a different name. 

    The new Mac Pro won’t arrive until Apple is ready to blow the doors off everything else - even if that means waiting for M3 Extreme.

    The M2 just isn’t the destroyer hoped for. It’s great, but not something that will meet expectations of the delayed pro. 

    M3 has been for a long time where the convergence of all the good things was headed. It may mean Apple breaks a promise, but it’s better than releasing something prematurely just because an ambitious project didn’t work out in time. 

    The only way an M2 Ultra goes in a Mac Pro is if Apple developed an external-to-SoC traffic controller that mimics how their Fabric works - and then added multiple M2 Ultra packages in a “modular” config.
    We don't know. It's very much possible that an 'M3 Extreme' is also uneconomical.
    It is not just that it is "possible"; it is the entire case. Threadripper, Epyc, Xeon, and even ARM servers benefit from a combination of economies of scale and data center customers willing (needing) to pay whatever they cost. So it makes sense for the chip designers to pay the foundries to stand up the manufacturing lines. But it doesn't make economic sense for Apple to do the same for "Extreme" SoCs to be used in Mac Pros and iMac Pros that are only going to move maybe 10,000 units a year and still need to have a starting price of around $5000. This problem didn't exist with the Intel Mac Pro because it used Xeon CPUs that Intel sells to everybody else, allowing them to be made and sold to Apple relatively cheaply.

    The original 2020 M1 merely required adding two cores to what was basically a pre-existing smartphone chip, at a manufacturer that had already been making octa-core ARM chips for Android devices for years. By contrast, M2 Extreme would have required horizontally combining four M2 chips to make by far the largest PC chip (in surface area) in history. Please note that Intel and AMD have abandoned the purely horizontal approach in favor of 3D scaling (vertical and horizontal) AND are atomizing the CPU into components - AMD calls them chiplets, Intel calls them tiles - because it is less expensive to manufacture and package. But even there, it is only financially feasible because Intel and AMD are going to sell a lot more Xeon and Epyc CPUs than Apple will Extremes.

    I still maintain that Apple should go into the ARM server market. It would require them to emulate what Microsoft did with Windows Server and come out with a bona fide server version of macOS. Not only would they make a lot of money directly, but as a byproduct, workstation CPUs that would otherwise need to be manufactured in very small numbers for a few niche customers would become financially viable. So Apple, go ahead and make a competitor to Nvidia Grace (https://www.nextplatform.com/2022/08/29/details-emerge-on-nvidias-grace-arm-cpu/), except limit it to small and medium-sized servers!
    I'm not sure an Apple server would succeed.  They'd be competing with HP, Dell, and other large server OEMs that have been doing business with SMBs and enterprises for years.  Most of those businesses have devices, services, and applications from different vendors, and Apple is terrible at integrating with other vendors.  Another thing Apple lacks is the ecosystem of partner companies that Microsoft has.  They would also be competing with public cloud providers, considering a lot of businesses and enterprises are moving some or all workloads to AWS, Azure / MS 365, and GCP / Google Workspace.

    I don't think creating an additional niche product would work at all.  
    If absolutely nothing else, they're going to make a server for themselves. Right now, Xcode Cloud runs on amd64 processors. The VMs have four cores and 16 GB of RAM. A few WWDC 2021 sessions were filmed in front of multiple racks filled top-to-bottom with rackmount macpro7,1 (2019) units. They (and many, many more racks like them) are almost certainly being used for the Xcode Cloud VMs.

    It's more than a little embarrassing for Apple's shiny new developer-focused cloud offering to use a processor architecture they barely even sell anymore. I would be fairly surprised if WWDC 2023 doesn't include an announcement that Xcode Cloud VMs are moving to aarch64 with amd64 as a non-default option.
  • Apple TV+'s 'Reluctant Traveler' Eugene Levy gets a second season

    ITGUYINSD said:
    Each episode is basically the same.  Some opulent resort that no one can afford, and most of the show talks about how Eugene Levy has some sort of phobia about one thing or another and how he normally never leaves the resort.  More of a "watch Eugene Levy do things he's never done before" show than a travel show.

    Mildly entertaining if you want to watch something that doesn't require a lot of thought.
    1. Arctic TreeHouse Hotel in Finland - $300-$650 per night
    2. Nayara Tented Camp in Costa Rica - $800-$1200 per night
    3. Gritti Palace in Venice - $800-$1200 per night
    4. Amangiri in Utah - $3k-$4k per night
    5. Kudadoo in the Maldives - $5k-$7k per night
    6. Kruger Shalati Train Lodge in South Africa - $600-$900 per night
    7. Verride Palácio Santa Catarina in Lisbon - $500-$5k per night (they have a weird variety of rooms)
    8. HOSHINOYA in Tokyo - $500-$1k per night
    Kudadoo and Amangiri were the only wildly expensive ones. Several of the others are actually pretty reasonable for a vacation involving an international flight. During the winter (when the aurora is visible), Arctic TreeHouse in Finland is around $650 per night, but during the summer it's $300 or so. More than a Motel 6, but I've had to pay more than that for worse hotels in much less interesting parts of the US.
  • EU lawmaker wants Big Tech regulations to specifically target US firms

    I think Volkswagen's board should be broken up and be subject to criminal prosecution for committing environmental testing fraud - but Volkswagen is not an American company and we can't do that.
    BMW, Ford, GM, Mercedes, and more pulled the same stunt. Volkswagen was just the one which got caught first. Once regulators knew to look for this, they found the problem was pervasive.

    The thing which bothers me about that whole saga is that VW's newer engines (Common Rail, or CR) at the center of the issue didn't have worse emissions than their previous ones (Pumpe Düse [unit injector], or PD); they just didn't improve as quickly as the regulatory goalposts were moved. Yet the PD engines with worse emissions are still allowed. The whole situation is weird.

    I found it amusing how people were describing the engines as "not green". The particular emission at issue was a category called nitrogen oxides, which plants love. It's humans who don't do well when exposed to them.

    Back on topic, I don't necessarily have a problem with this. Market dominance should come with increased scrutiny. The EU is economically significant enough that Apple and other big companies can't reasonably just walk away, especially since their tax avoidance strategies depend so heavily on attributing large earnings to their EU branches in Ireland!

    Apple in particular should have to justify why the App Store is the only allowed way to get closed-source software onto an iPhone. While I think they have justified that with the security explanation, the level of control they have should come with responsibilities, and the question should be asked often to make sure they are still living up to those responsibilities.