tenthousandthings

About

Username: tenthousandthings
Joined:
Visits: 179
Last Active:
Roles: member
Points: 2,055
Badges: 1
Posts: 1,068
  • Massive auction has nearly every vintage Apple product up for sale

    JP234 said:
    Doesn't look like they are testing anything. It's hard to know what kind of condition any of this is in. Despite the sales pitch, I don't see many signs that he maintained his collection (he died in 2015). Note also, the Macintosh (which appears to be without accessories) has a 220-240V power supply. A lot of the other items also look like they may be European editions, and thus require 220-240V power.

    The Bell & Howell Apple II Plus is interesting. I didn't know it existed.
    Who cares if they work or not? They're useless in 2023. You buy them to collect or hoard, not use.

    Collectors care, and people reselling them to nostalgia buyers on eBay care. A working machine is worth a lot more than a non-working one. The reason that Lisa sold for $50K recently was that it was a working original 1983 machine that had never been upgraded.

    The Lisa in this auction still has the original floppy drives, so if they can show it is working, or at least that it has all its parts in good condition (so it can be repaired), it will sell for more. Most of them were upgraded (Apple did it for free), so intact non-upgraded machines are rare.

    For what it’s worth, I have a working Macintosh 512K — the floppy drives (internal and external) work, but I don’t stress them; instead I use a Floppy Emu with it, so I can transfer files via SD card, no problem. Floppy Emu also works with Lisa and Apple II machines:

    https://www.bigmessowires.com/floppy-emu/
  • Massive auction has nearly every vintage Apple product up for sale

    Doesn't look like they are testing anything. It's hard to know what kind of condition any of this is in. Despite the sales pitch, I don't see many signs that he maintained his collection (he died in 2015). Note also, the Macintosh (which appears to be without accessories) has a 220-240V power supply. A lot of the other items also look like they may be European editions, and thus require 220-240V power.

    The Bell & Howell Apple II Plus is interesting. I didn't know it existed.
  • What is a 'Retina' display, and why it matters

    I wonder about the iMac — there aren’t a lot of obvious explanations for why it didn’t get a simple M2 bump. One possibility is that it is going to get a different display, and that has to wait for some reason. Like Apple has a big reveal coming up with regard to desktop display technologies, and the iMac is part of that.
  • What is a 'Retina' display, and why it matters

    AniMill said:
    Something now discussed: How Apple applies its high density configurations. I’ve used the Retina equipped iMacs since they were first introduced, but when I switched to a 5K2K LG display macOS does not recognize it as “Retina” compatible. This is manifested as images seen in Safari being 1/2 rez and pixelated, same with many dialogs/menus in Adobe apps. I’ve tried to see if the system can be forced to perceive the 5K2K display as high DPI capable, but no avail. All other text and images are rendered correctly. I should compare this display to other 4K monitors to see if this upscale issue is common to non-Apple monitors.
    Is it Retina compatible? The smallest of LG's 21:9 UltraWide 5K2K displays is the 34-inch model, with a pixel density of 163.44 pixels per inch, which falls short of the Apple Studio Display's 218 pixels per inch. So you may be seeing some kind of macOS problem, but whatever it is, it isn't that macOS is failing to recognize the display as Retina. It's not Retina.
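    The pixel-density figures are easy to check yourself: PPI is just the diagonal pixel count divided by the diagonal size. A minimal sketch (the `ppi` helper is mine; the resolutions assumed are 5120x2160 for the 34-inch LG 5K2K and 5120x2880 for the 27-inch Studio Display):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal resolution divided by diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

# 34-inch LG 5K2K UltraWide (5120 x 2160)
print(round(ppi(5120, 2160, 34), 2))   # 163.44

# 27-inch Apple Studio Display (5120 x 2880)
print(round(ppi(5120, 2880, 27), 2))   # 217.57, commonly quoted as 218
```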
  • Future Mac Pro may use Apple Silicon & PCI-E GPUs in parallel

    This seems like a good place to re-post my transcript of Anand Shimpi's comments on Apple Silicon for the Mac in the Apple interview with Andru Edwards. I recently posted it there, but that thread is buried. For those who don't know, Anand is the founder of AnandTech.com, but Apple hired him away from himself in 2014. It is edited for clarity, removing interjections like "I think" and "kind of" and "sort of," but I've retained his rhetorical ", right? ..." habit, as it captures the feel of the dialogue.

    It's a good overview of what Apple is trying to do. If you think about what he's saying carefully, despite what he says about the Mac joining the "cadence" of the iPhone and iPad, you can see Apple isn't going to release silicon just because: "If we’re not able to deliver something compelling, we won’t engage, right? ... We won’t build the chip." 

    [General questions]

    The silicon team doesn’t operate in a vacuum, right? ... When these products are being envisioned and designed, folks on the architecture team, the design team, they’re there, they’re aware of where we’re going, what’s important, both from a workload perspective, as well as the things that are most important to enable in all of these designs.

    I think part of what you’re seeing now is this decade-plus-long, maniacal obsession with power-efficient performance and energy efficiency. If you look at the roots of Apple Silicon, it all started with the iPhone and the iPad. There we’re fitting into a very, very constrained environment, and so we had to build these building blocks, whether it’s our CPU or GPU, media engine, neural engine, to fit in something that’s way, way smaller from a thermal standpoint and a power delivery standpoint than something like a 16-inch MacBook Pro. So the fundamental building blocks are just way more efficient than what you’re typically used to seeing in a product like this.

    The other thing you’re noticing is, for a lot of tasks that maybe used to be high-powered use cases, on Apple Silicon they actually don’t consume that much power. If you look, compared to what you might find in a competing PC product, depending on the workload, we might be a factor of two or a factor of four times lower power. That allows us to deliver a lot of workloads, that might have been high-power use cases in a different product, in something that is actually very quiet and cool and long-lasting.

    The other thing that you’re noticing is that the single-thread performance, the snappiness of your machine, it’s really the same high-performance core regardless of whether you’re talking about a MacBook Air or a 14-inch Pro or 16-inch Pro or the new Mac mini, and so all of these machines can accommodate one of those cores running full tilt; again, we’ve turned a lot of those usages and use cases into low-power workloads. You can’t get around physics, though, right? ... So if you light up all the cores, all the GPUs, the 14-inch system just has less thermal capacity than the 16, right? ... So depending on your workload, that might drive you to a bigger machine, but really the chips are across the board incredibly efficient.

    [Battery life question]

    You can look at how chip design works at Apple. You have to remember we’re not a merchant silicon vendor, at the end of the day we ship product. So the story for the chip team actually starts at the product, right? ... There is a vision that the design team, that the system team has that they want to enable, and the job of the chip is to enable those features and enable that product to deliver the best performance within the constraints, within the thermal envelope of that chassis, that is humanly possible.

    So if you look at what we did going from the M1 family to the M2 Pro and M2 Max, at any given power point, we’re able to deliver more performance. On the CPU we added two more efficiency cores, two more of our e-cores. That allowed us, or was part of what allowed us, to deliver more multi-thread performance; again, at every single power point where the M1 and M2 curves overlap we were able to deliver more performance. The dynamic range of operation [is] a little bit longer, a little bit wider, so we do have a slight increase in terms of peak power, but in terms of efficiency, across the range, it is a step forward versus the M1 family, and that directly translates into battery life.

    The same thing is true for the GPU. It’s counterintuitive, but a big GPU running at a modest frequency and voltage is actually a very efficient way to fill up a box. So that’s been our philosophy dating back to iPhone and iPad, and it continues on the Mac as well.

    But really the thing that we see, that the iPhone and the iPad have enjoyed over the years, is this idea that every generation gets the latest of our IPs, the latest CPU IP, the latest GPU, media engine, neural engine, and so on and so forth, and so now the Mac gets to be on that cadence too. If you look at how we’ve evolved things on the phone and iPad, those IPs tend to get more efficient over time. There is this relationship: if the fundamental chassis doesn’t change, any additional performance you deliver has to be done more efficiently, and so this is the first time the MacBook Pro gets to enjoy that and be on that same sort of cycle.

    On the silicon side, the team doesn’t pull any punches, right? ... The goal across all the IPs is, one, make sure you can enable the vision of the product, that there’s a new feature, a new capability that we have to bring to the table in order for the product to have everything that we envisioned; that’s clearly something that you can’t pull back on. And then secondly, it’s do the best you can, right? ... Get as much done in terms of performance and capability as you can in every single generation. The other thing is, Apple’s not a chip company. At the end of the day, we’re a product company. So we want to deliver, whether it’s features, performance, efficiency. If we’re not able to deliver something compelling, we won’t engage, right? ... We won’t build the chip. So each generation we’re motivated as much as possible to deliver the best that we can.

    [Neural engine question]

    … There are really two things you need to think about, right? ... The first is the tradeoff between a general purpose compute engine and something a little more specialized. So, look at our CPU and GPU, these are big general purpose compute engines. They each have their strengths in terms of the types of applications you’d want to send to the CPU versus the GPU, whereas the neural engine is more focused in terms of the types of operations that it is optimized for. But if you have a workload that’s supported by the neural engine, then you get the most efficient, highest density place on the chip to execute that workload. So that’s the first part of it.

    The second part of it is, well, what kind of workload are we talking about? Our investment in the neural engine dates back years, right? The first time we had a neural engine on an Apple Silicon chip was A11 Bionic, right? ... So that was five-ish years ago on the iPhone. Really, it was the result of us realizing that there were these emergent machine learning models that we wanted to start executing on device, and we brought this technology to the iPhone, and over the years we’ve been increasing its capabilities and its performance. Then, when we made the transition of the Mac to Apple Silicon, it got that IP just like it got the other IPs that we brought, things like the media engine, our CPU, GPU, Secure Enclave, and so on and so forth.

    So when you’re going to execute these machine learning models, performing inference, if the operations that you’re executing are supported by the neural engine, if they fit nicely on that engine, it’s the most efficient way to execute them. The reality is, the entire chip is optimized for machine learning, right? ... So a lot of models you will see executed on the CPU, the GPU, and the neural engine, and we have frameworks in place that make that possible. The goal is always to execute it in the highest performance, most efficient place possible on the chip.

    [Nanometer process question]

    ... You’re referring to the transistor. These are the building blocks all of our chips are built out of. The simplest way to think of them is like a little switch, and we integrate tons of these things into our designs. So if you’re looking at M2 Pro and M2 Max, you’re talking about tens of billions of these, and if you think about large collections of them, that’s how we build the CPU, the GPU, the neural engine, all the media blocks; every part of the chip is built out of these transistors.

    Moving to a new transistor technology is one of the ways in which we deliver more features, more performance, more efficiency, better battery life. So you can imagine, if the transistors get smaller, you can cram more of them into a given area; that’s how you might add things like additional cores, which is the thing you get in M2 Pro and M2 Max, you get more CPU cores, more GPU cores, and so on and so forth. If the transistors themselves use less power, or they’re faster, that’s another method by which you might deliver, for instance, better battery life, better efficiency.

    Now, I mentioned this is one tool in the toolbox. What you choose to build with them, the underlying architecture, microarchitecture and design of the chip, also contribute in terms of delivering that performance, those features, and that power efficiency.

    If you look at the M2 Pro and M2 Max family, you’re looking at a second-generation 5 nanometer process. As we talked about earlier, the chip got more efficient. At every single operating point, the chip was able to deliver more performance at the same amount of power.

    [Media engine question]

    ... Going back to the point about transistors, taking that IP and integrating it on the latest highly-integrated SOC and the latest transistor technology, that lets you run it at a very high speed and you get to extract a lot of performance out of it.

    The other thing is, and this is one of the things that is fairly unique about Apple Silicon, we build these highly-integrated SOCs, right? ... So if you think about the traditional system architecture, in a desktop or a notebook, you have a CPU from one vendor, a GPU from another vendor, each with their own DRAM; you might have accelerators built into each one of those chips, you might have add-in cards as additional accelerators. But with Apple Silicon in the Mac, it’s all a single chip, all backed by a unified memory system. You get a tremendous amount of memory bandwidth as well as DRAM capacity, which is unusual, right? ... In a machine like this a CPU is used to having large capacity, low bandwidth DRAM, and a GPU might have very low capacity, high bandwidth DRAM, but now the CPU gets access to GPU-like memory bandwidth, while the GPU gets access to CPU-like capacity, and that really enables things that you couldn’t have done before. Really, if you are trying to build a notebook, these are the types of chips that you want to build it out of.

    And the media engine comes along for the ride, right? ... The technology that we’ve refined over the years, building for iPhone and iPad, these are machines where the camera is a key part of that experience, and being able to bring some of that technology to the Mac was honestly pretty exciting. And it really enabled just a revolution in terms of video editing and video workflows.

    The addition of ProRes as a hardware-accelerated encode and decode engine as part of the media engine, that’s one of the things you can almost trace back directly to working with the Pro Workflows team, right? ... This is a codec that makes sense to accelerate, to integrate into hardware, for the customers we’re expecting to buy these machines. It was something that the team was able to integrate, and for those workflows, there’s nothing like it in the industry, on the market.