Last Active
  • Mac Studio with M1 Ultra is heavier because of heat & material choices

    dk49 said:
    Why doesn't this need 370W power supply if its peak performance can be achieved at 100W?
    PC power supplies are typically the most efficient between, oh, let's say 30% and 70% load (about 110-260 watts for this unit).

    In the Mac Studio the same power supply supports both the M1 Max and the M1 Ultra, the latter of which will draw more power. Don't forget that there are different GPU configurations (number of cores). There are also other transistors that draw power both in the package (e.g., unified memory, the interconnect) and elsewhere in the system (e.g., NAND flash storage, connectivity, Bluetooth and wireless radios, etc.). And there are the mechanical components (two blowers) that also draw current.

    This PSU is specced out to support the maximum configuration (M1 Ultra, 128GB memory, maxed GPU, 8TB SSD, all six TB4 ports running at full speed), not the entry-level offering or something in the middle. Unlike a typical PC chassis, you can't just swap out a 650W ATX PSU for a 1200W replacement on a Mac Studio.

    From a cost and supply standpoint, it's probably easier for Apple to simply design one PSU rather than have multiple PSUs with different capacities. I'm sure they tried a bunch of different PSUs while prototyping.

    Consumer semiconductors also have a sweet spot. The performance-per-watt curve typically isn't a straight line. At the top end of the curve, you can increase the power but only reap modest gains with today's silicon. Gone are the days of overclocking CPUs and GPUs for massive gains. Today's chip designers are putting most of that headroom into the chip's boost design, leaving very little improvement for Joe Consumer to squeeze out.
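    To illustrate the diminishing returns, here's a toy Python model. It's purely illustrative: the square-root curve and the numbers are assumptions, not measured data for any real chip.

```python
# Toy model of a performance-to-watt curve (illustrative only; not
# measured data). Performance grows roughly with the square root of
# power, so each extra watt buys less performance than the last.

def perf(watts: float) -> float:
    """Hypothetical relative performance at a given power draw."""
    return watts ** 0.5

# Marginal gain from 10 extra watts at the low and high ends of the curve
low_gain = perf(30) - perf(20)     # near the sweet spot
high_gain = perf(110) - perf(100)  # deep into boost territory

print(f"gain at low power:  {low_gain:.3f}")
print(f"gain at high power: {high_gain:.3f}")
assert high_gain < low_gain  # diminishing returns at the top end
```

    Same 10 extra watts, roughly half the gain at the top of the curve. That headroom is exactly what modern boost algorithms already spend automatically.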

    PSUs lose efficiency at really low loads and really high loads. If a hypothetical system maxes out at 198 W sustained, you don't want to put in a 200 W PSU. That's a 99% load; you lose efficiency and there's no headroom for short-term bursts of power.
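    The arithmetic behind that sizing argument can be sketched in a few lines. (The 30-70% sweet spot is a rough rule of thumb, not a spec for this particular unit.)

```python
# Quick sanity check on PSU sizing. The 30-70% "sweet spot" range is a
# rough rule of thumb for typical PSU efficiency, not a hard spec.

def load_fraction(draw_w: float, psu_rating_w: float) -> float:
    """Fraction of the PSU's rated capacity a given draw represents."""
    return draw_w / psu_rating_w

def in_sweet_spot(draw_w: float, psu_rating_w: float,
                  low: float = 0.30, high: float = 0.70) -> bool:
    frac = load_fraction(draw_w, psu_rating_w)
    return low <= frac <= high

# A 198 W sustained load on a 200 W PSU is 99% load: efficiency
# suffers and there's no headroom left for bursts.
print(load_fraction(198, 200))   # 0.99
print(in_sweet_spot(198, 200))   # False

# The same load on a 370 W supply sits around 54% -- comfortably
# inside the sweet spot, with headroom to spare.
print(round(load_fraction(198, 370), 2))  # 0.54
print(in_sweet_spot(198, 370))            # True
```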
  • Facebook CFO says personalized advertising 'under assault' by Apple privacy changes

    The CFO (Chief Financial Officer) and the CRO (Chief Revenue Officer) are NOT the same person.

    They are completely different roles. The Chief Financial Officer (CFO) is in charge of the company's finances and is a fiduciary responsible for accurate representation of the company's financials. This includes SEC filings.

    The Chief Revenue Officer (CRO) in a company like Facebook is in charge of sales, essentially the sales director. Since most of Facebook's revenue is selling ads, this guy is basically the head ad salesman.

    Per Facebook's investor relations site:

    David Wehner is the CFO. His department is mostly accountants, financial analysts, etc.

    David Fischer is the CRO. His department is mostly ad sales people, ad sales directors, account executives, etc.

    Two completely different people in two completely different roles.

    CFOs of Fortune 10 companies don't swagger and make bombastic and sweeping accusatory statements. They generally talk about numbers in the past tense, usually the same numbers published in an SEC filing. They aren't trash talkers. That should have been the glaring hint that this Fischer guy is NOT the CFO.

    AppleInsider needs to rewrite large parts of this article and the headline to correctly represent who these two senior Facebook managers are.

  • Apple Silicon transition may hit its two-year target with 2022 Mac Pro

    mcdave said:
    The longer they’re leaving it, the more the competition has stepped up. The buying public can’t see beyond marketing specs so genuine advantages are already mitigated. Single-core performance has been matched by Intel 11th gen i7/i9 so it’ll be interesting to see how much Apple has left in the tank.
    This is not how Apple is approaching their ASi architecture.

    During the WWDC 2020 keynote, Johny Srouji explained that Apple Silicon's focus was performance-per-watt. Not benchmarks. He pounded this concept again and again during his segment.

    Because of Intel's ineptitude in advancing their process technology, they had to throw efficiency out the window to keep up with AMD. So Rocket Lake (the 10 nm Ice Lake architecture backported to the 14 nm node) generates massive amounts of heat and tops out at 8 cores. Hell, Intel even had to come up with a new platform, which will probably be abandoned after one generation.

    Intel can beat Apple on single-core performance, but at a massive power load. However, that's not the only use case. Apple's big.LITTLE implementation on M-series SoCs is superior to the competition, and for the PC market there is no competition yet from either Intel or AMD.

    Apple has improved performance-per-watt to the point that ASi can beat Intel handily. That's why they shipped ASi last fall. They have been working on this for years and have been advancing faster than Intel.

    This is the same thing that happened with their A-series SoCs. Apple's performance-per-watt on mobile silicon has advanced faster than their competitors'.

    As mentioned by KTR, the hardware-software integration is better. This includes managing big.LITTLE on both PC silicon and mobile silicon; the latter has given Apple many years of experience. My guess is that Apple has been running macOS on A-series SoCs as well as prototype M-series SoCs in their labs for YEARS.

    And Apple is really just getting started with machine learning on Macs. I expect Apple to pull ahead over the next couple of years when considering multiple tasks and use cases, as more tasks get handled by the Neural Engine instead of the CPU cores. You wouldn't see this superiority in a standard artificial benchmark. Those benchmarks don't include machine learning because AMD and Intel currently don't have any machine learning silicon in their CPUs.

    Apple does not design their CPUs and GPUs so they can be King of Cinebench or King of Furmark.
  • Meta's Zuckerberg takes shots at Apple App Store fees, maintains its own

    For anyone keeping score:

    Galaxy Store: 30%
    Amazon App Store: 30%
    Microsoft Store: 30% for apps, games, and in-app purchases made on the Xbox console; 15% for PC apps; 12% for PC games
    Nintendo Store: 30%
    PlayStation Store: 30%

    Oculus Store: was 30% a couple of years ago.

    I don't know if Facebook has changed the Oculus Store's cut recently. Note that signing up for Oculus now requires a Facebook account. I bought my Rift S before Facebook made this change, so it will likely be my last Oculus device.
  • What Apple's T2 chip does in your new MacBook Air or MacBook Pro

    davgreg said:
    What about offering an external fingerprint reader and/or mic for the mini?
    As the owner of a Mac mini with the top spec CPU, I can tell you the T2 does nothing to help video transcoding. The lack of a dedicated GPU makes that task spool up the fans quickly for a not very fast cycle time.
    You are either A.) using video encoding software that doesn't support T2 hardware encoding or B.) you don't know the correct settings to enable it.

    I already knew this from previous tests a week ago (between a Mac mini 2018, MacBook Air 2019, and a MacBook 2017). But for giggles, I decided to set up a bunch of tests while I left my house for a few hours.

    I transcoded a 4K video (approximately 30 minutes in runtime) six consecutive times using the same software (latest version of Handbrake 1.2.2) but with different settings and it is clear that hardware encoding is real. Hardware is Mac mini 2018, 3.2 GHz i7 (6 cores, 12 threads) with 16 GB RAM and 1 TB SSD running macOS 10.14.5.

    1. H.265 (HEVC) VideoToolbox: 28 minutes via hardware. (Despite the fact that I set this up for an 8000 kbps bitrate, it ended up at 7150, thus resulting in a smaller file. More on that later.) The fan ran around 1800 rpm, and the CPU load was nearly zero.
    2. H.265 (HEVC) x265 software encoder: 82 minutes. This was clearly CPU intensive; the user load was 12 (six cores, 2 threads per core maxed out). The fan ran at 4500 rpm.
    3. H.265 (HEVC) x265 software, 12-bit encoding: 110 minutes. Even slower. Again, the fan ran around 4500 rpm.
    4. H.264 VideoToolbox: 30 minutes. Hardware encode, the fan was around 1800 rpm and the CPU load was nothing.
    5. H.264 x264 (software): 38 minutes. I wasn't around to use XLD to view the CPU or fan status, but clearly this took more effort than the VideoToolbox setting.
    6. H.265 (HEVC) VideoToolbox with bitrate set at 8850: 29 minutes. In an attempt to get a resulting file more comparable to the earlier encode (7150), I upped the bitrate from 8000 to 8850. The encode took 1 minute longer than the control and I ended up with an 8150 kbps file.
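    The bitrate discrepancy in #1 and #6 is what drives the file sizes: for a constant average bitrate, output size is roughly bitrate times runtime. A quick sketch of that arithmetic (ignoring audio tracks and container overhead, so real files will run a bit larger):

```python
# Rough output-size estimate from average video bitrate and runtime.
# Ignores audio tracks and container overhead.

def estimated_size_gb(bitrate_kbps: float, runtime_min: float) -> float:
    bits = bitrate_kbps * 1000 * runtime_min * 60  # total video bits
    return bits / 8 / 1e9                          # bits -> bytes -> GB

# Target bitrate vs. what the VideoToolbox encode actually produced
print(round(estimated_size_gb(8000, 30), 2))  # 1.8  GB at the target
print(round(estimated_size_gb(7150, 30), 2))  # 1.61 GB at 7150 kbps
```

    That ~200 MB gap is why the VideoToolbox file came out noticeably smaller than the software encodes at the same nominal settings.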

    You don't need to look at your computer display to know whether or not it is using hardware encoding. The Mac mini 2018 fan tells you.

    Of all the comparisons, the most important are between #1 and #2. The latter is all software and takes nearly 3x longer than the encode that leverages hardware. The #2 encode maxes out the CPU and cranks up the fan. The HEVC hardware encode is a walk in the park for the Mac mini 2018.

    Note that I used Handbrake, an application that is known to have access to hardware encoding features for years.

    If your fan kicks in during a transcode on a Mac mini 2018, you are basically doing it wrong.

    But don't take my word for it. Go ahead and do your own tests.

    Find some suitable 4K content to transcode and open it up in Handbrake. Use a preset like "HQ 1080p30 Surround", switch to the Video tab and set the Quality to an average bitrate of 8000 kbps. Then use the pulldown menu selector for Video Encoder and start selecting different encoders. Name the resulting file accordingly and save it to the queue. Then switch the Video Encoder to something else, rename the resulting file and add it to the queue.
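    If you'd rather script the comparison than click through the GUI queue, something like this could assemble the equivalent HandBrakeCLI invocations. A sketch only: the input filename is hypothetical, and the encoder identifiers (vt_h265, x265, vt_h264, x264) are HandBrakeCLI names that may differ across versions, so inspect the commands before running them.

```python
# Sketch: build one HandBrakeCLI command per encoder so the same source
# gets transcoded with each, mirroring the GUI queue described above.
# Encoder names and the input file are assumptions -- verify against
# your HandBrake version's CLI help before running anything.

INPUT = "source-4k.mkv"  # hypothetical 4K source file
ENCODERS = ["vt_h265", "x265", "vt_h264", "x264"]
BITRATE_KBPS = 8000      # average video bitrate, as in the GUI test

def build_command(encoder: str) -> list[str]:
    return [
        "HandBrakeCLI",
        "-i", INPUT,
        "-o", f"out-{encoder}.mp4",  # name the output after the encoder
        "-e", encoder,               # video encoder under test
        "-b", str(BITRATE_KBPS),     # average bitrate
    ]

commands = [build_command(e) for e in ENCODERS]
for cmd in commands:
    print(" ".join(cmd))

# To actually run one:
#   import subprocess; subprocess.run(commands[0], check=True)
```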
  • Microsoft dethrones Apple, becomes world's most valuable company

    Beats said:
    Is the Apple share drop because they didn’t meet random people’s expectations even though they had another record-breaking quarter?
    Random? No.

    If you own shares of AAPL either directly or indirectly and you didn’t buy or sell today, you weren’t one of those who sent the share prices down.

    The people who sent the share price down were today’s buyers and sellers. Some sellers offered some of their shares and today’s buyers didn’t want to pay yesterday’s prices. Some of those sellers decided to accept lower bids. It’s called a free market economy.

    They aren’t random people. They are both institutional investors as well as retail investors. They all have tax IDs. The SEC knows exactly who these parties are, and so will the IRS eventually. It’s not just Jimmy Fanboy Investor on AppleInsider; it’s more like some fund manager at FMR, someone who manages the pension fund of your state’s teachers union, or a hedge fund manager.

    While the stock market will punish companies (and not just Apple) for missing expectations, the market is mostly forward looking. The market is also unhappy when the company withholds forward guidance.

    Apple hasn’t offered guidance since the pandemic began and today’s market reaction was predictable.
  • Bill Gates said Steve Jobs caught Microsoft 'flat-footed' with launch of iTunes Store

    It wasn't predictable. That's why Microsoft was caught flat footed. Remember that Apple's market valuation was decidedly small compared to Microsoft, really a David vs. Goliath situation at the time.

    Time and time again Jobs surprised. The iPod itself was widely doubted when it debuted in 2001; just do an Internet search for "cmdrtaco ipod" to see what a popular technologist thought.

    Apple stunned again with the iPhone in 2007, and many predicted failure due to the lack of a physical keyboard. Remember that the RIM BlackBerry was the smartphone gold standard at the time and Windows Mobile phones were still a significant player.

    Apple again caught the industry off guard when it released its own silicon in the form of the A-series SoCs and a few years later left the entire semiconductor industry speechless when the A-series jumped to 64-bit architecture, years before it was expected to show up in a mainstream product.

    Apple crushed it again with the iPad and then killed it with Apple Watch, the latter despite long-standing rumors that Apple had been testing "wearables" on its corporate campus for years. Remember that each time, Apple was not first to market with MP3 players, online media stores, smartphones, tablets, smartwatches, or custom ARM silicon.

    About the only thing predictable from Apple in the past five years was the Apple Silicon Mac. Savvy industry watchers presumed that Apple had been running macOS on prototype ARM-powered Macs in their labs for years, maybe as early as that first 64-bit A7 SoC. There were hints all along: the deprecation of OpenGL, the end of support for 32-bit apps. The inclusion of specialized chips like the T2 Security Chip which was clearly an interim solution to be paired with Intel CPUs until Apple could ship their own SoC with that functionality built in.

    One can see where this is headed for the Macs. macOS Monterey is leveraging the Neural Engine in the M1 SoC, the machine learning silicon. My guess is that the M2 and future designs will vastly improve on the Neural Engine's capabilities which will take on tasks that it is better suited for than the CPU cores: image recognition, text recognition, voice recognition, signal processing (both audio and video). As mentioned in the WWDC keynote, more machine learning tasks will be handled on device rather than being sent to Apple's servers.

    Nvidia's GeForce RTX GPUs do audio and video processing with their Tensor cores (machine learning); if you have a GeForce 20- or 30-series graphics card, you can use the Nvidia Broadcast software to clean up your audio and video streams.
  • Apple's claims about M1 Mac speed 'shocking,' but 'extremely plausible'

    cloudguy said:
    h4y3s said:
    Don’t overlook the unified memory architecture that Apple can deploy, (as they own the whole stack) this will save 2x on a lot of common functions! 
    Unified Memory Architecture is not a new idea. It was created by Nvidia years ago and several companies use it for their own products. It has its own set of advantages and tradeoffs. On some of the more technical blogs, folks are already debating the virtues of UMA versus segregated CPU/GPU memory, as the thinking - and experience for those who have it - is that you can run into problems if the GPU exhausts memory resources that the CPU needs and vice versa. One of the reasons why Nvidia created UMA in the first place was that they are promoting the idea to data centers that you get more performance per dollar by shifting as many computations from the CPU to the GPU as possible for workloads that don't require a CPU's general purpose flexibility.

    Sorry, SGI implemented UMA in their O2 workstation (circa 1996), years before Nvidia's unified memory work.

    You are right that it is not a new idea.
  • Google Stadia getting added to the graveyard of failed services

    I am surprised they haven't killed Google+ yet. I have no idea what it does besides just being a profile layered on an existing Google account.

    They shut down Google+ services in 2019.

    I think Google Plus URLs resolve elsewhere just to prevent 404 errors.
  • Apple Park now Apple's official corporate address

    zoetmb said:
    Moving an entire workforce? I thought lots of people were staying at 1 Infinite Loop and that address isn't going away. I was under the impression that, in addition to those people who weren't moving, the freed-up space at 1 Infinite Loop was going to be used to consolidate employees who are currently in various leased buildings around Cupertino, and it was those buildings that would eventually be emptied of Apple employees. I seem to remember reading something about fears of an office space real estate glut in Cupertino once Apple completed this.
    I do not work at Apple but I have numerous pals who do.

    1 Infinite Loop isn't shutting down. As you theorized, many other employees will be moving into 1 Infinite Loop once the current occupants (mostly iPhone, iPad, and Mac hardware engineers, as well as iOS and macOS software engineers) move to Apple Park. Let's say you work on Apple Maps or iCloud Backup; well, you're not going to Apple Park, but you probably will move into 1 Infinite Loop.

    Apple Park has been deemed an engineering campus so 1 Infinite Loop will likely become the home for a bunch of administrative groups: Product Marketing, Finance, Legal, etc.

    Some of the satellite Apple campuses will eventually shut down as leases expire. Apple has a number of buildings in neighboring Sunnyvale, and eventually those people will move closer to the Infinite Loop/Mariani/Bandley area.

    This will likely free up some highly desirable smaller buildings and complexes for smaller companies as Apple has had a stranglehold on Cupertino office space for a number of years.