dewme
About
- Username: dewme
- Visits: 932
- Roles: member
- Points: 15,802
- Badges: 2
- Posts: 6,117
iOS 16.3 now available with support for new HomePod, security keys
beowulfschmidt said:
DAalseth said: Does anyone know why you download the update, it verifies the update, then when you tell it to install it verifies the update again? I mean I’m all for security, but literally it just did that.
Yeah, not only that, but I really appreciate the convenience of pressing the "Download and Install" link, and then having to press the "Install" link after it's downloaded. And has anyone else noticed that a Yubikey is just a number, albeit a long one? A never-changing number?
I've also seen cases where it will download the install image a second time, as well as repeating the verification process.
I use Apple's Content Cache to speed up the update process across several Apple devices, but for OS updates it actually has little impact because I have decent download performance and the time required to complete the "Preparing" phase of the update process dwarfs every other phase in the installation.
I suspect there are a number of data integrity checks, "dirty flags" that can get set by other on-device processes, and some timeouts/expiries that cause the process to "phone home" more than once just to make sure that everything is good to go before the final installation kicks off.

FYI, dirty flags are a way to indicate that "something" has changed, or may have changed, in a managed resource and needs to be verified again to ensure it's still in a consistent state. I suspect that repeated verifications are being triggered because some other process/thread on the same device made a change, since the last consistency check, that the installer may be concerned about. With hundreds or even thousands of threads still running during the update process, this isn't outside the realm of possibility. Better safe than sorry.
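To make the dirty-flag idea concrete, here is a minimal Python sketch. All names (`UpdateImage`, `ensure_verified`, the checksum scheme) are invented for illustration; this is not Apple's actual installer logic, just the general pattern of re-verifying only when something may have changed.

```python
# Hypothetical sketch of the dirty-flag pattern: re-verify a managed
# resource only when some other process may have touched it.
import hashlib

# Known-good checksum the installer would have fetched ahead of time.
EXPECTED = hashlib.sha256(b"os-update-payload").hexdigest()

class UpdateImage:
    def __init__(self, payload: bytes):
        self.payload = payload
        self.dirty = True        # unverified until the first check
        self.verifications = 0   # how many full verifications we ran

    def touch(self, payload: bytes) -> None:
        """Some other process wrote to the image: set the dirty flag."""
        self.payload = payload
        self.dirty = True

    def ensure_verified(self) -> bool:
        """Run the expensive verification only when the flag is set."""
        if self.dirty:
            self.verifications += 1
            if hashlib.sha256(self.payload).hexdigest() != EXPECTED:
                return False     # stay dirty: payload is inconsistent
            self.dirty = False
        return True
```

Calling `ensure_verified()` twice in a row verifies only once, but any `touch()` in between, even one that changes nothing, forces another full verification pass — which is exactly the kind of behavior a user would see as a repeated "Verifying update" phase.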
These seemingly redundant behaviors are somewhat of an annoyance, but I can tell you firsthand that installation programs and processes have so many ways to fail and so few ways to succeed. Double checking and being very exacting about maintaining consistency is totally justified to achieve a successful outcome. Being right matters more than being fast.
Apple's track record with me has been nearly perfect. The only time I ended up with a device in a bad state after installation, Betas included, was the very first macOS Beta that introduced APFS. It totally hosed my system because APFS had not yet been certified to work with Fusion drives, and my iMac had a Fusion drive. Fortunately I only lost a few hours of my time, but no data, which is the only acceptable outcome. When you play with early Beta releases, you should be prepared for this. Fortunately, or unfortunately, Apple's update and installation process is extremely stable and reliable, so even those who play fast & loose with Beta releases very seldom get burned, at least based on my experience and compared to other platforms.
Early previewers praise new HomePod's 'just wow' audio
macxpress said:
Skeptical said: Wait until it poops the bed and Apple says nothing is wrong with it but can replace it for 80% the cost of a new device.
Call it what you want, but there are obviously enough owners of HomePods that no longer power up and are restored by replacing the same exact diode to make you think that there is a component in the product that is failing prematurely without Apple addressing the issue for owners who’ve gone beyond the warranty period.
It’s easy to say, “But but but … the product lasted beyond the warranty period, so what are you complaining about?” Well, if consumer electronic products only lasted as long as the warranty period, the consumer electronics industry and its buyers would be in a state of total disaster.
The failure rate for electronic products and components tends to follow what looks like a bathtub curve: higher failure rates very early in the product life (infant mortality) and very late in the product life (wear out), with a long span of very low failure rates in between. Product warranties are really intended to cover infant mortality and failures due to correctable manufacturing issues, at least in theory and based on historical data. But any component that does not conform to this pattern and introduces a single-point failure mode upsets the whole Apple cart.
I’ve had two (2) original HomePods die, one during warranty and one outside of warranty. The failure mode was the same in both: the device doesn’t power up. Despite my love of the HomePod, it is clearly the least reliable Apple product that I’ve ever owned. Hopefully Apple addressed the component failure in the second version.

Frankly, after reading about the changes Apple brought to the second-generation HomePod, I’m a bit surprised that the audio performance would be noticeably improved. The changes look to me more like cost-reduction-motivated changes, plus a number of improvements intended to improve the device’s compatibility with Apple’s home automation initiatives.

The flip side is that the original HomePod was so far beyond all but a few competitors that Apple could trim a few things judiciously without sacrificing the overall quality to the point where they would lose ground to serious competitors. Apple dominating the likes of Amazon’s and Google’s budget-priced loss leaders, e.g., the Dot, is meaningless.

Finally, from a positioning standpoint I’d be a ton more thrilled with the biggy HomePod if it was $199, or 2X the price of the mini. Seeing a third product, a HomePod Video to take down Amazon’s wall-mounted Echo, occupy the $299 (3X the mini) slot would be nice. The form factors in the HomePod product line don’t have to remain similar. I’d have no problem with a premium HomePod that fit into the sound bar form factor, i.e., a HomePod TV, that also pulled in some Apple TV functionality.
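The bathtub curve mentioned above can be modeled as the sum of three hazards: a decreasing infant-mortality term, a constant random-failure term, and a rising wear-out term. A tiny Python sketch with made-up illustrative parameters (not real failure data for the HomePod or any other product):

```python
# Toy bathtub-curve hazard: expected failures per unit time at age t.
# Parameters are illustrative only; requires t > 0.
def hazard(t: float) -> float:
    infant = 0.05 * t ** -0.5         # early defects: decreasing hazard
    useful = 0.01                     # useful life: constant random failures
    wearout = 0.001 * (t / 3.0) ** 4  # wear-out: rises steeply with age
    return infant + useful + wearout
```

With these numbers the hazard is high at age 1, bottoms out in mid-life around age 4, and climbs again by age 9, giving the bathtub shape; a warranty period covers roughly only that first descending edge.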
M2 Pro Mac mini vs Mac Studio - compared
I’ve resigned myself to buying the best available product at a price I’m willing to pay when I actually need the product.

As product categories reach higher levels of maturity, the deltas between single-release versions get smaller and more incremental. I used to upgrade cellphones every two years; now it’s more like four to five years.

I still see a ton of “glowing apple” MacBooks out there in the real world. Somehow their owners are getting stuff done with them and dealing with their FOMO.
Mac Pro trade-in value plummets after M2 announcements
This entire article is totally whacked. If anyone was offering up used 2019 Mac Pros configured at the price levels implied in this article there would be lines of resellers around the block to snap them up. The Pros in the 50K original price range are still magnificent machines for software development for any platform you desire. These "old" machines would be a dream platform for running multiple Windows 10/11 VMs.
I would also assume that the vast majority of people who buy machines of the caliber of $50K Mac Pros use them for generating revenue that greatly exceeds the cost of the Mac, plus they depreciate the Mac over whatever the depreciation schedule is for this type of business purchase, probably in the 5-year range depending on gov't incentives. Unless you're giving your kids brand new Ferraris for their 16th birthday, and there are plenty of people who do, these are business tools that get purchased for precisely calculated economic reasons.
Apple prepares HomeKit architecture rollout redo in iOS 16.3 beta
elijahg said:
I'm stuck in limbo where I can't invite anyone who has ever opened the Home app to my home, because no one can join the upgraded architecture without upgrading their own home first, for some ridiculous reason. Therefore, since the rollout was cancelled, people can't upgrade their home, and so it's impossible for them to join.
I can't downgrade my home even if I reset everything, because despite the upgrade being cancelled, new homes still use the new architecture. And besides that, resetting doesn't work properly anymore either, even with the special HomeKit reset profile. It's a mess. Apple's software QA is abysmal these days; it used to be top notch. It's extremely disappointing for a "premium" brand. Some things are nearly as bad as OS 9, though the kernel seems to be rock solid at least.
That said, the upgrade seemed to improve the responsiveness and reliability of HomeKit. I'm sure they could have used the HomeKit hubs as a bridge between old and new versions, though that means less incentive to upgrade, of course.

Apple's quality challenges are actually very typical of most large-scale software projects of the past couple of decades. Software "QA" as it once was no longer exists: a function performed by a dedicated team that descended upon a product development process, usually late in the cycle, with a great deal of zeal, to make sure that nothing bad leaked out the door, that everything promised was actually delivered to an acceptable level of quality (i.e., verification), and that it actually solved the intended end-user problem (i.e., validation).

Don't get me wrong, software is still tested. Heavily tested, in fact. It's tested at many levels: from the nuts & bolts deep in the code (unit testing as part of test-driven development), to integration testing, to system testing, and of late security testing, e.g., penetration testing, interface fuzzing, etc. The lower levels of testing are run repeatedly, very often automated, typically on every code commit and build. Collections of tests that cover a cross section of wider functionality are often declared "regression tests," essentially a high-level smoke test that provides confidence that the most recent changes to the code base didn't break what was already working.

So why do software products and systems that are supposedly so extensively tested still seem to break so often?
Imho, and as someone who's worked at pretty much every level of product and system development engineering, it's a combination of continuously increasing complexity, never-ending releases (the software is never really done, so we can fix the broken things in the next release, which may be tomorrow or next month), monotonic accumulation of technical debt (anomalies that don't get addressed), and an insufficient number of team members who fully understand the problem domain and the customer challenges the product is intended to solve, which ultimately results in validation failures.

Of course there are several other factors, like schedule pressure, cost pressure, poorly defined specifications (nowadays, maybe no specifications at all), late-breaking changes, bad planning, over-promising, actual bad code, naive software development processes, and all manner of management problems. But from an engineering standpoint, what's missing is a clear understanding of the problem domain, the customer's needs/concerns/pain points/cost concerns (acquisition and lifecycle, TCO), the larger system the software has to live within, and all of the things that define whether the software meets the validation bar. There's testing aplenty, but veritably "good code" that does the wrong thing, or that sacrifices quality attributes low-level testing doesn't cover, like security, privacy, transactional integrity, etc., still dooms the software product.

What's missing today are the system engineers, product owners, and problem-domain-aware architects and system testers who in the past would collaborate up front to make sure the product to be built was the "right product" and would be subjected to the appropriate validation standards for the problem being solved.
Most of these roles have been eliminated or scaled back for the sake of time-to-market, velocity, cost, resource availability (outsourcing, contracting, anyone-can-code farms, etc.), and "agility." Building the wrong thing quickly and iterating over it sixteen times is okay, because, you know, the software is never really done.

Learning how to "build software right" is a much easier challenge than learning how to "build the right software." The latter requires decades of investment.

Has Apple fallen into these traps? Probably. I suppose a "glass half empty" interpretation of their greatly expanded Beta testing programs is a soft admission that they simply don't have the full scope of internal expertise in the primary problem domains they are targeting with their software. This kind of soft admission is much better than foisting well-intentioned but poorly executed software on their customer base, or trying to hide their recognized shortcomings. I think it's an admirable approach. The constantly increasing complexity of software in general may make Apple's approach inevitable for many more software products. Nobody likes scary surprises. Adding actual customers to the feedback loop is a good thing. Hopefully it frees up their internal teams to give more attention to the "build software right" aspects of the task at hand.
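The layered testing described above (fast unit tests on every commit, plus a named subset serving as a regression/smoke suite) can be sketched in a few lines of Python. The function and test names here are hypothetical, not from any real project:

```python
# Sketch of layered testing: unit tests run on every commit/build,
# and a regression suite smoke-tests previously shipped behavior.
import unittest

def parse_version(s: str) -> tuple:
    """Unit under test: turn a version string like '16.3.1' into (16, 3, 1)."""
    return tuple(int(part) for part in s.split("."))

class UnitTests(unittest.TestCase):
    """Low-level tests: exercised on every code commit and build."""
    def test_two_part_version(self):
        self.assertEqual(parse_version("16.3"), (16, 3))

    def test_three_part_version(self):
        self.assertEqual(parse_version("16.3.1"), (16, 3, 1))

class RegressionTests(unittest.TestCase):
    """High-level smoke test: behavior that already shipped must not break."""
    def test_shipped_versions_still_parse(self):
        for v in ("15.7.2", "16.2", "16.3"):
            self.assertEqual(parse_version(v),
                             tuple(int(p) for p in v.split(".")))

# Assemble and run both layers, as a commit hook or CI job might.
suite = unittest.TestSuite()
loader = unittest.defaultTestLoader
suite.addTests(loader.loadTestsFromTestCase(UnitTests))
suite.addTests(loader.loadTestsFromTestCase(RegressionTests))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note what this sketch verifies and what it can't: every test here checks that the code does what the tests expect (verification), but no amount of green test runs can tell you whether parsing version strings was the right thing to build in the first place (validation).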