Apple releases updated iOS 9.3 to fix Activation Lock bug on older devices
Apple on Monday pushed out yet another new build of its iOS 9.3 mobile operating system, attempting to fix issues related to an Activation Lock bug that presented itself on older iPhones and iPads.

Still identified as iOS 9.3, the new update is distinguished by a new build number, 13E237. Users who have been stuck on the Activation Lock screen since last week may need to restore their device via iTunes to upgrade to the new version and address the issue.
Apple stopped signing the previous, broken version of iOS 9.3 for older devices last week. The problem occurred in the password authorization phase of the iOS 9.3 setup process.
Apple confirmed that the problem affected iPhone 5s and earlier and iPad Air and earlier. AppleInsider was first to report on the issue, noting that certain device owners were unable to proceed past the password authentication stage after updating to iOS 9.3.
Some users found success in downloading iOS 9.3 through iTunes on a Mac and installing the firmware via a hardwired connection, suggesting there is an underlying issue on Apple's end. Others have found a full system restore also works, though the method is hit-or-miss.
A different iOS 9.3 build, 13E236, was released specifically for the iPad 2 last week to address the authentication issue as well.
Apple has also published a support document offering workaround suggestions. The company urges affected users to reset their password through iCloud, perform an iTunes-based installation and activation, or remove Activation Lock through iCloud.com. As reported on Tuesday, those who tried these methods have found limited success.
iOS 9.3 still has another significant, unrelated bug that causes apps to crash and freeze when attempting to open hyperlinks in Safari, Mail, and Messages, as well as third-party Web browsers like Google Chrome. The issue is apparently unpatched in the new iOS 9.3 update, and affects all devices, not just older ones.

Comments
As someone else noted above, what is the public beta program for if not to catch stuff like this? I have been an Apple advocate for years, and I own their stock, but I'm afraid someone is dropping the ball with these updates. This problem is not one of a near-obsolete iPad 2 with activation problems. This is an issue affecting their browser on their current (or near-current) hardware.
I suspect that many devs do not support older hardware as much as Apple does, so they don't test the betas on older hardware. This is all up to Apple. If an individual chooses not to apply the update, they may be subjecting themselves to security risks, but it is their decision. When Apple drops support, you should definitely consider updating your hardware. As it is, they are still supporting the four-year-old iPad 2, so there is a reasonable expectation that it should work as advertised.
Now, stories like this one, or the recent one on the Mac OS side related to Wi-Fi, still make me wonder to what extent any bigger software project is just a clusterf**k of code with basically no chance of catching everything, and why it is like this. The last time I wrote somewhat serious code was when Turbo Pascal and C were in fashion, so please bear with me here for a sec.
A mechanical system has essentially all-analogue interfaces through its components' physical properties and dimensions, and therefore, in theory, an infinite number of states to check.
In contrast, software is (usually) not "fuzzy" and comes with a finite state space. What makes it so hard to make bulletproof, bug-free software, then?
Or, over the decades, why has - let's call it - insufficiently tested code not been replaced, piece by piece, with "100%" sound code?
Is it a question of effort versus quality, some Pareto principle whereby costs increase by a multiple just to catch the last few bugs?
Or is there a fundamental, proven law/theorem that any code necessarily contains what we consider to be bugs?
Maybe some of you professional software developers can provide some insight.
As far as iOS is concerned, you can thank jailbreakers for the unsigning of older iOS versions.
It's not just the new capabilities. At some point your iOS device will conveniently download a large update file, taking up space on your device, and show a nagging red dot on your Settings icon.
This is all well and good until the update cripples the device. Then this information trickles out into the general media and all of a sudden Apple doesn't look so shiny. At some point this results in a hit in sales/profits.
They could allow it if the phone is completely wiped, so you don't gain anything from going back.
Enabling a backup, going back to an old buggy release, and then putting the data back would be a good way to crack phones if it were allowed; it is not.
But those tests often cannot really run on end devices, for practical reasons. It's when they get out of their neat little sandbox that the fun really starts.
Testing also fails because the people who build software fail to imagine how people could actually use it badly, or in ways other than the one devised to provide the service they programmed.
Good QA should probably be paid more, because breaking software in a non-obvious way often takes more work than creating it.
Just imagine the immense number of variants that come simply from what the user can do in Settings!
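To put a rough number on that, here is a back-of-the-envelope sketch in Swift; the toggle counts are made-up illustrations, not actual iOS figures, but even a modest number of independent settings multiplies into a state space no test plan can enumerate exhaustively.

import Foundation

// Illustration only: assume 30 independent on/off toggles and
// 10 further settings with 10 discrete levels each.
let binaryToggles = 30
let binaryStates = pow(2.0, Double(binaryToggles))   // ~1.07 billion combinations
let multiLevelStates = pow(10.0, 10.0)               // 10 billion combinations
print("Binary toggles alone: \(binaryStates)")
print("Combined with multi-level settings: \(binaryStates * multiLevelStates)")

And that is before multiplying by hardware models, OS versions, and carrier configurations.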
When you reach real devices, you face users, the environment, variations in manufacturing, variations in the software on the device, variations in configurations, variations in the hardware it connects to and its configuration (routers, telecom switches, carriers, Bluetooth devices), and variations in the configuration of other Apple devices it must coordinate its services with, along with their own bugs. Then come timing issues, heat issues, resource issues, race conditions, and silent bugs (bugs that only occur when you set out to make them happen, like someone trying to overflow a buffer). Then there's the whole business of interacting with external services in a speedy and secure way.
Often, bugs, especially security ones, work in chains, one bug enabling another.
Considering how complex modern operating systems are, it is a miracle that they are not more buggy.
I am also not denying that it's a complex issue. Far from it.
The point I am asking about is on a generic level, independent of Apple and the current issue.
I'm perfectly aware that "all software has bugs". The question is: why? Is it inevitable? If yes, is it because of an economic decision that puts a natural limit on testing resources? Or because, from a systems-theory point of view, there cannot be bug-free software?
I guess I am more on Apple's side than many, and I realise that bugs do happen, and that they happen even after n beta releases. I'm not denying or belittling their effort, or the effort of any decent software developer. And I certainly don't belong to the group of people shouting "what poor QA they have! Yet more proof of their decaying quality," and so on.
I simply like to understand.
Let me comment on that: I am aware of the complexity that comes from the countless blocks of interacting code, and then the hardware plus the environment.
Actually, professionally I deal with solving complex technical issues at large scale for similarly complex products in automotive and aerospace, such as product failures leading to recalls, unacceptable scrap rates, etc.
When identifying the root cause of an issue in such a mechanical/electrical/mechatronic system, in most cases it is an oversight, e.g. a dimension on a component that was not specified correctly (less often it's something actually out of spec). You might consider this a "bug", and it usually affects people at a ppm order of magnitude. The fundamental reason behind this is that you cannot possibly test the whole state space in a physical manner, not even at the new-part level (i.e. ignoring wear and tear). And simulation can't cover everything either, as it is a) a simplification of reality and b) can only check a subset of the continuous variables, such as environmental temperature. Usually what you do is boundary checks.
On the software side of things, I always feel developers are in a much more comfortable situation because a) everything is much more black and white: a setting on an iPhone, for example, is either binary (on or off) or discrete (e.g. has ten well-defined levels), as opposed to, say, the clearance in a bearing, which has infinitely many levels; and b) you can test much faster, because you simulate everything rather than having to manufacture parts, assemble them, and see the result.
Therefore I always have this idea that - if one really wanted - you could run a 100% test, at least of features where analogue parameters such as temperature do not matter. For example, the beauty of a software update - when compared to, say, a car release - is that each and every unit of a product number receives exactly the same bits and bytes (provided the download was successful, which should be caught by the verification step). Not almost the same; the very identical same. Compare this to the possible and real physical variations of a specific car engine model.
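On that "verification step": a minimal sketch of what such an integrity check can look like, assuming a published SHA-256 digest is available. The function name is illustrative, and this is not Apple's actual update mechanism.

import Foundation
import CryptoKit   // available on recent Apple platforms

// Returns true if the downloaded image hashes to the published digest.
// Any device that passes this check holds byte-identical firmware.
func firmwareMatches(_ data: Data, expectedHexDigest: String) -> Bool {
    let digest = SHA256.hash(data: data)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return hex == expectedHexDigest.lowercased()
}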
And let's take the example of a small code fragment, e.g. one method of a class. It is supposed to receive input from, let's say, two variables of a certain type and within a certain range, then perform a defined action on both and return an output. You can check all prerequisites beyond doubt, can you not? And if they are violated, throw an exception to handle this properly. Therefore it should be impossible for this small block of code to go haywire, at least from a technical point of view. Now, from a functional point of view (not sure I'm using the right terminology here), let's assume the input variables represent physical dimensions. Then you must ensure that the units of the receiving code and the sending code are the same, and avoid such beautiful errors as the one in that space mission where the whole thing went south because one function expected centimetres but received inches. This should be manageable as well, no?
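As a concrete illustration of that paragraph, here is a minimal Swift sketch (the names and the allowed range are made up): the unit lives in the type, so a centimetres/inches mix-up cannot compile, and violated preconditions surface as thrown errors rather than silent misbehaviour.

import Foundation

// A dedicated type for the unit, so callers cannot pass inches by accident.
struct Centimeters { let value: Double }

enum MeasurementError: Error {
    case outOfRange(Double)
}

// The small method described above: two inputs of a known type and range,
// a defined action, a checked result.
func combinedLength(_ a: Centimeters, _ b: Centimeters) throws -> Centimeters {
    let allowed = 0.0...10_000.0    // illustrative range
    guard allowed.contains(a.value) else { throw MeasurementError.outOfRange(a.value) }
    guard allowed.contains(b.value) else { throw MeasurementError.outOfRange(b.value) }
    return Centimeters(value: a.value + b.value)
}

// Usage: a bare Double, or a hypothetical Inches type, is rejected at compile time.
// let total = try combinedLength(Centimeters(value: 12.5), Centimeters(value: 30.0))

Even so, "checking all prerequisites beyond doubt" still means deciding what to do about NaN, infinities, and every representable Double, which is where the state space quietly grows again.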
Next up, I know I'm allocating memory within this method. And hey, I am on an old code base or compiler where this specific function is prone to stack overflow - actually found through bug tracking. Cool. I fix the foundation and never use that old allocation routine again.
Too simple?
Maybe I'm wrong and the two worlds, the mechanical and the software one, are much closer to each other than I think.
I just did a successful and uneventful update of my mid-2007 iMac. Originally shipped with 10.4, it is now happily running 10.11.4 at my shop.
I don't think it would be as useful running Snow Leopard these days.
Oh, and I do have a new 5K here at home.