Java has supported 64-bit data types from the start (a long is 64 bits by spec), and the Java VM has supported 64-bit addressing for years thanks to Java's use in the enterprise. Dalvik adheres to the same spec and supports 64-bit data types, and nothing stops it from supporting 64-bit addresses. For application developers on Android, you won't even need to recompile software that uses no native code: you'll get 64-bit support automatically once a 64-bit Dalvik is deployed.
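This is why pure-Java apps are insulated: primitive widths are fixed by the language spec, not by the VM's pointer size, so the same code behaves identically on a 32-bit or 64-bit VM. A quick illustration:

```java
// Primitive sizes are fixed by the Java language spec, independent of
// whether the VM underneath uses 32- or 64-bit pointers.
public class PrimitiveSizes {
    public static void main(String[] args) {
        System.out.println(Long.SIZE);       // always 64, on any JVM or Dalvik
        System.out.println(Integer.SIZE);    // always 32
        System.out.println(Long.MAX_VALUE);  // 9223372036854775807 everywhere
    }
}
```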
But Dalvik runs on top of Linux, so the question is: is Linux 64-bit ready? Yes; Linux was the first consumer OS to go 64-bit, well before Apple and Microsoft. What is usually not 64-bit ready is the proprietary binary device drivers you get from Qualcomm, Nvidia, et al.
But there is evidence this isn't going to be a problem: the Android-x86 project shipped Android running on 64-bit AMD/Intel x86 chips a long time ago, and it works.
What I find interesting is that when Apple is not first to market (more cores, more RAM, bigger screens, etc.), then "specs don't matter" and Apple is not "following"; but when Apple is first, all of a sudden specs are the most important thing in the world, and anyone who was thinking the same thing but was late out of the gate is a copycat.
The reality is that 64-bit was inevitable: mobile phones are already within a factor of 4 of the 4 GiB ceiling of a 32-bit address space, ARMv8 was specced out in 2011, and vendors licensed SoC designs in 2012 with shipping predicted for 2014. Apple made it first, but being first out of the gate isn't always best either. It's like when Nvidia and ATI used to release ultra-powerful GPUs that no games could really push to the fullest, because doing so would fragment the market against the majority who had lower-end hardware. 64-bit on the 5S is going to give only modest speedups for the time being, nothing like the 2x being quoted.
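To put that "factor of 4" in concrete numbers, here's a minimal sketch; the 1 GiB phone-RAM figure is just a representative 2013 spec, not any particular device:

```java
// 32-bit addressing tops out at 2^32 bytes; a phone with 1 GiB of RAM
// is already within a factor of 4 of that ceiling.
public class AddressLimit {
    public static void main(String[] args) {
        long limit32 = 1L << 32;                  // 4,294,967,296 bytes
        long phoneRam = 1L << 30;                 // 1 GiB, typical in 2013
        System.out.println(limit32 / (1L << 30)); // 4 GiB ceiling
        System.out.println(limit32 / phoneRam);   // factor of 4 of headroom
    }
}
```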
What's missing from this cheery discussion is the effect that pushing bigger die sizes with more transistors, while also switching fabs, has on yields. The iPhone 5S seems to be in short supply, and that is due to either the fingerprint sensor, the 64-bit CPU, or a combination of both. My guess is the A7 is having yield problems. TSMC is notorious for this, and Nvidia and ATI, which both fabbed at TSMC, frequently had to respin their silicon once they crossed a billion transistors. The more transistors and die area, the higher the probability of a defect landing on any given die.
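To make that concrete, here's a sketch using the classic Poisson yield model, yield ≈ e^(-area × defect density). The defect density and die areas below are illustrative assumptions, not actual TSMC or A7 figures:

```java
// Poisson yield model: yield ≈ exp(-dieArea * defectDensity).
// All numbers are illustrative assumptions, not real fab data.
public class YieldSketch {
    public static void main(String[] args) {
        double defectsPerCm2 = 0.5; // assumed defect density
        double smallDie = 0.8;      // cm^2, a hypothetical smaller SoC
        double bigDie = 1.2;        // cm^2, a hypothetical A7-class die
        System.out.printf("small die: %.0f%% yield%n",
                100 * Math.exp(-smallDie * defectsPerCm2)); // ~67%
        System.out.printf("big die:   %.0f%% yield%n",
                100 * Math.exp(-bigDie * defectsPerCm2));   // ~55%
    }
}
```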
Thus, the A7 may in fact be costing Apple both delays in parts and margin, because the more bad chips that come off a wafer, the worse the margins. When most of the industry goes 64-bit in 2014, no one is going to care that Apple was first, just as no one cared who had the first dual-core, quad-core, or octa-core. Consumers don't care; only fanboys do, so they can engage in continuous penis-measuring contests.
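The margin math follows directly: wafer cost is roughly fixed, so cost per good die is wafer cost divided by (dies per wafer × yield). Again, the numbers below are made up just to show the shape of the effect:

```java
// Cost per good die = waferCost / (diesPerWafer * yield).
// Illustrative numbers only; the point is that falling yield raises
// per-chip cost even though the wafer costs the same.
public class DieCost {
    public static void main(String[] args) {
        double waferCost = 5000.0; // assumed dollars per wafer
        int diesPerWafer = 500;    // assumed
        for (double yield : new double[] {0.9, 0.7, 0.5}) {
            System.out.printf("yield %.0f%% -> $%.2f per good die%n",
                    yield * 100, waferCost / (diesPerWafer * yield));
        }
    }
}
```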
The iPhone 5S confers no benefit in battery life. Imagine if, instead of doubling the number of transistors, Apple had used the new semiconductor process to end up with a far smaller SoC: it would drain less power, cost less, and battery life would be longer. The question needs to be asked: does the iPhone already perform well enough? Does it really need 2x more CPU? Will consumers really notice that, or would they notice 20-hour battery life instead of 10? It's like having the ability to design a car with either 2x the horsepower or 2x the gas mileage. Increasing the power isn't always the best win; as we learned on the desktop, eventually there are diminishing returns.
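The tradeoff can be sketched with the standard dynamic-power relation for CMOS, P ≈ C·V²·f: a process shrink spent on lower voltage cuts power quadratically, while spending it on more transistors at the same voltage does not. The voltages here are assumed, purely for illustration:

```java
// Dynamic CMOS power scales roughly as P = C * V^2 * f.
// Illustrative: cutting voltage 20% at the same clock cuts dynamic
// power by ~36%, since power goes as V squared.
public class PowerSketch {
    public static void main(String[] args) {
        double v = 1.0, vShrunk = 0.8; // assumed relative voltages
        double relativePower = (vShrunk * vShrunk) / (v * v);
        System.out.printf("relative dynamic power: %.2f%n", relativePower); // 0.64
    }
}
```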
(I'm guessing I'll be banned for writing an article that speaks positively of Android and is semi-critical of Apple, as it seems you can get banned from these forums merely for stating facts, while other people spew downright nasty ad hominems and nothing ever seems to happen to them, as long as they toe the Apple party line. It seems criticizing Apple is too hurtful.)