Intel exec claims iPhone runs on Xscale chip

Posted in iPod + iTunes + AppleTV, edited January 2014
As part of a recent interview with an Italian trade paper, an Intel executive explained that the chipmaker's involvement with the hardware of Apple's iPhone is at once more substantial and more complex than previously thought.



Intel Italy chief Dario Bucci told Il Sole 24 Ore that while the semiconductor firm had no direct part in the CPU powering Apple's new handset, the chip is still an Xscale and therefore shares a link to the company's efforts.



Intel had originally developed the Xscale platform in 2002 while branching out into mobile phones and PDAs, but later sold the architecture to Marvell Technology Group in June of last year when it chose to return its attention to x86 and Itanium designs. Bucci identified Marvell as the source of the Xscale during the interview.



The explanation may help resolve the contradictory statements made by Apple and Intel last week, when the two companies were asked about the iPhone's underlying hardware. A spokesman for the Cupertino-based developer initially claimed that Intel was directly responsible for the low-power processor, but his claim was refuted by Intel in a statement supplied just hours later.



Intel's headman also made the surprising but unverified claim that the company was responsible for the flash memory used to store music and other information in the phone, asserting not only that its NAND chips were used but that Apple is now one of Intel's primary clients.



If true, this admission may be a double blow to Samsung according to technology news site Electronista, which pointed out that Samsung was both a direct competitor in the phone business and a potential parts supplier. The Korean electronics firm already supplies media processors for Apple's iPod shuffle and flash for the iPod nano.

Comments

  • Reply 1 of 18
    sunilraman Posts: 8,133 member
    1st post!! Woooooooooo 8) ...Now, on a serious note, can someone identify PortalPlayer's role in all this? I.e., where are they used in terms of iPod Video, iPod Nano, iPod Shuffle, iPhone, AppleTV (no PortalPlayer), AirPort Extreme (no PortalPlayer)...??



    Off the top of my head, PortalPlayer is only involved in the current shipping iPod Video 30gb and 80gb. AFAIK, at this stage Xscale and above will be needed to drive the iPhone 2.5G and (eventually 3G global) models.



    Looks like Apple is in the mix hardcore with their sourcing/supply chain with regards to flash [NAND] for the iPod, and a hell of a lot more will be needed for the iPhone. By the end of 2007 the iPod Nano/Video could go to NAND with, say, 4x8gb NAND = 32gb. Maybe even 6x8gb NAND, which = 48gb.



    "gb" of course above refers to Gigabytes
    I generally don't like to write gb ghz and ram in capitals. 8)
  • Reply 2 of 18
    sunilraman Posts: 8,133 member
    Oh boy, when the first iPhones come out, a whole lot of people will instantly be shredding them to pieces, discovering every single component that goes into them. .......Awww yeah. Sacrilegious but necessary.
  • Reply 3 of 18
    Intel would have been better off dumping the Itanic and keeping the Xscale rather than the other way round!





    At least the Xscale has a future, unlike the Itanic, which doesn't even have a past!
  • Reply 4 of 18
    Quote:
    Originally Posted by samurai1999 View Post


    Intel would have been better off dumping the Itanic and keeping the Xscale rather than the other way round!





    At least the Xscale has a future, unlike the Itanic, which doesn't even have a past!



    Because the Xscale does not perform 6 instructions per clock, have more than 128 registers, or run enterprise-class data centers in a 64x dual-core processor system with 1TB of RAM: Superdome.



    Look, everyone likes to bash the IA-64, and it does suck as a PC processor, but it works extremely well as a high-end server processor. HP, Unisys, NEC, Bull, etc. make a lot of money on such servers, and they definitely have their place.
  • Reply 5 of 18
    Hold the applause..



    http://forums.appleinsider.com/showt...89#post1023489



    my 69th post too!
  • Reply 6 of 18
    jeffdm Posts: 12,951 member
    Quote:
    Originally Posted by AppleInsider View Post


    Intel had originally developed the Xscale platform in 2002 while branching out into mobile phones and PDAs,



    I don't think Xscale is a totally homegrown product. Xscale's heritage is from DEC's StrongARM chip technology, which Intel paid to get from DEC. While the first year or two had some rough spots, I think the architecture did fairly well under Intel's stewardship.
  • Reply 7 of 18
    sunilraman Posts: 8,133 member
    Quote:
    Originally Posted by SteveGTA View Post


    Hold the applause..

    http://forums.appleinsider.com/showt...89#post1023489

    my 69th post too!



    Well bloody done. 8) You get a slightly used, squirted-once-in-a-while Zune player.
  • Reply 8 of 18
    sunilraman Posts: 8,133 member
    Really though, this Intel transition and OSX.... wow, you set the foundation and we're continuing to see all the fruits of 30 years of innovation and research and development. And pr0n. Where would pr0n be without modern computing....
  • Reply 9 of 18
    Quote:
    Originally Posted by MonaLisa View Post


    Because the Xscale does not perform 6 instructions per clock, have more than 128 registers, or run enterprise-class data centers in a 64x dual-core processor system with 1TB of RAM: Superdome.



    Look, everyone likes to bash the IA-64, and it does suck as a PC processor, but it works extremely well as a high-end server processor. HP, Unisys, NEC, Bull, etc. make a lot of money on such servers, and they definitely have their place.



    Yes, I'm aware that the Itanium is a server-class processor, and the Xscale is a mobile/low-power oriented processor, based on StrongARM, which is in turn based on the ARM.



    I don't understand why Intel sold off the Xscale business - it seems like a good processor for the mobile market, which as SJ said has a TAM of 1B units/pa.



    Intel's two big disasters in recent years have been 1) the Netburst architecture and 2) the Itanium architecture.

    The Netburst was less of a disaster because they still managed to keep their business going until they realised their mistake and got Conroe/Merom/Woodcrest out.



    Whereas the Itanium is an ongoing disaster that Intel continues to sink $billions into, and that Intel won't abandon due to loss-of-face issues.



    But anyway, who cares - the iPhone's gonna be great!

  • Reply 10 of 18
    Quote:
    Originally Posted by JeffDM View Post


    I don't think Xscale is a totally homegrown product. Xscale's heritage is from DEC's StrongARM chip technology, which Intel paid to get from DEC.



    Don't forget that the ARM architecture was originally developed by Acorn for use in its RISC OS-based computers. The Acorn RISC Machine (as it was originally called) was subsequently developed by the spin-off company Advanced RISC Machines, which in turn licensed the technology to DEC (among others).



    Perhaps the most interesting evolutionary development of the ARM architecture was AMULET, an asynchronous processor developed at the University of Manchester. Something like this has yet to come to market, despite the potential for high performance and low energy consumption. But in the days of clock-frequency wars, it didn't stand a chance from a marketing point of view (an asynchronous processor having no clock!).
  • Reply 11 of 18
    From a performance standpoint the XScale is quite good, but from a low-power standpoint, not really. I wonder if they're using the 270 or 290....
  • Reply 12 of 18
    benton Posts: 161 member
    Steve Jobs will be informing Apple partners to shut up and pay up.
  • Reply 13 of 18
    Quote:
    Originally Posted by samurai1999 View Post


    Whereas the Itanium is an ongoing disaster that Intel continues to sink $billions into, and that Intel won't abandon due to loss-of-face issues.



    Way back when Apple announced the Intel switch and I read up on all the future chips, Intel was bringing the Itanium in line with the Xeon chips. I _think_ it was going to be with the Xeon chips that are now in MacPros? (but may be the next generation).

    (edit: next generation xeons: http://www.theregister.co.uk/2004/07...anium_chipset/ )



    The idea was that the Itanium & Xeon's CPU sockets and supporting chips would be pin compatible - you can take out the Xeon, and put in an Itanium, without any other design changes. It would seem to be an excellent idea for Intel - but I don't know what became of it.



    Intel is smart to keep the Itanium chip alive - having multiple similar chip architectures saved them from their Netburst mistakes, and having another line may do the same in a few years (and may be usable now?)
  • Reply 14 of 18
    zandros Posts: 537 member
    Quote:
    Originally Posted by SteveGTA View Post


    Hold the applause..



    http://forums.appleinsider.com/showt...89#post1023489



    my 69th post too!



    Sorry to ruin your parade, but it looks like I was faster.



    In any case, the EPIC architecture is great, in my opinion. It just does not run Counter-Strike that fast.



    /Adrian
  • Reply 15 of 18
    jerk Posts: 8 member
    Quote:
    Originally Posted by sunilraman View Post




    "gb" of course above refers to Gigabytes
    I generally don't like to write gb ghz and ram in capitals. 8)



    Well, you'd then better take off those silly glasses to see things more clearly. This would give you problems should you ever enter the world of knowledge and science.



    Those characters aren't just random characters, you know - they are units and prefixes. Different casings mean different units and prefixes.



    Examples:

    m - milli

    M - mega

    b - bit

    B - byte



    So "mb" means millibit, which is quite an uncommon unit, and "MB" means megabyte. They aren't the same, you see.



    Please stop abusing the units and prefixes. It isn't cool. It is just very ignorant and plain stupid.
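    The casing rules described above can be sketched as a tiny lookup table. This is a hypothetical illustration (the dictionaries and function name are invented for this example, not taken from any standard library):

```python
# Hypothetical sketch of case-sensitive SI prefixes and units:
# "mb" and "MB" name wildly different quantities.
PREFIXES = {"m": 1e-3, "k": 1e3, "M": 1e6, "G": 1e9}
UNITS = {"b": 1, "B": 8}  # bits per unit: bit vs. byte

def to_bits(value: float, prefix: str, unit: str) -> float:
    """Interpret a quantity like (512, 'M', 'b') as 512 megabits."""
    return value * PREFIXES[prefix] * UNITS[unit]

print(to_bits(1, "M", "B"))  # 1 MB -> 8,000,000 bits
print(to_bits(1, "m", "b"))  # 1 mb -> 0.001 bits (a millibit)
```

    As the post says, swapping the case of either character changes the result by many orders of magnitude.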
  • Reply 16 of 18
    zandros Posts: 537 member
    Quote:
    Originally Posted by jerk View Post


    Well, you'd then better take off those silly glasses to see things more clearly. This would give you problems should you ever enter the world of knowledge and science.



    Those characters aren't just random characters, you know - they are units and prefixes. Different casings mean different units and prefixes.



    And to make things even harder, flash memory capacity is usually written in gigabit, so it wasn't really clear at all. 8)
  • Reply 17 of 18
    jeffdm Posts: 12,951 member
    Quote:
    Originally Posted by Zandros View Post


    And to make things even harder, flash memory capacity is usually written in gigabit, so it wasn't really clear at all. 8)



    The individual chips of most memory chip types are rated in bits. That is because memory can be arranged in different ways, parallel and serial, and the board designer might install extra memory chips to handle parity or ECC information. Once the chips have been soldered to a memory module, the rating is usually in bytes. Some types of memory chips are serial too, so designers might need to gang eight or more in parallel to get what they want. Serialized lines are usually rated in bytes.



    Most people don't handle the bare chips themselves; at most, they're in soldered modules. It's only a problem when geeks and their web sites see specs of the bare chips. They like to read the big number and be astonished when it's not really that big.
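    The bit-to-byte bookkeeping described above can be sketched roughly as follows (the function and the chip counts are illustrative assumptions, not figures from the post):

```python
# Illustrative sketch: individual chips are rated in bits; a module
# built by ganging several of them in parallel is rated in bytes.
def module_bytes(chips: int, megabits_per_chip: int) -> int:
    """Usable bytes from `chips` bit-rated parts ganged in parallel."""
    total_bits = chips * megabits_per_chip * 1024 * 1024
    return total_bits // 8

# Eight 512-megabit chips ganged together -> a 512 MiB module.
print(module_bytes(8, 512) // (1024 * 1024))  # 512
```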
  • Reply 18 of 18
    Quote:
    Originally Posted by JeffDM View Post


    The individual chips of most memory chip types are rated in bits. That is because memory can be arranged in different ways, parallel and serial, and the board designer might install extra memory chips to handle parity or ECC information. Once the chips have been soldered to a memory module, the rating is usually in bytes. Some types of memory chips are serial too, so designers might need to gang eight or more in parallel to get what they want. Serialized lines are usually rated in bytes.



    Most people don't handle the bare chips themselves; at most, they're in soldered modules. It's only a problem when geeks and their web sites see specs of the bare chips. They like to read the big number and be astonished when it's not really that big.



    And let's not even start on the issue that most raw chips use the term "megabit/gigabit" to describe something entirely alien to the metric system of prefixes. 512 megabit is accepted to be 512 * 1024 * 1024 bits, or 67108864 bytes. And we'd traditionally be inclined to call that 64 MB.



    But the good folks in charge of the metric system have already claimed the M and G prefixes to mean 10^6 and 10^9 respectively. So a properly-labelled 512 megabit chip would contain 512 * 1000 * 1000 bits, or 64000000 bytes. We would traditionally call that just over 61 MB, but technically, the designation 64 MB is correct.



    It's fairly common nowadays for hard drive, storage card, etc. manufacturers to use the technically correct base-10 definitions of MB, GB, etc.



    However, I'd be shocked to find any bare silicon vendors which actually used anything other than the base-2 definitions.



    Maybe we should follow the ISO, IEEE, etc., in using the correct units for all these base-2 numbers.



    1024 bytes is not a kilobyte (kB), it's a kibibyte (KiB).

    1024 kibibytes is not a megabyte (MB), it's a mebibyte (MiB).

    1024 mebibytes is not a gigabyte, it's a gibibyte (GiB).
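    The arithmetic above checks out; here it is as a worked restatement of the post's own numbers, nothing more:

```python
# Worked check of the two readings of "512 megabit" discussed above.
bits_binary = 512 * 1024 * 1024    # chip-vendor convention (base-2)
bits_decimal = 512 * 1000 * 1000   # strict SI reading (base-10)

print(bits_binary // 8)   # 67108864 bytes, i.e. 64 MiB
print(bits_decimal // 8)  # 64000000 bytes, i.e. exactly 64 MB in base-10
print(round((bits_decimal // 8) / 1024**2, 2))  # 61.04 MiB
```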