ARM seen challenging Intel's notebook chip dominance by 2013

Comments

  • Reply 61 of 91
    Quote:
    Originally Posted by Snowdog65 View Post


    It is a very solid distinction. Apple actually had a transition strategy to move all future machines to an architecture that offered equivalent or greater performance.



    Distinction 1: It was actually a clear strategy. Apple won't be selling OS X Intel and OS X ARM computers at the same time. That would be a confused muddle. Apple is all about streamlined product lines.



    Distinction 2: Intel was at least equal in performance. ARM is behind by a factor of ten, so it is completely unsuitable to actually replace Intel.



    ARM doesn't provide the power to replace x86, and it will be a confused mess coexisting.



    You're wrong. Apple already sells iOS and Mac OS X at the same time, and no one confuses the products. As long as Apple makes a clear visual distinction in its products, no one will confuse them. ARM, as I wrote before, will be (much) better than the current MacBook Air performance-wise because of the blazing GPU that iOS offloads most of its tasks to. (Read my previous post.)



    J.
  • Reply 62 of 91
    jnjnjn Posts: 588member
    Quote:
    Originally Posted by Smalltalk-80 View Post


    I think it is pretty obvious to most people that the GPU as a separate unit will disappear. The advantages, such as bandwidth, latency, cost, coherency in architecture, etc., are numerous. ...



    GPUs differ quite a bit from CPUs due to the specific use of the cores.

    Merging the different design goals into one unified core is very difficult to do and involves all kinds of trade-offs.

    I think the current (ARM) setup of both types of cores on one chip is probably the best way to go. This already almost completely addresses the problems you mention. Maybe a different layout of the cores could make a difference but it seems to me that grouping similar cores together makes a lot of sense.

    The 'future' of processors will be multiple specialized and non-specialized cores on one chip.



    J.
  • Reply 63 of 91
    Quote:
    Originally Posted by Smalltalk-80 View Post


    This is quite analogous to the discrepancy between Intel and ARM today.

    Intel has huge, kludgy, bolt-together legacy tech that is only held up by Moore's law and the cache set-up it allows.

    ARM, on the other hand, has tech with quite different roots, which has been forced to stay lean and clean.

    I'm far from saying that ARM's approach is the be-all end-all; it's not. But it IS better than Intel's in the most important respects.



    This is an ignorant but often-repeated opinion, dating from the 1980s RISC/CISC wars, held by people who understood little.



    If you actually research what has been said by actual CPU architects like Mitch Alsup and Andy Glew, the x86 overhead vs. ARM is about 10-15% for simple in-order ARM chips like the A8, but once you get to more complex chips like the A15, that drops to only about a 5% penalty for x86 overhead.



    There are three factors at work here.



    1) Overhead penalty from x86 complexity: only about 5% going forward (as per above).



    2) Scaling. Even Intel's own low-power/low-performance parts have several times the performance/watt of their more powerful CPUs. It is simply much easier to get high performance/watt on low-performing CPUs. An ARM CPU scaling up would likewise lose performance/watt many times over.



    3) Execution. ARM has been concentrating for decades on handhelds (since the Newton) while Intel concentrated on the desktop performance crown. ARM's current designs are refined SoCs aimed perfectly at handhelds; Intel is just barely getting to one-chip SoCs this year.



    But those are actually in reverse order of current importance. The current ARM advantage comes primarily from 3), Intel's weak execution from ignoring this segment; secondarily from 2), the scaling mismatch; and only a minor, and shrinking, amount from 1), the so-called x86 overhead penalty.



    An analogy for where things are now in mobile: it is kind of like where Intel was when they realized that NetBurst was a mistake and needed to commit to a new direction.



    In a couple more cycles, scaling will be matched and execution from Intel will be much better, such that Intel's mobile CPUs will be extremely competitive on perf/watt.



    The x86 overhead penalty is really a red herring compared to the real factors involved.
  • Reply 64 of 91
    Quote:
    Originally Posted by Snowdog65 View Post


    This is an ignorant but often-repeated opinion, dating from the 1980s RISC/CISC wars, held by people who understood little.



    If you actually research what has been said by actual CPU architects like Mitch Alsup and Andy Glew, the x86 overhead vs. ARM is about 10-15% for simple in-order ARM chips like the A8, but once you get to more complex chips like the A15, that drops to only about a 5% penalty for x86 overhead.



    There are three factors at work here.



    1) Overhead penalty from x86 complexity: only about 5% going forward (as per above).



    2) Scaling. Even Intel's own low-power/low-performance parts have several times the performance/watt of their more powerful CPUs. It is simply much easier to get high performance/watt on low-performing CPUs. An ARM CPU scaling up would likewise lose performance/watt many times over.



    3) Execution. ARM has been concentrating for decades on handhelds (since the Newton) while Intel concentrated on the desktop performance crown. ARM's current designs are refined SoCs aimed perfectly at handhelds; Intel is just barely getting to one-chip SoCs this year.



    But those are actually in reverse order of current importance. The current ARM advantage comes primarily from 3), Intel's weak execution from ignoring this segment; secondarily from 2), the scaling mismatch; and only a minor, and shrinking, amount from 1), the so-called x86 overhead penalty.



    An analogy for where things are now in mobile: it is kind of like where Intel was when they realized that NetBurst was a mistake and needed to commit to a new direction.



    In a couple more cycles, scaling will be matched and execution from Intel will be much better, such that Intel's mobile CPUs will be extremely competitive on perf/watt.



    The x86 overhead penalty is really a red herring compared to the real factors involved.



    I would like to see the source for your first claim. I suspect we are talking about specific cases here, as it is quite pointless to take an average, since there will be so much spread in the data.

    And what is "overhead" referring to precisely? Is it power consumption and thermal profile, or power per square inch of die space, or everything summed?



    2. Not by as much as Intel's equivalent size/price.

    It's hard to scale stuff that works well at one level, correct. But that is exactly the trouble Intel is in; ARM to a much lesser extent.

    Much of the power consumption overhead in Intel processors stems from their reliance on huge caches. They take up a lot of die space that could be used for scratchpads, SIMD units, etc. And they burn a lot of power. ARM CPUs are by heritage not reliant on large L2 and L3 caches for speed.



    3. I don't see your point there. Of course they are better at SoCs, and so what? That's a merit. We will just have to see if Intel can do anything that comes close. If it were straightforward and easy, they would have done it years ago.
  • Reply 65 of 91
    Quote:
    Originally Posted by Smalltalk-80 View Post


    I think it is pretty obvious to most people that the GPU as a separate unit will disappear. The advantages, such as bandwidth, latency, cost, coherency in architecture, etc., are numerous. I.e., Sutherland's rule of the hardware wheel of reincarnation proves true once again.



    Intel has huge, kludgy, bolt-together legacy tech that is only held up by Moore's law and the cache set-up it allows.

    ARM, on the other hand, has tech with quite different roots, which has been forced to stay lean and clean.



    AMD today has powerful integrated graphics. This is hypothetical, but if AMD focused on lowering power rather than raising performance on their APU chips, could this not also become a much larger threat to Intel?



    As most people have pointed out, Core 2 Duos and i3/i5/i7 cores are fast enough for most work. Meanwhile, using the APU would allow for more OpenCL (my understanding) and lower power consumption than today's MBPs have.



    Imagine keeping the 13" MBP the same, but with a "real" (still integrated) graphics card.
  • Reply 66 of 91
    wizard69 Posts: 12,902member
    Quote:
    Originally Posted by Smalltalk-80 View Post


    I would like to see the source for your first claim. I suspect we are talking about specific cases here, as it is quite pointless to take an average, since there will be so much spread in the data.

    And what is "overhead" referring to precisely? Is it power consumption and thermal profile, or power per square inch of die space, or everything summed?



    That is a good question! Even so, I think his numbers are either bogus or misleading. It is well known that Intel hardware can execute more instructions per cycle. One has to remember ARM is fairly simple in-order hardware.

    Quote:

    2. Not by as much as Intel's equivalent size/price.

    It's hard to scale stuff that works well at one level, correct. But that is exactly the trouble Intel is in; ARM to a much lesser extent.

    Much of the power consumption overhead in Intel processors stems from their reliance on huge caches. They take up a lot of die space that could be used for scratchpads, SIMD units, etc. And they burn a lot of power. ARM CPUs are by heritage not reliant on large L2 and L3 caches for speed.



    I don't think you understand computer architecture. Caches exist to buffer the flow of data to much slower off-chip devices (in most cases memory). The cost of going off-chip is pretty significant for any processor. That includes ARM hardware.

    Quote:

    3. I don't see your point there. Of course they are better at SoCs, and so what? That's a merit. We will just have to see if Intel can do anything that comes close. If it were straightforward and easy, they would have done it years ago.



  • Reply 67 of 91
    Quote:
    Originally Posted by wizard69 View Post


    That is a good question! Even so, I think his numbers are either bogus or misleading. It is well known that Intel hardware can execute more instructions per cycle. One has to remember ARM is fairly simple in-order hardware.



    I don't think you understand computer architecture. Caches exist to buffer the flow of data to much slower off-chip devices (in most cases memory). The cost of going off-chip is pretty significant for any processor. That includes ARM hardware.



    Yes, more instructions per cycle, but also a much larger processor. With ARM you have the choice of a cheap power-saving processor, a larger ASIC, many more of the same cores on the same die, a huge chunk of eDRAM, or a little of each.



    Caches are a wasteful, brute-force, general way to accelerate old/legacy or multi-platform software.

    They use a lot of power and take up a lot of space.

    There are many other, better ways to alleviate off-chip latency, one of the simplest and most immediately applicable to RISC architectures being eDRAM.
  • Reply 68 of 91
    jnjnjn Posts: 588member
    Quote:
    Originally Posted by Snowdog65 View Post


    This is an ignorant but often-repeated opinion, dating from the 1980s RISC/CISC wars, held by people who understood little.



    If you actually research what has been said by actual CPU architects like Mitch Alsup and Andy Glew, the x86 overhead vs. ARM is about 10-15% for simple in-order ARM chips like the A8, but once you get to more complex chips like the A15, that drops to only about a 5% penalty for x86 overhead. ...



    You seem to forget that ARM chips are built at a 65nm feature size, versus Intel's 30nm feature size. This means Intel's problems scale up by a factor of four in energy efficiency and performance. If you had read my previous posts, you would also understand that ARM SoCs are actually on par, performance-wise, with the current Intel processors used, for example, in the MacBook Air. Next year ARM SoCs will blow the Intel processors away.



    J.
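The feature-size argument above leans on a rough rule of thumb: transistor density scales roughly with the square of the feature size, which is where "factor of four" style claims in this thread come from. A quick sketch (illustrative arithmetic only, not foundry data):

```python
# Rough rule of thumb: area per transistor scales with the square of the
# feature size, so density improves by (old/new)^2 between process nodes.
def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

print(round(density_gain(65, 32), 1))  # 65nm -> 32nm: about 4.1x
print(round(density_gain(45, 32), 1))  # 45nm -> 32nm: about 2.0x
```

Real node-to-node gains depend on the specific processes and rarely match the idealized square law exactly.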
  • Reply 69 of 91
    Quote:
    Originally Posted by jragosta View Post


    Why couldn't iOS work in a laptop setting?



    Picture an iPad Pro which has iOS and a touch screen and a keyboard. Just why wouldn't that work?



    It might be very convenient for email and some other applications.



    It seems pretty clear that touch screens are not a necessary feature for iOS; rather, an ARM processor is -- e.g., the AppleTV is not touch-screen, but it is iOS running on ARM.



    There is absolutely no reason that iOS couldn't work in a laptop setting, but there is absolutely no reason it has to be a touch screen laptop. On the other hand, it might be interesting to have an iOS device in an iMac-like form factor with keyboard and track pad, a couple of USB ports (for connecting cameras, etc)... and where the screen undocks and becomes an iPad.
  • Reply 70 of 91
    Quote:
    Originally Posted by jnjnjn View Post


    You seem to forget that ARM chips are built at a 65nm feature size, versus Intel's 30nm feature size. This means Intel's problems scale up by a factor of four in energy efficiency and performance. If you had read my previous posts, you would also understand that ARM SoCs are actually on par, performance-wise, with the current Intel processors used, for example, in the MacBook Air. Next year ARM SoCs will blow the Intel processors away.



    Don't bring a feather to a gun fight.



    Intel's actual "SoC" product on the market is a two-chip 45nm/65nm design.



    Current ARM SoCs for over a year now have been one-chip 40nm designs.



    The i5 processors in the MBA are beyond an order of magnitude more powerful than any available ARM SoC and not directly comparable.
  • Reply 71 of 91
    Quote:
    Originally Posted by Smalltalk-80 View Post


    I would like to see the source for your first claim. I suspect we are talking about specific cases here, as it is quite pointless to take an average, since there will be so much spread in the data.

    And what is "overhead" referring to precisely? Is it power consumption and thermal profile, or power per square inch of die space, or everything summed?



    You can start with this thread containing some info from Alsup/Glew. After that, try a search engine:

    http://www.groupsrv.com/computers/ab...5-0-asc-0.html



    Thinking the ISA (Instruction Set Architecture) is responsible for big swings in perf/watt is just naive.



    Quote:

    Much of the power consumption overhead in Intel processors stems from their reliance on huge caches. They take up a lot of die space that could be used for scratchpads, SIMD units, etc. And they burn a lot of power. ARM CPUs are by heritage not reliant on large L2 and L3 caches for speed.



    You are confused about why this is the case; it is part of scaling, not ISA. If you run a slow CPU, then it can easily be kept fed with instruction fetches from memory.



    But when you run a very fast CPU, memory isn't fast enough to keep up, and you use those big caches.



    If you built a very fast ARM CPU to challenge x86 on the desktop, it would likewise have to use big caches to keep it running at full clip.



    This is why you can't compare perf/watt across any performance gulf. It is meaningless.



    If you built a high-performance ARM part, it would end up looking much like AMD/Intel designs: running on higher-power silicon (lower perf/watt), using huge caches (lower perf/watt), higher voltage (lower perf/watt), higher GHz (lower perf/watt), all driving up power use at a faster rate than performance.



    Everything you do to push toward desktop performance costs you perf/watt.





    Quote:

    3. I don't see your point there. Of course they are better at SoCs, and so what? That's a merit. We will just have to see if Intel can do anything that comes close. If it were straightforward and easy, they would have done it years ago.



    Yes, this is a merit, but it isn't related to ISA. ARM has no magic pixie dust; it is just that Intel has never really given this a serious effort. Its Atom SoCs are like the neglected stepchild of the lineup. Intel is widely considered the process king, yet its Atom "SoC" in 2011 has been on a worse process than any ARM competitor. While Intel's desktop parts have been at 32nm for over a year, the Atom has been an archaic multi-chip 45nm/65nm design.



    Do you really think that is something Intel can't remedy quickly? The real question is whether Intel has woken up and is ready to get serious about making a really competitive SoC.



    I think they are getting serious now. This is why I think this is much like the NetBurst (AKA Pentium 4) wake-up call.



    At the same time that Intel starts building true one-chip SoCs and puts them on its leading silicon process, ARM designs will be getting more complex to try to take on laptops. They will likely end up very close in perf/watt in a similar performance envelope, and in that case both will essentially keep their legacy OS with them.
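The scaling trade-off described in this reply can be sketched with the classic dynamic-power approximation, power ~ C * V^2 * f, assuming (optimistically) that performance grows linearly with clock. The operating points below are illustrative assumptions, not real chip specs:

```python
# Simplified dynamic-power model: power ~ C * V^2 * f, perf ~ f.
# The operating points below are illustrative, not real chip data.
def perf_per_watt(freq_ghz, volts, cap=1.0):
    perf = freq_ghz                      # optimistic: perf scales with clock
    power = cap * volts ** 2 * freq_ghz  # classic CMOS dynamic-power formula
    return perf / power                  # simplifies to 1 / (C * V^2)

mobile = perf_per_watt(1.0, 0.9)   # low-voltage, low-clock point
desktop = perf_per_watt(3.0, 1.2)  # high-voltage, high-clock point
print(mobile > desktop)  # True: the faster point has worse perf/watt
```

Because higher clocks usually require higher voltage, power rises faster than performance, which is the "everything costs you perf/watt" point made above.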
  • Reply 72 of 91
    jnjnjn Posts: 588member
    Quote:
    Originally Posted by Snowdog65 View Post


    Don't bring a feather to a gun fight.



    Intel's actual "SoC" product on the market is a two-chip 45nm/65nm design.



    Current ARM SoCs for over a year now have been one-chip 40nm designs.



    The i5 processors in the MBA are beyond an order of magnitude more powerful than any available ARM SoC and not directly comparable.



    You're not well informed. According to Intel's own specs, the current (2011) i5 CPU and GPU are on 32nm.

    Look it up. Wikipedia confirms this but misses the latest info regarding the GPU.

    It could be of course that Intel is lying ...

    You're right about the ARM SoCs, I missed that. There's still a big efficiency and speed difference between 45nm and 32nm though.

    But you're wrong again about 'the order of magnitude more powerful'; it's a factor of 5 (20 GFLOPS versus 100 GFLOPS), and that isn't an order of magnitude.

    The current PowerVR Series 6 GPU (possibly used in the A6 next year) is a factor of 10 faster and therefore twice as fast as the current i5 GPU core.

    So I would love to bring the ARM SoC to the gun fight, and I will win.



    J.
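The factor arithmetic in this reply is easy to check using the figures claimed in the post (the poster's numbers, not verified benchmarks):

```python
# Figures as claimed in the thread; treat them as hypotheticals.
a5_gpu_gflops = 20.0                  # PowerVR GPU in the A5 (claimed)
i5_gpu_gflops = 100.0                 # Intel i5 integrated GPU (claimed)
series6_gflops = a5_gpu_gflops * 10   # "factor 10 faster" Series 6 claim

print(i5_gpu_gflops / a5_gpu_gflops)   # 5.0 -> a factor of 5, not 10x
print(series6_gflops / i5_gpu_gflops)  # 2.0 -> twice the claimed i5 GPU
```

Whether those claimed GFLOPS figures are comparable at all (peak vs. sustained, different precisions) is exactly what the rest of the thread disputes.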
  • Reply 73 of 91
    Quote:
    Originally Posted by jnjnjn View Post


    You're not well informed. According to Intel's own specs, the current (2011) i5 CPU and GPU are on 32nm.



    The i5 is a mainstream desktop/laptop part. It isn't an SoC.



    Intel's "SoC" is Atom, and it is a dual-chip 65nm/45nm design, just as I said.



    Also, I don't know where you are getting GFLOPS numbers.



    The best ARM numbers I have seen are about 50 MFLOPS; the Intel i5 is about 50 GFLOPS.



    That is megaflops vs. gigaflops: 1000 times better on the i5, so three orders of magnitude. You simply can't compare.
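An "order of magnitude" is a power of ten, so the claimed gap works out as follows (using the figures asserted in this reply, not measured data):

```python
import math

# Claimed FPU throughput figures from the post, not verified measurements.
arm_flops = 50e6   # ~50 MFLOPS claimed for an ARM FPU
i5_flops = 50e9    # ~50 GFLOPS claimed for an Intel i5

ratio = i5_flops / arm_flops
print(ratio)              # 1000.0
print(math.log10(ratio))  # ~3, i.e. three orders of magnitude
```

The arithmetic holds given the inputs; the dispute in the thread is over whether 50 MFLOPS is a fair figure for contemporary ARM parts at all.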
  • Reply 74 of 91
    majjo Posts: 574member
    Quote:
    Originally Posted by jnjnjn View Post


    You're not well informed. According to Intel's own specs, the current (2011) i5 CPU and GPU are on 32nm.

    Look it up. Wikipedia confirms this but misses the latest info regarding the GPU.

    It could be of course that Intel is lying ...

    You're right about the ARM SoCs, I missed that. There's still a big efficiency and speed difference between 45nm and 32nm though.

    But you're wrong again about 'the order of magnitude more powerful'; it's a factor of 5 (20 GFLOPS versus 100 GFLOPS), and that isn't an order of magnitude.

    The current PowerVR Series 6 GPU (possibly used in the A6 next year) is a factor of 10 faster and therefore twice as fast as the current i5 GPU core.

    So I would love to bring the ARM SoC to the gun fight, and I will win.



    J.



    We're talking about Intel's SoC, which is Atom, not Core i5.

    Atom is currently on a five-year cadence on a 45nm process. It's not so much that Intel is unable to compete as it is Intel waking the heck up.

    And Intel IS waking up. They are transitioning Atom to the tick-tock cadence with Silvermont and fast-tracking it to current process tech. Unfortunately, the transition won't be complete until 2013, and who knows what the landscape will look like then.



    As for performance, I don't know where you're getting 20 GFLOPS from, but the last benchmarks I've seen pegged ARM at MFLOPS. Granted, that was a while ago on the A8 architecture:

    http://www.brightsideofnews.com/news...ersus-x86.aspx
  • Reply 75 of 91
    hmm Posts: 3,405member
    Quote:
    Originally Posted by majjo View Post


    We're talking about Intel's SoC, which is Atom, not Core i5.

    Atom is currently on a five-year cadence on a 45nm process. It's not so much that Intel is unable to compete as it is Intel waking the heck up.

    And Intel IS waking up. They are transitioning Atom to the tick-tock cadence with Silvermont and fast-tracking it to current process tech. Unfortunately, the transition won't be complete until 2013, and who knows what the landscape will look like then.




    Are they transitioning it to the same release cycle as their other CPUs, as in alternating a new architecture with a die shrink year after year? I'm asking because Haswell = 2013 at 22nm, with a die shrink to 14nm the following year. I do hope they examine the issue of component reliability as they continue to shrink things. Expensive, unreliable components suck if everything is completely integrated.
  • Reply 76 of 91
    jnjnjn Posts: 588member
    Quote:
    Originally Posted by Snowdog65 View Post


    The i5 is a mainstream desktop/laptop part. It isn't an SoC.



    Intel's "SoC" is Atom, and it is a dual-chip 65nm/45nm design, just as I said.



    Also, I don't know where you are getting GFLOPS numbers.



    The best ARM numbers I have seen are about 50 MFLOPS; the Intel i5 is about 50 GFLOPS.



    That is megaflops vs. gigaflops: 1000 times better on the i5, so three orders of magnitude. You simply can't compare.



    The i5 used in the MacBook Air has an integrated GPU, so that's an SoC.

    And we are comparing the integrated GPU from ARM with the integrated GPU of Intel (i5) CPUs.



    If you look at the GPU performance of the PowerVR GPU on the A5 chip, it's 5 times slower than the current integrated GPU of Intel's i5.

    But the latest PowerVR GPU is 10 times faster, and thereby faster than the current integrated GPU of the i5.



    The point is that CPU performance isn't the most important factor, especially for iOS and Mac OS X, because they offload most of the work to the GPU and can use OpenCL to do all kinds of ultra-fast calculations.



    So, readjust your focus. Look at the GPU.
  • Reply 77 of 91
    jnjnjn Posts: 588member
    Quote:
    Originally Posted by majjo View Post


    We're talking about Intel's SoC, which is Atom, not Core i5.

    Atom is currently on a five-year cadence on a 45nm process. It's not so much that Intel is unable to compete as it is Intel waking the heck up.

    And Intel IS waking up. They are transitioning Atom to the tick-tock cadence with Silvermont and fast-tracking it to current process tech. Unfortunately, the transition won't be complete until 2013, and who knows what the landscape will look like then.



    As for performance, I don't know where you're getting 20 GFLOPS from, but the last benchmarks I've seen pegged ARM at MFLOPS. Granted, that was a while ago on the A8 architecture:

    http://www.brightsideofnews.com/news...ersus-x86.aspx



    I am looking at the integrated GPU, not the CPU.

    See my previous post.



    J.
  • Reply 78 of 91
    Quote:
    Originally Posted by jnjnjn View Post


    The i5 used in the MacBook Air has an integrated GPU, so that's an SoC.

    And we are comparing the integrated GPU from ARM with the integrated GPU of Intel (i5) CPUs.




    You really don't have the faintest notion of what you are talking about. The i5 isn't an SoC; integrated graphics don't make it an SoC.



    The GPU isn't a complete FPU solution. BTW, the latest Atom SoCs will be using PowerVR GPUs, so they should be equal on GPU:

    http://pcper.com/news/Processors/Int...R-GPUs-Planned



    As far as actual CPUs go, the FPU inside ARM chips is laughable, several orders of magnitude behind Intel's.



    Really, you are just spouting off a bunch of incorrect, uninformed opinions, and you have been proven wrong at every turn.



    You don't actually know what an SoC is.

    You made big claims about Intel SoCs being ahead on process, yet you were wrong about the process in use for both Intel and ARM and actually had it backwards.

    You try to pretend GPUs compensate for weak ARM CPUs, even though they are incomplete for FPU duty.



    At every turn you are high on opinions, while being completely wrong on facts.



    If this is the case for a MacBook Air with ARM, it has ZERO probability.
  • Reply 79 of 91
    jnjnjn Posts: 588member
    Quote:
    Originally Posted by Snowdog65 View Post


    You really don't have the faintest notion of what you are talking about. The i5 isn't an SoC; integrated graphics don't make it an SoC.



    The GPU isn't a complete FPU solution. BTW, the latest Atom SoCs will be using PowerVR GPUs, so they should be equal on GPU:

    http://pcper.com/news/Processors/Int...R-GPUs-Planned



    As far as actual CPUs go, the FPU inside ARM chips is laughable, several orders of magnitude behind Intel's.



    Really, you are just spouting off a bunch of incorrect, uninformed opinions, and you have been proven wrong at every turn.



    You don't actually know what an SoC is.

    You made big claims about Intel SoCs being ahead on process, yet you were wrong about the process in use for both Intel and ARM and actually had it backwards.

    You try to pretend GPUs compensate for weak ARM CPUs, even though they are incomplete for FPU duty.



    At every turn you are high on opinions, while being completely wrong on facts.



    If this is the case for a MacBook Air with ARM, it has ZERO probability.



    A system on a chip (SoC) isn't an absolute definition; it could have more or fewer building blocks and still be called a system on a chip (as you should know).

    So it isn't a problem to call a GPU/CPU combination an SoC.

    But I'm not talking about definitions; if you don't call it an SoC, that's fine by me.



    I'm interested in performance, and as I noted before, it's interesting to see that an ARM CPU/GPU combination will be as good as an Intel i5 CPU/GPU combination in the near future.

    And that's what counts from a consumer point of view: real-world performance.



    I was right about the feature size: Intel's i5 is on 32nm (both GPU and CPU) and ARM is on 45nm (I already said I made a mistake on this one), and that's a big difference. (I assumed the difference was 32nm versus 65nm and it turned out to be 32nm versus 45nm, and the point was that there was a feature-size difference.)



    But I see you don't dispute my claims about GPU performance, and you even claim that PowerVR GPUs will be included in future Atom SoCs (was it an SoC?), which will make them comparable to ARM SoCs, so why use an Atom with an order of magnitude worse battery drain?



    You have to learn to discuss a topic without losing your temper. Your social skills are lacking at least as much as my "uninformed opinions".



    J.
  • Reply 80 of 91
    Quote:
    Originally Posted by jnjnjn View Post


    A system on a chip (SoC) isn't an absolute definition; it could have more or fewer building blocks and still be called a system on a chip (as you should know).

    So it isn't a problem to call a GPU/CPU combination an SoC.

    But I'm not talking about definitions; if you don't call it an SoC, that's fine by me.





    No one considers the i5 an SoC. Either you are trying to lie about it to make your case, or you are a fool. I haven't decided which.



    But I don't suffer liars or fools mildly.



    The i5 isn't an SoC, and for you to stack your claims on that makes everything wrong from that point forward.



    Intel has barely entered the SoC game, and all their SoCs are Atom-based. Every Atom SoC in products this year was a combo 45nm/65nm part, far from Intel's state of the art.



    But they will be moving their SoCs onto their leading process going forward: 32nm in 2012 and 22nm in 2013.



    Intel is behind in SoCs, but it has had its P4 moment and is waking up.



    By 2013, Intel's 22nm SoCs should be very competitive, the same timeframe as the claim that ARM will be challenging in notebooks.



    ARM CPUs still lack the general-purpose computing power to be taken as a competitor to Intel laptop chips, which have an order of magnitude more processing power.



    A GPU does not make up for the weaknesses in the ARM CPU. GPUs can only help in specific tasks, and they need custom code. It is not a general-purpose computing solution for a weak CPU.