Apple's new MacBook Air dubbed world's thinnest notebook


Comments

  • Reply 361 of 399
    tenobell Posts: 7,014 member
    Quote:

    In all fairness, the MBA does have a more powerful CPU and integrated video card than these ultra-portables, but I figure it should give you about an extra hour.



    It is interesting that the MBA has a more powerful CPU and GPU than these other ultra-portables and is cheaper than many of them, yet so many people still complain.
  • Reply 362 of 399
    Quote:
    Originally Posted by vinea View Post


    Er...I'd be using a RAID array on a NAS even with a Pro. Oh wait, I do since we didn't go FC.



    Dude...the $800 Precision T3400 is a 1.8GHz C2D. Yah think the 2.8GHz quad Mac Pro might be a tad faster? To get to the quad at 2.66GHz with 2GB, the cost goes up to $1600, with no option to go octo...although it's not confirmed that you can upgrade the single-CPU Pro, is it?



    I wouldn't be giving my guys 1.8GHz C2Ds and thinking I'm being fiscally responsible. I'd be thinking I was being penny-wise and pound-foolish.



    And your T3400 specs are for the 3.16GHz Wolfdale machine, and it's not $800. Heck, it's not even on Dell's website yet. So you're comparing a 2006 Yonah iMac to a 2008 Wolfdale Dell.



    Simply genius.



    I posted the wrong link, my bad. This is the right one for the Dell Precision T3400 E6850: http://www.spec.org/cpu2006/results/...1210-02856.pdf. That one is $1228 with display. Still a lot of power for the money. There are about a dozen links for Dell models using the T3400 model number; I got this link from Dell's website, but it's easy to get confused if you don't pay attention to the E#'s.



    At a score of 33.7, it's still not that far off. I gave the iMac link as a comparison because it's the only iMac that has published SPEC results. I know it's not representative of what they are now, which is why I added the Xserve, with a similar-sized system bus, memory configuration, and enabled processors. Like I said, the quad-core came in at 39.9. So let's see: Dell T3400 at $1228, or Mac Pro at $2299 with no monitor? Not a hard purchase decision for the minimal performance improvement that the Mac Pro would provide.
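
    To put rough numbers on the price/performance argument above, here is a minimal sketch in Python using only the scores and prices as quoted in this post (monitor cost ignored, as in the post); these are the cited figures, not fresh measurements:

        # SPEC points per dollar, using the figures quoted above.
        systems = {
            "Dell T3400": (33.7, 1228),    # SPEC CPU2006 score, price in USD as cited
            "Mac Pro quad": (39.9, 2299),
        }
        for name, (score, price) in systems.items():
            print(f"{name}: {score / price * 1000:.1f} SPEC points per $1,000")
        # Dell T3400: 27.4 SPEC points per $1,000
        # Mac Pro quad: 17.4 SPEC points per $1,000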
  • Reply 363 of 399
    solipsism Posts: 25,726 member
    Quote:
    Originally Posted by TenoBell View Post


    It is interesting that the MBA has a more powerful CPU and GPU than these other ultra-portables and is cheaper than many of them, yet so many people still complain.



    And a glass, LED-backlit display, a backlit full-sized keyboard, and a full aluminium case, instead of a cheap plastic display, a cheap mini-keyboard, and a cheap plastic case.



    I especially love the griping over the cost of the SSD. That is easily quantifiable: a simple Google search yields no 64GB SSD under $1,400, yet Apple only charges $999.



    I withdrew my order for the MBA. I want to see the read/write and battery differences between the two storage types before making my decision.
  • Reply 364 of 399
    jeffdm Posts: 12,953 member
    Quote:
    Originally Posted by solipsism View Post


    In all fairness, the MBA does have a more powerful CPU and integrated video card than these ultra-portables, but I figure it should give you about an extra hour.



    The comparisons to ultralights aren't very apt anyway: it's a 1.06/1.2GHz CPU against a 1.6/1.8GHz unit, a 950 vs. a 3xxx GPU, a small XGA screen against a larger, higher-resolution screen, and non-full-spacing keyboards against full-spacing keyboards. I don't think any of the competing keyboards are backlit. Many of the comparisons conveniently leave these out in trying to paint the Air as a sub-par machine, claiming that the Air somehow fails miserably at hard drive space compared to other ultralights, when no ultralight I've found is available for sale with a hard drive bigger than 80GB. At any rate, these feature hecklers are only using a selective set of features to compare. Toughbooks are nice, but the fact that it takes an expedition to find one to order is not a good thing. Many models are preorder-only, and some of those store sites are defective; one had a non-functional preorder link.



    Apple's ultralight is using a CPU from a notebook class above it, and that's pretty impressive.
  • Reply 365 of 399
    melgross Posts: 33,599 member
    Quote:
    Originally Posted by TenoBell View Post


    It is interesting that the MBA has a more powerful CPU and GPU than these other ultra-portables and is cheaper than many of them, yet so many people still complain.



    As we know from previous experience here, some people will only be happy with a new Apple product if it is *EXACTLY* what they want.



    The more it deviates from that, the more they will bitterly complain about how "bad" it is.
  • Reply 366 of 399
    solipsism Posts: 25,726 member
    Quote:
    Originally Posted by JeffDM View Post


    Apple's ultralight is using a CPU from a notebook class above it, and that's pretty impressive.



    In case it was missed, AnandTech offers some insights into this "special" chip being used in the MBA.
  • Reply 367 of 399
    Quote:
    Originally Posted by melgross View Post


    You're misunderstanding what the tests say.



    SPEC was devised to read CPU performance directly in one very small area. It measures nothing else. It doesn't tell us anything about real integer performance, which is what most programs use.



    It doesn't tell us anything about vector processing capabilities, which all processors now have, to a greater or lesser extent.





    Considering that OS X and Windows both push most floating-point calculations off to the vector processing hardware, the numbers are less than useful. They are deceiving. This has been a criticism of SPEC testing for years now.



    As for the Macworld tests, they DO make a comparison between all the machines. The numbers themselves aren't important. What IS important is the difference between the numbers for the various machines under test.



    How do you figure? From the SPEC.org website:



    CPU2006



    Designed to provide performance measurements that can be used to compare compute-intensive workloads on different computer systems, SPEC CPU2006 contains two benchmark suites: CINT2006 for measuring and comparing compute-intensive integer performance, and CFP2006 for measuring and comparing compute-intensive floating point performance.



    But I get you: unless it's test results you post, they don't count. Gotcha.



    Macworld typically only posts results for Macs and doesn't compare them to Windows-based PCs, so again their tests are meaningless for comparison in this debate. Not that there is anything wrong with that; they are a Mac magazine after all. If you had PC World test results that compared the iMac to the Dells (which PC World does often, but I couldn't find anything specific to these two machines), then that would be more credible.



    Post some links about the credibility of SPEC tests; I'm curious to see what people have said. I did a Google search and couldn't find any negative comments. I highly doubt companies would continue to fund and pay for these tests if they weren't credible and indicative of the actual performance of their hardware. It's not like they paid some magazine or for-profit trade group to do these tests. SPEC was specifically set up as a non-profit to provide independent, unbiased, standardized testing of computer hardware.
  • Reply 368 of 399
    solipsism Posts: 25,726 member
    I'm officially fed up with this thread!



    [CENTER]"MacBook Air: The World's Thinnest Notebook

    Hated by the World's Thickest People"
    [/CENTER]
  • Reply 369 of 399
    melgross Posts: 33,599 member
    Quote:
    Originally Posted by trboyden View Post


    How do you figure? From the SPEC.org website:



    CPU2006



    Designed to provide performance measurements that can be used to compare compute-intensive workloads on different computer systems, SPEC CPU2006 contains two benchmark suites: CINT2006 for measuring and comparing compute-intensive integer performance, and CFP2006 for measuring and comparing compute-intensive floating point performance.



    But I get you: unless it's test results you post, they don't count. Gotcha.



    Macworld typically only posts results for Macs and doesn't compare them to Windows-based PCs, so again their tests are meaningless for comparison in this debate. Not that there is anything wrong with that; they are a Mac magazine after all. If you had PC World test results that compared the iMac to the Dells (which PC World does often, but I couldn't find anything specific to these two machines), then that would be more credible.



    Post some links about the credibility of SPEC tests; I'm curious to see what people have said. I did a Google search and couldn't find any negative comments. I highly doubt companies would continue to fund and pay for these tests if they weren't credible and indicative of the actual performance of their hardware. It's not like they paid some magazine or for-profit trade group to do these tests. SPEC was specifically set up as a non-profit to provide independent, unbiased, standardized testing of computer hardware.



    The SPEC site can be expected to rationalize its own importance, wouldn't you say?



    Here's an article that puts a bit of perspective on it. I'll quote a bit first. But this is just the first I've found.



    Quote:

    Processor versus Real Performance

    The advent of TPC testing allowed vendors and users alike to abandon MIPS as a measure of relative system performance. MIPS has now been effectively replaced by the SPEC suite of tests, but like MIPS, SPEC tests are primarily a measure of processor complex performance. There is more to transaction processing than just the processor, because the I/O and disk subsystems can substantially influence performance.

    To illustrate this, let's compare an 8-way SPARCserver 1000 and HP 9000 Series 800 H70. Table 1 compares the tpmC result with that of SPEC's SPECrate_int92 metric.



    Table 1

    System                    SPECrate_int92    tpmC       $/tpmC
    SPARCserver 1000 8-way    10,113            1,079.4    $1,032
    HP 9000 H70               3,757             1,290.9    $961

    The processor-based SPEC test implies the Sun system has a significant performance superiority, nearly 2.7 times better. However, the TPC-C test shows that in multi-user transaction testing the systems are much more evenly matched.



    TPC testing provides a platform to look at the entire computing workload. Just using a processor benchmark can be very misleading.



    Sorry, the charts never make it in good form.
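
    A quick arithmetic check of the "nearly 2.7 times" figure, using only the Table 1 numbers as quoted:

        # Ratios between the two systems in Table 1.
        print(10113 / 3757)     # ~2.69: the SPECrate_int92 gap
        print(1290.9 / 1079.4)  # ~1.20: the much smaller tpmC gap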



    Quote:

    Software and Performance

    It is very easy to get carried away with the performance of the hardware. However, often we forget the influence the software can have on OLTP performance, be it the operating system or the database. For example, Digital has tested a system with the same database, but with different operating systems, DEC OSF/1 in one case and OpenVMS in the other. The OpenVMS system offered 16.5% more performance than the DEC OSF/1 system.



    http://www.tpc.org/information/other...les/TopTen.asp



    Another:



    Quote:

    Yen said Sun is working to try to create measurements that better represent its chips' performance. Clock speed just isn't sufficient, he said, and neither are the current processor benchmarks from the Standard Performance Evaluation Corporation (SPEC) -- of which Sun is a member.



    Intel trumpeted high SPECint2000 and SPECfp2000 measurements for its coming Itanium 2 processor.



    But SPEC's tests no longer represent real-world performance, Yen said. In part, that's because the entire test software package now can fit into a chip's high-speed cache memory, rendering obsolete the need to fetch data from slower main memory. In real life, though, servers often have to use main memory.



    Customers "are not going to get the kind of performance they expected from reading the SPEC" benchmarks, Yen said.



    http://news.zdnet.co.uk/hardware/0,1...2117959,00.htm



    I'm trying to find a more general article, but so far, all are too technical.
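
    Yen's cache-residency point is easy to demonstrate for yourself: the same operation runs much faster when its working set fits in cache than when it has to stream from main memory. A minimal sketch, assuming NumPy is available; the array sizes are arbitrary illustrations, not the SPEC workload:

        import time
        import numpy as np

        def throughput_gbps(n_bytes, repeats=50):
            """Sum an array of n_bytes repeatedly and return effective GB/s."""
            a = np.ones(n_bytes // 8, dtype=np.float64)  # 8 bytes per float64
            a.sum()  # warm-up pass so the first timing isn't an outlier
            start = time.perf_counter()
            for _ in range(repeats):
                a.sum()
            elapsed = time.perf_counter() - start
            return n_bytes * repeats / elapsed / 1e9

        small = throughput_gbps(256 * 1024)         # ~256 KB: fits in cache
        large = throughput_gbps(256 * 1024 * 1024)  # ~256 MB: main-memory bound
        print(f"cache-resident: {small:.1f} GB/s, memory-bound: {large:.1f} GB/s")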
  • Reply 370 of 399
    I'd like to know what the preorder numbers on this have been. My guess is they're pretty high.
  • Reply 371 of 399
    Quote:
    Originally Posted by melgross View Post


    The SPEC site can be expected to rationalize its own importance, wouldn't you say?



    Here's an article that puts a bit of perspective on it. I'll quote a bit first. But this is just the first I've found.







    Sorry, the charts never make it in good form.







    http://www.tpc.org/information/other...les/TopTen.asp



    Another:







    http://news.zdnet.co.uk/hardware/0,1...2117959,00.htm



    I'm trying to find a more general article, but so far, all are too technical.



    Those are good links, but they are also from 2002, and the SPEC tests have been updated since, presumably to address those concerns; Sun is still a member of SPEC. While I'm sure there is no all-inclusive test that can give 100% accurate results, I'm sure what we do get out of them is at least pertinent and representative of overall performance. Unfortunately, Apple is not a member of TPC, so we have no idea what those tests would show. The validity of TPC's results is questionable based on this line from your link to their website:



    Quote:

    The TPC is presently in a transition phase. The General Implementation Guidelines have been implemented in all three current benchmarks in order to address concerns raised about benchmark specials. The TPC-A and B benchmarks are into the first phase of their ultimate obsolescence. It is hoped that the TPC-C benchmark will fill the breach left by TPC-A and TPC-B, but the initial take up of results has been slower than expected. The TPC's next benchmark, TPC-D, is in its final review stages before being officially sanctioned, and is being eagerly awaited by many. All of this change brings with it uncertainty, and in some cases people are questioning the worth of TPC testing in general.



    It also appears TPC's main focus is on server hardware; I didn't see any desktop hardware listed in their results. Also, TPC is a competing testing provider, so I'm not surprised to see them dismissing SPEC's results. Their focus is also different: they look at transactional data transfer rather than pure hardware performance. SPEC's tests are run via C at the low system level; TPC's tests use higher-level application interfaces such as Windows COM and the WebSphere app server. That's a lot of system overhead in the way of getting accurate results.
  • Reply 372 of 399
    melgross Posts: 33,599 member
    I decided to go to the SPEC organization's site and see what they really have to say.



    Interestingly enough, even to them, SPEC is just a secondary method to see how your machine rates.



    From their site, two quotes:



    Quote:

    Ideally, the best comparison test for systems would be your own application with your own workload. Unfortunately, it is often very difficult to get a wide base of reliable, repeatable and comparable measurements for comparisons of different systems on your own application with your own workload.



    and:



    Quote:

    Note: It is not intended that the SPEC benchmark suites be used as a replacement for the benchmarking of actual customer applications to determine vendor or product selection.



    This is why various tech sites and organizations have come up with test suites that use actual programs to test machines, rather than using the highly misleading SPEC and other synthetic test suites, which rarely correlate to real-world performance.
  • Reply 373 of 399
    melgross Posts: 33,599 member
    Quote:
    Originally Posted by trboyden View Post


    Those are good links, but they are also from 2002, and the SPEC tests have been updated since, presumably to address those concerns; Sun is still a member of SPEC. While I'm sure there is no all-inclusive test that can give 100% accurate results, I'm sure what we do get out of them is at least pertinent and representative of overall performance. Unfortunately, Apple is not a member of TPC, so we have no idea what those tests would show. The validity of TPC's results is questionable based on this line from your link to their website:



    Those companies have some say in how the SPEC standard is written. Otherwise, it doesn't mean much.



    Quote:

    It also appears TPC's main focus is on server hardware; I didn't see any desktop hardware listed in their results. Also, TPC is a competing testing provider, so I'm not surprised to see them dismissing SPEC's results. Their focus is also different: they look at transactional data transfer rather than pure hardware performance. SPEC's tests are run via C at the low system level; TPC's tests use higher-level application interfaces such as Windows COM and the WebSphere app server. That's a lot of system overhead in the way of getting accurate results.



    As I said, it's difficult to find a link for more mainstream users, but it doesn't matter, because the concept is the same.



    The point is that different OSes, different software, and different hardware designs all have a great influence on performance. SPEC is not considered to be more than a test of raw CPU and close memory subsystem performance.



    One of the main problems is that actual real-world workloads rarely use the computer in the way SPEC tests it. In that sense, SPEC is no better than any other synthetic benchmark, and there are many.



    Go to any site that tests systems for games (the easiest to find), and you will see SPEC, Futuremark, and sometimes others being used. But then they go to the game tests themselves, and those don't usually correlate with the synthetic test suites. They always make note of that on those sites, but say that people like to see them, so they use them. They also say they include them as a point of interest precisely BECAUSE they rarely correlate, as a warning about synthetic testing.
  • Reply 374 of 399
    jeffdm Posts: 12,953 member
    Quote:
    Originally Posted by trboyden View Post


    How do you figure? From the SPEC.org website:



    CPU2006



    Designed to provide performance measurements that can be used to compare compute-intensive workloads on different computer systems, SPEC CPU2006 contains two benchmark suites: CINT2006 for measuring and comparing compute-intensive integer performance, and CFP2006 for measuring and comparing compute-intensive floating point performance.



    But I get you: unless it's test results you post, they don't count. Gotcha.



    It's still a benchmark, though. It can use compilers, settings, and tweaks that most developers don't use. It even involves languages that most developers don't use; there's a lot of FORTRAN in there. FORTRAN is only used for specialized programs. I don't think it's used for any media applications.



    Quote:

    Post some links about the credibility of SPEC tests; I'm curious to see what people have said. I did a Google search and couldn't find any negative comments.



    It's not that the tests aren't credible; you need to make sure the benchmark you choose reflects what you're going to do with the machine. That's the core problem.



    http://www.gcocco.com/bench/cpu2k.html#should_i_buy
  • Reply 375 of 399
    Quote:
    Originally Posted by melgross View Post


    I decided to go to the SPEC organization's site and see what they really have to say.



    Interestingly enough, even to them, SPEC is just a secondary method to see how your machine rates.



    From their site, two quotes:







    and:







    This is why various tech sites and organizations have come up with test suites that use actual programs to test machines, rather than using the highly misleading SPEC and other synthetic test suites, which rarely correlate to real-world performance.



    When I read that, I interpret it as saying YMMV. It's normal for people to give disclaimers; it's called covering your a$$. SPEC came up with a test suite that tries to take into consideration as many processes and calculations as can be compared equally among hardware systems. Are they perfect? Probably not. But like they said, even though testing your application in your environment is the only way to be sure it will run as you expect, that type of testing is subjective at best and can't be used to compare against other systems without actually acquiring them and running the same tests on that different hardware. You'll also run into differences between how the various OSes handle similar functions. That is why it is important to have tests like SPEC that test at the hardware level to eliminate those differences.



    Calling SPEC's tests misleading without data to refute them is just ignorance.



    All SPEC is saying is don't base your purchasing decision on just their data - they are not responsible if you do and it doesn't work out for you - test it for yourself with your apps and your data. That's just common-sense advice. I wouldn't buy something without testing it out first or getting a demo system to play with. That would just be stupid.
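
    That "test it for yourself with your apps and your data" advice is easy to act on. A minimal sketch of such a harness in Python; my_workload and sample_input are hypothetical placeholders for your own code and data:

        import timeit

        def my_workload(data):
            # Stand-in for the operation you actually care about.
            return sorted(data)

        sample_input = list(range(100_000, 0, -1))
        seconds = timeit.timeit(lambda: my_workload(sample_input), number=20) / 20
        print(f"average run: {seconds * 1000:.1f} ms")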
  • Reply 376 of 399
    melgross Posts: 33,599 member
    Quote:
    Originally Posted by trboyden View Post


    When I read that, I interpret it as saying YMMV. It's normal for people to give disclaimers; it's called covering your a$$.



    Now who's interpreting?



    Quote:

    SPEC came up with a test suite that tries to take into consideration as many processes and calculations as can be compared equally among hardware systems. Are they perfect? Probably not. But like they said, even though testing your application in your environment is the only way to be sure it will run as you expect, that type of testing is subjective at best and can't be used to compare against other systems without actually acquiring them and running the same tests on that different hardware. You'll also run into differences between how the various OSes handle similar functions. That is why it is important to have tests like SPEC that test at the hardware level to eliminate those differences.



    Calling SPEC's tests misleading without data to refute them is just ignorance.



    All SPEC is saying is don't base your purchasing decision on just their data - they are not responsible if you do and it doesn't work out for you - test it for yourself with your apps and your data. That's just common-sense advice. I wouldn't buy something without testing it out first or getting a demo system to play with. That would just be stupid.





    You're stuck on this, aren't you?



    I provided TWO highly respected sites that said what I said, but YOU choose to disregard them, because what they say is not what YOU want to read! And then you say I didn't provide any data, even though they make those statements, and I provided the Macworld DATA, which you refuse to consider.



    You refuse to look at tests with programs you might use, but love tests that show nothing about how those programs, or even a computer, will actually perform in your daily use.



    I guess there's no point in continuing, if you won't even listen to what the SPEC organization says.



    You simply want to justify your choices, and you don't want to see anything that disabuses you of your idea about it.
  • Reply 377 of 399
    Quote:
    Originally Posted by JeffDM View Post


    It's still a benchmark, though. It can use compilers, settings, and tweaks that most developers don't use. It even involves languages that most developers don't use; there's a lot of FORTRAN in there. FORTRAN is only used for specialized programs. I don't think it's used for any media applications.







    It's not that the tests aren't credible; you need to make sure the benchmark you choose reflects what you're going to do with the machine. That's the core problem.



    http://www.gcocco.com/bench/cpu2k.html#should_i_buy



    I don't see any FORTRAN in the CPU2006 tests; it says it's all C or C++, and everything uses those for performance-critical applications: http://www.spec.org/auto/cpu2006/Doc...ibquantum.html



    And like I said in my other post, I do agree your best reference is to test the application on your own hardware, just like SPEC recommends. These results are simply for a pure HARDWARE performance comparison between various systems. They are valid and relevant when taking into account other system configuration options like the front-side bus, which is very important for getting all that speed to your memory, hard drive, video cards, etc. The front-side bus is probably one of the more critical factors in the "real world speed" of a computer. This was very apparent in the various models of G4s and G5s with different front-side buses. The one with the faster front-side bus always seemed twice as fast. There was a big difference in speed feel between the 800 MHz bus G5 and the 1 GHz bus G5s. Some of it was the processor, but most of it was the front-side bus.
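
    For what it's worth, the peak-bandwidth arithmetic behind the front-side-bus point is simple. A minimal sketch assuming a 64-bit bus moving one transfer per clock, which is a deliberate simplification of the real G5 bus design:

        def peak_bandwidth_gbs(clock_hz, bus_bytes=8):
            # Peak bytes per second = clock rate x bytes per transfer.
            return clock_hz * bus_bytes / 1e9

        print(peak_bandwidth_gbs(800e6))  # 6.4 GB/s for an 800 MHz bus
        print(peak_bandwidth_gbs(1e9))    # 8.0 GB/s for a 1 GHz bus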
  • Reply 378 of 399
    tenobell Posts: 7,014 member
    The one main thing I think is missing from the MBA is internal 3G support. Since this is designed primarily to be a wireless device, it seems like an obvious oversight. Perhaps Apple will bring 3G to the MBA when they are placing large orders for the 3G iPhone.
  • Reply 379 of 399
    Quote:
    Originally Posted by melgross View Post


    Now who's interpreting?



    SPEC came up with a test suite that tries to take into consideration as many processes and calculations as can be compared equally among hardware systems. Are they perfect? Probably not. But like they said, even though testing your application in your environment is the only way to be sure it will run as you expect, that type of testing is subjective at best and can't be used to compare against other systems without actually acquiring them and running the same tests on that different hardware. You'll also run into differences between how the various OSes handle similar functions. That is why it is important to have tests like SPEC that test at the hardware level to eliminate those differences.



    Calling SPEC's tests misleading without data to refute them is just ignorance.



    All SPEC is saying is don't base your purchasing decision on just their data - they are not responsible if you do and it doesn't work out for you - test it for yourself with your apps and your data. That's just common-sense advice. I wouldn't buy something without testing it out first or getting a demo system to play with. That would just be stupid.





    You're stuck on this, aren't you?



    You refuse to look at tests with programs you might use, but love tests that show nothing about how those programs, or even a computer, will actually perform in your daily use.



    I guess there's no point in continuing, if you won't even listen to what the SPEC organization says.



    Who's stuck on this? You're trying to compare application testing with hardware testing, and trying to justify that application testing gives more accurate results, when it's really comparing apples to oranges!



    Macworld's type of testing, while relevant to the systems they purchased, the OS they are running, and the applications they are testing, is only relevant within those test parameters. It has absolutely no bearing on how Adobe CS will run on Windows on a Dell with similar hardware specs. There are too many operating system differences to have an accurate comparison. That's why there are SPEC HARDWARE tests, to make a relevant comparison between similar hardware configurations.



    Just because Adobe CS may run well on an iMac you have doesn't mean it will run the same on my identical iMac, taking into consideration software configuration and hardware manufacturing differences.



    Again, who is stuck on this???
  • Reply 380 of 399
    solipsism Posts: 25,726 member
    Quote:
    Originally Posted by TenoBell View Post


    The one main thing I think is missing from the MBA is internal 3G support. Since this is designed primarily to be a wireless device, it seems like an obvious oversight. Perhaps Apple will bring 3G to the MBA when they are placing large orders for the 3G iPhone.



    I want this feature too.