Resolution independence in Leopard confirmed by Apple


Comments

  • Reply 61 of 184
    shetline Posts: 4,695 member
    Quote:
    Originally Posted by donebylee


    Wouldn't Photoshop have a problem with resolution independence? ...Especially at 100% magnification, which is supposed to be 1 pixel = 1 pixel display



    The first question you have to ask yourself here is what's so important about this 100% mode, and why do you use it?



    The main reason is to get a good look at an image without scaling artifacts getting in the way, to see what's there in the image as the image really is, and to see how big the image will look when rendered one-pixel-to-one-pixel on a display, as will be the typical case when you display that image on the web. For now, that means the 100% mode is still a useful thing.



    In the future, when we have really hi-res displays (300 dpi, maybe even 200 dpi will look good with good anti-aliasing behind the scenes), mapping to native pixels won't matter much for getting a good look at an image. Any scaling artifacts will be minor, will hardly make any difference in evaluating the visual quality of an image, and if your audience is likely to be viewing the image as a scaled instance on a high-res display, you might as well be seeing the image under the same conditions it will normally be viewed under.



    If you want to manipulate individual pixels, chances are you'll be better off working with a magnified (thus scaled) image anyway.



    I suspect that there will always be some way to address native display pixels directly, even when resolution independence becomes the norm, when pixels matter only as units of information in pixel-mapped source material like photos. Addressing native display pixels is still going to be important for some time yet, but we need to be laying the groundwork for resolution independence now.



    Someone mentioned 150 dpi as an example for a screen resolution. My personal opinion is that 150 dpi is a terrible screen resolution for most desktop uses. 150 dpi makes pixels too tiny for a lot of unscaled pixel-mapped material. Used as a native resolution, with traditional resolution dependent apps and media, 150 dpi results in too much of what you want to look at ending up as far too tiny and squint-inducing. However, a 150-dpi screen is still not sharp enough to treat as resolution independent with good results -- scaling is going to look fuzzy and blurry.



    For most people, displays start to get too squinty when you get much beyond 110 dpi. For those who need reading glasses, but who don't want to wear them, this problem starts even lower than that. I'm guessing (never having seen such a display) that somewhere around 200 dpi would be the low end for effective, high-quality resolution-independent display use. By those standards, screen resolutions between 110 and 200 dpi are a dead zone, simply not very useful for computer interfaces.
  • Reply 62 of 184
    Quote:
    Originally Posted by shetline


    The first question you have to ask yourself here is what's so important about this 100% mode, and why do you use it?



    Though all of this is fine on a display, remember some of us still deal with print as a final output, and unless they come up with resolution-independent paper, editing at 100% will remain important at least as long as paper is used to disseminate information.



    But I do see how this could make any project destined for a digital final product much easier. In theory, web graphics could be extremely low resolution files that take advantage of the OS's ability to extrapolate the pixels necessary to upscale their appearance.



    Do you know if this would apply to video as well? Certainly would have an extremely beneficial impact on distributing video products if resolution could be lowered and then reinterpreted, or up-sampled, by the displaying machine.
  • Reply 63 of 184
    shetline Posts: 4,695 member
    Quote:
    Originally Posted by donebylee


    Though all of this is fine on a display, remember some of us still deal with print as a final output, and unless they come up with resolution-independent paper, editing at 100% will remain important at least as long as paper is used to disseminate information.



    What you'd probably want for print use, at least some of the time, is for 100% to mean "actual size". That has nothing to do with mapping image pixels directly to screen pixels unless your screen and your image dpi are exactly equal. Also, unless you're still using a clunky old dot matrix printer, most printers these days do much better than 72 dpi or any other common current screen resolution. If I'm printing an image that's supposed to be 3"x3" when it comes out on paper, and my source image has 300 dpi resolution, the image is then going to be 900x900 pixels. "100%", as it's currently defined in Photoshop, is going to make this image much bigger than 3"x3" on your display, using up a fairly large amount of screen real estate.
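To put concrete numbers on that, here's a minimal Python sketch (the function names are made up for illustration, not any real Photoshop or OS API) of the gap between "actual size" and 1:1 pixel mapping:

```python
def pixels_for_print_size(inches, image_dpi):
    """Pixels needed to print at a given physical size and image resolution."""
    return round(inches * image_dpi)

def on_screen_inches_at_1_to_1(image_px, screen_ppi):
    """Physical on-screen size when one image pixel maps to one screen pixel."""
    return image_px / screen_ppi

# A 3"x3" print at 300 dpi needs a 900x900-pixel source image.
px = pixels_for_print_size(3, 300)            # 900
# At Photoshop-style "100%" on a ~100 ppi screen, that image is 9" per side.
inches = on_screen_inches_at_1_to_1(px, 100)  # 9.0
```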



    Quote:
    Originally Posted by donebylee


    In theory, web graphics could be extremely low resolution files that take advantage of the OS's ability to extrapolate the pixels necessary to upscale their appearance.



    Whoa there! Only if you get your "theory" by watching too much CSI. In the real world, you can't zoom in on seven smudged pixels and extract a readable license plate number, and certainly not a license plate image so sharp you can determine the species of the bugs splatted around the letters and numbers.



    Garbage in -> Garbage out. The ability to upscale an image and simulate details not seen in the original image is extremely limited, and it's always a form of mathematical guesswork. You can't create information that simply isn't there in an original image. The best you can hope for is a plausible, eye-pleasing simulation of what might have been found in higher-res source material, and that won't even take you as far as doubling the size of an image very effectively.
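That "can't create information" point can be made concrete with a toy example, a hedged sketch in plain Python (1-D "image rows", simple pair-averaging as the downscale): two different originals collapse to the same low-res data, so no upscaler can know which one it came from.

```python
def downscale_2x(row):
    """Average adjacent pixel pairs: the simplest 2x downscale of a 1-D row."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

flat   = [100, 100, 100, 100]   # a flat gray patch
zigzag = [50, 150, 150, 50]     # fine detail at the pixel level

# Both originals downscale to the identical low-res row [100.0, 100.0], so
# the detail in 'zigzag' is gone for good: any upscaler can only guess.
assert downscale_2x(flat) == downscale_2x(zigzag) == [100.0, 100.0]
```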
  • Reply 64 of 184
    placebo Posts: 5,767 member
    Quote:
    Originally Posted by donebylee


    Am I missing something here?



    Wouldn't Photoshop have a problem with resolution independence? If Leopard lets me set up my display as taking advantage of a 150 ppi resolution screen, then what is Photoshop to do with it? Especially at 100% magnification, which is supposed to be 1 pixel = 1 pixel display -- wouldn't that mean that a 72 ppi photo would take up roughly half the screen real estate that the same photo would take up on a 72 ppi screen? It would seem that editing a 72 ppi icon for a website would become very difficult on a screen that was running at 150 ppi.



    If I don't get it, please explain it to me.



    Adobe would probably add in an option for simulating the scaling of graphics on a 72 dpi display to make designers' lives easier.
  • Reply 65 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by Socrates


    I think you misunderstood his question. He meant, when using the accessibility zoom function on the screen, will text still look pixelated when blown up, and the answer is that no, it won't because it will be rerendered as a vector at higher rez rather than just expanding the prerendered bitmap.



    So in fact the answer to his question is yes, this will help make zoomed text clearer.



    Socrates



    Even now, you can choose to smooth when zooming. It makes a vast improvement in the appearance of text and lines.
  • Reply 66 of 184
    Quote:
    Originally Posted by donebylee


    Though all of this is fine on a display, remember some of us still deal with print as a final output, and unless they come up with resolution-independent paper, editing at 100% will remain important at least as long as paper is used to disseminate information.



    But what does "editing at 100%" mean in that case? If your paper output is a 1440 dpi laser printer, then 100% may mean 1440 pixels per screen inch. And since the screen is only ~100 dpi, you'll really be working very "zoomed out". Or, 100% could mean 1 picture pixel = 1 screen pixel, which would really be very "zoomed in" compared to the paper output. Toss in resolution independence and my brain really starts to hurt: This picture will be printed at 1440 dpi, my screen is 150 dpi but I want it scaled to 72 dpi.
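Those two meanings of "100%" are really two readings of one ratio; a quick hedged sketch with the numbers above (illustrative only):

```python
def ratio(image_dpi, screen_ppi):
    """image_dpi / screen_ppi -- the same number read two ways below."""
    return image_dpi / screen_ppi

r = ratio(1440, 100)   # 14.4
# Reading 1: at 1 picture pixel = 1 screen pixel, 1440 dpi print data appears
#   14.4x its printed size on a ~100 ppi screen ("very zoomed in" vs paper).
# Reading 2: at actual size, each screen pixel must stand in for 14.4 print
#   dots in each dimension ("very zoomed out" vs the data).
```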



    Quote:

    But I do see how this could make any project destined for a digital final product much easier. In theory, web graphics could be extremely low resolution files that take advantage of the OS's ability to extrapolate the pixels necessary to upscale their appearance.



    I don't think that's accurate. I believe web browsers make some assumptions about the pixels per inch of the display. So, pictures for the web should probably assume 72 ppi and let the browser and OS handle making sure those images get displayed at the right size. Making the images with fewer pixels would just result in ugly jaggies. Now, going to something like SVG would make things more interesting.



    Quote:

    Do you know if this would apply to video as well? Certainly would have an extremely beneficial impact on distributing video products if resolution could be lowered and then reinterpreted, or up-sampled, by the displaying machine.



    I believe h.264 already does this.



    - Jasen.
  • Reply 67 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by Amorya


    You have the right idea, just an incorrect assumption. "100% magnification ... is supposed to be 1 pixel = 1 pixel display" -- that assumption was true before resolution independence but is no longer true.



    What happens is an image will look the same physical size (e.g. 1 inch wide) regardless of the pixel density of your display. So if your screen is running at 150 ppi, the photo will look the same size as on a 72 ppi screen, but will be using roughly twice as many pixels to display it. Currently, on the aforementioned 150 ppi screen, the photo would display at about half the size it does on a 72 ppi screen.





    No, no, no!



    You do NOT want this to affect image files of any kind when working on them in a graphics, video, or any kind of program at all.



    Raster files must NOT be modified by the OS in ANY way.



    We work on these images down to the pixel level. Sometimes to the sub-pixel level. We need to have that file untrashed by the OS.



    You are only correct when using movies, or photo playback to watch later on. When WORKING on these files we need them to be as-is at all times.



    100% must be one pixel on screen equal to one pixel in the file.
  • Reply 68 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by donebylee


    As to displaying the image in its assigned size, will this require additional information that is not currently encoded in the file?



    If a file is designated as being 72x72 pixels and it is then displayed on a resolution independent screen, does the file need to contain additional information to tell the OS to display as a 1x1 inch file, or is that the assumption (72 pixels = 1 inch) built into resolution independence?



    Could this cause any significant overhead processing with older apps? If the OS is having to run these calculations on the fly, would this add to its processing load, i.e. more Rosetta style slow-downs?



    Thanks for the info.



    Just totally forget what he says there. It's wrong for editing, only for viewing.
  • Reply 69 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by JeffDM


    You are right; as far as I'm aware, many image formats don't contain the physical size, so the OS would have to assume 72 unless otherwise told. My installation of Photoshop assumes 72 unless I tell it otherwise, on a per-image basis. Also, the display's size information is apparently unreliable, so if the user wants accuracy, they would have to punch that information in too.







    I don't think it would put a major load on the CPU, it might be done in the GPU.



    It's not important for editing. We know exactly what we need, and there is no way the OS can make assumptions about that.



    Everyone has a different monitor. If we size an image for display, we have to determine just what monitor size and resolution we will perfect our image for.



    For print, we need to do precise sizing.



    It will still have to be done manually.
  • Reply 70 of 184
    shetline Posts: 4,695 member
    Quote:
    Originally Posted by melgross


    100% must be one pixel on screen equal to one pixel in the file.



    That would be ridiculous if you had an old 72 dpi image and you tried to view that image on a future 300 dpi video display. On such a display, that kind of pixel mapping would be damn near useless.



    You sound almost as if you're saying that scaling the image up to show it to you at a reasonable size on a high-res display is going to trash the original image data. Why would that be the case? As long as the original image is kept, and any changes made to that image respect its original resolution, there'd be absolutely no reason to force a resampling to a different DPI, apart from how the image, and the ongoing editing of that image, is displayed to you.



    After all, if you need to get down to editing at the pixel level, you generally have to zoom in quite a bit until your image pixels are big chunky blocks on the display. It's not as if Photoshop actually converts the image into those big blocks anywhere except on the display, and it's certainly not the case that Photoshop re-creates your smaller image by taking those monster blocks and scaling them back down.
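A hedged sketch of that display-only zoom in plain Python (hypothetical function name; real editors do this on the GPU, but the principle is the same): the magnified view is derived from the source, and the source itself is never touched.

```python
def zoom_for_display(image, factor):
    """Nearest-neighbour magnification for display only: each source pixel
    becomes a factor-by-factor block. The source image is left untouched."""
    out = []
    for row in image:
        big_row = [px for px in row for _ in range(factor)]
        out.extend(list(big_row) for _ in range(factor))
    return out

source = [[10, 20],
          [30, 40]]
view = zoom_for_display(source, 2)   # 4x4 chunky blocks for the screen
# 'source' is still exactly [[10, 20], [30, 40]]: edits apply to it, and
# the on-screen view is simply re-derived afterward.
```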
  • Reply 71 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by shetline


    The first question you have to ask yourself here is what's so important about this 100% mode, and why do you use it?



    The main reason is to get a good look at an image without scaling artifacts getting in the way, to see what's there in the image as the image really is, and to see how big the image will look when rendered one-pixel-to-one-pixel on a display, as will be the typical case when you display that image on the web. For now, that means the 100% mode is still a useful thing.



    In the future, when we have really hi-res displays (300 dpi, maybe even 200 dpi will look good with good anti-aliasing behind the scenes), mapping to native pixels won't matter much for getting a good look at an image. Any scaling artifacts will be minor, will hardly make any difference in evaluating the visual quality of an image, and if your audience is likely to be viewing the image as a scaled instance on a high-res display, you might as well be seeing the image under the same conditions it will normally be viewed under.



    If you want to manipulate individual pixels, chances are you'll be better off working with a magnified (thus scaled) image anyway.



    I suspect that there will always be some way to address native display pixels directly, even when resolution independence becomes the norm, when pixels matter only as units of information in pixel-mapped source material like photos. Addressing native display pixels is still going to be important for some time yet, but we need to be laying the groundwork for resolution independence now.



    Someone mentioned 150 dpi as an example for a screen resolution. My personal opinion is that 150 dpi is a terrible screen resolution for most desktop uses. 150 dpi makes pixels too tiny for a lot of unscaled pixel-mapped material. Used as a native resolution, with traditional resolution dependent apps and media, 150 dpi results in too much of what you want to look at ending up as far too tiny and squint-inducing. However, a 150-dpi screen is still not sharp enough to treat as resolution independent with good results -- scaling is going to look fuzzy and blurry.



    For most people, displays start to get too squinty when you get much beyond 110 dpi. For those who need reading glasses, but who don't want to wear them, this problem starts even lower than that. I'm guessing (never having seen such a display) that somewhere around 200 dpi would be the low end for effective, high-quality resolution-independent display use. By those standards, screen resolutions between 110 and 200 dpi are a dead zone, simply not very useful for computer interfaces.



    I agree with everything you have said except for the actual rez of the display.



    First of all, I don't believe we will be seeing more than 200ppi displays for quite some time.



    But, I don't see why you think it would be necessary in the first place.



    If we were to use rez independence to make the features appear SMALLER than they otherwise would, then yes.



    But, that is not the case. We would want to have them larger. In that case we don't need super hi-rez displays. In fact, 110 ppi displays are perfectly fine for that.



    I'm using a 24" Sony 900W CRT display running at 1920 x 1200 right now. The truth is that almost no one can see the finest detail this display can image without moving right up to the screen and squinting.



    It's a bit easier with an LCD because the pixels are a bit sharper. But still.



    I used, for a brief while, the IBM display that had about 200ppi, and no one could see the detail without almost going nose to nose with the screen, or using a magnifying glass.



    I don't really see more than 150ppi being useful for anything else than medical and military use.



    I haven't seen a good explanation yet for a very high-rez display in any kind of general computing.
  • Reply 72 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by donebylee


    Though all of this is fine on a display, remember some of us still deal with print as a final output, and unless they come up with resolution-independent paper, editing at 100% will remain important at least as long as paper is used to disseminate information.



    Yes



    Quote:

    But I do see how this could make any project destined for a digital final product much easier. In theory, web graphics could be extremely low resolution files that take advantage of the OS's ability to extrapolate the pixels necessary to upscale their appearance.



    Maybe. Low-rez images will look just as bad when blown up as they do now when the initial rez is too low. As long as the original is not below 50% of the final size, it will work. Below that, the pixelation cannot be gotten rid of, and the image will simply appear to be staircased and blurry.



    It's a delicate line to walk. I'm concerned that many will not understand that.



    Quote:

    Do you know if this would apply to video as well? Certainly would have an extremely beneficial impact on distributing video products if resolution could be lowered and then reinterpreted, or up-sampled, by the displaying machine.



    Go to the Quicktime player, and put a video on. Then watch it at 2x size. See how it looks. You can try full screen as well. Make sure it's a small file, perhaps 320 x 240, to get the full effect of the next steps.



    Then go to the Window menu and select "Show Movie Properties". Select the video track, which could be called "MPEG1 Muxed", for example. When you select it, you will be presented with a new pane at the bottom. Select "Visual Settings". At the bottom right of the window, select "High Quality", and view the video again.



    That will be the equivalent of what you will get if the OS does this for you in Rez Independence.
  • Reply 73 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by jasenj1


    But what does "editing at 100%" mean in that case? If your paper output is a 1440 dpi laser printer, then 100% may mean 1440 pixels per screen inch. And since the screen is only ~100 dpi, you'll really be working very "zoomed out". Or, 100% could mean 1 picture pixel = 1 screen pixel, which would really be very "zoomed in" compared to the paper output. Toss in resolution independence and my brain really starts to hurt: This picture will be printed at 1440 dpi, my screen is 150 dpi but I want it scaled to 72 dpi.



    While we can do "soft proofing" on a monitor, if everything is set up correctly with proper color management, there will never be a one-to-one correspondence between a monitor screen and print, unless, someday, "digital ink" displays have four-color balls to assist.



    Rez is something that we will never be able to accurately simulate. The two methods of displaying the image are simply too far apart in their technologies.
  • Reply 74 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by shetline


    That would be ridiculous if you had an old 72 dpi image and you tried to view that image on a future 300 dpi video display. On such a display, that kind of pixel mapping would be damn near useless.



    I don't understand where you're coming from here. When we do pixel by pixel editing, we don't want a "simulated" pixel as you seem to want.



    If the pixel is too small in the image, we go to 200%, or 400%, or higher. I'm not sure that you understood what I was saying.



    Quote:

    You sound almost as if you're saying that scaling the image up to show it to you at a reasonable size on a high-res display is going to trash the original image data. Why would that be the case? As long as the original image is kept, and any changes made to that image respect its original resolution -- there'd be absolutely no reason to force a resampling to a different DPI, apart from how the image, and how the on-going editing process of that image is displayed to you.



    Because scaling in the OS would be useless for its purpose if it did not interpolate what it was scaling. Otherwise, it would be no better than simply lowering the rez of your monitor to get a larger image.



    I'm sure you agree to that.



    So, what I'm saying is that we do not want the OS to scale our images that we are editing. PS does that nicely by allowing us to go to well over 1000% magnification, without interpolation of the image.



    Quote:

    After all, if you need to get down to editing at the pixel level, you generally have to zoom in quite a bit until your image pixels are big chunky blocks on the display. It's not as if Photoshop actually converts the image into those big blocks anywhere except on the display, and it's certainly not the case that Photoshop re-creates your smaller image by taking those monster blocks and scaling them back down.



    That's EXACTLY what I've been saying!!!



    I don't understand where your disagreement is coming from.
  • Reply 75 of 184
    shetline Posts: 4,695 member
    Quote:
    Originally Posted by melgross


    First of all, I don't believe we will be seeing more than 200ppi displays for quite some time.



    On that I agree. It sure would be nice if 200 dpi became common and cheap in the near future, however.



    Quote:

    But, I don't see why you think it would be necessary in the first place... I don't really see more than 150ppi being useful for anything else than medical and military use.



    I haven;'t seen a good explanation yet for a very high rez display in any kind of general computing.



    Hell, even for day-to-day use, I'd love a 200+ dpi screen so that fonts looked better. As things stand right now, in the 72-110 dpi screen resolution range, fonts either look sharp and pixelated, or smooth and blurry. Italics are often hard to read and look like crap either way. I'd like sharp and smooth, with good-looking italics, all at the same time.



    For print work, hi-res displays would be a godsend. You could get a much better idea of what your final results on paper are going to look like based on the on-screen appearance than you can with current display technologies.
  • Reply 76 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by shetline


    Hell, even for day-to-day use, I'd love a 200+ dpi screen so that fonts looked better. As things stand right now, in the 72-110 dpi screen resolution range, fonts either look sharp and pixelated, or smooth and blurry. Italics are often hard to read and look like crap either way. I'd like sharp and smooth, with good-looking italics, all at the same time.



    For print work, hi-res displays would be a godsend. You could get a much better idea of what your final results on paper are going to look like based on the on-screen appearance than you can with current display technologies.



    I couldn't see any advantage to the IBM monitor that was 200ppi.



    The detail was far too small to appreciate.



    When you get to text sizes of about 6 points on the screen, it's too small for most people to read anyway without getting right up to the glass. The same thing is true for print. My monitor shows 6 points just fine, though some serifs are lost at those small sizes. But I wouldn't want to have to read it. Anything smaller than 8 points is considered too small for readability, though not legibility, which is different. 8 points is quite sharp right now.



    150 ppi would be a big step over 110. I can't see 200 being of real use.
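The points-to-pixels arithmetic behind those text-size claims is simple; a hedged sketch (1 point = 1/72 inch by definition, the function name is made up):

```python
def point_size_in_pixels(points, screen_ppi):
    """Height in screen pixels of a font size given in points (1 pt = 1/72 inch)."""
    return points * screen_ppi / 72

# The same 6-point type gets more pixels to draw with as ppi rises:
six_pt_110 = point_size_in_pixels(6, 110)   # ~9.2 px: little room for serifs
six_pt_200 = point_size_in_pixels(6, 200)   # ~16.7 px: same physical size, far more detail
```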
  • Reply 77 of 184
    shetline Posts: 4,695 member
    Quote:
    Originally Posted by melgross


    Thats' EXACTLY what I've been saying!!!



    I don't understand where your disagreement is coming from.



    I'm likewise confused by how you're saying whatever it is that you're trying to say.



    I think we both understand that the screen representation of an image has no bearing on how software like Photoshop actually manages and manipulates the internal representation of an image. I only mentioned the issue before because I was confused about where you're coming from and I wanted to clarify this point.



    Imagine our hypothetical 300 dpi display, and we're working on a 3"x3" 72 dpi image. If we display this image at "100%" in the current Photoshop sense of that sizing, the on-screen view will be under 3/4" on a side... even on the small side for a thumbnail image, and certainly of no use for editing or checking image quality.



    I know you'll agree that for almost any practical use, we need to scale this image up. I'm getting the sense, however, that for some reason you're thinking that only exact integer pixel scaling will do. If so, why? In what we might call "actual size" mode, the image would obviously be 3"x3" on the display (so long as everything is calibrated properly, the OS knows the real resolution of the display, and no additional scaling factor, like Exposé's window shrinkage is being applied). Each source image pixel will map to a square chunk of screen real estate, which, on average, will be about 4.17 pixels tall and wide.



    Would you feel better if the image were mapped to exactly 4x4 pixel squares instead? If so, why? Other than the fact that the image would look 4% smaller than actual size, what else, for practical purposes, would have changed? The image would still be too small for pixel editing. The visual character of the image wouldn't change much, because with display pixels so tiny it's unlikely your eye could detect any scaling artifacts from an otherwise non-integer multiple scaling.



    If we're going to magnify even more, scaling artifacts matter even less. Who cares if a jumbo magnified pixel, occupying a half-inch square on your display, has a few slightly-blended 300 dpi pixels at its edges or not?
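For the curious, the arithmetic behind the 4.17 and 4% figures above, as a small hedged Python sketch (illustrative helper names):

```python
def screen_px_per_image_px(screen_ppi, image_dpi):
    """Screen pixels covered by one image pixel at actual size."""
    return screen_ppi / image_dpi

def integer_snap_size_error(screen_ppi, image_dpi):
    """Relative size change if the scale is snapped to the nearest integer."""
    exact = screen_ppi / image_dpi
    return round(exact) / exact - 1.0

scale = screen_px_per_image_px(300, 72)    # ~4.17 screen px per image px
error = integer_snap_size_error(300, 72)   # ~-0.04: snapping to 4x4 runs 4% small
```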
  • Reply 78 of 184
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by shetline


    I'm likewise confused by how you're saying whatever it is that you're trying to say.



    I think we both understand that the screen representation of an image has no bearing on how software like Photoshop actually manages and manipulates the internal representation of an image. I only mentioned the issue before because I was confused about where you're coming from and I wanted to clarify this point.



    Imagine our hypothetical 300 dpi display, and we're working on a 3"x3" 72 dpi image. If we display this image at "100%" in the current Photoshop sense of that sizing, the on-screen view will be under 3/4" on a side... even on the small side for a thumbnail image, and certainly of no use for editing or checking image quality.



    I know you'll agree that for almost any practical use, we need to scale this image up. I'm getting the sense, however, that for some reason you're thinking that only exact integer pixel scaling will do. If so, why? In what we might call "actual size" mode, the image would obviously be 3"x3" on the display (so long as everything is calibrated properly, the OS knows the real resolution of the display, and no additional scaling factor, like Exposé's window shrinkage is being applied). Each source image pixel will map to a square chunk of screen real estate, which, on average, will be about 4.17 pixels tall and wide.



    Would you feel better if the image were mapped to exactly 4x4 pixel squares instead? If so, why? Other than the fact that the image would look 4% smaller than actual size, what else, for practical purposes, would have changed? The image would still be too small for pixel editing. The visual character of the image wouldn't change much, because with display pixels so tiny it's unlikely your eye could detect any scaling artifacts from an otherwise non-integer multiple scaling.



    If we're going to magnify even more, scaling artifacts matter even less. Who cares if a jumbo magnified pixel, occupying a half-inch square on your display, has a few slightly-blended 300 dpi pixels at its edges or not?



    Because, for editing purposes, we must have the actual image we are working on. If that image is scaled in a way that distorts it, which is exactly what is done with an interpolation, then we don't know exactly what that image is, in terms of editing.



    If, for example, I see a small specular highlight that is distracting, and I want to zoom in on it, and modify, or even eliminate it, I can do so with confidence, knowing that what I'm seeing, is exactly what I'm working on.



    But if the image is interpolated so that every change in magnification changes that interpolation, which is exactly what will happen, then I can never be sure that my edit at that magnification will be correct at any other. That would be because the OS is creating pixels that don't exist in the actual file. Every minuscule change in the sizing changes the newly created pixels. I may be editing pixels that don't exist in the actual file!



    I would have to correct the file, and continually zoom in and out to see what effect my work had done. My edits would appear different at every change of size. That's unacceptable.
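A toy illustration of that concern, assuming simple linear interpolation (what a scaler does in 1-D; real scalers use bicubic or better, but the effect is the same): resampling the same row to two nearby sizes produces different on-screen values, most of which exist nowhere in the file.

```python
def resample_linear(row, new_len):
    """Linearly interpolate a 1-D row of pixel values to a new length."""
    out = []
    for i in range(new_len):
        pos = i * (len(row) - 1) / (new_len - 1)   # map back to source coords
        lo = int(pos)
        hi = min(lo + 1, len(row) - 1)
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

row = [0, 100, 50, 200]
at_7 = resample_linear(row, 7)   # second sample comes out as 50.0...
at_8 = resample_linear(row, 8)   # ...but here it's ~42.9: same file, new pixels
```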



    I would think that PS should have the ability to show us how the OS would interpolate the image as well. But that should be our choice, not automatic.



    Sharpening, for example, is very critical. When we work on images that will be repurposed, we keep the original file that has been edited. We save several different versions for color. But we save it at the highest rez.



    What we do then is bring that rez to what we need for that purpose, and then sharpen. We do that for every final size and rez at which the file will be used. We do it for every different technology the files will be used with as well.



    So, how do I sharpen a file if PS has already interpolated it for me in a way that I have no control over? That may be ok for internet use where the file is of low rez anyway, and no one expects great quality. But it's impossible for print, or other critical work.



    How do I spec a rez for a file and see exactly what that spec looks like? I can't. PS will change the rez as I order, but the way it looks on the screen may not replicate that actual rez.



    I'd like to continue this, but I have to leave for several hours.
  • Reply 79 of 184
    shetline Posts: 4,695member
    Quote:
    Originally Posted by melgross


    I couldn't see any advantage to the IBM monitor that was 200ppi.



    The detail was far too small to appreciate.



    I suppose it all depends what you mean by "appreciate". Maybe we both appreciate different things.



    Quote:

    When you get to text sizes of about 6 points on the screen, it's too small for most people to read anyway without getting right up to the glass. The same thing is true for print. My monitor shows 6 points just fine, though some serifs are lost at those small sizes. But I wouldn't want to have to read it. Anything smaller than 8 points is considered to be too small for readability, though not legibility, which is different. 8 points is quite sharp right now.



    150 ppi would be a big step over 110. I can't see 200 being of real use.



    I'm not just talking about mere legibility, I'm talking about visual quality and visual character.



    Obviously 12-point fonts are quite easy to read on a computer screen. No issues of basic legibility there. But compare text displayed using a 12-point font on a typical computer display to the same text, in the same font, at exactly the same physical size, printed in a moderately decent quality magazine. They don't look anything alike. The screen font is either chunkier looking than the print font, or blurrier looking if anti-aliasing is switched on, or a bit of both.



    Get resolutions up to 200 dpi and that difference will be greatly diminished. Of course, in the print world, even 200 dpi is crap. But print resolution specifications often relate to binary pixels, pixels without any individual shading, each pixel being a spot where, in on/off fashion, a splash of ink or toner will or will not be applied. When you've got 256 shades for each R, G and B component of every pixel, 200 dpi can look pretty damned good.



    I don't think 150 dpi is quite there. What you really want is for the individual pixels to essentially disappear, to be just below the threshold of perception, so that scaling artifacts essentially disappear.
  • Reply 80 of 184
    shetline Posts: 4,695member
    Quote:
    Originally Posted by melgross


    If, for example, I see a small specular highlight that is distracting, and I want to zoom in on it and modify, or even eliminate it, I can do so with confidence, knowing that what I'm seeing is exactly what I'm working on.



    But if the image is interpolated so that every change in magnification changes that interpolation, which is exactly what will happen, then I can never be sure that my edit at that magnification will be correct at any other. That would be because the OS is creating pixels that don't exist in the actual file. Every minuscule change in the sizing changes the newly created pixels. I may be editing pixels that don't exist in the actual file!



    Huh!? What!?



    Current Photoshop... I zoom to 400%. Each image pixel now occupies 16 display pixels, a 4x4 square.



    Photoshop does not let me edit each of those 16 pixels. I can only edit the underlying single pixel that this big patch of pixels represents. They all change at once, no matter where inside the jumbo pixel I click. I suffer from no confusion or uncertainty about which pixel is being edited.
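    That one-to-many mapping at an integer zoom is just integer division. A minimal sketch (the coordinates and zoom factor are illustrative, not any real editor's API): every display pixel inside a 4x4 block resolves to the same single source pixel, so a click anywhere in the block edits that one pixel.

```python
# Sketch: at 400% zoom, display coordinates map back to one source pixel
# per 4x4 block, so there is no ambiguity about which pixel gets edited.

def display_to_source(dx, dy, zoom=4):
    """Map a display coordinate to the underlying source-pixel coordinate."""
    return dx // zoom, dy // zoom

# All 16 display pixels of one block resolve to the same source pixel:
block = {display_to_source(dx, dy) for dx in range(4, 8) for dy in range(8, 12)}
print(block)  # {(1, 2)}
```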



    Let's zoom to 450%. Now what do we do? We can either have a mix of 4x4, 4x5, 5x4, and 5x5 pixels on the display, or we can have a smoothed-out representation where every pixel's color is exactly represented by at least a 3x3 square of pixels, with edge display pixels being blended in varying proportions with the contributions of neighboring source-image pixels.



    So what? I'm not going to get confused by this and think I can edit those blended display pixels individually without changing the whole underlying source-image pixel, am I?
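    The mix of block sizes at a fractional zoom can also be sketched directly. Assuming source-pixel boundaries land at multiples of the scale factor and display pixels snap to the floor of each boundary (one plausible convention, not necessarily Photoshop's), a 450% zoom yields exactly the alternating 4- and 5-pixel-wide columns described above.

```python
# Sketch: at 450%, source-pixel edges fall at multiples of 4.5 display
# pixels, so on-screen pixel widths alternate between 4 and 5 columns
# (the "mix of 4x4, 4x5, 5x4, and 5x5" blocks).

def block_widths(n_pixels, scale):
    """Display-column count for each of n source pixels, flooring each edge."""
    edges = [int(i * scale) for i in range(n_pixels + 1)]
    return [edges[i + 1] - edges[i] for i in range(n_pixels)]

print(block_widths(4, 4.5))  # [4, 5, 4, 5]
```

    Whatever the rounding convention, the widths sum to the floor of the total span, and every source pixel still owns a contiguous block, which is why there is no confusion about which pixel is being edited.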



    Instead of talking in terms of percentages, however, let's imagine the OS is doing the scaling, not Photoshop; we have a 200 dpi display, and we tell the OS to display our image so that each pixel is 1/6" square. In Photoshop terms, this would be 3333.33% magnification, with each source pixel mapping to a non-integer 33 1/3 display pixels on a side.



    Where's the problem? Each pixel's color gets exactly mapped to at least a 32x32 chunk of the display, with an essentially invisibly small single-pixel fringe of blended-color pixels. Where's the confusion?
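    Checking the arithmetic in the example: a 1/6-inch pixel on a 200 dpi display spans 200/6 ≈ 33.33 display pixels, i.e. a 3333.33% magnification, and a span of that length always fully covers at least 32 whole display pixels per axis, which is where the "at least 32x32" figure comes from.

```python
# Arithmetic behind the 200 dpi / 1/6-inch example above.

dpi = 200
pixel_inches = 1 / 6
span = dpi * pixel_inches      # display pixels per source pixel: 33.33...
print(span)                    # 33.333...
print(span * 100)              # magnification percent: 3333.33...

# A window of length s placed anywhere on a pixel grid fully covers at
# least floor(s) - 1 whole pixels, so the exactly-colored core is >= 32x32.
print(int(span) - 1)           # 32
```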