Apple pushes devs to deliver 64-bit support with new Snow Leopard beta


Comments

  • Reply 121 of 127
    Quote:
    Originally Posted by melgross View Post


    What, you don't think that's been thought of before?



    It's no solution.

    If you knew something about filming, you would know that you can't stop every time you reach the end of a file.



    There's such a thing called "continuity".



    [...]



    Um, well... Melgross-san, from my reading you sound a little worked up over this, and your answer doesn't make much sense to me. I will try to elaborate, since you seem to have missed my point.



    The other thing is that you sound rather accusatory with `If you knew something about filming...'



    Tangentially: I will freely admit I don't know very much about filming; I don't work in the movie industry, after all, and the closest I get is recording clips on my PowerShot G9. Given that it only shoots low-definition 4:3, I haven't hit file-size issues.



    But whilst I may not know much about filming, I do know a fair amount about video codecs, the various ways of storing colour, and the compression methods they use. What is more, I would love for you to educate me (and other readers) on the pitfalls my solution would run into. I prefer to read and write explanations rather than bare assertions that assume the reader already knows why they are true...





    The original problem, as I currently understand it, was not being able to shoot a movie over a certain length because you run into the maximum file size that FAT32 supports. If that isn't the case, then our entire discussion of the subject (and the rest of this post) is moot, since we have had our wires crossed from reply #1 - in which case I apologise for misreading your post, and please disregard the rest of this one.







    Carrying on...



    My suggestion was (direct quote):



    `One workaround for large files on Fat32 is simply to save multiple smaller files.'



    I was not advocating that this is what should be done, nor that it was the only thing that could be done. I was merely pointing out one particular workaround for the limitation.



    As an aside, it seems silly to me to have a camera that will shoot 1080p HD (by your account) yet does nothing about this - either by writing to a different type of file system or by working around FAT32's limitations.





    So my envisaged solution was that, as the camera writes to disk, the component that actually does the writing splits the output into multiple files as it goes. Apart from the caveat of looking back to indexes and whatever reference frames the compression codec requires [a further aside: I wonder why such a professional camera would want to shoot HD straight to such a lossy, hard-to-post-edit format, assuming the maximum file size weren't an issue], the actual data is essentially written out and forgotten. Once you are far enough through writing (and one assumes the codec's look-back window fits within the available 4 GB), you simply cut, flush what is no longer needed to the current file, and start the next one. If your look-back window is greater than 4 GB, then you have a problem (and I suggest you want a different codec).
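
    To make that concrete, here is a minimal sketch in C of the kind of writer I mean (the names and the `write_video' entry point are mine and purely hypothetical; real camera firmware would obviously be structured differently):

        #include <stdio.h>

        /* Roll over to a new file well before FAT32's 4 GB ceiling. */
        #define CHUNK_LIMIT (4000ULL * 1000 * 1000)

        static FILE *chunk;                 /* current output chunk       */
        static unsigned long long used;     /* bytes written to it so far */
        static int seq;                     /* chunk sequence number      */

        /* Close the current chunk (if any) and open clip_NNN.bin. */
        static int next_chunk(void)
        {
            char name[32];
            if (chunk) fclose(chunk);
            snprintf(name, sizeof name, "clip_%03d.bin", seq++);
            chunk = fopen(name, "wb");
            used = 0;
            return chunk != NULL;
        }

        /* Called with each compressed buffer as it leaves the encoder.
         * Recording never pauses; we silently roll to a new file. */
        int write_video(const unsigned char *buf, size_t len)
        {
            if (!chunk && !next_chunk()) return -1;
            if (used + len > CHUNK_LIMIT && !next_chunk()) return -1;
            if (fwrite(buf, 1, len, chunk) != len) return -1;
            used += len;
            return 0;
        }

    The point being that the encoder sitting above this layer never sees the chunk boundaries at all.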



    Admittedly you do not get one nice single file on disk at the end; hence my saying you would then need a re-stitch tool for the end user (and probably in the camera too) to stream the information back.



    You say: ``There's such a thing called "continuity"'', and you are of course right; there is indeed such a thing as `continuity'. I presume you mean that this `continuity' is the reason the above wouldn't work.



    The actual question is: can you encapsulate the file I/O to the point where people continue to see `continuity'? The answer to that question is definitely `yes', given enough smarts (whether by the above method or another).



    However, as we have all already said, it is probably less effort simply to change the underlying file system to something more appropriate for the size of files now being generated.



    ``Fine, YOU tell the manufacturers to not use the only current file format that's standard there.''



    Now, I find that a bit of a non sequitur. It isn't my place to say that, and I didn't say that we should. I was suggesting a workaround for the current limitations. As I've already said, I'm aware that it is a standard of sorts. However, it is not the `only' format used on flash devices. Also, the layout of the data on top of the file system varies from camera to camera, so it is hardly the case that a given flash card will always work when taken straight out of one camera and put into another without reformatting. There's no particular reason to stay with it other than its being easier (cheaper) to do so.



    ``It's not as though they aren't aware of the problem''



    I'm sure they are; I wasn't implying that they weren't.



    ``We hope to see a solution around the end of the year''



    Who are `we'? Everyone likes to see solutions though...



    ``But it will have to come in on a new generation of equipment''



    Well, it would be weird in this day and age for manufacturers not to use it as a way to sell more product.
  • Reply 122 of 127
    melgross Posts: 33,599 member
    Quote:
    Originally Posted by MisatosAngel View Post


    [MisatosAngel's post, quoted in full above.]



    OK, I'll address your entire post here for the sake of clarity.



    The first thing we have to remember is that we're just talking about still cameras that also have the ability to shoot video at varying IQ levels.



    This FAT32 problem affects all of these cameras, from the very cheapest compacts all the way up to the 5D Mk II.



    The reason for this is clear. These are still cameras. The industry standardized on FAT32 as an improvement over the 16-bit (FAT16) formatting used before. As such, it isn't ideal for video.



    In 1080p at the Canon's high IQ setting, that 4 GB cap gives about a 12-minute segment. That's simply not enough for many purposes.
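
    (Sanity-checking that figure: 4 GB is roughly 3.2 x 10^10 bits, and at a bitrate somewhere around 40-45 Mbit/s - my rough assumption for the 1080p mode - that fills in 700-800 seconds, i.e. about 12-13 minutes.)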



    While we would shoot even shorter segments for most scripted material, where we can predict the beginning and end of a particular sequence, that's not true of much other work.



    Many times the camera simply has to roll. What happens when we get to the end of the segment? That's a major problem.



    As I mentioned, the companies are well aware of this problem and are working on it. It's expected that they will move to 64-bit formatting by 2010. Of course, that leaves all the current cameras out.



    Yes, I get upset at some things. It always seems so simple to suggest the obvious answer. But there are tens of billions riding on these answers, and the companies whose fates depend on them are spending vast amounts of money to solve the problems.



    It's not as easy as suggesting we shoot in shorter segments. And as I said, it's not as though we haven't figured this out. We have no choice, after all.
  • Reply 123 of 127
    Ah, I see we still have our wires crossed - my apologies.



    To clarify: I was never suggesting that you (the end user) shoot in shorter segments - I was suggesting that the camera's back-end soft(firm)ware put a greater abstraction between the file system and the incoming video data.



    In other words, you keep `shooting', and the back end (invisibly to the part of the camera dumping raw data out of the sensor and then compressing it) splits the compressed output stream, as it arrives, into parts small enough to fit within the 4 GB limit.



    Then, when you want to replay on the camera, a stitch thread (if you like - probably not literally a thread) glues the file back together from the parts on the volume into a stream sent to the LCD screen (or attached video output device). Likewise, when you pull the `file'(s) onto your desktop computer, a stitch tool glues them back into a single cohesive file.
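
    At the byte level, that desktop-side stitch tool is just ordered concatenation. A minimal C sketch, again with hypothetical file names (a real tool would also have to repair the container's headers and indexes, not just copy bytes):

        #include <stdio.h>

        /* Glue clip_000.bin, clip_001.bin, ... back into one file.
         * Byte-level only: container metadata repair is out of scope. */
        int stitch(const char *outname, int nchunks)
        {
            FILE *out = fopen(outname, "wb");
            if (!out) return -1;
            for (int i = 0; i < nchunks; i++) {
                char name[32];
                unsigned char buf[1 << 16];
                size_t n;
                snprintf(name, sizeof name, "clip_%03d.bin", i);
                FILE *in = fopen(name, "rb");
                if (!in) { fclose(out); return -1; }
                while ((n = fread(buf, 1, sizeof buf, in)) > 0)
                    fwrite(buf, 1, n, out);   /* append chunk to output */
                fclose(in);
            }
            fclose(out);
            return 0;
        }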



    I also apologise for giving the impression that it would be `easy' (especially by using the word `simply') - I was merely suggesting it was possible. The ultimate answer (as we both note) is to move to a less lame file system, and that may actually be easier in the long run.
  • Reply 124 of 127
    melgross Posts: 33,599 member
    Quote:
    Originally Posted by MisatosAngel View Post


    [MisatosAngel's post, quoted in full above.]



    It would be very nice if your suggestion could work. But it can't. These are real-time camera OSes, and they are pretty simple. They don't have the ability to do anything other than what they were designed to do. Doing as you suggest, even if it were possible (and I've been assured it is not), would mean entirely rewriting the OSes that each camera manufacturer uses. There are dozens of these in current use.



    Video is just a small part of what they have to control, an additional feature, not a mainstay.



    Camcorders work entirely differently. They don't treat their tape or solid-state memory as a computer storage medium, as still cameras must. Therefore, they can record for as long as required.



    But the downside is that you can't take the storage out of a camcorder and put it on your desktop to browse, like you can with a flash card from a still camera.



    This is also why most camcorder manufacturers use a flash card for stills, and treat it the way still cameras do.
  • Reply 125 of 127
    dfiler Posts: 3,420 member
    There is definitely a problem with FAT32 and file sizes. Melgross's camera scenario is a perfect example.



    But 64 bits isn't necessarily the solution. Plenty of file systems allow larger files while running on 32-bit code. Granted, if I were a camera manufacturer looking to ditch FAT32, I'd probably want to switch to the file system with the most usable life in front of it.



    From a more theoretical perspective, I'd say that NTFS is at fault here, not FAT32. FAT32 was fine in its day. But NTFS comes with so many strings attached (many of them legal) that it hasn't caught on outside of Microsoft products. We've stuck with an old technology not so much on its merits as because the Microsoft-approved replacement hasn't panned out for third-party use.



    It seems to me that cameras are a perfect example of the type of device that might benefit from staying 32-bit for the immediate future. Given the limited horsepower these devices have to work with, the lower overhead of 32-bit computing is quite appealing. I, for one, would love it if my cameras were quicker.
  • Reply 126 of 127
    @melgross - thanks for your reply. Yes, the proliferation of little RTOSes in the embedded space is interesting in itself. One wonders whether, in a few years, they will all move to running a cut-down Linux, Symbian [aka Nokia-OS these days], WinCE or Darwin... Of course, that would involve them moving to 32-bit, which a lot of them aren't ready for yet.



    Quote:
    Originally Posted by dfiler View Post


    [dfiler's post, quoted in full above.]



    There is a computer-science saying that every problem can be solved with another level of indirection. The corollary is that everything can be sped up by removing one.



    Personally, I find it very interesting how things move in the embedded space, where there is a continual fight between offering the user a fully-fledged, desktop-OS-style experience and at the same time trying to provide as long a battery life as possible...



    Should the file system go ZFS (or some other), or should it embrace another proprietary Microsoft `standard'? What provides just enough to satisfy current user demand whilst still being the cheapest (and most power-efficient) to run with? Having now started to get bitten by FAT32's scalability limits, will manufacturers try to look a little longer term? Who knows...



    The (high-power) embedded space has tended, over the last six years, to lag behind the laptop space by roughly three years. Therefore (if this continues to hold true) we can expect your average smartphone or similar to be, in three years, where our laptops are today. Of course, laptops are only just now going 64-bit. Interestingly, the embedded space has the advantage of always playing catch-up: it can produce the same sorts of hardware three years later running at vastly less power than the chips of three years before. This is not just from taking advantage of die shrinks, as alluded to by Hiro, but also because embedded chipsets are generally designed with low power in mind rather than (in Intel's case) high performance.



    As has been stated before, there's absolutely no need for a 64-bit-capable CPU to run a decent file system. Whilst I'm sure embedded will go 64-bit, at least in some areas, in the not-too-distant future, I'll be interested to see whether `truly' embedded devices like cameras jump on that bandwagon.
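
    (To make that concrete, a minimal sketch, assuming a POSIX-ish C library: file offsets are a software matter, so a plain 32-bit CPU handles 64-bit offsets fine - the 4 GB ceiling belongs to FAT32, not to the processor's word size.)

        /* Must come before any #include so off_t is 64 bits wide. */
        #define _FILE_OFFSET_BITS 64
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("big.dat", "wb");
            if (!f) return 1;
            /* Seek past the 4 GB mark and write a byte there: fine on a
             * 32-bit CPU, as long as the file system itself allows it. */
            if (fseeko(f, 5LL * 1024 * 1024 * 1024, SEEK_SET) != 0) return 1;
            fputc(0, f);
            fclose(f);
            return 0;
        }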



    After all, when you look at an `embedded' device such as the iPhone, it really challenges you to define what `embedded' means...
  • Reply 127 of 127
    melgross Posts: 33,599 member
    Quote:
    Originally Posted by MisatosAngel View Post


    [MisatosAngel's post, quoted in full above.]



    Devices like cameras are very different from devices such as the iPhone/iPod touch.



    The camera isn't intended to be a computer in its own right, and will likely not run any other apps, at least in the near term. Because of that, the OSes are much too simple to do much of anything else. I used to be able to reprogram the RTOS in my E-6 film processor, but obviously not with these. The only reason they even went to 32 bits was the addressing problem as flash cards got bigger; otherwise they couldn't have used cards beyond 2 GB (FAT16 tops out at a 2 GB volume). We hit that limit a few years ago, and it took the manufacturers a couple of years to roll FAT32 out across their entire lines, from the cheapest compacts to the highest-end pro machines. Same thing with UDMA cards: even now, only a few of the better D-SLRs can use them effectively.



    Next year we might see SATA card interfaces for more speed and a better fit with the much larger flash cards coming out shortly.



    While it is possible to have files larger than 4 GB with a 32-bit OS, it's complex and isn't something that really works well. We won't see that in cameras. We will see 64-bit addressing for the cards, at least. It's really not such a big deal, you know; it's easy to do. The cameras, being self-contained systems, will suffer none of the problems that general-purpose computers may have with 64 bits, though even those are exaggerated.