It's called UNIX. It's software, not hardware, obviously.
UNIX was rewritten early on in a high-level language, C, making it one of the first operating systems not tied to one machine's assembly language. The idea was that C was a relatively easy language to write a compiler for, so when confronted with a new hardware platform you only needed a C compiler, and UNIX would run on it. This worked, and it worked very well. This revolution, by the way, started in the early 1970s. It's not new.
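To make the portability point concrete, here's a minimal sketch (mine, not anything from the original UNIX sources): the same C source, untouched, builds with whatever compiler a given platform ships.

/* portable.c -- nothing here depends on the CPU or the vendor; the
   C compiler and the C library absorb those differences. */
#include <stdio.h>

int main(void)
{
    printf("Hello from whatever hardware this compiler targets.\n");
    return 0;
}

Feed it to cc on a VAX, a Sun, or a PC and you get the same behaviour. Porting UNIX itself was obviously harder than this, but the compiler was the wedge that made it tractable.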
What you want is an agnostic platform, that is to say an agnostic interface, that abstracts away any implementation details. Software is better suited to that than hardware, because software can be adapted via a different runtime or a new C compiler, while hardware is what it is. There are two ways to do this. An operating system exists to abstract hardware into a consistent interface, which is why a BSD running on a Sun box looks and acts like a BSD running on a PC, which looks and acts like a BSD running on a Mac. This is application-level compatibility. The other way is document-level compatibility. The internet works on this principle: if a machine speaks TCP/IP, who cares how? It's set to talk to everything else on the internet. In this view, not only is the hardware abstracted away; the applications are too. This is (ideally) also how the Web works.
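As a sketch of the document/protocol side (my illustration, with example.com standing in for any host and port 80 for any service): a client that speaks TCP neither knows nor cares what hardware or operating system is on the other end.

/* tcp_hello.c -- a minimal TCP client using the POSIX socket API.
   The peer could be a mainframe, a PC, or a toaster, as long as it
   speaks TCP/IP. Hostname and port are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;     /* IPv4 or IPv6, we don't care */
    hints.ai_socktype = SOCK_STREAM;   /* TCP */

    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        fprintf(stderr, "connect failed\n");
        freeaddrinfo(res);
        return 1;
    }

    /* From here on it's just bytes over a stream. */
    const char *req = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    write(fd, req, strlen(req));

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }

    close(fd);
    freeaddrinfo(res);
    return 0;
}

The POSIX calls are the same whether this is compiled on a BSD, a Linux box, or a Mac, which is the application-level half of the same story.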
One problem is that application and system vendors don't like being abstracted away. It's not in Microsoft's interest to have Word save files in a document format that J. Random Word Processor can read and write, just as it was not in Netscape's interest to be content with implementing standard HTML. Every vendor wants to include some value-add that breaks compatibility, precisely to lock you into their solution.
Now, look at it from the other side: theories and ideals aside, there cannot be a clean separation of hardware and software without a least common denominator. If you want to move beyond that minimal functionality, you're on your own. Furthermore, the less you can assume, the more checks you have to make and the more dependencies you have to introduce, which necessarily and adversely affects the simplicity, the consistency, and the quality of the interface; that kind of code is very difficult to debug. So if you want the user to be guaranteed a larger and more consistent set of capabilities than the lowest common denominator offers, you have to wed the operating system to the hardware, and design each with the other in mind. In the consumer space, you have to take applications into account as well, because the end user isn't going to use ioctl() to write to their DVD burner. This is the biggest reason Apple builds AIOs and controls their hardware: they are guaranteeing both their end users and their software developers that certain capabilities will be there.
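Here's a hedged sketch of what "more checks, more dependencies" looks like in code. Every function name in it is made up for illustration; it isn't any real burner API.

/* A hypothetical illustration: the static helpers below are stubs
   standing in for whatever probing a real program would have to do. */
#include <stdio.h>

static int drive_present(void)           { return 1; }  /* stub */
static int driver_supports_burning(void) { return 0; }  /* stub */
static int media_is_writable(void)       { return 1; }  /* stub */

/* Targeting the lowest common denominator: every capability must be
   probed at run time, and every probe adds a branch to test and debug. */
static int burn_disc_lcd(const char *image)
{
    if (!drive_present()) {
        fprintf(stderr, "no burner found\n");
        return -1;
    }
    if (!driver_supports_burning()) {
        fprintf(stderr, "falling back to a vendor-specific path\n");
        return -1;
    }
    if (!media_is_writable()) {
        fprintf(stderr, "wrong kind of blank disc\n");
        return -1;
    }
    printf("burning %s via the generic path\n", image);
    return 0;
}

/* When OS and hardware are designed together, the capability can simply
   be assumed: one code path, one thing to debug. */
static int burn_disc_integrated(const char *image)
{
    printf("burning %s via the guaranteed platform facility\n", image);
    return 0;
}

int main(void)
{
    burn_disc_lcd("backup.iso");
    burn_disc_integrated("backup.iso");
    return 0;
}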
Now, how about file formats? Although the pervasiveness of modern networks has changed the rules somewhat, there is a basic and simple logic to the idea that a file format should be tailored to the application that uses it. Is it so bad that Photoshop uses its own PSD format to store Photoshop-specific data when the end result will almost invariably be exported to a TIFF, a JPEG, or a PDF? The problem with writing to a standard file format is that, first, you have to be sure it does what you need it to do, and second, you have to write filters that convert the data from the way your application represents it to the way the file represents it on disk; that's another layer of complexity, another hit on performance, another source of bugs. And that's not even getting into the fact that any standard of any complexity has at least one ambiguous area, so it's entirely possible for two applications to both implement a standard without seeing eye to eye.
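A small illustration of that filter layer, with both on-disk "formats" invented for the example:

/* Hypothetical example: an application's in-memory document and two
   ways to put it on disk. Neither format here is a real standard. */
#include <stdio.h>

struct doc {
    int    width, height;
    double zoom;            /* application-specific state */
    char   note[64];        /* ditto */
};

/* Native format: essentially a dump of what the application already
   holds in memory. Cheap and lossless, but nobody else can read it
   (and it's tied to this machine's struct layout). */
static int save_native(const struct doc *d, const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t ok = fwrite(d, sizeof *d, 1, f);
    fclose(f);
    return ok == 1 ? 0 : -1;
}

/* "Standard" format: a translation layer. Anything the standard has no
   slot for (zoom, note) is dropped or approximated, and the mapping
   itself is extra code to write, test, and debug. */
static int export_standard(const struct doc *d, const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fprintf(f, "width=%d\nheight=%d\n", d->width, d->height);
    /* zoom and note have nowhere to go in this format */
    fclose(f);
    return 0;
}

int main(void)
{
    struct doc d = { 800, 600, 1.5, "draft" };
    save_native(&d, "drawing.native");
    export_standard(&d, "drawing.std");
    return 0;
}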
And so here we all are.
Now, I'm going to scoot this over to General Discussion...
Comments
Your explanation is awesome! Thanks for the education. It is interesting to see things, not as I as a layman think they may be, but from the perspective of a programmer or designer.
Thanks much,
-MacintoshMan