NYC restaurateur Danny Meyer looks to upend hospitality industry with Apple Watch


Comments

  • Reply 21 of 22
    auxio said:

    "ResyOS"? An app isn't an operating system. LOL.
    Not picking on you, but here's a web server running on an iPhone 6+.   
    And again, a web server is not an operating system.  It's an application which relies on the underlying operating system to provide it the computing resources it needs to communicate with other machines using HTTP (access to networking hardware, memory to store network and state data, CPU time to execute, etc).

    What is an operating system if not a specialized app to maximize the use of hardware resources?
    Aside from the stated goal of 'maximizing' use of resources (which may or may not be the actual goal of the OS -- it all depends on what the intended purpose of a system is), I'd say it's simply using different terminology for the same thing.

    For modern personal computer systems (desktop computers, laptops, phones, tablets, etc), it's more common to view the OS as a layer in an overall system, and applications as a layer which sit above the OS.  But for special purpose systems like the embedded systems in cars, the OS very much is just an application (those layers are combined).


    Agree on both points!


    What would you call a computer system that doesn't have an operating system?  

    Unless the function that system is designed to perform is purely handled by the hardware components it's built from (e.g. driven solely by input to those components, or similar), I'd call it a bunch of computer components with no function.

    I don't think I'd go quite that far -- many of the things that an OS does (scheduling, priorities, data access, interrupt handling, etc.) were performed externally to the hardware, by humans.

    Once the program is booted, obviously it's sending commands directly to the hardware to perform operations, so, you're right, it's acting like its own operating system at that point (manipulating/managing hardware resources directly).  It's an operating system specialized to the particular function your program is designed to provide.

    The 1401 was a serial, character, variable-length instruction machine.  Instruction length was denoted by an extra bit in each memory location -- called a word mark.  Pressing the Load button caused:
    1. the hardware to read a single card into a fixed area of memory
    2. branch to the first position of the area (representing column 1 of the card)

    The program instruction in column 1 would set a word mark to denote the beginning of the next instruction, and so on; at some point an instruction would save the program content stored in the card area to another area of memory... rinse and repeat until the last of the program content was saved.

    Then, the program would branch to the first location in the saved program area...  literally pulling itself up by the bootstraps!   God help you if you dropped the deck of program cards!
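
    Just to make the mechanics concrete, here's a toy sketch of that load sequence in Swift. It's purely illustrative -- the card format, memory size, and addresses are invented, not real 1401 semantics:

    ```swift
    // Toy model of a card-by-card bootstrap (not real 1401 behavior).
    struct Card { let columns: [Character] }                 // one 80-column card

    var memory = [Character](repeating: " ", count: 4000)    // imaginary store
    let readArea = 0..<80                                     // fixed card read area
    var saveAddress = 80                                      // where the program accumulates

    func pressLoadButton(deck: [Card]) {
        for card in deck {
            // 1. the hardware reads a single card into the fixed read area
            for (i, column) in card.columns.enumerated() { memory[i] = column }
            // 2. control branches to column 1; the instructions on the card
            //    copy the card's payload out of the read area into the save area
            for i in readArea { memory[saveAddress + i] = memory[i] }
            saveAddress += 80
        }
        // finally, branch to the start of the saved program area
        print("branching to address 80 -- the program bootstrapped itself")
    }
    ```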

    A modern server, such as the Swift Kitura web server, handles most of the processing and I/O programmed within the server app. It does use APIs from the OS's SDK, but it assumes responsibility for starting/stopping threads/processes, communicating with services, accessing external resources, scheduling/queueing/handling requests and responses, etc.  If little else is running on the hardware, the server app kind of assumes the role of an operating system -- but at a higher level of abstraction.   This is especially true when the hardware runs multiple server apps -- each in its own virtual machine -- perhaps each specializing in one or more of: HTTP serving; database access; file I/O; remote services access; etc.
    Well, any fairly large application needs to establish its own structure for use of system resources (where data is cached, how large the cache(s) can grow, how many threads/processes/concurrent operations to use, etc).  So if you're going to use that as the definition of an OS, then most large applications would qualify as an OS.
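
    As a throwaway illustration of an app setting its own resource policy (NSCache is standard Foundation API, but the limits here are made-up numbers, not recommendations):

    ```swift
    import Foundation

    // An application deciding for itself where data is cached and how big
    // the cache may grow -- policy the OS doesn't dictate.
    let thumbnailCache = NSCache<NSString, NSData>()
    thumbnailCache.countLimit = 200                      // at most 200 entries
    thumbnailCache.totalCostLimit = 32 * 1024 * 1024     // roughly a 32 MB budget

    // Store and fetch like a dictionary; eviction is the cache's problem.
    thumbnailCache.setObject(NSData(), forKey: "placeholder")
    let cached = thumbnailCache.object(forKey: "placeholder")
    ```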


    I kind of agree!  But a server like Kitura is a fairly small app -- 3.3 MB when idle -- yet quite powerful.
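
    For a sense of how little app code it takes, here's roughly the minimal Kitura server, adapted from Kitura's getting-started examples (the port and route are placeholders):

    ```swift
    import Kitura

    // One router, one GET route; Kitura handles the threads, sockets,
    // and request queueing behind the scenes.
    let router = Router()

    router.get("/hello") { request, response, next in
        response.send("Hello from Kitura")
        next()
    }

    // Start an HTTP server on port 8080 and run the event loop.
    Kitura.addHTTPServer(onPort: 8080, with: router)
    Kitura.run()
    ```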

    However, if the application is still subject to the structure of a lower-level hardware resource management system (i.e. can't directly send instructions to the CPU and have them executed immediately, can't directly read/write to any valid memory address, etc) then it's not an OS by the definition I know.  Your earlier web server example is most certainly subject to the resource management done by the underlying OS.


    Yes!  But, just as the OS provides a level of abstraction over the hardware -- a capable server can provide a higher level of abstraction over the OS.


    One final thought:  Apple considers Swift to be everything from a low-level systems programming language to a high-level application and scripting language.  It appears that IBM plans to replace all the disparate server-side stuff -- web app server, app server, packaging, scripting, etc (all written in different languages, with different interfaces) -- with equivalents written in Swift, or render them unnecessary because of the power of Swift.
    Well, obviously Apple can do with Swift code what they like since they have full control over the operating system.  So they could, in theory, create some structure whereby Swift code bypasses the OS APIs and kernel and directly controls the hardware.  However, for everyone else using Swift, it's an application programming language until such time as Apple says otherwise (provides that structure to external developers).  I can imagine that they might open up kext (kernel extension/system) development using Swift to external developers, but there's no way they'd open up direct access to their hardware (except possibly to select developers they work closely with -- thinking of hardware component creators).


    A stated Apple/IBM goal is to make it easy and efficient to use the same programming language, Swift, client-side and server-side -- and to reuse the same code where applicable.

    If they are successful, with this level of abstraction, most application developers will neither know nor care whether the hardware, the OS, or the higher abstraction is providing the capability.
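
    To make "reuse the same code" concrete, here's the kind of value type and business rule that could compile unchanged in both the iOS app and the Kitura server (the type and fields are invented for illustration):

    ```swift
    // A hypothetical model shared verbatim between client and server.
    struct Reservation {
        let guestName: String
        let partySize: Int
    }

    // The same validation rule enforced on both sides.
    func isValid(_ r: Reservation) -> Bool {
        return !r.guestName.isEmpty && (1...12).contains(r.partySize)
    }
    ```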

    In the Kitura example, IBM has a macOS tool called IBM Cloud Tools for Swift, or ICT (which IBM pronounces "Iced Tea"), which assembles a set of GitHub components into a project (Kitura) and stores it on your Mac as Swift Xcode source.

    You can run it locally if that satisfies your needs.
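
    Under the hood, that assembly of GitHub components is ordinary Swift Package Manager dependency resolution; here's a sketch of the kind of manifest involved (the names and versions are placeholders, not what ICT actually generates):

    ```swift
    // swift-tools-version:4.0
    import PackageDescription

    // Hypothetical manifest pulling Kitura in from GitHub.
    let package = Package(
        name: "MyServer",
        dependencies: [
            .package(url: "https://github.com/IBM-Swift/Kitura.git", from: "2.0.0")
        ],
        targets: [
            .target(name: "MyServer", dependencies: ["Kitura"])
        ]
    )
    ```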

    It's a step up for developers who can:
    • use the Xcode simulators and Tools for testing and debugging
    • run a client-side app on an iDevice or a Mac -- side by side with the server-side app
    • often share common or complementary code in a single, powerful programming language

    If you wish, with one click you can deploy it to IBM's cloud.

    And you can run as many versions of each as you wish -- on the Mac, the Cloud or both!

    Whew!  All that still begs the question: Who's Got the Data?  Do you have the Data?  I had the Data last week, but I don't have it now!

    Hey Siri: "Who's got the Data?"


    BTW, you can try ICT and Kitura on the Mac for free -- you don't need an IBM Bluemix (Cloud) account.

    http://cloudtools.bluemix.net/?snip=true

    edited September 2016
  • Reply 22 of 22
    auxio Posts: 2,763 member
    What would you call a computer system that doesn't have an operating system?  

    Unless the function that system is designed to perform is purely handled by the hardware components it's built from (e.g. driven solely by input to those components, or similar), I'd call it a bunch of computer components with no function.

    I don't think I'd go quite that far -- many of the things that an OS does (scheduling, priorities, data access, interrupt handling, etc.) were performed externally to the hardware, by humans.

    That's what I meant when I said 'driven solely by input to those components'.  That input could be electrical signals or human interaction (which is converted to electrical signals).


    The 1401 was a serial, character, variable-length instruction machine.  Instruction length was denoted by an extra bit in each memory location -- called a word mark.  Pressing the Load button caused:
    1. the hardware to read a single card into a fixed area of memory
    2. branch to the first position of the area (representing column 1 of the card)
    The program instruction in column 1 would set a word mark to denote the beginning of the next instruction, and so on; at some point an instruction would save the program content stored in the card area to another area of memory... rinse and repeat until the last of the program content was saved.

    Then, the program would branch to the first location in the saved program area...  literally pulling itself up by the bootstraps!   God help you if you dropped the deck of program cards!

    And here I thought my first experiences with loading computer programs via cassette tapes (which could take 30-60 minutes) were archaic.  How far we have come...


    I kind of agree!  But a server like Kitura is a fairly small app -- 3.3 MB when idle -- yet quite powerful.

    Fair enough that it doesn't necessarily have to be a large application -- just an application which needs to create a layer of resource management abstraction.

    I'm generally designing applications where the purpose is to allow an end-user to accomplish a task (not enable other developers to build something on top of them), and so I only tend to create such abstractions when I have to deal with system resource limitations.  And even then, I tend to look for things the SDKs I'm using provide (e.g. scoped autorelease pools as a simple way to manage memory usage in Obj-C code written prior to ARC) before creating my own structures.
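
    For context, the Swift version of that scoped-pool pattern looks roughly like this (a small sketch; the file list and byte counting are invented):

    ```swift
    import Foundation

    // Hypothetical batch of files to churn through.
    let fileURLs = (0..<1000).map { URL(fileURLWithPath: "/tmp/item\($0).dat") }

    var totalBytes = 0
    for url in fileURLs {
        // Each iteration gets its own pool, so temporary Foundation objects
        // are released per pass instead of piling up until the loop ends.
        autoreleasepool {
            if let data = try? Data(contentsOf: url) {
                totalBytes += data.count
            }
        }
    }
    print("read \(totalBytes) bytes")
    ```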


    Yes!  But, just as the OS provides a level of abstraction over the hardware -- a capable server can provide a higher level of abstraction over the OS.

    True.  Which is pretty much the whole history of computing science: building successive layers of abstraction which, in turn, enable more complex and powerful systems to be created.


    A stated Apple/IBM goal is to make it easy and efficient to use the same programming language, Swift, client-side and server-side -- and to reuse the same code where applicable.

    If they are successful, with this level of abstraction, most application developers will neither know nor care whether the hardware, the OS, or the higher abstraction is providing the capability.

    Until something goes wrong.  Then tracking down where the problem occurred can potentially be a nightmare.  I've seen firsthand the lengths cloud system support teams have to go to in order to track down the root cause of problems in cloud-based applications.  But I agree that being able to easily build applications which function the same way and access data everywhere is quite powerful.


    In the Kitura example, IBM has a macOS tool called IBM Cloud Tools for Swift, or ICT (which IBM pronounces "Iced Tea"), which assembles a set of GitHub components into a project (Kitura) and stores it on your Mac as Swift Xcode source.

    . . .

    If you wish, with one click you can deploy it to IBM's cloud.

    And you can run as many versions of each as you wish -- on the Mac, the Cloud or both!

    Whew!  All that still begs the question: Who's Got the Data?  Do you have the Data?  I had the Data last week, but I don't have it now!

    Hey Siri: "Who's got the Data?"

    And when using it on the subway: "Siri not available.  Connect to the Internet".

    Oops, I didn't cache all of the data/logic I need locally -- guess I can't use my app until I hit the open spots on the subway line.  And definitely not in remote areas.  What about schools with no budget for infrastructure/maintenance (spotty Wifi at best)?

    I get that in the future, there'll be internet access everywhere (the future that has been talked about since the advent of computer networking).  And probably in Silicon Valley that's already a reality for the most part, but there's a huge cost problem to be overcome in many areas of the world.  Simply put: will there be enough paid usage in those areas to offset the cost of establishing and maintaining that service (or does the government have enough budget to fund it)?

    And sure, it is possible to design some cloud-based applications with such scenarios in mind (offline use).  But the vast majority of cloud-based apps I've used simply stop working when the data stops (or only work in limited ways).  Perhaps Kitura helps solve those problems more easily, I'd have to take a look and see.

    EDIT: Looked at Kitura -- I don't really see how it can support devices running Android and Windows well.  Seems like the app would need to run entirely on the server side and be accessed via a browser.  Which is fine for some types of apps, but not all.

    edited September 2016