Apple's Swift Playgrounds 1.5 will teach kids how to write code for Lego, drones & robots
Apple on Thursday revealed that with a version 1.5 update on June 5 -- the beginning of WWDC 2017 -- Swift Playgrounds will include new material teaching people how to write code for drones, robots, and similar electronics.

The company is working with several electronics makers to incorporate the content, namely Lego, Parrot, Sphero, Ubtech, Wonder Workshop, and Skoog. In-app projects will, for example, guide users through using Sphero's SPRK+ robot in a real-world version of "Pong."
The iPad app will also serve as a blank slate for more advanced coders, letting people build custom controls. People flying a Parrot drone will be able to specify yaw, pitch, and roll changes.
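To give a sense of what building a custom control could look like, here is a minimal Swift sketch. The Drone protocol and DroneAttitude type are stand-ins invented for this example, not Apple's or Parrot's actual API.

    import Foundation

    // Hypothetical stand-in types for this sketch only.
    struct DroneAttitude {
        var yaw: Double    // rotation around the vertical axis, in degrees
        var pitch: Double  // nose up or down, in degrees
        var roll: Double   // tilt left or right, in degrees
    }

    protocol Drone {
        func takeOff()
        func apply(_ attitude: DroneAttitude, forSeconds duration: TimeInterval)
        func land()
    }

    // Fly a rough square: push forward, then yaw 90 degrees, four times over.
    func flySquare(with drone: Drone) {
        drone.takeOff()
        for _ in 0..<4 {
            drone.apply(DroneAttitude(yaw: 0, pitch: 10, roll: 0), forSeconds: 2)
            drone.apply(DroneAttitude(yaw: 90, pitch: 0, roll: 0), forSeconds: 1)
        }
        drone.land()
    }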
Swift Playgrounds has long been able to communicate with external hardware through Bluetooth, and some parties like Wonder Workshop have already been taking advantage of this. The 1.5 update should, however, make the feature more prominent.
Swift is an open-source programming language that works across Apple platforms. The company has been eager to foster adoption by both developers and students -- on May 24, it introduced a free year-long course available through the iBooks Store, which will even be taught at some U.S. colleges and high schools starting this fall.
Apple is likely unveiling the Playgrounds update now to clear the decks ahead of WWDC. The company is expected to make its first hardware announcements there since 2013, including new iPads, refreshed MacBooks, and possibly a rumored Siri home speaker.

Comments
This is the part where you need to create an Arduino-type hobbyist device programmable from the iPad/iPhone/Mac in Swift, using a subset/superset of Xcode and iOS. It would be useful for hobbyists building motion-control and robotic systems, and for acquiring data.
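To illustrate, here is a rough sketch of what driving such a board from Swift could look like. The HobbyBoard protocol and its methods are made up for this example; no such product or API exists yet.

    import Foundation

    // Hypothetical board interface, invented for this sketch.
    protocol HobbyBoard {
        func setMotor(_ index: Int, speed: Double)   // speed in -1.0 ... 1.0
        func readAnalog(_ pin: Int) -> Double        // normalised 0.0 ... 1.0
    }

    // Simple data acquisition with crude proportional motor control:
    // sample a sensor, log the reading, and drive a motor from it.
    func runAcquisition(on board: HobbyBoard, samples: Int) -> [Double] {
        var readings: [Double] = []
        for _ in 0..<samples {
            let value = board.readAnalog(0)
            readings.append(value)
            board.setMotor(0, speed: value)
            Thread.sleep(forTimeInterval: 0.1)   // roughly 10 samples per second
        }
        return readings
    }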
Thank you in advance.
Exactly. In years past they have cluttered up the WWDC keynote with this sort of announcement, so this could be a sign that Apple has a ton of stuff to present. Or they just realized that developers don't really care about this sort of announcement.
But I think there is probably so much other really great stuff planned for WWDC that there's no need to announce this there. Hopefully, at least.
So, does that mean I could convert the SDK to Swift for my Sphero and create a custom app to control it from there?
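For illustration, a custom control app might boil down to something like this sketch; the RollingRobot protocol here is invented, standing in for whatever Sphero's actual SDK exposes.

    import Foundation

    // Hypothetical wrapper; Sphero's real SDK defines its own types and methods.
    protocol RollingRobot {
        func setColor(red: Double, green: Double, blue: Double)
        func roll(heading: Double, speed: Double)   // heading in degrees, speed 0...1
        func stop()
    }

    // Drive the robot in a square, turning 90 degrees at each corner.
    func driveSquare(_ robot: RollingRobot) {
        for corner in 0..<4 {
            robot.setColor(red: 1, green: 0, blue: 0)
            robot.roll(heading: Double(corner) * 90, speed: 0.5)
            Thread.sleep(forTimeInterval: 1.5)
        }
        robot.stop()
    }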
It wouldn't be easy to make Mac apps on iOS, and full development capability could screw up the OS or dump large files somewhere they can't be removed easily. It couldn't be a straight port of Xcode, as a lot of temporary build files go into the home folder; these would have to be kept inside a sandboxed location on iOS.
It would be very useful to be able to work on apps with just an iPad though, even if it was just parts of an app that got synced to a code repo and the work was continued on a Mac at home. It would be faster to develop the UI of an iOS app on iOS itself as the touch capability is there and you can tell how the layout works on the display. It can simulate the smaller iOS screen sizes with scaling.
Having the left sidebar visible all the time uses up some space, but it can be hidden. The two right-most columns look just like having apps in split view on an iPad, and the Storyboard only needs to be open when doing the UI. Playgrounds for iOS is cut down to more of the essential UI parts.
When you compile and run with iOS Xcode, it could collapse the UI and open the running app in one split-view panel, with debug output in the other, in order to debug it. In Xcode on the Mac, the UI has more elements visible at a time because the space is there to use. Pretty much none of the title bar UI elements need to be on iOS.
If you are targeting iOS then you have to run the app on the device anyway, so being able to do that straight from the device saves having to tether the device and authorize it from the Mac. You also have to match a lot of things like macOS version plus Xcode version plus iOS version. You hit compile, it has to build the app and copy it over the wire to install it; then you pick up the device to use it, it crashes, and you have to go back to the Mac to debug it and repeat.
There are a number of benefits to doing it all on the iOS device. As mentioned earlier, it already has a touch UI so building and testing the UI and layout would be faster. It makes debugging faster because you can fix problems on the device without going back to the Mac and copying a fixed version over. It also allows people who are learning on their iOS devices to go the whole way of building and using their own apps:
http://www.macworld.com/article/3147736/ios/heres-what-kids-really-think-of-apples-swift-playgrounds-learn-to-code-app.html
Some people only have iPads and Windows machines. Once they are done with Playgrounds, they are out of luck until they can afford a Mac. A Mac isn't much more expensive than an iPad Pro, but it's another device to buy in full on top of the iPad, when the iPad is powerful enough to do the job. There's no reason an iPad can't do everything productive that a Mac can.
When it comes to pointer input, it's more efficient for working with text to have one; the same is true for all text-editing apps. Having a touch flap on the keyboard cover that you pull out might suffice here.
That could have a cushioned top so that pressing into it is like 3D Touch. Perhaps it would be best as a separate pad, so you could switch it left/right and replace it if it gets worn out, but that would require its own battery, charging and Bluetooth. If they had a magnetic attachment on the back of the iPad, that could charge it if it were separate. A 3D sensor could do a similar job, either in the iPad or down the side of the keyboard. There might be a way to use the main keyboard for it by locking key input, like how the software keyboard works, although dragging over the physical keys isn't ideal, nor is having to lock/unlock key input each time. When using the pad, I'd expect it to show each finger on screen as a circle; the circles would disappear when not touching the pad. The system would remember where they were and resume on touching the pad again. This would make selecting and editing text easier.
Imagine having a split view with the sidebar and code open. One finger on the pad shows a semi-transparent circle on-screen, and you move it over the sidebar to choose a file. The file name can highlight when the circle is over it. Press into the cushion to select, and the circle can become more opaque while pressing. The file would open. Two fingers on the pad can show two circles when scrolling; this also gives a visual reference for the anchor point when rotating and scaling. To move the text pointer, as with file selection, drag to the location and then press to set it. Selecting text could use a keyboard key that you hold down, so there's no more resizing of selection handles. You hold the selection key on the left and drag over the items to select (mostly lines at a time); release the key to stop selecting. This allows selecting multiple portions of text at a time, like separate functions in a code document, and it can highlight selected items like a magic marker.
This can work with a core filesystem, as you can drag the pointer off the side of the screen to bring in a file view (like macOS notifications). Saving images from Safari, for example: you can be on a web page, make a selection highlighting all the images you want, copy them, drag over to the right to open the file viewer, open a collection and save the selected images. A multi-finger swipe can open the multitasker with the touchpad; swipe over to an image app, then open the image collection and use some of the images saved from Safari.
There's no need to show all the sub-panels at once. Because things like pinch-zoom are easier on iOS, you can scale UI elements up/down as needed and you can have a lot more contextual settings. All of the settings and hierarchy panels can be removed to leave just the UI layout view, which you can zoom in and out of, selecting elements as needed. When you select an item, there can be larger buttons: one to open size options, another to open the library panels.
The 4-tab library panel at the bottom right could even be replaced with a single plus icon to add something. When a text window is active, you can only add code snippets; it knows which language you are using, so it would limit the snippets to the relevant kind, and you can type in the search to narrow it down. When a Storyboard is active, tapping plus would pop up icons for controller/button/view/toolbar; tap one to get the relevant items, and again it can limit the results based on the active selection.
Xcode for the Mac would be better off simplified, because there are far too many settings in every panel, most of which people leave at the defaults anyway. They could have a preference somewhere to show expanded options. Build settings, for example, has so many options that it's hard to find the settings you do need often, like provisioning options. That whole build area should be cut down to just the essentials by default: target architecture and provisioning profile, ideally one dropdown for each. Have a settings panel somewhere with a set of checkboxes to enable the other options. The items visible by default should be the options that people change most. If a setting is only changed or used by <1% of developers, there's no need to clutter everybody's UI with it; those developers can enable the extra options they need.
Xcode's app templates would also be more helpful if they were more specific. A lot of people make games, for example, so there could be templates for side-scrollers, first-person shooters and racing games; as soon as you pick a template it's a fully working game, just with basic textures and models. There could be online libraries of paid content like models, textures and audio that people can use in their apps. Someone building a racing game can get the template, go to the content library, pick up some trees, buildings, cars and audio, individually or as a pack, free or paid, and drop them into their template.
Part of what makes software development so slow is people reinventing the wheel for every app. There are millions of apps that deal with common categories of content; Apple could check the most common UI structures and app types across the store and provide templates to build those kinds of apps. For example, if a restaurant or hairdresser wants a booking system, there could be an app template for it as well as an online component like an iCloud database where they manage their content, possibly controlled via FileMaker (an extra $99 per year on top of the developer license).
Profiling could be made easier too. The point of profiling is to get apps to perform better, but profiling usually gives incomprehensible traces that don't help fix the problems. The vast majority of the time, people need to know how long their code is taking to run and what is using how much memory. There could be one button that checks how an app is performing and highlights the code that is running too slowly and the objects using too much memory.
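As a minimal sketch of the "how long is this code taking?" half of that, a small timing helper can flag any block that goes over a budget. The measure function and its 16 ms default budget are just illustrative choices, not an Apple API.

    import Foundation

    // Time a block of work and flag it when it exceeds a budget
    // (16 ms here, roughly one frame at 60 fps).
    @discardableResult
    func measure<T>(_ label: String, budget: TimeInterval = 0.016, _ work: () -> T) -> T {
        let start = Date()
        let result = work()
        let elapsed = Date().timeIntervalSince(start)
        print(elapsed > budget
            ? "SLOW: \(label) took \(elapsed)s, over the \(budget)s budget"
            : "\(label) took \(elapsed)s")
        return result
    }

    // Usage: highlight a slow sort without opening a full profiler.
    let sorted = measure("sorting 1M random ints") {
        (0..<1_000_000).map { _ in Int.random(in: 0...1_000_000) }.sorted()
    }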
It benefits everyone for software development to be more accessible. It's like how computers don't need to be controlled from the command line. When you have an abstract UI, you can do tasks more easily, which improves productivity because you aren't blocked by having to learn an unintuitive system. There was a site built recently, tied in with the Wonder Woman movie, to help people learn coding:
https://www.madewithcode.com/projects/wonderwoman
That's just drag-and-drop code blocks, and it's very accessible because there are only high-level concepts to think about: characters, actions, variables and so on. That's what software development should be for most people.
It might actually be an idea for Apple to put together a team of inexperienced developers who want to make apps, and have them describe how they would like to build apps, rather than have experienced developers try to simplify traditional workflows. They would focus on getting the time down from starting at zero to a deployable app, and fix every hurdle along the way for inexperienced developers.
The following is an example of where software development becomes complete nonsense:
https://medium.com/@johnsundell/handling-non-optional-optionals-in-swift-e5706390f56f
That whole page talks about the best way to handle potentially nil data and suggests that crashing the app might be OK. Allowing software to crash is the most user-unfriendly approach. This is the kind of thing that holds back software development, because it makes developers individually figure out problems that shouldn't be theirs in the first place. When an app gets into an invalid state, it should tell the user what's going on. When an app hangs, it should interrupt its own execution and allow the user to take control of it, rather than rely on the OS to kill it. Apps should behave like operating systems, which don't require rebooting every time something goes wrong. When something goes wrong, there should be a facility to inspect what went wrong, recover from it and report it. The software architecture itself should be fault-tolerant and handle the problem, not put the burden on developers to architect their own solutions.
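For a concrete contrast with the crash-on-nil approach, here is a small sketch that reports and recovers instead of force-unwrapping; the Profile type and LoadError enum are invented for the example.

    enum LoadError: Error {
        case missingProfile(id: String)
    }

    struct Profile {
        let name: String
    }

    // Instead of force-unwrapping (and crashing), throw so the caller
    // can report the problem and keep the app running.
    func loadProfile(from cache: [String: Profile], id: String) throws -> Profile {
        guard let profile = cache[id] else {
            throw LoadError.missingProfile(id: id)
        }
        return profile
    }

    // Call site: recover and tell the user rather than crash.
    do {
        let profile = try loadProfile(from: [:], id: "42")
        print("Loaded \(profile.name)")
    } catch {
        print("Couldn't load the profile; showing a retry screen instead of crashing: \(error)")
    }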
They could have a rollback system: mark data that's not supposed to be nil or out of bounds, and if a violation isn't handled, roll execution back to before it went wrong, tell the user that what they just did caused a problem, trace the rollback and send it to the developer. The same goes for memory errors or infinite loops. Apple's mantra of 'simplicity is the ultimate sophistication' is lacking in software development environments.
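A very simplified sketch of that rollback idea, using Swift's value semantics to snapshot state before a risky operation and restore it if validation fails; Document, applyEdit and the rest are made up purely for illustration.

    // Value semantics make the snapshot a real copy, so restoring it undoes the edit.
    struct Document {
        var lines: [String]
    }

    enum EditOutcome {
        case applied
        case rolledBack(reason: String)
    }

    // Snapshot the state before a risky edit; if validation fails afterwards,
    // restore the snapshot and hand back a report instead of crashing.
    func applyEdit(to document: inout Document,
                   edit: (inout Document) -> Void,
                   isValid: (Document) -> Bool) -> EditOutcome {
        let snapshot = document
        edit(&document)
        if isValid(document) {
            return .applied
        }
        document = snapshot
        return .rolledBack(reason: "Edit produced an invalid document; previous state restored")
    }

    // Usage: an edit that empties the document is rejected and rolled back.
    var doc = Document(lines: ["let x = 1"])
    let outcome = applyEdit(to: &doc,
                            edit: { $0.lines.removeAll() },
                            isValid: { !$0.lines.isEmpty })
    print(outcome, doc.lines)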