dewme

About

Username
dewme
Joined
Visits
400
Last Active
Roles
member
Points
8,305
Badges
2
Posts
3,399
  • Editorial: WSJ Jony Ive story scoffed at by Apple experts, delicious to critics

    I'm glad to see you hit the industrial design topic very hard. This is one of Apple's core competencies and goes largely unappreciated by those who don't understand full-scale product development. Too many neophytes and clueless pundits believe that all the magic occurs in the design studio and just automagically transforms into plastic-wrapped products neatly packed into tidy white boxes with an embossed Apple logo on them. The great product designers, people like Jony Ive and companies like Apple, don't separate design and operations into walled-off enclaves with throw-it-over-the-wall handoffs. They continuously iterate over all phases of the product development cycle, from concept through the various design-for-X disciplines, including industrial design, design for manufacturability (DFM), testability, serviceability, etc.

    Viewing design and operations as separate and independent activities is so 1970s. Just look at where software development practice has ended up after 25+ years of believing that design centricity was king. We had structured design, followed by object-oriented design, followed by component-based design, with GoF design patterns thrown in and some SOLID principles to go with them. Where did all this design-is-king thinking get us in terms of software maturity? How do we explain decades' worth of processors produced with horrible security holes, blue screens of death on grandma's laptop, multi-million dollar space probes cratering into Mars, and an entire generation of defense programs that couldn't get past step 1 because of the prospect of unmanageable defect rates? Maybe design was not really King, but more like a Jack - or a deuce.

    So where did software development go to soothe its burned ego? DevOps - the unification of development (including design, of course) and operations. There's a reason that companies like Apple can handle huge beta programs, deliver new builds nearly continuously, and address field issues in hours or days versus what used to be weeks or months. Putting together software builds and releases used to be a really big deal; now they are routine single-click operations with high-fidelity traceability as to where all the pieces and parts came from and which test cases provided the required verification and validation. Sure, bugs still get out, but when you look at the volume of code that is shipping today versus the defect rates, the improvement is astounding compared with the bad old days when design was thought to be our savior.

    So yeah, operations is just as important as design, but it's also inseparable from design. 
  • Apple increases credit for returning DTK to $500 following developer outcry

    wood1208 said:
    Apple should have offered DTK at lower price and let them keep it. Not sure what Apple will do with returned DTK unless rip off processor,memory,etc from it and use in Macbooks products because of component shortages.
    The DTKs were leased equipment, not purchases. In all likelihood, the lease payments are fully tax deductible business expenses for companies who run their development as a business and not a hobby. 

    I do realize that in the current mindset of universal entitlement, anything that goes against one's personal wishes and desires, regardless of anything else, is viewed as an offensive move by an overlord. This lease program was set up by Apple under the expectation that it would be adults dealing with other adults at a business level. 

    Apple knew, going in, that they needed to get these DTKs back, for whatever reason, and structured the terms and conditions of the business arrangement to increase the likelihood that lessees would return Apple’s property to them under the terms that were stipulated in the agreement. Apple has not deviated in the slightest amount from following through on their part of the agreement. They are trying to be adults.

    Hey, I like extra cheddar as much as the next guy, but it does bother me that a great number of people in our society, all the way up to the highest levels of power, are basically children stuffed into adult sized bodies. Anything they don’t like is instantly viewed as a personal affront and categorically labeled as an offense, and of course, they’re now the victim. Business agreements and keeping your word don’t seem to matter. If I’m not happy, it must be wrong. 

    I guess I’ll blame it all on us, the Baby Boomer generation, who have never quite gotten past the “Baby” part of our generational contribution. Now we’re sadly passing it along to subsequent generations who know how to weaponize “whining at scale” across the social media mobosphere until they get their way. It’s nice that Apple, as an indulgent parent, is letting junior have extra cookies just to shut him up, but it’s also a sad commentary on where we are as dysfunctional semi-adults. 
  • Amazon introduces native Mac instances for AWS, powered by Intel Mac mini

    Rayz2016 said:
    pembroke said:
    I’m not a developer and don’t really understand the implications for the end user. For example, if I have a complex video that requires heavy computing power to render, will it help me directly? Can I render across 10 machines rather than the one sitting in front of me to reduce the time to finish significantly? 

    Someone will hopefully correct me if I'm wrong, but I don't think so, no.

    That would need some changes to the software you're running to handle the distributed processing. I imagine in the first instance, this is going to be used by folk who want to deploy microservices in scaling environments.

    I think @StrangeDays will know a bit more about this.

    Yes and no. The yes part: This is a platform as a service (PaaS) flavor of cloud computing where you purchase access to additional Mac machines (and supporting AWS infrastructure) for a period of time to do with whatever your needs dictate. Yes, if you have a big job or processing pipeline to execute that would benefit from being distributed across multiple machines, you'll need a way to divide the work up, parcel it out to multiple machines, and aggregate the results. There are several software development automation tools on the market explicitly designed for this purpose, e.g., Jenkins and Xcode Server, and some development shops build their own automation.
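
    To make the divide/parcel/aggregate idea concrete, here is a minimal, hypothetical Python sketch: it splits a frame range into chunks, farms each chunk out to a remote machine over SSH, and waits for all of them to finish. The host names and the "render" command are placeholders, not any particular product's tooling.

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical pool of cloud-hosted Macs reachable over SSH.
        HOSTS = ["mac-worker-1", "mac-worker-2", "mac-worker-3"]
        TOTAL_FRAMES = 3000
        CHUNK = TOTAL_FRAMES // len(HOSTS)

        def render_chunk(host, start, end):
            # "render" stands in for whatever command-line renderer the project uses.
            cmd = ["ssh", host, f"render --frames {start}-{end} --out /shared/output/"]
            return subprocess.run(cmd, check=True)

        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            jobs = [
                pool.submit(render_chunk, host, i * CHUNK, (i + 1) * CHUNK - 1)
                for i, host in enumerate(HOSTS)
            ]
            for job in jobs:
                job.result()  # re-raises if any remote chunk failed

        # The per-chunk outputs in /shared/output/ would then be stitched back together.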

    That’s only one type of distributed processing use case, but there are many others, especially in modern software development processes, like having a pool of dedicated compile machines, build machines, automated test machines, configuration management machines, deployment machines, etc., arranged in a pipeline such that every time a block of code gets committed to a development branch by an individual developer it is compiled/built, integrated, unit tested, and regression tested against the entire software system it is part of. Each of the machines participating in the individual tasks along the pipeline feeds its output to the next machine in the process. These pipelines run continuously and are event driven, but can also be scheduled. Hence the terms continuous integration (CI) and continuous delivery (CD).
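
    As a toy illustration of that kind of stage chaining (the part a CI server automates for you), the sketch below just runs an Xcode build followed by the test suite and stops at the first failure. The scheme name "MyApp" is a placeholder; a real pipeline would add integration, packaging, and deployment stages and would be triggered by commit events rather than run by hand.

        import subprocess

        # Hypothetical two-stage pipeline: build, then test. Each stage is a separate
        # command; check=True aborts the pipeline as soon as a stage fails.
        STAGES = [
            ["xcodebuild", "-scheme", "MyApp", "build"],
            ["xcodebuild", "-scheme", "MyApp", "test", "-destination", "platform=macOS"],
        ]

        for stage in STAGES:
            print("running:", " ".join(stage))
            subprocess.run(stage, check=True)

        print("pipeline succeeded")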

    There is nothing preventing an individual user from purchasing access to a cloud based machine (Mac, Windows, Linux, etc.) to do whatever they want to do with it. The inhibitor tends to be the cost and terms of service, which are typically geared towards business users with business budgets. The huge benefit of PaaS cloud computing tends to be the ability to acquire many machines in very short order, and the ability to add/delete many machines nearly instantaneously as your needs change, i.e., elasticity. Try asking your IT department to spin up 125 new Macs overnight if they have to order physical machines from Apple or a distributor. They will laugh, you will cry.
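
    For a sense of what "spinning up" a Mac in EC2 looks like programmatically, here is a minimal boto3 sketch, assuming AWS credentials are already configured and you have a macOS AMI ID to hand (the AMI ID below is a placeholder). EC2 Mac instances run on dedicated hosts, so a host is allocated first and the instance is then launched onto it.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Mac instances require a Dedicated Host; allocate one for mac1.metal.
        host = ec2.allocate_hosts(
            AvailabilityZone="us-east-1a",
            InstanceType="mac1.metal",
            Quantity=1,
        )
        host_id = host["HostIds"][0]

        # Launch a macOS instance onto that host. The ImageId is a placeholder for
        # whichever macOS AMI you pick in the console or via describe_images.
        resp = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",
            InstanceType="mac1.metal",
            MinCount=1,
            MaxCount=1,
            Placement={"HostId": host_id},
        )
        print("launched:", resp["Instances"][0]["InstanceId"])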

    The no part: If you need to deploy microservices you may not need complete, dedicated machines to deploy them on. You may just need a container to run your service, which could be a shared computing resource hosting several “tenants” running completely isolated in their own containers. If you are building out the service hosting platform as part of your solution, then sure, you could have cloud-based machines for your homegrown hosting platform, but that reverts to the previous use case I mentioned.

    I don’t get the “bizarro” comment from Blastdoor. This type of PaaS cloud computing has been in very widespread use for a very long time, with Amazon’s AWS and Microsoft’s Azure being two of the major players. You may want to see if AWS offers a test drive to get a feel for how it works. Microsoft used to and may still allow you to test drive their Azure PaaS solution. There’s nothing at all bizarre about how it works. You’re sitting in front of a Mac with full access to everything you can get to via the keyboard, mouse, and monitor. Instead of sitting under your desk it is sitting in the cloud.

    Nothing bizarre looking here: https://aws.amazon.com/blogs/aws/new-use-mac-instances-to-build-test-macos-ios-ipados-tvos-and-watchos-apps/

    My only concern with the Mac version of EC2 is that it relies on VNC for remoting the desktop. VNC is not as secure or as performant as RDP, which is used for Windows remoting. Any way you cut it, Amazon providing support for the Mac in AWS is a very big deal and brings Apple another step closer to competing at the enterprise level against Windows and Linux. 
  • Elon Musk uses iPhone email bug to illustrate the importance of software innovation

    All software seems to follow a common lifecycle model over time. As more bugs are addressed by more and more developers who were not part of the original design team, the code starts to accumulate a lot of cruft, quick fixes, and workarounds made to meet release deadlines. This accumulation of cruft, crud, and crappy shortsighted quick fixes is collectively known as “technical debt” because the current software team has effectively taken out a bad loan to buy a bunch of shitty workarounds that some future team of maintenance developers is going to have to pay for, and pay for at loan-shark interest rates.

    When developers occasionally grow a little piece of spine they get up in front of management and talk poignantly about the dire need to pay down some of their technical debt, perhaps through “refactoring” or redesign, or god forbid, rearchitecting of the current code base. At some point in the spiel the management team challenges them with something to the effect of “so you’re saying you want us to spend a bunch of man-years of development resources, so many millions of dollars, and so many months of schedule to give us a refreshed code base that does pretty much what the old code does, but without the hanging chads and dingleberries?”  At that point the little piece of developer spine turns to jelly with a “well yeah, pretty much.” So much for grandiose plans. The end result is that not only is the can of technical debt kicked further down the road, but the development team is tasked with adding a bunch of new money-making features on top of a shaky foundation that is like a rickety bridge waiting to collapse. Of course the new features introduce more technical debt and have to work around the shortcomings caused by the underlying technical debt. 

    It’s rather easy for a startup like Tesla to feel emboldened by its software prowess because it hasn’t had enough time and customer volume to suffer the indignities that accumulate when you’re serving a billion customers around the world and have second- and third-generation teams poking into a business-critical code base that was handed down to them, a code base tied to business revenue that has to keep flowing no matter what. Designing new software is usually fun and rewarding. Maintaining existing software is usually a grind and a thankless struggle. Technical debt is like a slow growing cancer, but as long as the supply of band-aids, duct tape, and baling wire is cheaper in the short term than excising the tumors, they’ll keep adding on more layers of bandages until they are forced to blow it all up and start over again. Or maybe buy a bunch of software and people through an acquisition.
  • Why Thread is a game-changer for Apple's HomeKit

    gatorguy said:
    Thread was a very underappreciated invention IMO. I'm honestly surprised it's taken so long to gain traction. Outside of Nest devices I'm not aware of other high-profile devices using it so good on Apple including it with the new Mini. Perhaps that means Nest devices may soon be Homekit friendly. 

     Home automation should be much more straightforward than it has been and Thread will be a major part of making it so.
    The main benefit of Thread is that it is IP based (IPv6, in fact), while Z-Wave Plus and Zigbee each employ their own non-IP protocol that requires a gateway/bridge to connect their unique protocols to IP-based clients and other nodes. The use of IP has major benefits with respect to things like device discovery, device identity, connection management, and routing, because everyone is speaking the same language. Bridging across protocols always introduces more complexity and delay because the intermediary (gateway/bridge) has to be a fully functioning node on both networks at the same time. It's far more than simple language translation; it involves things like managing communication timeouts, node health status reporting, maintaining routing tables, etc., or basically doing everything a node has to do to be a good citizen on a network - times two.

    Having IP everywhere simplifies everything and positions Thread very well for the emerging IoT challenges and opportunities.   
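
    As a sketch of what "IP everywhere" buys you: once a Thread device is routable over IPv6, any client on the network can talk to it with ordinary sockets and no protocol translation in between. The address, port, and payload below are placeholders for a hypothetical accessory, not any real product.

        import socket

        # Placeholder ULA/mesh-local address of a hypothetical Thread accessory,
        # reachable through a Thread border router with plain IPv6 routing.
        DEVICE_ADDR = "fd00:db8::1"
        PORT = 5683  # CoAP's default UDP port, typical for IP-based IoT devices

        with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as sock:
            sock.sendto(b"toggle", (DEVICE_ADDR, PORT))  # no Zigbee/Z-Wave bridge needed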

    All three of these networks are mesh based, have device profiles to allow auto-configuration, employ self-healing techniques, have accommodations for optimizing their use with battery operated devices, support range extension, and have encrypted communication, so none of those things are really differentiators. The special sauce for Thread is the use of IP and the fact that it builds on 6LoWPAN, which has some degree of maturity.

    I must add that one recurring challenge with device networks is that there are always too many standards from the customer's perspective. Despite claims of being a "game changing" new standard, the new standard rarely replaces all of the existing standards. Instead, it typically fragments the market even further. Every single time I've seen a new "one standard to replace all standards" come along, and I've seen it several times over a few decades of involvement with similar standards, the end result is that standard A and standard B do not go away or get replaced by standard C. You end up with standard A and standard B and standard C. In fact, standards A and B will even update and improve over time to make themselves less vulnerable to being obsoleted. The end result is usually: Dear customer - please pick one, and can I interest you in a nice gateway? 


  • Apple Silicon M1 Mac mini review - speed today and a promise of more later

    The 16GB of RAM is a deal breaker for me.
    My 2020 iMac has 64GB of RAM which I figure will last for 5+ years.
    RAM handling is a bit different with Apple Silicon. While if you're hitting swap space on that 64GB often now, it won't help you, how the M1 is handling RAM basically leads to a "8GB is the new 16, and 16 is the new 32" situation.

    We'll see more with time. I don't expect the 16GB limit to remain on whatever comes after the M1.


    As someone working as a software developer/engineer/whatever since years before appleinsider.com existed, I can state with perfect certainty that if you need 32 GB RAM because of the size of your data, running it on 16 GB RAM and thinking you’ll get comparable performance as running it with 32 GB RAM defines “wishful thinking” because you’ll be swapping horribly, and you’ll be limited to I/O speed and latency for the swap drive.

    Where a fast SSD may make even 8 GB RAM seem much more efficient than you’d expect is where you have an application doing data processing in a linear address/array order, and the processing that’s done takes at least as much time as I/O for input and processed output data, as then that can be implicitly handled via the regular swap file, or more explicitly a memory-mapped file, which maps a file into main memory as needed in the virtual address space in a relatively small window of the physical memory address space.  As soon as you have applications following pointers in data structure in a random memory access pattern (this happens in the majority of applications the majority of the time, especially once there have been enough memory allocations and releases of memory) there isn’t an SSD currently available that’ll make 16 GB RAM remotely as efficient as having 32 GB RAM, as you’ve then taken what would (with enough RAM) be a compute-bound problem and translated it into an I/O-bound problem.
    I agree that if you have an application, or a collection of applications running concurrently, that you have verified through testing and profiling cannot run within its operational requirements without a full 32GB of memory, regardless of the nature of the paging medium, yes indeed, you'll need the full 32 GB. I will also say that if I were up against that kind of hard limit with zero wiggle room, I probably wouldn't be running that application on a 32 GB Mac mini to begin with. That represents a pathological use case with no engineering margin to account for the variabilities that are always present when using a general purpose, multiprocessing, time slice scheduled, shared resource computing system like Mac and macOS.

    In every instance that I've encountered with super tight memory constraints, I've reverted to using a custom, statically allocated, fixed-block memory manager with a custom defragmenter rather than the one provided by the operating system. My point here is only to say that if you truly need a certain memory capacity with zero tolerance, you're not going to settle for any of the nondeterminism that a general purpose operating system like macOS introduces. macOS, like Windows, Unix, and Linux, has way too much nondeterminism to run it right up against a hard limit of any kind - not just memory capacity. 
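
    For anyone curious what a fixed-block pool looks like, here is a toy sketch of the idea, in Python purely for readability; a real one would be written in C/C++ and tuned to the platform. Everything here is illustrative, not any particular product's allocator.

        class FixedBlockPool:
            """Toy fixed-size block allocator over one statically allocated buffer."""

            def __init__(self, block_size: int, block_count: int):
                self._block_size = block_size
                self._buf = bytearray(block_size * block_count)  # single up-front allocation
                self._free = list(range(block_count))            # free list of block indices

            def alloc(self):
                if not self._free:
                    raise MemoryError("pool exhausted")          # fail deterministically, never page
                idx = self._free.pop()
                off = idx * self._block_size
                return idx, memoryview(self._buf)[off:off + self._block_size]

            def free(self, idx: int) -> None:
                self._free.append(idx)                           # O(1), no external fragmentation

        # Example: carve 1,024 blocks of 4 KiB each out of a 4 MiB static buffer.
        pool = FixedBlockPool(4096, 1024)
        idx, block = pool.alloc()
        block[:5] = b"hello"
        pool.free(idx)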

    A more common approach, and one that is probably more indicative of Mac mini-hosted solutions and what Mike is alluding to, would be to say that a particular app configuration runs optimally with a certain amount of memory available to the operating system but will run, perhaps sub-optimally, but not go belly up and crap itself, if the memory load-out is less than optimal. In fact, this is the overriding assumption behind just about every Mac (and Windows and Unix/Linux) app running on an operating system that has virtual memory management, process isolation, and paging, with RAM backed by (slower) secondary storage and with the process memory address space mapped independently of where it actually resides (RAM or disk).

    It remains to be seen how apps perform on the M1 versus Intel processors with various amounts of memory, and where memory requirements are variable versus hard and fixed. You're right about the hard-and-fixed use cases, but I have a strong feeling that for the vast majority of apps that are tolerant of variability in the amount of physical RAM, which is almost all apps that are widely used, the Apple Silicon-equipped Macs are going to run at comparable performance levels with less RAM and at better performance levels with the same amount of RAM as their Intel counterparts. I say this because Apple had total control over the use of ALL memory on its SoC and didn't have to defer to anyone else's design choices. Plus, when you do get hit with paging, it's not like the old days of trading off nanoseconds for milliseconds; you're paging to SSDs that are orders of magnitude faster than hard disks.
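
    If you want to see the access-pattern effect described above for yourself, here is a rough, hypothetical measurement sketch: it memory-maps a large pre-existing file (the file name is a placeholder) and compares streaming through it page by page against touching the same number of pages in random order. On any machine where the file doesn't fit in RAM, the random pattern will be dominated by page faults waiting on storage, even with a fast SSD.

        import mmap, os, random, time

        PATH = "big.dat"   # placeholder: any multi-gigabyte file larger than free RAM
        PAGE = 4096
        size = os.path.getsize(PATH)

        with open(PATH, "rb") as f, mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as mm:
            # Linear streaming: the OS can prefetch ahead, so fault cost is largely hidden.
            t0 = time.perf_counter()
            checksum = 0
            for off in range(0, size, PAGE):
                checksum += mm[off]
            linear = time.perf_counter() - t0

            # Random, pointer-chasing-style access: most touches stall on a page fault.
            offsets = random.sample(range(0, size, PAGE), k=min(100_000, size // PAGE))
            t0 = time.perf_counter()
            for off in offsets:
                checksum += mm[off]
            rand = time.perf_counter() - t0

        print(f"linear sweep: {linear:.2f}s   random touches: {rand:.2f}s")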

    Only testing will reveal the truth, but I'm putting my chips on Apple Silicon coming out ahead.
  • CBP defends seizure of OnePlus Buds, claims trademark violation

    avon b7 said:
    They are digging a bigger hole for themselves.

    They claimed they were counterfeit Apple products. 

    They clearly aren't. Not in packaging or content. 
    Yep, never admitting an obvious mistake isn’t a good look. They’re making a big deal out of something that should have just been an “oops, my bad.” This should have been a “no big deal” situation with a one-hour Twitter half-life. 

    If the CBP has suddenly become the savior of Apple’s trademark virtue, I suggest they slide on over to Amazon.com and search for “fully wireless earbuds.” Not only will they find several sub-$50 AirPod knock-offs that make the seized OnePlus buds easily pass the sniff test, they’ll find several AirPod Pro knock-offs as well that are even more blatant knock-offs than the OnePlus units. All of these are sitting somewhere in Amazon fulfillment centers, warehouses, shipping containers, etc., with import-control-traceable documents that’ll lead the CBP to easy fodder for their next seizure reveal party. Low-hanging fruit ripe for picking.

    Heck, this one even puts an Apple logo in their ad...


  • Windows on Apple Silicon is up to Microsoft, says Craig Federighi

    seanj said:
    Old timer that I am, I remember the days before the WinTel duopoly dominated the computer industry. We had machines running 68000, ARM, SPARC, Alpha, etc, and a consequence we had year on year real innovation and advances; the completion that the free market promotes. 
    Whereas we’ve seen stagnation with Intel’s CISC x86 architecture for over a decade, a fact that Apple recognised. The irony of course is that things have come full circle as Apple was one of the original investors in ARM back in the late 1980’s.
    Hopefully exciting days ahead!
    ... and the first "working version" of Windows NT, which is present in Windows 10's DNA, ran on the Intel i860 - which is a RISC chip. Talk about full circles...

    I'm not losing any sleep over running Windows on Apple Silicon. If I were currently traveling and doing all of my work on a laptop/notebook with M1 Apple Silicon, then I'd care a little bit about Windows compatibility (for running it in VMware). But for desk-bound use, if I really need to run Windows I'll just buy a small form factor (SFF) Windows box, preferably with a contemporary AMD CPU, plug it into the second port on my desktop monitor, and get a Bluetooth keyboard and mouse that support multiple computers. In fact, I'm already using all of these pieces (keyboard, mouse, and monitor) that support multiple computers, so all I really need is an SFF Windows box. If I want to live large I'll get a NUC, but there are many choices across a broad spectrum of prices, from HDMI stick PCs to low-cost NUC knock-offs. Easy peasy.
  • Minnesota the latest to introduce bill that allows developers to bypass App Store billing

    "the tech giants would be forced to allow them in their digital storefronts"

    That's some very scary Big Brother massively intrusive sh**. Would Walmart, Target, Kroger, etc., allow the government to force them to stock certain products on their shelves even if the retailer did not want to carry such products? This basically amounts to a government takeover of private property.
  • Apple has taken steps to eradicate mysterious malware strain

    lkrupp said:
    mknelson said:
    Funny, this reminded me of some of the early Mac viruses like MDEF and WDEF. They could spread very easily (Desktop file to Desktop file) but didn't have a malicious payload. They just took up space, which could cause problems on you tiny floppies!

    This one seems very sophisticated. It'll be interesting if they ever find out who was behind it and what the purpose was/is.
    Well, the article says Apple has revoked the developer certificates of the author so Apple might have some idea of who/what/where. So how did this bad actor get a certificate in the first place. Was someone asleep at the wheel at Apple? 

    How exactly would Apple identify a bad actor ahead of time? Do they check a different box on their developer account:

    [    ] I am not a Bad Actor
    [ x ] I am a Bad Actor

    The whole notion of revocation is that something that was initially granted is now taken away. Until proven otherwise, Apple assumes you're not guilty.

    A not-uncommon scenario is that a certificate is compromised by someone other than the entity it was granted to, so a revoked certificate doesn't necessarily implicate the developer it was originally issued to.
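
    For what "revoked" means mechanically: clients check a certificate's status against the issuer's revocation service (Apple's Gatekeeper and notarization plumbing do their own version of this). Below is a generic, hypothetical sketch of an OCSP status check using the Python cryptography library; the file names and responder URL are placeholders, and this is not how macOS itself performs the check.

        import urllib.request
        from cryptography import x509
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.x509 import ocsp

        # Placeholder PEM files for the certificate being checked and its issuer.
        leaf = x509.load_pem_x509_certificate(open("developer_cert.pem", "rb").read())
        issuer = x509.load_pem_x509_certificate(open("issuer_cert.pem", "rb").read())

        # Build an OCSP request for the leaf certificate.
        request = ocsp.OCSPRequestBuilder().add_certificate(leaf, issuer, hashes.SHA256()).build()

        # Placeholder responder URL; in practice it comes from the cert's AIA extension.
        http_req = urllib.request.Request(
            "http://ocsp.example.com",
            data=request.public_bytes(serialization.Encoding.DER),
            headers={"Content-Type": "application/ocsp-request"},
        )
        with urllib.request.urlopen(http_req) as resp:
            status = ocsp.load_der_ocsp_response(resp.read()).certificate_status

        if status == ocsp.OCSPCertStatus.REVOKED:
            print("certificate has been revoked - signatures made with it no longer pass")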