danielchow said: This is probably why Apple is discontinuing its current hardware: it would be a waste of resources to keep manufacturing the AirPorts when a new hardware standard is about to come out (?)
The comment section shows that the author of the article has not made the cause of the issue clear enough.
Processors today use PCIe to connect everything from storage controllers to high-speed I/O like USB and Thunderbolt, plus SD card readers and graphics cards (or a discrete GPU in the case of a MacBook). Each processor model has only a fixed number of these lanes available. Since Apple has to rely on Intel here, it can't simply change this behavior by adding a second controller; there is no dual-core i7 chip with enough lanes. That's why Apple has chosen to use the PCH-supplied PCIe lanes.
The PCH is an additional chip (integrated directly into the processor package on some models) that connects to the processor via a separate high-speed bus and provides many things, including USB, keyboard support, audio, storage interfaces and additional (roughly half-speed) PCIe lanes, all in a single chip. I think Apple is using these lanes because the processor itself does not supply enough PCIe lanes.
So there is really nothing to add from a non-technical viewpoint: it's not about power consumption or other marketing decisions, it's simply that Intel does not make the chips Apple would need for four full-speed Thunderbolt 3 ports.
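The lane-budget argument above can be sketched as back-of-the-envelope arithmetic. All the lane counts below are illustrative assumptions for a hypothetical dual-core mobile platform, not exact figures for any real Intel SKU:

```python
# Toy PCIe lane budget for four full-speed Thunderbolt 3 ports.
# Numbers are assumptions for illustration, not real SKU specs.

def lane_deficit(available, demands):
    """Return how many PCIe lanes the platform is short, given a
    dict of device -> lanes wanted at full speed."""
    needed = sum(demands.values())
    return max(0, needed - available)

demands = {
    "tb3_controller_a": 4,  # drives 2 Thunderbolt 3 ports at full speed
    "tb3_controller_b": 4,  # drives the other 2 ports
    "nvme_ssd": 4,          # typical x4 NVMe link
    "wifi": 1,
}

# Assume roughly a dozen half-speed lanes hang off the PCH:
print(lane_deficit(12, demands))  # → 1
```

Even with generous assumptions the budget is tight, which is why routing some devices through the PCH's (slower) lanes becomes the only option.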
sergioz said: I don't think 5G will be super popular with cellphones in the beginning. As we develop new experiences and tech, demand will rise, but right now I don't see why you would need access to a 1-gigabit connection in your pocket. Plus, as tricky as 5G is as a technology, even when it becomes widely available it'll be like icing on the cake to have it.
The headroom that 4G gave over 3G is exhausted; mobile broadband networks now face increasing cell congestion where there was plenty of capacity three or four years ago.
Continuing with existing cell topologies is no longer feasible. In large cells, the ratio of available frequency spectrum (and consequently the available data rate) to subscriber count is rapidly making it impossible to maintain services in a useful manner, and we are approaching the hard limit on information density that a traditional 4G cell can achieve.
5G takes a different approach. Rather than building big cells in which many clients share the same frequency spectrum, as 4G and older technologies did, the concept of 5G is to create much, much smaller cells with a much shorter range.
The obvious disadvantage is that more cells are needed, and coordinating such a large number of cells is much harder. Clients need to switch cells much more often and must maintain connections to multiple nearby cells at once, which is its own challenge for energy consumption and for the real-time requirements of the 5G backbone, since all cells must cooperate.
But, as you might gather from that explanation, the advantage of this approach is that every cell can use its frequency spectrum far more effectively within its much reduced range: fewer subscribers are active in each cell, which yields much lower latency, much higher data rates and better quality of service for each customer.
The thing is, and many will be upset about this, that the core benefits of 5G only apply to use cases (or environments) where high cell congestion is an issue (e.g. airports, universities, cities, indoor areas). In sparsely populated areas, people will not see much change.
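The capacity argument above boils down to simple division: a cell's aggregate capacity is shared among its active subscribers, so splitting one macro cell into many small cells that each reuse the spectrum multiplies the per-user rate. The numbers here are made up purely for illustration:

```python
# Toy model: per-subscriber throughput when a cell's aggregate
# capacity is shared among its active users. Figures are invented.

def per_user_rate(cell_capacity_mbps, subscribers):
    """Average throughput each active subscriber sees."""
    return cell_capacity_mbps / subscribers

big_cell = per_user_rate(1000, 200)   # 200 users share one macro cell
small_cell = per_user_rate(1000, 10)  # same spectrum reused in a micro cell
print(big_cell, small_cell)           # → 5.0 100.0
```

The same spectrum, reused in a cell with 20x fewer active users, gives each of them 20x the rate, which is the whole point of cell densification.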
I hope this addresses your concerns accurately.
I believe Apple might consider switching to AMD processors (e.g. Zen 2), or maybe an unannounced APU with integrated Vega graphics; that would sound interesting from an Apple perspective. Getting the operating system scheduler in shape takes time, since the desktop Ryzen chips (or Ryzen Pro, if you like; they are the same silicon) effectively behave like two NUMA nodes on a single die. Other than that, I see no reason for the wait.
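To illustrate the scheduler concern: on a die that behaves like two NUMA nodes, naively spreading cooperating threads across both nodes pays cross-node latency, so a scheduler wants to fill one node's cores before spilling to the next. This is a minimal sketch with hypothetical CPU ids and node layout, not how any real OS scheduler is implemented:

```python
# Sketch of NUMA-aware placement: fill one node's CPUs before
# spilling into the next, so cooperating workers share a node.
# The two-node layout below is an assumption for illustration.

def assign_workers(n_workers, nodes):
    """Map worker index -> CPU id, filling nodes in order."""
    placement = {}
    cpus = [cpu for node in nodes for cpu in node]  # node 0 first
    for w in range(n_workers):
        placement[w] = cpus[w % len(cpus)]
    return placement

# Two hypothetical 4-core nodes, like two CCXes on an 8-core die:
nodes = [[0, 1, 2, 3], [4, 5, 6, 7]]
print(assign_workers(6, nodes))  # workers 0-3 land on node 0, 4-5 on node 1
```

A scheduler that ignores this topology and round-robins across all eight cores would scatter a 4-thread job over both nodes, which is exactly the behavior OS vendors had to tune for.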
avon b7 said: There's absolutely no evidence that this was a working microphone that was eavesdropping on anyone, and let's be clear that Alphabet is the one that announced the update that enabled the microphone, not a blogger who discovered nefarious activity. Those looking for a conspiracy will have to look harder. This is no different from countless other tech companies that don't disclose inactive hardware for a variety of reasons.
A team of people was involved in designing, testing and producing the hardware. It is reasonable to think that some of these people would have used the finished product or given it to friends and family. It is unreasonable to assume that none of them noticed that a key, consumer-facing element (even if inactive) was missing from the spec list and the product documentation.
Also, this feature would have been in internal testing for a while before getting the go-ahead to go live, which would have provided more opportunities to catch the slip-up.
I'm with you in that I don't see anything nefarious, but it should have been caught and clarified earlier, IMO.
As I stated, this isn't uncommon and if you don't trust Google then Nest Secure was never an option for you anyway.
How many products do we have on our person and in our homes with microphones? From security cameras to personal digital assistants to PCs to phones to my Apple Watch, I can think of at least eight off the top of my head. And while I trust Apple not to spy on me, the bigger risk will always be someone exploiting a bug, as we recently saw with Group FaceTime.
If I were running a company as valuable as Alphabet and wanted to spy on people, I wouldn't do it with an undisclosed, active microphone that could be found. I'd openly disclose the microphone (as all our consumer electronics already do) and then have backdoor "bugs" built in that people in the know could exploit, so the company keeps a level of deniability. We accept bugs in software, and we accept that companies say "oopsie" and then close these holes once discovered.
myhohawong said: If I were Mark Zuckerberg, I would immediately block all iOS users from using Facebook, even via Safari. Let's see which company goes bankrupt first.
mjtomlin said:
lkrupp said: My 27” iMac 14,2 (late 2013) is getting long in the tooth, but I will not wait until the latter half of 2020 to find out if Macs are moving to ARM. The 2019 iMac may be my last Intel Mac, but I don’t really care.
deegee1948 said: Great hire! This guy has some big chops: AMD, Intel, and ARM! Just watch what’s coming!
Possible in-house x64 based CPUs?
wizard69 said:
prismatics said: Don’t expect 10 nm to come anytime soon for devices more complex than dual-core ultra-low-power, maybe quad-core. The defect density at 10 nm will never get to levels where it is viable to replace 14 nm.
Of course Intel will keep saying that their process is progressing and healthy, but that makes absolutely no sense from an economic point of view.
Nobody digging deeper into the matter believes what Intel is saying.
10 nm will never make Intel any money. It might have, in a reality where AMD no longer existed.
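The defect-density point can be made concrete with the standard first-order Poisson yield model, where the fraction of good dies falls exponentially with die area times defect density. The model is textbook; the defect-density and die-area figures below are purely illustrative, not Intel's actual numbers:

```python
import math

# First-order Poisson yield model: yield = exp(-area * D0).
# D0 (defects per cm^2) and die areas are illustrative only.

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Fraction of dies expected to be defect-free."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

small_die = poisson_yield(0.5, 0.5)  # tiny dual-core die at a high D0
big_die = poisson_yield(2.0, 0.5)    # larger quad-core+ die, same D0
print(round(small_die, 2), round(big_die, 2))  # → 0.78 0.37
```

This is why a process with a stubbornly high defect density can still ship tiny dual-core ultra-low-power dies while larger, more complex dies stay economically unviable.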
By the way, ASML is the only company left that can produce the tools for the upcoming processes, and costs are rising throughout the rest of the supply chain as well. If Intel falls too far behind over the next few years, it will lose much of its manufacturing capability, since new nodes are rising exponentially in cost.
Meanwhile, Intel continues to believe, or rather must tell investors, that the self-aligned quad-patterning 10 nm process is the way forward, which is factually incorrect. By the way, Intel starts EUV at 7 nm, and I expect them to be back in the game once they can take that process beyond risk production.