sflagel said:bageljoey said:I have been happy with my Series 3 Apple Watch. I was planning on skipping 4 even though I like the better screen.
...but should I risk my life to save a few hundred dollars?
Afib causes blood flow from the top chambers of the heart to be inefficient, so they don't empty as well. There is a tiny pouch adjacent to one of the chambers called the left atrial appendage. Normally, the blood gets squeezed out of the pouch with every heartbeat. With afib, the blood doesn't empty out all the way. Blood that isn't moving can clot. A clot in the pouch can later dislodge and travel to the brain to cause a stroke, or go to other organs and cause serious damage. Clots can form in other areas of the heart, but this is the "classic" example.
The increased risk of stroke is based on several factors, and varies from person to person with afib. The risk is usually a few percent per year. People with afib get put on blood thinners and other medications to reduce this risk.
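To make "based on several factors" concrete: one widely used scoring system for estimating annual stroke risk in afib is the CHA2DS2-VASc score, which adds points for a handful of clinical risk factors. The sketch below follows the published point weights, but it is a toy illustration, not clinical software.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_or_tia, vascular_disease):
    """Return the CHA2DS2-VASc point total (0-9)."""
    score = 0
    score += 1 if chf else 0                             # C: congestive heart failure
    score += 1 if hypertension else 0                    # H: hypertension
    score += 2 if age >= 75 else 1 if age >= 65 else 0   # A2/A: age 75+ or 65-74
    score += 1 if diabetes else 0                        # D: diabetes
    score += 2 if prior_stroke_or_tia else 0             # S2: prior stroke/TIA
    score += 1 if vascular_disease else 0                # V: vascular disease
    score += 1 if female else 0                          # Sc: sex category
    return score
```

Higher totals correspond to higher annual stroke risk and generally tip the decision toward anticoagulation, which is the "blood thinners" part above.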
Afib can cause problems in other ways: low blood pressure, low energy, reduced ability to exercise, heart failure, etc., especially if the heart rate with afib is very fast.
If your watch or other source says you have afib, you should always get it checked out by a physician immediately, for the reasons above, and because it’s usually easy to treat.
Source: I’m an ICU doctor who sees this all the time.
As a physician working at a hospital system that uses the "Cadillac" of electronic medical record (EMR) software, I can tell you that the machine learning and "automation" have a long, long way to go.
The "automation" part of the machine learning basically looks for key words in my documentation, as well as lab results and diagnoses that have been entered by humans, to try to match patterns and suggest additional medical issues. The suggestions are wrong probably 70 percent of the time, and have already been documented 25 percent of the time (but the software doesn't recognize this).
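The failure mode described above can be sketched as a naive keyword-to-suggestion lookup. Every term and rule here is invented for illustration; the point is that a trigger-phrase match alone says nothing about whether the suggestion is correct or already charted.

```python
# Hypothetical rules mapping trigger words in a note to chart suggestions.
SUGGESTION_RULES = {
    "infiltrate": "Pneumonia -- specify causative organism",
    "creatinine": "Acute kidney injury -- specify stage",
    "hemoglobin": "Anemia -- specify etiology",
}

def suggest(note_text, documented_diagnoses):
    """Return suggestions triggered by keywords in the note.

    documented_diagnoses is accepted but deliberately ignored,
    mirroring the complaint that already-charted diagnoses get
    suggested anyway.
    """
    note = note_text.lower()
    return [suggestion for keyword, suggestion in SUGGESTION_RULES.items()
            if keyword in note]
```

A system like this will happily ask the physician to "clarify" a pneumonia that is already documented, or one whose causative organism nobody can know yet.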
The few times that the suggestions are “relevant,” I’m getting asked to clarify something like which bacteria is responsible for causing a pneumonia. How the f*%k should I know? Most of the time respiratory cultures don’t grow the culprit bacteria; if it DOES grow, it usually takes a few days before there is enough to identify, and we document that anyway.
I've found that the AI portion of this hundreds-of-millions-of-dollars EMR is more of a distraction than anything else. I don't know any physician who doesn't simply ignore it.
On a related topic, I would point out that the documentation a physician dictates must meet an EXTREMELY complex set of criteria, covering tons of nonsense bullet points that change based on the patient and various diagnoses. Dictating an entire note as a single recording into a watch would bypass all the advantages that the crazy expensive EMR software grants you, such as automating certain parts of the note (insertion of lab values, test results, etc.). The article also doesn't say whether it's (A) Siri doing the transcription (which works extremely poorly when medical terminology is involved), (B) custom dictation software such as Dragon Medical, or (C) a human transcription service typing this up.
Lastly, to those of you who think computers are "good" at reading studies such as X-rays, EKGs, etc., I can assure you that while this may be true in science fiction movies, these things suck in real life. The automated EKG interpretation algorithms have been progressing for decades, but still suck. They get some things right, but get just as many things wrong. We aren't going to see these things nearly as good as humans anytime in the foreseeable future.
genovelle said:tedz98 said:The general public has no understanding of what a file hash is. So the techies at Apple have no understanding of how the general public perceives what they are doing. They just think Apple is scanning their phone. I’m not a huge Bill Maher fan, but I agree with him here. It’s a slippery slope that Apple is embarking on.
BREAKING NEWS - U.S. Robotics, long-time makers of the Courier Series V.Everything V.Everywhere, have upset the industry with the surprise announcement of a new standard, Z-modem.LTE, with additional protocols Z-modem.CDMA and Z-modem.GSM. In a completely new approach to CDMA access, U.S. Robotics utilizes analog tower hops through the CDMA network onto an LTE backbone, eliminating virtually all dependence on IP held by Qualcomm, Inc. Further shocking the mobile communications industry, U.S. Robotics has assured skeptical markets and investors that they now have the very solid investment support, "of one of the most powerful players in this industry."
Reports earlier this year indicated that parent company UNICOM Systems was attempting to position itself for a buyout, but had drawn no serious interest. While no official change to financial guidance has been offered, shares were up over 90% in after-market trading amidst rumors that Apple, Inc. has purchased a 15% stake in the company.
Calls for comment to Qualcomm, Inc. were not immediately returned.
At least one major blogger for WSJ.com opined that, if true, this Apple/U.S. Robotics deal could sound a death knell for Qualcomm, Inc., and by extension, considering Apple's reported stake in UNICOM Systems, could bode poorly for competing platforms, who would either need to stay on a sinking ship or pledge fealty to a company partially owned by their chief competitor. While the industry did not take quite as dire a view of Qualcomm, Inc.'s future if the rumors held true, it was clear that the game had changed: Qualcomm, Inc. had found itself somewhere outside the playing field, likely being relegated to a chip foundry rather than an IP licensing corporation.
OutdoorAppDeveloper said:Hopefully the main focus of Apple's AR glasses will not be games. The real value of AR glasses is that they can give the user augmented intelligence. Using AI and computational vision, AR glasses could give you a better understanding of the world around you. A simple example would be the ability to zoom in to see distant objects. Computer vision has exceeded the ability of human eyesight for quite a while now. Glasses that let you see in the dark, find objects you are looking for, look around corners or behind solid objects would quickly become essential items. Tagging of objects and locations becomes possible as well. You could select Yelp reviews and see what people are saying about a restaurant you are looking at. You could select a nature channel and see information about each plant or animal you can see. Like all new technology, AR will have its down sides as well. It is not clear how you build AR glasses without some kind of camera built in, and walking around with an always-on camera is still not socially acceptable.
Your overall points about the technology are valid, and I don’t disagree with them, but let us pay proper respects to human eyesight.
Human eyesight is a fantastically incredible mechanism with so much "processing" and so many "features" that we physicians can only begin to fathom the most basic parts of it. The eyeball is only a small part of human eyesight. It lets the light in and does some simple analog tricks to separate out frequency and the different fields of your vision. It does this by refracting parts of the image onto parts of the retina, with different receptors (many lay people hear about rods and cones, but that's the tip of the iceberg for "processing" in the eyeball). The information then gets essentially converted to digital. There are several layers that the image overlays, and different nerve fibers carry this off to different sectors, criss-crossing behind the eyes like some complex freeway interchange and racing through so many ancillary processing systems that it boggles the mind. These images can activate thousands of different brain responses before the "seeing" processing in the back part of the brain even takes place, depending on what is coming in through the eye. While I refer to this as an image, the processing utilizes far more information taken in through the eye than what you may think of as an "image." As this information is being processed, it is analyzed for a fantastic amount of information before an image is produced, best we can tell. These centers can identify basic threats to safety, track your position in 3D space, and feed real-time movement information to your balance centers and, separately, to your multiple threat systems, depending on what "pre-processing" has so far been taken from the input. All this happens in the more basic centers long before you "see" the images. There are far too many functions here to even brush the surface. There are many, many thick tomes on the topic that cause sleepless nights for medical students, specialized neurologists, scientists/researchers, and the like. And those are just the basic processing features.
Then, when your brain sends a processed image to your higher brain, the spectrum of possibilities for further processing and responses blooms almost into the infinite. All this happens in the tiniest fraction of time before you even get the chance to consciously register the image. Eyesight processes longitudinally over time as well, not just in "snapshots." Eyesight is processing of fluid data streaming in continuously, not just "frames." These things are really remarkable, and we barely understand enough to even know that they are there to research. Human eyesight can process something you have NEVER seen before, stratify a threat level, and cross-categorize it using so many variables that you can react reasonably to it within a fraction of a second without having a damned clue what it is.
Computerized "vision" is mostly made up of really cool but really simple tricks that do things like shift wavelengths or simple magnification, allowing us to see things that we would not normally register. This shifted light information is then dumped into the eye for the real processing. Identification of objects using "machine learning" is another really amazing trick, but it isn't one-millionth as incredible as human eyesight and processing. The computer's "sight" is extremely fundamental: an algorithm essentially pattern-matches something in an image to a known database. These mechanisms are getting very sophisticated, and they seem like magic to some people.
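The "pattern-matches something in an image to a known database" idea can be reduced to a toy sketch: compare a feature vector extracted from an image against a labeled gallery and return the nearest match. Real systems use learned features in high dimensions; the three-element vectors and labels below are invented purely for illustration.

```python
import math

# Hypothetical "known database" of labeled feature vectors.
GALLERY = {
    "cat":  [0.9, 0.1, 0.2],
    "dog":  [0.8, 0.6, 0.1],
    "tree": [0.1, 0.9, 0.7],
}

def classify(features):
    """Return the gallery label whose vector is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(GALLERY, key=lambda label: dist(GALLERY[label], features))
```

Note what this does *not* do: no threat stratification, no 3D positioning, no reaction to something never seen before. A vector far from everything in the gallery still gets forced into the nearest known label, which is part of why it's "not one-millionth as incredible."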
While I LOVE the things that AR is bringing our way, I would argue that computer vision has not exceeded the ability of human eyesight, and probably never will. What it will do is augment our vision by dropping some extra data into the eyeball for human eyesight to process. I think we will see all the things you discussed coming into play in the very near future, and I'll be one of the first to get on board, especially if it's an Apple product that I know will be very well supported.
As a physician myself, I have to wonder if the context of these images was interpreted correctly.
This guy was a medical oncologist. I have not tried to find the story, but I wonder if he is a pediatric medical oncologist. Sometimes these doctors take photos of lesions, wounds, and other issues on patients and later post them to the chart. If the affected areas are near breasts or genitals, I could see how these could be mistaken by a lay person for sexually exploitative images. These chart photos may be distasteful to lay people, but they are important for monitoring the progress of these areas. He could have left cloud backup of photos turned on, and that's how they ended up on iCloud. If THAT is the scenario, then a perfectly good man just had his career destroyed by an overzealous tech industry that didn't stay in its own lane. Even if this is not the case, it's a possibility that could occur to someone else, destroying the very person who is trying to save those kids' lives.
All that being said, it's possible that these photos had nothing to do with patient treatment and were purely for sexual gratification of some twisted human mind. If that's the case, then he needs some very serious mental help, and deserves what's coming.
For those that are about to point out that a doctor shouldn't be using a personal phone for those kinds of things, try looking into what it takes to get a formal by-the-bureaucracy authorized photo of these things. It's damned near impossible, and can take days for approval, which is useless in monitoring fast-moving treatments such as surgery or certain types of radiation therapy where things change daily or sometimes hourly.
And yes, high-end EMR's such as EPIC have applications for your phone that allow you to take photos directly to the chart without storing them on the phone. But less expensive EMR's don't necessarily have this option.
After making this post, I spent some time looking for some information that was more informative than "exploitative images" or "pornography," which could be subject to interpretation. I found a copy of the initial criminal complaint at: https://padailypost.com/wp-content/uploads/2021/08/file0.43959436616036.pdf
This complaint makes it very clear that the image he uploaded to Kik was indeed sexually explicit, depicting adult penetration of a prepubescent female. In other words, this guy has one sick mind, and has destroyed his own career.
chasm said:ihatescreennames said:How long until it turns out that snippets of audio are being reviewed for quality by an off-shore third party?
Ironically, medical staff cannot easily access their OWN medical records at all until the medical encounter (such as a hospitalization) is over. They then have to go to Medical Records and request paper copies, unless they have been to a facility that allows access to the records online. However, a 2-3 day hospitalization will generate 100-200 pages of information (mostly automated crap). The online access gives you access to only the tiniest portion of your hospital data. Getting it through Medical Records gives you theoretical access to virtually everything.
I wear a mask all day. Face ID is great, but useless with the mask. The return of Touch ID would be SO NICE...
I would like to see security options that could allow requirements for both Touch and Face ID (and possibly passcode), combinations of these, or none of them (security off).
I also want to see a “poison finger” option where, if a certain finger is used on the fingerprint scanner, the phone is disabled and wiped.
dipdog3 said:All-Purpose Guru said:Their current painful solution, Walmart Pay, tries to offer convenience for using a debit card, which is not subject to the high prices of credit cards
Your statement is wrong. You can use a credit card with Walmart Pay, do it all the time.
It’s also not a “painful” solution. It’s somewhere in between using a physical credit card and Apple Pay.
The biggest issue with Walmart Pay is that you have to open the Walmart app and tap Walmart Pay to use it. Once you scan the QR code, it's actually faster than Apple Pay. On the positive side, all of your receipts are stored in the app, so it saves paper and makes for easy returns.
The clear advantage of Apple Pay for me is the privacy, including the tokenized card numbers: the store doesn't have my real card number to lose, and it can't track my purchases because it doesn't know who it sold them to. Almost like paying cash.
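The tokenization idea above can be sketched conceptually: the merchant only ever sees a substitute token, while the mapping back to the real card number lives with the card network/issuer. This is a deliberately simplified stand-in for EMV-style payment tokenization, not Apple's actual implementation (which also involves per-transaction cryptograms and the device's Secure Element).

```python
import secrets

class TokenVault:
    """Toy stand-in for a card network's token service."""

    def __init__(self):
        self._token_to_pan = {}  # mapping lives only inside the network

    def issue_token(self, pan):
        """Issue a random token for a card number (PAN)."""
        token = "tok_" + secrets.token_hex(8)
        self._token_to_pan[token] = pan
        return token  # the merchant stores and sees only this

    def detokenize(self, token):
        """Only the network/issuer can recover the real card number."""
        return self._token_to_pan[token]
```

If a merchant's database leaks, attackers get tokens that are useless outside that tokenization scheme, and since the token isn't the card number, the merchant can't correlate your purchases across other merchants either.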