Apple reportedly acquires developer behind burst photo app SnappyCam

124 Comments

  • Reply 61 of 91
    solipsismx Posts: 19,566 member
    philboogie wrote: »
    Wow, talk about prior art¡

    Lol. Good thinking of you, 'back in the day'.

    It's too bad what we imagine can't also be what we can make.
    Amen to that. But realistically I don't think this will happen anytime soon. Just look at what a professional camera that can do over a thousand fps costs. Plus I believe there are many factors in dealing with such a vast amount of data, like... oh, who am I kidding, you already know these factors!

    1) It would be a lot more data per second but Apple could just reduce the time allowed for slow motion grabs. Plus I think next year we'll see the NAND capacity double for the current price point.

    2) Apple has a pretty solid track record of taking something only available at a professional level and making it a commodity feature.

    3) What is the 4MB of RAM doing on the A7 chip? Could that be for processing images for burst mode and/or the slow motion camera? The most complex iPhone 5S still image I have taken is only 2.9MB.
    Perhaps they could 'speed up' the process of capturing at a high fps by only recording the changes to each photo after the first fully saved one. They could then re-create the following shots by replacing the pixels that differ from the first shot. Kind of a 'the more the subject changes, the slower the fps is going to be'.

    That's an interesting technique, and perhaps patentable. If two images can be quickly compared and the differences show less than a certain percentage of difference, start with a new image; and/or it could section off an image into quadrants and do micro-comparisons, since only small portions of an image seem to have any real change when shot in rapid succession.
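    Purely as a toy sketch of the idea being batted around here (my own names and threshold, not anything Apple or SnappyCam actually does): store a full keyframe, record only changed pixels for later frames, and fall back to a fresh keyframe when too much of the image has changed.

    ```python
    # Toy sketch of the delta-recording idea above: keep a full keyframe,
    # then for each later frame store only the pixels that differ from that
    # keyframe. If too large a fraction changed, start a fresh keyframe
    # (loosely what a video codec's I- vs P-frames do, minus motion search).

    def encode(frames, threshold=0.25):
        """frames: list of 2-D lists of pixel values.
        Returns ('key', frame) and ('delta', [(x, y, value), ...]) records."""
        records, key = [], None
        for frame in frames:
            if key is not None:
                delta = [(x, y, v)
                         for y, row in enumerate(frame)
                         for x, v in enumerate(row)
                         if key[y][x] != v]
                if len(delta) / (len(frame) * len(frame[0])) <= threshold:
                    records.append(('delta', delta))
                    continue
            # first frame, or too much changed: store it whole
            records.append(('key', [row[:] for row in frame]))
            key = frame
        return records

    def decode(records):
        """Rebuild the full frame sequence from encode()'s records."""
        frames, key = [], None
        for kind, payload in records:
            if kind == 'key':
                key = [row[:] for row in payload]
                frames.append([row[:] for row in key])
            else:
                patched = [row[:] for row in key]
                for x, y, v in payload:
                    patched[y][x] = v
                frames.append(patched)
        return frames
    ```

    With a mostly static subject almost every record stays a tiny delta, and a scene change triggers a new keyframe automatically — exactly the 'the more the subject changes, the slower the fps' trade-off described above.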
  • Reply 62 of 91
    gatorguy Posts: 21,102 member
    philboogie wrote: »

    Perhaps they could 'speed up' the process of capturing at a high fps by only recording the changes to each photo after the first fully saved one...

    That would be like P-Frames
  • Reply 63 of 91
    1. It's important to me to clarify the facts.
    2. I'm honestly not jealous. Just passionate.
    3. Tons of people on the forums have already emailed me for promo codes and I have sent them out. Those people seem pretty happy to me.
    4. I don't believe writing assembly code is necessary unless the results prove it was worth the effort. I think that is relevant to the devs on here.
    5. Fast Camera is not a competitor to Snappy Cam. Fast Camera is the de facto standard in burst photo apps with much higher revenue and user numbers. It's my obligation to comment on an article that suggests SnappyCam was the fastest or best.
  • Reply 64 of 91
    philboogie Posts: 7,457 member
    solipsismx wrote: »
    It's too bad what we imagine can't also be what we can make.

    Oh, that's a good line!

    1) It would be a lot more data per second but Apple could just reduce the time allowed for slow motion grabs. Plus I think next year we'll see the NAND capacity double for the current price point.

    Wouldn't the data throughput be the bottleneck before storage capacity?
    3) What is the 4MB of RAM doing on the A7 chip? Could that be for processing images for burst mode and/or the slow motion camera?

    Anand had this to say:
    The most visible change to Apple’s first ARMv8 core is a doubling of the L1 cache size: from 32KB/32KB (instruction/data) to 64KB/64KB. Along with this larger L1 cache comes an increase in access latency (from 2 clocks to 3 clocks from what I can tell), but the increase in hit rate likely makes up for the added latency. Such large L1 caches are quite common with AMD architectures, but unheard of in ultra mobile cores. A larger L1 cache will do a good job keeping the machine fed, implying a larger/more capable core.

    ...which doesn't address your question, and I don't know.
    The most complex iPhone 5S still image I have taken is only 2.9MB.

    You're in sunny CA, right? Go stand under a tree and take a photo of the leaves on a sunny day. The large amount of detail will make the size of a photo larger. Like so:

    [two photos of sunlit leaves]


    Or a park:

    [photo of a park]

    The point being the grass here; not the nuns.
    That's an interesting technique, and perhaps patentable. If two images can be quickly compared and the differences show less than a certain percentage of difference, start with a new image; and/or it could section off an image into quadrants and do micro-comparisons, since only small portions of an image seem to have any real change when shot in rapid succession.

    I believe they do something similar with the 'reduce motion' in iMovie/FCP. Calculating which pixels (in the middle) overlap the next frame.
  • Reply 65 of 91
    solipsismx Posts: 19,566 member
    gatorguy wrote: »
    That would be like P-Frames

    I thought that was for the prediction of motion, not specifically seeing where changes have taken place between frames, but perhaps P-frames are the only reasonable way to save time and processing.
  • Reply 66 of 91
    philboogie Posts: 7,457 member
    gatorguy wrote: »
    philboogie wrote: »

    Perhaps they could 'speed up' the process of capturing at a high fps by only recording the changes to each photo after the first fully saved one...

    That would be like P-Frames

    Exactly! (or B- or I-frames, iForgot)

    "A sequence of video frames, consisting of two keyframes (I), one forward-predicted frame (P) and one bi-directionally predicted frame (B)."

    http://en.wikipedia.org/wiki/Video_compression_picture_types
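    The distinction in that caption can be shown schematically. This is a deliberately simplified toy (my own function names, no motion compensation at all): an I-frame is stored whole, a P-frame as a residual against the previous reference, and a B-frame as a residual against a prediction averaged from both neighbours.

    ```python
    # Schematic I/P/B illustration for the page linked above. Frames are
    # flat lists of pixel values; real codecs predict per-block with motion
    # vectors, this just predicts the whole frame at once.

    def encode_ipb(i0, b, p):
        """Encode a 3-frame sequence I, B, P (display order)."""
        p_residual = [x - r for x, r in zip(p, i0)]       # P predicted from I
        b_pred = [(a + c) / 2 for a, c in zip(i0, p)]     # B predicted from both
        b_residual = [x - pr for x, pr in zip(b, b_pred)]
        return i0[:], b_residual, p_residual

    def decode_ipb(i0, b_residual, p_residual):
        """Note the decode order: I first, then P, then B -- the B-frame
        needs both of its references before it can be reconstructed."""
        p = [r + x for x, r in zip(p_residual, i0)]
        b = [(a + c) / 2 + r for a, c, r in zip(i0, p, b_residual)]
        return i0[:], b, p
    ```

    It also shows why B-frames force out-of-order decoding: the P-frame must be rebuilt before the B-frame that sits between it and the I-frame.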
  • Reply 67 of 91
    solipsismx wrote: »
    philboogie wrote: »
    Wow, talk about prior art¡

    Lol. Good thinking of you, 'back in the day'.

    It's too bad what we imagine can't also be what we can make.


    Something about reach and grasp!



    Amen to that. But realistically I don't think this will happen anytime soon. Just look at what a professional camera that can do over a thousand fps costs. Plus I believe there are many factors in dealing with such a vast amount of data, like... oh, who am I kidding, you already know these factors!

    1) It would be a lot more data per second but Apple could just reduce the time allowed for slow motion grabs. Plus I think next year we'll see the NAND capacity double for the current price point.

    2) Apple has a pretty solid track record of taking something only available at a professional level and making it a commodity feature.

    3) What is the 4MB of RAM doing on the A7 chip? Could that be for processing images for burst mode and/or the slow motion camera? The most complex iPhone 5S still image I have taken is only 2.9MB.
    Perhaps they could 'speed up' the process of capturing at a high fps by only recording the changes to each photo after the first fully saved one. They could then re-create the following shots by replacing the pixels that differ from the first shot. Kind of a 'the more the subject changes, the slower the fps is going to be'.

    That's an interesting technique, and perhaps patentable. If two images can be quickly compared and the differences show less than a certain percentage of difference, start with a new image; and/or it could section off an image into quadrants and do micro-comparisons, since only small portions of an image seem to have any real change when shot in rapid succession.

    Apple has a new algorithm called InertiaCam in FCP 10.1.

    You can retime a video, or a video built from individual frames (an image sequence); when you slow the video, the algorithm examines each frame and generates intermediate frames, as needed, based on the amount of slowing.

    On a Mac it is very fast and very good quality.

    I suspect that, with the A7 and some RAM, this could be done by iMovie on an iDevice...

    ...Maybe A8 or A9 tho...

    Edit:

    The big question is: Do you really need to do this in real-time on the iDevice? I can envision some uses where the answer is yes... But for most, capturing video at 60 or 120 fps, then retiming/smoothing after-the-fact would be fine.

    The neat thing is that, with the new 64-bit architecture, Apple could leave the choice to the user.
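    Apple hasn't published how its retiming works; the crudest possible stand-in for "generate intermediate frames when you slow the video" is a plain linear cross-fade between neighbouring frames (real optical-flow retiming tracks motion instead of blending, so it avoids ghosting):

    ```python
    # Crude retiming stand-in: slow a clip by an integer factor by linearly
    # blending each pair of neighbouring frames. Frames are flat lists of
    # float pixel values. Optical-flow methods (what InertiaCam-style tools
    # are built on) would warp pixels along motion paths instead.

    def retime(frames, factor):
        """Return a sequence roughly `factor` times longer than `frames`."""
        out = []
        for a, b in zip(frames, frames[1:]):
            for i in range(factor):
                t = i / factor  # blend weight from frame a toward frame b
                out.append([pa * (1 - t) + pb * t for pa, pb in zip(a, b)])
        out.append(frames[-1][:])  # keep the final frame
        return out
    ```

    Which answers the "real-time on the iDevice?" question in part: blending is cheap enough for anything, while motion-based interpolation is what needs the A7/A8-class horsepower.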
  • Reply 68 of 91
    solipsismx Posts: 19,566 member
    Something about reach and grasp!

    I see three primary components to success in anything: aptitude, assets and aspiration. Aptitude and assets could be grouped together into a sole category called ability, one being internal to one's mental and/or physical traits, and the other being external, like having the resources from money and/or being in a family or larger societal structure that allows certain types of achievement to be made more easily. But regardless of how much ability (or aptitude or assets) you have, you still need that underlying aspiration to make it happen. Without that impulse there is no success. I wish I had that unquenchable thirst for changing the world; for having a dream and stopping at nothing to see it come to life.
  • Reply 69 of 91
    ash471 Posts: 705 member
    philboogie wrote: »
    This is surprising. I would expect Apple to be knowledgeable enough to create the same tech on their own. Something doesn't click. Great news for Mr. Papandriopoulos though. (yes, that was a copy/paste action)
    I'm not surprised. Big companies are generally not better at innovating. Apple has a few employees working on photo compression. If those few people don't think of the solution, then it doesn't get developed at Apple. In contrast, there are hundreds of thousands of other engineers with diverse experiences who could make the invention. The odds greatly favor outside development. BTW, the increased likelihood of invention outside the market leader is the reason the patent system is so important to innovation.
  • Reply 70 of 91
    philboogie Posts: 7,457 member
    solipsismx wrote: »
    I see three primary components to success in anything: aptitude, assets and aspiration. Aptitude and assets could be grouped together into a sole category called ability, one being internal to one's mental and/or physical traits, and the other being external, like having the resources from money and/or being in a family or larger societal structure that allows certain types of achievement to be made more easily. But regardless of how much ability (or aptitude or assets) you have, you still need that underlying aspiration to make it happen. Without that impulse there is no success. I wish I had that unquenchable thirst for changing the world; for having a dream and stopping at nothing to see it come to life.

    Very well written. The ease with which you seemingly do this inspires me to better my English. And my writing style, for that matter.
    ash471 wrote: »
    philboogie wrote: »
    This is surprising. I would expect Apple to be knowledgeable enough to create the same tech on their own. Something doesn't click. Great news for Mr. Papandriopoulos though. (yes, that was a copy/paste action)
    I'm not surprised. Big companies are generally not better at innovating. Apple has a few employees working on photo compression. If those few people don't think of the solution, then it doesn't get developed at Apple. In contrast, there are hundreds of thousands of other engineers with diverse experiences who could make the invention. The odds greatly favor outside development. BTW, the increased likelihood of invention outside the market leader is the reason the patent system is so important to innovation.

    That's a well reasoned and valid post there; thank you.
  • Reply 71 of 91
    macbook pro Posts: 1,605 member
    philboogie wrote: »

    Perhaps they could 'speed up' the process of capturing at a high fps by only recording the changes to each photo after the first fully saved one. They could then re-create the following shots by replacing the pixels that differ from the first shot. Kind of a 'the more the subject changes, the slower the fps is going to be'.

    solipsismx wrote: »

    That's an interesting technique, and perhaps patentable. If two images can be quickly compared and the differences show less than a certain percentage of difference, start with a new image; and/or it could section off an image into quadrants and do micro-comparisons, since only small portions of an image seem to have any real change when shot in rapid succession.

    You appear to be referencing either fractal compression or JBIG2, which are, unfortunately, rather mathematically intensive. Fractal compression may become a reality in mobile devices as we approach 2020, perhaps even with the development of an Apple A10 processor, should Apple be able to continue their (nearly) exponential rate of hardware development.

    I predicted several years ago that medical imaging would begin to incorporate fractal compression by 2020, which has not yet occurred. The published studies comparing discrete cosine transform and fractal image compression in medical imaging generally conclude that fractal image compression offers greater compression, with images that are nearly visually indistinguishable but of lower quality (as defined by a lower peak signal-to-noise ratio).
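    For reference, the peak signal-to-noise ratio used in those studies is just 10·log10(MAX²/MSE); a quick sketch (assuming 8-bit pixels and flat lists for simplicity):

    ```python
    import math

    # Peak signal-to-noise ratio, the quality metric mentioned above:
    # PSNR = 10 * log10(peak^2 / MSE), with peak = 255 for 8-bit images.
    # Higher PSNR means the compressed image is closer to the original.

    def psnr(original, compressed, peak=255):
        """original, compressed: equal-length flat lists of pixel values."""
        mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
        if mse == 0:
            return float('inf')  # images are identical
        return 10 * math.log10(peak ** 2 / mse)
    ```

    A one-level error on every 8-bit pixel already gives about 48 dB, which is why "lower PSNR but visually indistinguishable" is a plausible verdict: the metric penalizes differences the eye may not notice.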
  • Reply 72 of 91
    dickprinter Posts: 1,060 member
    Quote:

    Originally Posted by SolipsismX

    It's too bad what we imagine can't also be what we can make.

    To me, that sounds like the antithesis of a Steve Jobs quote.

  • Reply 73 of 91
    philboogie Posts: 7,457 member
    either fractal compression or JBIG2

    Wow, learn something here almost every day. Thanks for this, interesting reading!
  • Reply 74 of 91
    nht Posts: 4,494 member
    Quote:

    Originally Posted by i4software

    The question isn't why Apple bought Snappy Labs but why Snappy Labs decided to sell.

    Well that's not really a question is it?  It's not just the money, it's the access and prestige.  Work for Apple a few years and then, whatever.

     

    Yes, you lose your company but as a one man company it's just your rep as a dev anyway.  His revenue was likely far lower than yours given what you posted.

  • Reply 75 of 91
    solipsismx Posts: 19,566 member
    So many great comments in this thread.
  • Reply 76 of 91
    macbook pro Posts: 1,605 member
    philboogie wrote: »
    Wow, learn something here almost every day. Thanks for this, interesting reading!

    I know very little of JBIG2 myself though as JBIG2 is not suitable for medical imaging due to the potential for sentinel events caused by substitution errors. In fact, I will notify my clients tomorrow of this issue about which I just became informed.
  • Reply 77 of 91
    philboogie Posts: 7,457 member
    In fact, I will notify my clients tomorrow of this issue about which I just became informed.

    Funny how this sharing thing works, heh?
  • Reply 78 of 91
    macbook pro Posts: 1,605 member
    philboogie wrote: »
    Funny how this sharing thing works, heh?

    I have been described as a "wonk" by a former manager. Personally, I don't really learn much from Apple Insider itself; I really appreciate the community. Although I haven't posted any comments of significant length recently, I continue to investigate evidence of future developments at Apple. I can't begin to tell you how impressed I am by the concurrent development efforts at Apple. Frankly, I should have seen True Tone flash coming, considering the indications; I just didn't imagine that Apple had a contingency plan when their supplier failed.

    In a desire to understand the specifics of JBIG2, I did some research and immediately learned of the issues with it, which may have me overly concerned at the moment.
  • Reply 79 of 91
    Quote:

    Originally Posted by mdriftmeyer

    This is something that people in this country who profess free market capitalism is real don't seem to grasp [same folks who say the guberment can't create jobs] when it comes to technology advancing.

    Most advances come from Academic/Federal/Government joint ventures with 99% of the funding coming from the Central Government.


     

    Maybe not that much, but government-funded research is an essential part of the mix that made America great. The rest came from (what used to be) forward-thinking companies which realised *basic* blue-sky scientific research offers more long-term bang for your buck than anything else. In Australia we have CSIRO etc. but no equivalent to the Xerox PARC or Ford Dearborn scientific research labs.

     

    Anywho, this is from the abstract of one of Dr JP's PhD-student papers in a peer-reviewed journal. You'll see why Apple might be hoping for more bang for their buck than just improved image processing.

     

    "We address the problem of achieving outage probability constraints on the uplink of a code-division multiple-access (CDMA) system employing power control and linear multiuser detection, where we aim to minimize the total expended power. We propose a generalized framework for solving such problems under modest assumptions on the underlying channel fading distribution.  Unlike previous work, which dealt with a Rayleigh fast-fading model, we allow each user to have a different fading distribution. We show how this problem can be formed as an optimization over user transmit powers and linear receivers, and, where the problem is feasible, we provide conceptually simple iterative algorithms that find the minimum power solution while achieving outage specifications with equality..."

  • Reply 80 of 91
    minicapt Posts: 219 member
    Quote:

    Originally Posted by SolipsismX

    I agree that it could get abused quite easily but I see no harm in AI creating a blanket rule only after there is a lowered ratio in the coefficient of friction upon a given graduated plane.

    And if the plane is disinclined, or has yet to graduate?

     

    Cheers
