sloaah

About

Username: sloaah
Joined:
Visits: 29
Last Active:
Roles: member
Points: 143
Badges: 0
Posts: 31
  • Final Cut Pro for Mac and Final Cut Pro for iPad 2 have grossly different features

    If the two suites aren’t fully cross-compatible, then that’s a major issue and they shouldn’t share the same name. I would expect to be able to open a Mac FCP project on iPad, but at the moment projects can only move in the other direction, from iPad to Mac. 

    Personally I hope the iPad version grows closer to the Mac version, since Apple seems much more active in developing FCP for iPad than on the Mac side, which has been relatively stagnant for a while. Long term I’m sure they plan on an FCP for Vision Pro - in which case feature parity between the Mac and iPad versions becomes even more important. 
  • Canon: No camera can truly capture video for Apple Vision Pro

    > Yeah, I’m missing something here too. The Canon VR 3D lens they talk about, when coupled with an R5, will do four megapixels for each eye at 30 frames per second. What happens if you show 30 frames per second on an Apple Vision Pro that’s rendering at 60? Seems like it would still just be fine.

    I'm a filmmaker and have worked in VR in the past, so I can give some insight.

    The reason the resolution is apparently so high is that this is for 180VR films. The videos occupy half of a sphere (180º). Though the Apple displays are 3.6k horizontally, that’s across roughly a 105º FoV; so 3.6k / 105 × 180 ≈ 6.2k resolution per eye. 

    If you're recording both frames on one sensor, which is how it's done with the Canon Dual Fisheye lens (and which is the easiest way to keep the lenses at an inter-pupillary distance of 60mm, roughly the distance between our eyes), then you need a resolution of 12.4k (horizontal) × 6.2k (vertical) ≈ 77MP. There is also some resolution loss given that the fisheyes don't project onto the full sensor – they project just two circles side by side on a rectangular frame – so I would imagine 100MP would be roughly right to retain resolution across the scene.
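
    To make the maths concrete, here's a quick back-of-envelope check in Python. The display figures are rough public estimates, not official specs, and the numbers are illustrative only:

        # Rough check of the 180VR resolution maths (illustrative figures only).
        display_h_px = 3600       # approx. horizontal pixels per eye on Vision Pro
        display_fov_deg = 105     # approx. horizontal field of view of the headset
        film_fov_deg = 180        # a 180VR film fills half a sphere

        # Scale the display's pixel density out to the full 180-degree frame.
        per_eye_h = display_h_px / display_fov_deg * film_fov_deg
        print(f"per-eye horizontal resolution: {per_eye_h:.0f} px")  # ~6171, i.e. ~6.2k

        # Two fisheye images side by side on one sensor, same pixel density vertically;
        # dead space around the image circles pushes the practical figure higher still.
        sensor_mp = (per_eye_h * 2) * per_eye_h / 1e6
        print(f"sensor resolution needed: {sensor_mp:.0f} MP")  # ~76 MP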

    As to frame rate: cinema is 23.98fps with a 180º shutter, which means the shutter is actually closed half the time and open the other half. That leads to a certain strobing which we subconsciously associate with the world of cinema. Nobody really knows why this is so powerful, but maybe it helps remove us a bit from the scene, so our brains treat it as something we're observing rather than something we're part of. Tbh I'm not really sure.
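
    The shutter-angle relationship is easy to sanity-check; a small sketch with the standard cinema values:

        # 180-degree shutter rule: the shutter is open for half of each frame
        # interval, so exposure time = (shutter_angle / 360) / fps.
        fps = 23.98
        shutter_angle_deg = 180
        exposure_s = (shutter_angle_deg / 360) / fps
        print(f"exposure per frame: 1/{1 / exposure_s:.0f} s")  # ~1/48 s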

    But with immersive video, we want to do the opposite. Rather than emphasise detachment, we want to emphasise immersion. And so we want to shoot at a frame rate roughly at the upper end of what the human eye can discern, removing as much strobing as possible. That means roughly 60fps. The fact that there are two frames being shown, one for each eye, doesn't alter this equation. It still needs to be 60fps per eye.

    > The Canon dual fisheye on an EOS R5C produces two images on an 8k frame. The two images render to a single image which is half of 8k. This suggests two synced 8k cameras could work, and that it doesn’t all have to occur on a single sensor as is suggested in that statement. 
    That is true, but it is difficult to get the lens spacing to match the 60mm inter-pupillary distance that I mentioned. If you remain constrained to this distance, then a single sensor is the most effective way to achieve this, because you don't have any dead space between the sensors and thus you can maximise sensor size. It can also ensure that you don't have any sync drifting between left and right eyes, which can be a tricky problem to solve.

    In theory you could presumably also create some sort of periscope system so that the two sensors can be entirely detached; but I imagine this would be very costly.

    Looking at the BTS shots of the Apple cameras, they interestingly don't follow this inter-pupillary distance rule. Nor does the iPhone 15 Pro, for that matter. The Vision Pro isn't available in my region, so I haven't had a chance to see what these spatial videos look like, but I wonder if there is some interesting computational work happening to correct for this. That sort of computational photography – which essentially repositions the lenses in software by combining the image data with depth data – is definitely implemented in the Vision Pro's video pass-through, where the perspective of the cameras at the front of the headset is projected back to where the user's eyes are.

    If there is a computational element going on here, then that's hugely interesting because a) it effectively solves this issue with needing to use one sensor, and b) it opens up intriguing possibilities of allowing a little bit of head movement, and corresponding perspective changes (i.e. true spatial video rather than just 3D video – or what is called 6DoF in the industry).
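
    As a toy illustration of that kind of depth-based reprojection, here is a minimal pinhole-camera sketch. The intrinsics and baseline are made-up illustrative values, and this is my own simplification, not anything Apple has published about its pipeline:

        # Toy sketch: re-project a pixel as if the camera had shifted sideways,
        # using per-pixel depth and a pinhole model. All values are illustrative.
        f_px, cx, cy = 1500.0, 960.0, 540.0   # assumed focal length / principal point (px)
        half_ipd_m = 0.030                    # half of a ~60mm inter-pupillary distance

        def shift_viewpoint(u, v, depth_m, shift_m):
            """Return the pixel position seen from a camera moved shift_m along x."""
            # Back-project the pixel into 3D camera space using its depth.
            x = (u - cx) * depth_m / f_px
            y = (v - cy) * depth_m / f_px
            # Translate the virtual camera, then project the point back to 2D.
            u_new = f_px * (x - shift_m) / depth_m + cx
            v_new = f_px * y / depth_m + cy
            return u_new, v_new

        # A near point (0.5m) shifts far more than a distant one (10m) - exactly
        # the parallax a physically separated stereo pair would have captured.
        print(shift_viewpoint(1000.0, 540.0, 0.5, half_ipd_m))   # (910.0, 540.0)
        print(shift_viewpoint(1000.0, 540.0, 10.0, half_ipd_m))  # (995.5, 540.0)

    Letting that shift follow small head movements is essentially the step from 3D video to the 6DoF idea mentioned above.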
  • Apple Vision Pro followup could be 18 months away

    This is purely speculative, but I’d imagine the next Vision Pro would launch alongside or after the release of the M5 chip… end of 2025 or even 2026. 

    With everybody treating this version as Series 0, Series 1 would need to iron out a lot of the shortcomings of the current headset, as well as any supply issues. 18 months just doesn’t seem like enough time for the substantive technological progress that would require.
  • M3 Ultra Mac Studio rumored to debut in mid-2024 -- without a Mac Pro

    > I much prefer macOS over Windows, but unless a person really needs Mac-specific software, they are much better off using something like an HP Z8 Fury G5, which offers much more freedom for high-end applications.
    LOL no. 

    With the money needed to spec that out like a Mac Studio, it’s better to just get an actual Mac Studio or Mac Pro. 

    There are still professional situations where an M3 Max isn’t up to scratch, such as 3D animation, where high-end rigs can have up to 1TB of RAM and multiple GPUs (even if they aren’t run in SLI, having a free GPU while another is rendering can be helpful). 

    Additionally, the cost of upgrading a rig’s GPU each year is less than the cost of buying a new Mac Studio each year. 
  • Kuo reiterates 120 mm tetraprism camera coming to iPhone 16 Pro

    Agreed with all the comments about 120mm not being useful. 70-90mm is the ideal portrait range, even up to 105mm in some situations. 120mm is too tight for portraits but also too short for wildlife and the like.

    I actively chose the 15 Pro over the Pro Max to avoid the 120mm. Initially the marketing pulled me in the other direction, but having used the Pro Max, the 120mm is just an underwhelming camera - too tight and (to my eyes) quite noisy.