Last Active
  • Mac Studio review roundup: Incredible speed, that not everybody needs

    aderutter said:
    Incredible speed? Really...

    Blender 3.1 Benchmark. Blender Benchmark is superior to Cinebench IMHO as it tests multiple scenes with various complexity across CPU and GPU.

    M1 Ultra - 1132
    M1 Max  - 706

    RTX 3090 5552

    I guess Apple wasn't using this benchmark suite for their performance graphs.

    It’s also very biased against Apple Silicon as other GPUs prefer small tiles but ASi prefers larger tiles.

    Better to compare Redshift rendering times imho.
    You are clearly speaking from a position of ignorance, as Blender doesn't use tiles anymore; Cycles X is a progressive renderer.

    Redshift is not at all well optimised on Ampere hardware, so use that comparison if it makes you think Apple Silicon is closer in performance, but it isn't: Apple Silicon GPU performance is woeful for 3D rendering. Looking at these representative benchmarks, not even the Mac Pro will get close to an 18-month-old GPU. And with the 3090's replacement coming at the end of the year, promising 2x its performance, unless Apple has something in the pipeline like massively more GPU cores, the Mac Pro will be yet another dud for 3D work.
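    To put the quoted scores in proportion, here's a quick ratio calculation (pure arithmetic on the numbers posted above; the benchmark's score units themselves don't matter for the ratios):

```python
# Blender 3.1 Benchmark scores as quoted above (higher is better).
scores = {"M1 Ultra": 1132, "M1 Max": 706, "RTX 3090": 5552}

# Normalise everything to the RTX 3090 to see relative performance.
for name, score in scores.items():
    print(f"{name}: {score / scores['RTX 3090']:.2f}x of a 3090")

ultra_vs_max = scores["M1 Ultra"] / scores["M1 Max"]    # ~1.6x, not 2x
rtx_vs_ultra = scores["RTX 3090"] / scores["M1 Ultra"]  # ~4.9x
```

    So on these numbers the 3090 is roughly 4.9x an M1 Ultra, and doubling the Max's GPU into the Ultra scaled the score by about 1.6x rather than 2x.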
  • Apple's Mac Studio launches with new M1 Ultra chip in a compact package

    cgWerks said:

    Thanks for the in-depth background and information!

    Yeah, when I did a lot of CAD/3D work in the past (I spent over half a decade heavily into it, then more as a hobby since), I was using Electric Image Animation System (dating myself there! I used to hang in forums with Alex Lindsay, people from ILM, people working on The Matrix, etc.) and a Spatial ACIS-based 3D CAD app (Vellum Solids -> Ashlar Cobalt -> Concepts Unlimited -> ViaCAD now. Same product.) I was doing mostly architectural design (in a mechanical sense, though!) and visualization.

    My scenes were HUGE and complex. I modeled every detail, so it wasn't like game design or even a lot of 'illustrative' architectural stuff. I was dealing with many millions of polygons (once tessellated for rendering) even back in the late 90s. EIAS was one of a few apps that could even handle that very well.

    I more recently learned Revit to get back into the industry, but who knows what I'll do on my own, as I want to get more into architectural visualization. BTW, EIAS has an update in the works, so I'm quite interested to see what they do. I'm probably going to start playing with Blender, as it has gotten quite advanced. I'm really interested in the huge number of renderers out there these days. Making a choice must be tough!

    But, from your post, it looks like how well the new Macs might do will depend a lot on which type of 3D you do, not just that you do 3D. I wouldn't have considered a lot of that, so thanks much! Great points about how all that shared RAM might be used. I was looking and even the 3080/3090 are in that ballpark of 800 GB/s (a bit above and below, respectively). If the speed is the same, then it shouldn't matter much if it is dedicated or shared RAM, I'd think. It just provides greater flexibility.

    Yeah, for video production, it looks like Apple will own that. And, yeah, at this point I just run one machine, which will be a Mac. But, if I start making money on my own with 3D work, then I'll pick the most money-making machine for that work (and probably use a Mac for everything else).
    Well, I started out on Imagine 3D on the Amiga, then Lightwave, then C4D. I now use Houdini and a bit of Blender. So who's dating whom?

    Apple has a huge team of in-house developers who have assisted greatly in porting Redshift, Octane and now Blender's Cycles to Metal. I saw yesterday that Apple is committing huge amounts of development to Blender: they are writing a new Metal backend so Blender runs right on top of Metal for the viewport and for EEVEE, Blender's real-time renderer (Cycles is Blender's photorealistic renderer). I also saw one of the Apple developers say they expect to get more performance out of Cycles in the coming months as they begin to lean more on the M1 architecture. So even for Apple it's an iterative process to get the best out of the new architecture.

    BTW, I highly recommend Blender; it has blossomed into an incredible app, and Apple is playing a big part in its development. Blender is going places, and it pays to know it, as so many more studios are using it now.

    It's still very early in the development cycle for M1-compatible apps and developers are still finding their feet, but it won't take long before everything is on M1 and highly optimised.
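    For anyone wanting to try the Cycles Metal backend mentioned above, it can be switched on from Blender's embedded Python as well as from the Preferences UI. This is a config fragment for Blender 3.1+ on Apple Silicon; it only runs inside Blender's own interpreter, and the preference names are from Blender's bpy API:

```python
import bpy  # only available inside Blender's embedded Python

# Select Metal as the Cycles compute backend (Blender 3.1+ on macOS).
cycles_prefs = bpy.context.preferences.addons["cycles"].preferences
cycles_prefs.compute_device_type = "METAL"
cycles_prefs.get_devices()  # refresh the detected device list

# Enable every detected Metal device (the M1 Max/Ultra GPU cores).
for device in cycles_prefs.devices:
    device.use = True

# Tell the current scene to render Cycles on the GPU.
bpy.context.scene.cycles.device = "GPU"
```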

    As you're interested in Archviz, I would check out Twinmotion and Enscape, and also Unity and Unreal Engine, as I know for a fact that many of the biggest architecture studios in the world are moving towards real-time rendering solutions: they fit so well for highly stylised, clean renders, and you can iterate so much quicker. The Mac Studio can access a huge amount of VRAM, and if you make use of that in your Archviz work it could be quite the USP; to get an nVidia GPU with similar memory you'd have to spend more than a whole Mac Studio costs just on the GPU! There are definitely areas of opportunity for Apple Silicon Macs in 3D.

    The video editing market is now completely owned by Apple. They own it so much that a Mac mini performs better than a lot of high-end PCs; that's not hyperbole, that's fact!

  • Apple's Mac Studio launches with new M1 Ultra chip in a compact package

    @cgWerks

    Forgive me for not using quotes.

    For quite a large chunk of my career, 3D rendering was always done on the CPU; I used to have a room stacked with cheap PC render slaves as a render farm to get stuff done, and most 3D artists had the same. Somewhere around 2014-2015 there was a buzz around a new renderer called Octane, which ran on the GPU rather than the CPU. It revolutionised 3D for us freelancers because we could replace a stack of PCs with a single PC stuffed with GPUs instead. It was transformative, and many of my fellow C4D users leapt on the bandwagon. GPUs are highly parallel devices that are good at doing lots of simple tasks together, so they were a great fit for 3D rendering and proved an order of magnitude better than CPU rendering. The only downside was that all the models and textures had to fit in VRAM, so there were scene-complexity limitations, but those limitations didn't really affect most freelancers, as our scenes are typically much less complex than studio productions. High-end productions are typically still rendered on the CPU on a farm because of RAM limitations; Hollywood scenes are hundreds of GB in size, far beyond what can fit on a GPU.

    There are also two types of GPU rendering to be aware of: the real-time viewport/game-engine style of rendering, which is becoming more and more popular in Archviz, and the full photorealistic final-frame rendering from Octane, Redshift, Cycles and the like, which calculates global illumination, accurate subsurface scattering, physically accurate materials and so on. Both types have their place, and they are set on a collision course a few years into the future as hardware gets more powerful.

    The M1-series GPUs don't have ray tracing hardware, which is a bit of a disappointment TBH, as it would improve performance dramatically. Even so, with architectural renderers like Twinmotion, and even without the ray tracing features of the PC version, you can still produce architectural renderings of exceptional quality, especially if you're going for the non-photoreal look that is popular in the Archviz game. This could be an area where the M1's large unified memory is a USP, as it can hold scenes of much greater complexity than a typical consumer-level PC GPU: you have to buy a Quadro to get 48 GB of VRAM, and connect two together for 96 GB, which is a very expensive alternative. I think the M1/2/3 Macs will find a place in 3D, but not for photorealistic rendering with Octane, Redshift et al., at least not for a while.
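    To put the unified-memory argument in rough numbers, here is a back-of-envelope sketch of how quickly scene data eats VRAM. The ~112 bytes per triangle and the uncompressed-RGBA texture sizes are illustrative assumptions on my part, not figures from any particular renderer:

```python
# Back-of-envelope VRAM footprint for a scene (illustrative assumptions).

def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed texture size, e.g. 8-bit RGBA."""
    return width * height * channels * bytes_per_channel

# One uncompressed 8K RGBA texture: 268,435,456 bytes = 256 MiB.
tex_8k = texture_bytes(8192, 8192)

# A hypothetical heavy Archviz scene: 200M triangles at an assumed
# ~112 bytes per triangle (verts, normals, UVs, BVH overhead).
BYTES_PER_TRI = 112  # assumption; varies by renderer
geometry = 200_000_000 * BYTES_PER_TRI  # ~22.4 GB
textures = 60 * tex_8k                  # 60 8K maps, ~15 GiB

total_gib = (geometry + textures) / 2**30
print(f"~{total_gib:.0f} GiB")  # well past a 24 GB RTX 3090's VRAM
```

    A scene like that overflows even a 24 GB consumer card but still fits in the 64 GB or 128 GB of unified memory a Mac Studio can offer the GPU.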

    Another area where the M series should excel is the real-time performance-visuals sector. Macs were the hardware of choice at one stage due to the popularity of Quartz Composer, which was the core tech of a lot of software; Apple allowed it to die and the market moved to PC, but now I think Apple Silicon Macs could see this area grow again. That large unified memory plus the hardware codecs could revolutionise performance visuals, or at least allow performers to use a tiny Studio box instead of a massive PC rig. I can't wait to see what software developers do with Apple Silicon Macs in this area.

    Apple is behind on raw GPU performance and on GPU features like hardware ray tracing. That may come down the line, but right now, if you need brute strength, nVidia GPUs are way ahead. The excellent design of the M1 has allowed its flyweight GPUs to hit like a middleweight in certain situations, and there's no reason to rule out Apple punching even further above its weight with future hardware revisions. But for that, more GPU cores rather than CPU cores are needed; very few people need more than 10 CPU cores, and 20 or 40 CPU cores is a waste of silicon budget for me.

    What Apple has done exceptionally well is make the M series the absolute king of video editing. The codecs on the SoC mean that the CPU doesn't get involved in decompressing the footage; the CPU cores can concentrate on housekeeping while the GPUs compute the colour grade, filters and effects, and because they all share the same memory it maximises efficiency, though there is a limit with the current GPUs. Puget Systems runs an excellent Resolve benchmark, and I bet the Mac Studio and future Mac Pro will be topping those charts in a very short period of time; only the very highest-spec PC, consuming much more power, will come close. If all I did was video work then a Mac Studio would be ideal and I'd be back on Mac full-time.

    Right now it pays not to be blinkered and to use the best hardware for the job: for most 3D work use a PC (Linux if possible); otherwise get a Mac for everything else.
  • Apple's Mac Studio launches with new M1 Ultra chip in a compact package

    I hope in future versions Apple rebalances the CPU-core vs GPU-core mix. I'm sat here thinking a 10-core CPU plus 128 GPU cores would have been a much better balance for content creation in the Mac Studio. So many applications take advantage of the GPU these days for effects, smoke and fluid sims that having 20 CPU cores seems a waste of silicon, especially since traditional CPU tasks have been offloaded to onboard codecs and neural processors.

    If the Mac Pro is 2x M1 Ultra, then 40 CPU cores and only 128 GPU cores looks even worse balanced; knowing the software I use on a daily basis, I wouldn't be able to keep 20 CPU cores busy, let alone 40, because over the years developers have pushed so much work to the GPU. The M1 architecture looks kind of anachronistic and old-school in that respect. Give me a Mac Pro with 20 CPU cores and 256 GPU cores, or even better 512. I just don't see the content creator being that well served by large numbers of CPU cores in 2022.

    I'm sure the Mac Studio and Mac Pro will be kings of Cinebench; the problem is that no one I know cares about Cinebench scores, because everyone is rendering on the GPU. Most creatives I know in the business are typically running 16-core CPUs and are heavily invested in GPUs. Those who've got 32-core Threadrippers probably haven't got anything like good value from them.

    I'd be interested to hear whether any other content creators have similar views?