Hello everybody
Quick version: if I manually feed the VFX Graph the position and velocity of each particle every update, would this still improve GPU performance vs. running it on the Shuriken system?
I’m a Cognitive Science Master’s student and for my thesis I’m working on a project that uses only particles and forces in Unity (related to the paper “Group VR experiences can produce ego attenuation and connectedness comparable to psychedelics” | Scientific Reports). The project needs to run smoothly on a Quest 2, and I’m running into GPU limitations when I use the Shuriken particle system with transparent particles (unfortunately they have to be transparent for things to look correct). I’m emitting and moving the particles manually by script, applying attraction forces toward several moving points in space and computing collisions between particles. Using the IJobParticleSystem jobs I managed to make it CPU-performant, but the GPU is limiting the number of particles I can display while keeping FPS up (about 1k particles seems to be the limit if they’re all on screen).

Since I compute all the motion manually anyway, could I simply feed the position and velocity of each particle to the Update block of VFX Graph and still expect GPU performance improvements? I’ve never used the VFX Graph, but from what I’ve seen I could use attributes to feed it all the particle positions and velocities I compute in my script, correct? My particles never die: I spawn them once at start and then keep them alive indefinitely, so I can always access each particle by the index I saved when I emitted them manually via script at the beginning.
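For what it’s worth, the "feed computed data into the graph" idea can be sketched with GraphicsBuffers, which `VisualEffect` supports in recent Unity versions (2021.2+ / VFX Graph 12+) via `SetGraphicsBuffer`; the graph would then read them with Sample Graphics Buffer operators indexed by particle ID. This is only a minimal sketch under those assumptions; the exposed-property names `Positions`/`Velocities` are placeholders you would define in your own graph.

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Sketch: upload per-particle positions/velocities computed in C# to a
// VFX Graph every frame. Assumes the graph exposes two structured-buffer
// properties named "Positions" and "Velocities" and samples them by
// particle index in its Update context.
public class ParticleFeeder : MonoBehaviour
{
    public VisualEffect vfx;
    public int count = 1000;

    Vector3[] positions;
    Vector3[] velocities;
    GraphicsBuffer posBuffer, velBuffer;

    void Start()
    {
        positions  = new Vector3[count];
        velocities = new Vector3[count];
        posBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, count, 3 * sizeof(float));
        velBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, count, 3 * sizeof(float));
        vfx.SetGraphicsBuffer("Positions", posBuffer);
        vfx.SetGraphicsBuffer("Velocities", velBuffer);
    }

    void Update()
    {
        // ... run your existing attraction/collision simulation here,
        //     writing results into positions[] and velocities[] ...
        posBuffer.SetData(positions);
        velBuffer.SetData(velocities);
    }

    void OnDestroy()
    {
        posBuffer?.Release();
        velBuffer?.Release();
    }
}
```

Note that this only moves the simulation data across; the fill-rate cost of rendering 1k transparent quads is roughly the same whichever system draws them, so this alone may not relieve a GPU bottleneck caused by overdraw.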
I’m happy to provide more info to anyone.
VFX Graph isn’t good for Quest. I limited myself to Shuriken only, since it works entirely on the CPU while VFX Graph is GPU-only, so I assume it would make things worse.
The solution would be to find how to reduce overdraw, so maybe you can render those transparent particles as masked with dithering instead; that alone could already help a lot. Or try to reduce the places where they overlap each other.
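To illustrate the dithering idea: instead of alpha blending, you clip fragments against a screen-space Bayer pattern so the particle renders as an opaque dithered cutout and skips the blending cost entirely. A minimal fragment-shader sketch (assumed `v2f`/`_MainTex` names from a typical Unity particle shader):

```hlsl
// 4x4 Bayer thresholds, normalized to [0, 1).
static const float bayer4x4[16] = {
     0.0/16,  8.0/16,  2.0/16, 10.0/16,
    12.0/16,  4.0/16, 14.0/16,  6.0/16,
     3.0/16, 11.0/16,  1.0/16,  9.0/16,
    15.0/16,  7.0/16, 13.0/16,  5.0/16
};

fixed4 frag(v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv) * i.color;
    uint2 p = uint2(i.vertex.xy) % 4;        // screen-space pixel coordinate
    clip(col.a - bayer4x4[p.y * 4 + p.x]);   // discard below the dither threshold
    return col;
}
```

The result is visually noisier than true transparency, but on tile-based mobile GPUs like the Quest 2’s it trades expensive blended overdraw for cheap alpha-tested geometry.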
In the RealtimeVFX Discord there’s one huge post about Quest optimization. If you write to me there I can link you to it.
Thank you for your response. I found this link to your Discord but the invite has expired; could you please direct me to the correct one? Discord server
Never mind, I found the active link.
If you have the same name on Discord then I’ve pinged you.
When creating VFX for the Quest 2, it’s generally better to utilize the GPU over the CPU for better performance. The Quest 2 has a capable GPU, which is designed to handle intensive graphical processing tasks.
The CPU is responsible for handling non-graphical tasks, such as game logic and physics calculations. While the CPU is important for the overall performance of a game, it isn’t as critical for VFX processing as the GPU.
By utilizing the GPU, you can take advantage of its parallel processing capabilities to handle complex VFX calculations more efficiently. This results in better performance and smoother gameplay on the Quest 2.
VFX Graph is GPU-accelerated. This means that visual effects created with VFX Graph can use the power of the GPU to render more particles and more complex effects in real time, which can lead to higher performance and smoother gameplay on devices like the Quest 2.
That’s correct for platforms like PC/consoles, though not for Quest/mobile. If your hardware doesn’t have a strong GPU, then even though GPU particles are overall more efficient than CPU ones, they won’t work well on that device.
And Quest runs on mobile hardware, so it’s a no-go.
Oh, my bad, I thought it was for PC since the Quest 2 can be connected to a PC; I rarely think of VFX in mobile games.
But if that’s the case then you’re correct.