Devblog February 2026
General
The January, February, and March 2026 devblogs were written at the same time due to delays.
This devblog focuses on audio systems, room and sound propagation setup, CPU performance issues, and the solutions implemented.
It also covers progress on modular interaction systems, animation handling, and game master tools.
Audio - Finishing the Audio Assets Replacement
During this month I finished updating all the audio logic and sounds from Unreal to Wwise.
Now every sound is rendered by Wwise.
It is not just about moving a file from Unreal to Wwise: I need to update the code and game logic to make the game modular and to integrate well with Wwise.
For example, when Cobble gets stepped on, the character script does not call the bone crack sound directly:
- It will send a request to the new body damage system with context.
- The body damage system will check the character config and then call the sound in Wwise with other effects.
The same goes for the character voices when Cobble gets hurt, the music in the main menu, etc.
Now most sounds use config files read by small new systems; nothing is hard-coded, everything is configurable.
If I want to replace the bone crack with a toy squeak, that is very easy to do.
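As a rough illustration of this config-driven flow (all names here are hypothetical, not the actual MMVS code), the lookup can be as simple as a table that the body damage system consults before posting anything to Wwise:

```python
# Hypothetical sketch: a body damage system resolves which Wwise event
# to post from a character config, instead of the character script
# hard-coding the sound.

BODY_DAMAGE_CONFIG = {
    # damage_type -> Wwise event name (all names are made up)
    "step_crush": "Play_BoneCrack",
    "bite": "Play_BiteImpact",
}

def resolve_damage_event(config, damage_type, default="Play_GenericHit"):
    """Pick the sound event for a damage request; fall back to a default."""
    return config.get(damage_type, default)

# Swapping the bone crack for a toy squeak is a one-line config change:
BODY_DAMAGE_CONFIG["step_crush"] = "Play_ToySqueak"
```

The character script never knows which sound plays; it only describes what happened, and the config decides the rest.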
It’s very similar to what I did for the foot interactions, with config files that contain the sounds, FX, footprints, camera shake, etc.
(I spoke about the foot interactions in the January devblog)
Some sounds were trickier to update because they contain their own logic. For example, when the player rubs Talas’s sole, the sound is modulated by the rubbing speed and the surface type (clean or dirty).
So I needed to reproduce the same results with Wwise.
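A minimal sketch of that kind of modulation logic, with a made-up parameter mapping (in Wwise this would typically mean driving game parameters/RTPCs from the game code):

```python
def rub_sound_params(speed, surface, max_speed=100.0):
    """Map rub speed and surface type to modulation parameters.
    All ranges and values here are invented for illustration."""
    t = max(0.0, min(speed / max_speed, 1.0))  # normalized 0..1
    volume = 0.2 + 0.8 * t                     # louder when rubbing faster
    # Hypothetical pitch offset per surface type:
    pitch = {"clean": 0.0, "dirty": -2.0}.get(surface, 0.0)
    return {"volume": volume, "pitch": pitch}
```

The point is that the game sends only raw context (speed, surface), and the sound side decides how that shapes the audio.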
The update from Unreal Sound Cue to Unreal MetaSound was easier than from MetaSound to Wwise, because Wwise does not use the same paradigm.
About the pipe ambience, I had to redo it three times since the v0.4.7 update:
- SoundCue to MetaSound
- MetaSound to MetaSound with Soundscape
- MetaSound with Soundscape to Wwise
(Here I just show the SoundCue, MetaSound, and Wwise implementations)
I really like how Wwise manages sound triggering.
In Unreal, a MetaSound manages the logic internally, with triggers and parameters to play and stop the sound.
With Wwise it is more global: you call events, and from events you can manage the full project with all its sounds.
For example, with Unreal, the orbs have two MetaSounds: one for the loop and one for the pickup. When the orb spawns I start the loop sound, then when the player picks up the orb I stop the loop with a fade-out and play the pickup sound.
With Wwise I have two audio events, one for the spawn and one for the pickup. When the orb spawns I call the spawn event, and when the player picks up the orb I call the pickup event. The logic itself is managed in Wwise:
- I set the spawn event to play the loop.
- And the pickup event to stop the loop with a fade-out and play the pickup sound.
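The difference can be sketched with a toy event model (hypothetical names and API, not the Wwise SDK): the game code only posts events, and the play-loop / stop-with-fade / one-shot behaviour lives on the events themselves:

```python
class TinySoundEngine:
    """Toy model of Wwise-style events. The game never manipulates
    sounds directly; it just posts named events."""
    def __init__(self):
        self.events = {}       # event name -> list of (action, target)
        self.playing = set()   # currently playing loops
        self.log = []

    def define(self, name, actions):
        self.events[name] = actions

    def post(self, name):
        for action, target in self.events[name]:
            if action == "play_loop":
                self.playing.add(target)
            elif action == "stop_fade":
                self.playing.discard(target)  # a real engine would fade out
            self.log.append((action, target))

engine = TinySoundEngine()
engine.define("Orb_Spawn",  [("play_loop", "orb_loop")])
engine.define("Orb_Pickup", [("stop_fade", "orb_loop"),
                             ("play_once", "orb_pickup")])
```

With this split, changing what the pickup event does (add a sound, change the fade) never touches the game code that posts it.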
The attenuation is very powerful too. I can shape the sound variation however I want based on distance, occlusion, etc.
For example, here with the volume I use low-pass and high-pass curves to simulate air absorption when the macro walks far from the player.
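Attenuation curves like these are essentially piecewise-linear functions of distance. A small sketch with invented control points:

```python
def eval_curve(points, x):
    """Piecewise-linear curve evaluation, like an attenuation curve.
    points is a sorted list of (distance, value) pairs."""
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Hypothetical curves: volume fades out with distance while the
# low-pass amount increases (simulating air absorption).
VOLUME_CURVE  = [(0.0, 1.0), (50.0, 0.5), (100.0, 0.0)]
LOWPASS_CURVE = [(0.0, 0.0), (100.0, 100.0)]  # 0 = no filtering
```

Every property (volume, low-pass, high-pass, spread, etc.) just gets its own curve evaluated at the current emitter-listener distance.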
During my tests, I also added debug menus to test the sounds directly in-game.
Audio - CPU Issue in Wwise
After that, I did the first tests with Wwise and the game testers, and we noticed a critical issue.
In game, the CPU usage is critically higher than in the Unreal Editor. This produces many audio glitches and crackling that are not present in the Unreal Editor.
I did most of my tests in the Editor, so I had not noticed the issue before.
I started with simple optimizations and fixed the common audio bugs. Checking the Wwise profiler, I found that what uses the most CPU is the sound propagation raycasting and the sound pathfinding.
(Here in the Editor I already peak at the CPU limit, but that does not produce issues)
Wwise doesn’t support baked sound propagation, so it needs to be calculated in real time at intervals.
It also doesn’t use the GPU for raycasting, so the CPU usage is high.
Small note: Raycasting determines whether objects are visible. Wwise uses it to check if a sound is blocked by an object and to calculate the sound path for spatialization and attenuation. This process is much more CPU-intensive than using the GPU, which is optimized for rendering but less reliable for data extraction.
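To illustrate what such an occlusion raycast does per obstacle, here is a minimal slab test (a generic geometry routine, not Wwise’s actual implementation): it checks whether the straight line from the emitter to the listener passes through an axis-aligned box.

```python
def segment_hits_box(p0, p1, box_min, box_max):
    """Slab test: does the segment p0->p1 intersect the axis-aligned box?
    An occlusion check runs something like this against every obstacle
    between the emitter and the listener."""
    tmin, tmax = 0.0, 1.0
    for a in range(len(p0)):
        d = p1[a] - p0[a]
        if abs(d) < 1e-12:
            # Segment parallel to this axis: must already be inside the slab.
            if p0[a] < box_min[a] or p0[a] > box_max[a]:
                return False
        else:
            t1 = (box_min[a] - p0[a]) / d
            t2 = (box_max[a] - p0[a]) / d
            lo, hi = min(t1, t2), max(t1, t2)
            tmin, tmax = max(tmin, lo), min(tmax, hi)
            if tmin > tmax:
                return False
    return True
```

Multiply this by every sound, every obstacle, and every update interval, and it is easy to see why the propagation raycasting dominates the CPU cost.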
To reduce the CPU usage, I used smaller attenuation radii, virtual voices, and many common optimizations.
I was able to make it stable in the Editor, under the limit.
But the main issue is that in game the CPU usage is multiplied by ~14 compared to the Editor. So CPU usage that is stable in the Editor is still not stable in the game.
I did many investigations and tests over several days, but I was not able to find any explanation. The logs and profilers do not give any clue about what specifically uses more CPU, and I cannot check the source code like I am used to doing with Unreal Engine.
I tested with different Wwise and Unreal Engine versions, and I still get the same issue with the same CPU usage difference between Editor and game.
To get help, I also made a post about it on the Wwise forum, but no answer was given. Wwise Forum Post
If a scene takes 100.0 us in the Unreal Editor, I don’t think it is normal that it takes 1400.0 us in game; that is a huge difference.
It is still an issue for now, so my only solution was to do extreme optimization, sacrificing as much as possible without impacting the sound quality and design.
After a few more days of optimization, I was able to get stable CPU usage in game:
- In a Standalone Game launched from the Editor, I get around 3% CPU usage with bigger peaks at 15%.
- With the packaged build, I get around 25% CPU usage with peaks at 75% (under the 80% CPU limit that avoids audio glitches).
Note that I am using Wwise 2024.1.8 with Unreal Engine 5.4.4 for MMVS.
Wwise 2025.1.5 with Unreal Engine 5.7.3 seems to work better, but still has this issue. (I added the details in the Wwise forum post if you are curious.)
Version v0.4.8 of MMVS will move to the latest versions of Wwise and Unreal Engine, so I will be able to make fewer sacrifices on the spatialization.
Audio - Room and Sound Propagation Redo
Following the CPU investigations, I updated all the level zones and geometry for sound propagation, reverb, occlusion, etc.
Then, as you know, we discovered the Wwise CPU issue with the testers.
We also discovered a bug that makes some rooms leak sound: around such a room, the player can hear the sounds of the inner room from specific positions.
I later discovered that the issue comes from the tool used to set up the rooms.
I spoke about that in the January devblog. Here are all the details:
Unreal Engine has a Brush system (BSP) to set up 3D volumes in a level.
It is what Wwise recommends for setting up rooms on the fly.
But using that for the Cozy House was a very bad idea.
In the Cozy House I have many areas that intertwine with each other; the player is very small and can go inside the walls, into the plumbing, under the floor, etc. So I need precise geometry with good modelling.
The Unreal brush is useful for simple shapes, but it becomes really complicated to manage once you start making complex shapes. On top of that, Wwise adds text on each face to specify the surface type. It is useful, but it makes the editor very laggy when you have many faces.
For a while that was just an ergonomic issue; I was able to do the full house with it.
But the main problem is that it creates very bad geometry, which is then used by Wwise for the sound propagation.
I discovered that when I started exporting the brush geometry to Blender.
Thankfully, it is possible to use the geometry from a static mesh instead of the brush.
I did some tests with clean geometry and fixed the sound leaks; it also improves the API calls and reduces the CPU usage a bit. (It still multiplies CPU usage by ~14x in-game.)
About the API calls: Wwise is a middleware that runs at the same time as MMVS, so the code needs to transfer data like the geometry and the sound events through requests and API calls.
From left to right:
- You can see the original brush geometry.
- The exported geometry in Blender with many issues.
- The new geometry with static meshes that I will use for the sound propagation.
As you can see, the exported geometry contains many unnecessary polygons, dislocated faces, etc.
In some cases, there are inverted faces too.
So I exported all the geometry to Blender and cleaned it, then replaced the brushes with static meshes using the clean geometry.
With the static meshes, I lose the ability to specify the surface type for the sound reflection plugins. However, I think it should be possible to do this with materials, so I plan to try it in the future.
Still about optimization: I set up a system to enable or disable entire areas depending on the listener’s position.
For example, the living room is connected to a storage room, the storage room is connected to a plumbing entrance, which itself connects to the entire pipe system.
Living room <-> Storage room <-> Plumbing entrance <-> Pipe system
- When the player is in the living room, it is unnecessary to calculate sound propagation in the plumbing and pipe system, so I disable it.
- When the player enters the storage room, I enable the plumbing entrance so they can hear sounds coming from it, but not from the full pipe system.
- If the player goes into the plumbing entrance or the pipe system, I enable both so they can hear everything. However, I keep the living room and storage room active, so it is still possible to hear Talas’s footsteps from those areas.
- Simple audio obstacles like the shelf, the armchair, etc. are active only when the player is in the room.
Here is how it looks:
Currently, all the areas are set up manually.
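A sketch of what this manual setup could look like as data (the area names come from the devblog, but the layout itself is invented): each area lists which areas stay active for sound propagation while the listener is there.

```python
# Hypothetical data layout for the manual per-area activation setup.
ACTIVE_AREAS = {
    "living_room":       {"living_room", "storage_room"},
    "storage_room":      {"living_room", "storage_room", "plumbing_entrance"},
    "plumbing_entrance": {"living_room", "storage_room",
                          "plumbing_entrance", "pipe_system"},
    "pipe_system":       {"living_room", "storage_room",
                          "plumbing_entrance", "pipe_system"},
}

def update_areas(listener_area, all_areas=frozenset(ACTIVE_AREAS)):
    """Return (enabled, disabled) area sets for the listener's position."""
    enabled = ACTIVE_AREAS[listener_area]
    return enabled, all_areas - enabled
```

Because Talas’s footsteps must stay audible from the plumbing, the living room and storage room remain enabled even deep in the pipes, which is why a simple "neighbors only" rule would not be enough here.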
Animated Text Scrolling
Following the testers’ feedback, I added an animated text scrolling feature.
This will fix many localization issues when the text is too long for small screen resolutions.
Rescale Pads Cheat
I added rescale pads and cheats to spawn them.
Currently, only the host character can rescale using cheat codes or the debug menu.
Now it will be possible to rescale client characters in multiplayer.
You can specify the target scale or size.
It is part of the custom minigame tools that will allow people to create their own game rules, so it may be possible to spawn them as a game master in the future.
Vore Bug Fix and Cleanup
I did a big cleanup of the animation management scripts related to the Vore and Maw Exhib.
Initially it was to fix bugs when Cobble lies down on and stands up from Talas’s tongue, and it was better to do a big cleanup for the future interactions than a workaround.
About the vore management, I now have a better transition system for the animations when I need to blend many systems together, like here:
- The CharacterGrab, used to manage character grabbing and AI when Talas keeps Cobble in his mouth.
- The stuck zone, used to attach Cobble to the tongue and manage some animations.
- The situation system, to manage the animation state with the struggle, lie down, stand up, etc.
- The interaction system, which triggers the gameplay action when Cobble lies down or struggles; it also manages its own state depending on the context.
And I have a main script that manages all of this and the communication between all the systems.
Last month I spoke about the modular interaction system that I started to set up.
It is used to manage the interaction conditions with mini scripts that can be reused and plugged into any interaction point.
This avoids creating a new child class for each interaction: I can plug in the conditions I need, and I can create new ones if needed.
Here, for the Maw Exhib, I used these new functionalities and redid the user interaction state.
With these changes I also moved the interaction point detection to C++.
Before, the user interaction state was just a group of data with the interaction points considered interactable, near, or too far.
Now the local player stores every interaction point with its own state: Focused, Near, Far, or Invisible.
And it is separate from the world, which stores its own interaction states: Enabled, Disabled, or Invisible.
These changes fix the original bugs, fix potential other bugs, and make the interactions more robust for the future.
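A rough model of these two separate state stores (a hypothetical structure, not the actual C++): the world state and the per-player state are kept apart, and interacting requires agreement between them.

```python
# Hypothetical sketch of the split described above: the world stores
# Enabled/Disabled/Invisible per interaction point, while each local
# player stores Focused/Near/Far/Invisible for the same points.

WORLD_STATES  = ("Enabled", "Disabled", "Invisible")
PLAYER_STATES = ("Focused", "Near", "Far", "Invisible")

class InteractionPointStore:
    def __init__(self):
        self.world = {}    # point id -> world state
        self.player = {}   # point id -> local player state

    def set_world(self, point, state):
        assert state in WORLD_STATES
        self.world[point] = state

    def set_player(self, point, state):
        assert state in PLAYER_STATES
        self.player[point] = state

    def can_interact(self, point):
        """Only a world-Enabled point that the player currently
        focuses can actually be interacted with."""
        return (self.world.get(point) == "Enabled"
                and self.player.get(point) == "Focused")
```

Keeping the two stores separate means the world can disable a point globally without each player's focus logic having to know why.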
Progress with Modular Interactions
Following the changes to the interaction state and the modular functionality, I kept old code to stay compatible with the old interactions.
It is common to keep old code and mark functions and variables as deprecated for a while.
That way, the dependents keep working, and if other developers work on them they get notified about the deprecated code that needs to be updated.
But in this case it is trickier: the new interaction state is not compatible with the old one, and the code checks both the old and the new interaction states to support both systems.
For the prototype and checks that was fine, but I preferred to clean everything immediately to keep the code clean and avoid too much deprecated stuff.
So I redid all the interactions using the modular system.
During the process I created many interaction condition scripts that are now used a bit everywhere.
- Make the interaction invisible when the player is too small or too big.
- Disable an interaction when the player is full of health.
- Set tooltip or Title depending on the context.
- And many more.
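A minimal sketch of how such pluggable conditions can work (the naming mirrors the devblog’s "IM_" scripts, but the implementation is invented): each condition is a small reusable check, and an interaction point just holds a list of them instead of subclassing.

```python
# Hypothetical condition scripts, each a tiny reusable predicate:
def im_invisible_when_vr(ctx):
    return not ctx.get("is_vr", False)

def im_visible_scale_range(ctx, lo=0.5, hi=2.0):
    return lo <= ctx.get("player_scale", 1.0) <= hi

def im_disabled_when_full_health(ctx):
    return ctx.get("health", 1.0) < 1.0

class InteractionPoint:
    def __init__(self, conditions):
        # Plug any mix of conditions; no child class per interaction.
        self.conditions = conditions

    def is_usable(self, ctx):
        return all(cond(ctx) for cond in self.conditions)

# A healing point: disabled at full health, hidden at extreme scales.
heal_point = InteractionPoint([im_disabled_when_full_health,
                               im_visible_scale_range])
```

Adding a new behaviour is just writing a new predicate, not touching the interaction point class.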
To go a bit deeper into the technical details: my interaction system uses tracker classes to detect all interactions depending on the context.
For example, I have a tracker for the desktop user that detects all interaction points visible to the camera and chooses the best one.
I also have VR hand trackers that detect all interaction points near the hands, and more specific trackers for the situation system and other stuff.
Every tracker manages the interaction states on its own, so it is very modular and flexible for VR and desktop.
Before, when the player pressed interact, the code checked which system was active and called the tracker associated with that system.
It was hard-coded.
Now, when the player presses interact, the interaction system automatically detects all registered trackers and calls interact on the trackers that fit the context. So in VR, if the player tries to interact with the right hand, only the right-hand tracker is called, not the left-hand or desktop tracker. A tracker just has to register itself with contextual conditions, and it is fully modular.
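The registration-based dispatch can be sketched like this (a hypothetical API): trackers register with a context predicate, and the interact call only reaches the trackers whose predicate matches.

```python
class InteractionSystem:
    """Toy version of the tracker registry: no hard-coded dispatch,
    each tracker brings its own contextual condition."""
    def __init__(self):
        self.trackers = []  # list of (predicate, tracker name)

    def register(self, predicate, name):
        self.trackers.append((predicate, name))

    def interact(self, ctx):
        """Dispatch interact only to trackers that fit the context."""
        return [name for pred, name in self.trackers if pred(ctx)]

system = InteractionSystem()
system.register(lambda c: not c["is_vr"], "desktop_tracker")
system.register(lambda c: c["is_vr"] and c["hand"] == "right",
                "right_hand_tracker")
system.register(lambda c: c["is_vr"] and c["hand"] == "left",
                "left_hand_tracker")
```

A new tracker (say, for the situation system) just registers itself; the dispatch code never changes.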
In the video, at the bottom left, you can see the interaction states of both players.
The Cobble bed now uses a common interaction point with condition modifiers.
The bed script just has a situation to lie down and stand up, an interaction point to lie down, and an interaction situation to stand up.
- On the interaction point I add:
- “IM_InvisibleWhenVR” to make the bed usable in non-VR only.
- “IM_SituationNotInUse” to make the bed usable by only one character at a time.
- On the situation interaction:
- “IM_SelfIdleInSituation” so the character can only stand up if he is fully lying on the bed.
Everything is modular and super easy to set up. It is replicated for multiplayer use, and optimized in C++.
What’s nice with modular code is that I can literally disable the full system without issues. I can also use it in other projects without dependency issues with the main project.
Useful for debugging and for keeping my tech future-proof.
Maw Exhib, Game Master, Tool Observer and C++ refactor
I made a new icon for the Maw Exhib, and I finally added the game master tool to control when Talas shows his maw.
Before, only the AI was able to use the Maw Exhib.
I also reworked the base game master script and moved it to C++; the main content related to the game master logic is still in Blueprint.
But the root logic about which player is the game master, which tool is used, which objects are selected or controlled, etc. is now in C++.
I also prepared support for multiple game masters in the same game; the root code is now ready for that. :3
(Btw, I noticed the animations are very rigid in the video; I will maybe rework them a bit before the v0.4.7 release)
With the game master root code changes, I also simplified many things. For example, in the video, when I select the Maw Exhib tool it shows FX where it is possible to show the maw.
During the level design, I set up the points where the AI can show his maw with simple actor scripts.
Before, when a tool was selected and used by the local game master, the tool would spawn a custom scripted FX and despawn it when the tool was deselected.
- For the Maw Exhib, it spawns a MawExibFxLocations, and the MawExibFxLocations spawns a single FX.
- Then, every frame, the MawExibFxLocations detects all possible Maw Exhib situations, calculates the location using metadata set by the owner actor script, and notifies the FX, which updates its position.
This is complicated because you need to set up specific settings directly on the tool, you need a script for the FX, and you need to do many calculations every frame.
Now, when a tool is selected and used by the local game master:
- I directly add the FX on the actor script where I want to show it.
- A GameMasterToolObserver on the actor script detects the use and simply notifies it; then, in the actor code, I can make the FX visible or not.
It is a way more user-friendly approach: you just need a GameMasterToolObserver, set the tracked tools, and set up what happens when the notifications are received. And it is more optimized on the CPU side.
No runtime detection and calculation, just event-based notifications.
In the root code it is much more complex, because I have a full system to manage all the interactions between the scripts and the replication, but this makes the usage very easy, like I did with the interactions.
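The observer approach can be sketched like this (a hypothetical implementation, not the actual MMVS code): instead of a spawned FX script polling every frame, actors attach an observer that is only notified when a tracked tool is selected or deselected.

```python
class GameMasterToolObserver:
    """Attached to an actor; reacts only to tools it tracks."""
    def __init__(self, tracked_tools, on_select, on_deselect):
        self.tracked = set(tracked_tools)
        self.on_select, self.on_deselect = on_select, on_deselect

class GameMaster:
    def __init__(self):
        self.observers = []

    def add_observer(self, obs):
        self.observers.append(obs)

    def select_tool(self, tool):
        for obs in self.observers:
            if tool in obs.tracked:
                obs.on_select(tool)

    def deselect_tool(self, tool):
        for obs in self.observers:
            if tool in obs.tracked:
                obs.on_deselect(tool)

# An actor toggling its FX visibility purely from notifications:
fx_visible = {"value": False}
observer = GameMasterToolObserver(
    {"MawExhib"},
    on_select=lambda t: fx_visible.update(value=True),
    on_deselect=lambda t: fx_visible.update(value=False),
)
gm = GameMaster()
gm.add_observer(observer)
```

Nothing runs per frame here: the cost is paid only at the moments a tool changes hands, which is the CPU win over the old polling approach.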
See more: Devblogs - WIP Telegram Channel - Discord Server - Support the Game - MMVS Game