
Audio Graph (Version 0.2) and Internal Rework (Version 0.3) #13

Open

Ohjurot opened this issue Dec 26, 2020 · 14 comments

@Ohjurot
Owner

Ohjurot commented Dec 26, 2020

Audio

Sorry for the recent inactivity on this project, but my EE university studies are currently quite demanding and I didn't have much time. However, I have created an "audio graph" to illustrate the currently planned audio flow.
You can take a look at it here https://github.com/Ohjurot/DualSense-Windows/blob/audio/Doc/Audio/audio_graph.pdf

Feedback is highly appreciated; I have not started the implementation yet!

Rework and Additions

Since audio and some planned features require a more sophisticated allocation and background-worker concept, I'm planning to rework the internal flow of memory allocation and I/O calls to be more advanced and fully customisable (once this is done you will be able to integrate the API into your engine's job system; but don't worry, there will be an easy-to-use DefaultInit() function).
Currently I can't tell you much about the concrete plans because I have not started working on them, but there will be a class IMemoryAllocator and a struct API_FLOW_DESC to optionally configure the behaviour of the API.
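
To make this a little more concrete, here is a purely speculative sketch of what those two hooks could end up looking like; nothing of this exists yet, and every member name below (apart from IMemoryAllocator, API_FLOW_DESC and DefaultInit()) is made up for illustration:

    // Speculative sketch only -- not actual DS5W code.
    #include <cstddef>

    namespace DS5W {
        // User-replaceable allocator; a default heap-based implementation
        // would back the easy DefaultInit() path.
        class IMemoryAllocator {
        public:
            virtual ~IMemoryAllocator() = default;
            virtual void* allocate(std::size_t size, std::size_t alignment) = 0;
            virtual void  release(void* memory) = 0;
        };

        // Optional descriptor handed to the API at init time.
        struct API_FLOW_DESC {
            IMemoryAllocator* allocator = nullptr; // nullptr -> built-in default allocator
            bool spawnWorkerThread     = true;     // false -> the host drives the background work itself
        };
    }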

@Ohjurot Ohjurot added the enhancement New feature or request label Dec 26, 2020
@Ohjurot Ohjurot self-assigned this Dec 26, 2020
@Ohjurot Ohjurot pinned this issue Dec 26, 2020
@KITATUS
Contributor

KITATUS commented Dec 27, 2020

Excellent, thanks for the flow graph. This is enough information to at least make a space in the UE4 port for this!

@Ohjurot
Owner Author

Ohjurot commented Dec 29, 2020

@KITATUS another thing I should mention: with the addition of audio, a dedicated worker thread is required. As already stated, it will be possible to fully control the flow via the upcoming API_FLOW_DESC descriptor. I am not familiar with the job / threading system of the Unreal Engine, so if you want to fully integrate the API calls into the engine, you need to provide me with some information on how this kind of thing is done in UE (then I can design this system with UE in mind). I suspect UE has something like a UE4_JOB_DESC (or similar) which includes a function pointer to a static callback that accepts a void* set in the struct. This would fit perfectly with my current design... but it is probably a bit more complex ;)
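
To illustrate what I mean, a hypothetical shape of that hand-off could look like the following; the names DS5W_JOB_DESC, DS5W_JobCallback and submitToEngineJobSystem are made up and nothing like this exists yet:

    // Hypothetical sketch -- none of these types exist in the library yet.
    typedef void (*DS5W_JobCallback)(void* userData);

    struct DS5W_JOB_DESC {
        DS5W_JobCallback callback; // static function the engine should invoke
        void*            userData; // opaque context handed back to the callback
    };

    // Adapter the host application would provide (e.g. via API_FLOW_DESC):
    // wrap the job in the engine's own task type and enqueue it there.
    void submitToEngineJobSystem(const DS5W_JOB_DESC& job) {
        job.callback(job.userData); // trivial "run inline" stand-in
    }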

@KITATUS
Contributor

KITATUS commented Dec 29, 2020

As much as I hate throwing videos at people, I think this video sums up pretty well how UE deals (read: WANTS you to deal) with tasks / threads in a practical example: https://www.youtube.com/watch?v=0Yyh3oQgonI - I would show some documentation but UE falls off a cliff when it comes to documentation that is actually useful for things like threads and tasks :P Hope it helps!
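
For reference, the usual UE4 way to get a dedicated worker thread is the FRunnable pattern; a rough sketch of that shape (the class and member names below are placeholders, not anything from the DS5W code):

    // Standard UE4 FRunnable worker pattern -- placeholder names, just a sketch.
    #include "HAL/Runnable.h"
    #include "HAL/RunnableThread.h"
    #include "HAL/ThreadSafeBool.h"
    #include "HAL/PlatformProcess.h"

    class FDualSenseWorker : public FRunnable
    {
    public:
        virtual bool Init() override { bRunning = true; return true; }

        virtual uint32 Run() override
        {
            while (bRunning)
            {
                // ... pump controller I/O / audio work here ...
                FPlatformProcess::Sleep(0.001f);
            }
            return 0;
        }

        virtual void Stop() override { bRunning = false; }

    private:
        FThreadSafeBool bRunning; // defaults to false
    };

    // Usage:
    //   FDualSenseWorker* Worker = new FDualSenseWorker();
    //   FRunnableThread* Thread  = FRunnableThread::Create(Worker, TEXT("DS5WWorker"));
    //   ...
    //   Thread->Kill(true); delete Thread; delete Worker;  // on shutdown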

@Ohjurot
Owner Author

Ohjurot commented Dec 29, 2020

@KITATUS The video was a good start and helped me gain a first insight into the Unreal Engine's concurrency architecture. Integrating should be no problem then. I just requested access to the UE4 source code and plan to look at it.

@KITATUS
Contributor

KITATUS commented Dec 29, 2020

If it helps take some stress off you, I am more than happy to integrate the next release with this memory management included and report back any pain points or changes I had to make to get it working (if any). That way you don't have to crawl through the source code of UE (which, from too many years of experience, I warn can be very painful!)

@Ohjurot
Owner Author

Ohjurot commented Dec 30, 2020

@KITATUS This would be nice! I poked around in some random files and didn't see many code comments....

@petersvp

Is there any API to send samples to the controller? Or hints? Or anything related to reverse engineering the audio part? I don't want the library to spawn a worker thread by default, but I want to find an interface to send audio data to the controller (the thread is my responsibility).

@Ohjurot
Owner Author

Ohjurot commented Oct 31, 2023

The controller is a normal audio device. You should be able to use it with no library at all. I have a gist as a demo: https://gist.github.com/Ohjurot/b0c04dfbd25fb71bc0da50947d313d1b
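
In essence it is just ordinary shared-mode WASAPI rendering. The outline below is only a rough sketch of that kind of setup, not the gist verbatim; error handling is stripped and it assumes the controller endpoint is the default render device:

    // Rough WASAPI shared-mode render outline -- sketch only, not the gist itself.
    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    int main() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        IMMDeviceEnumerator* enumr = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&enumr);

        // Assumes the DualSense endpoint was made the default output device;
        // otherwise enumerate the endpoints and pick it explicitly.
        IMMDevice* device = nullptr;
        enumr->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        IAudioClient* client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);

        WAVEFORMATEX* fmt = nullptr;
        client->GetMixFormat(&fmt); // 4 channels for the DualSense over USB
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                           10000000 /* 1 s buffer, in 100 ns units */, 0, fmt, nullptr);

        IAudioRenderClient* render = nullptr;
        client->GetService(__uuidof(IAudioRenderClient), (void**)&render);

        UINT32 frames = 0;
        client->GetBufferSize(&frames);

        BYTE* data = nullptr;
        render->GetBuffer(frames, &data);
        // ... fill `data` with interleaved float samples for all 4 channels ...
        render->ReleaseBuffer(frames, 0);

        client->Start();
        Sleep(1000);
        client->Stop();

        // Releasing the COM objects and CoTaskMemFree(fmt) omitted for brevity.
        return 0;
    }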

@DJm00n

DJm00n commented Oct 31, 2023

@Ohjurot Neat!

One thing to add: you can select the proper audio device by matching the same DEVPKEY_Device_ContainerId property as on the corresponding HID device:

[screenshot of the device properties omitted]

PS: This only works over a regular USB connection. Sending sound to a DualShock 4 / DualSense via Bluetooth is way harder - no audio device is present in Windows in that case...
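
A rough sketch of that lookup with the MMDevice API (assuming COM is already initialized, that the endpoint property store exposes DEVPKEY_Device_ContainerId as VT_CLSID, and that hidContainerId was already read from the HID device interface):

    // Sketch: find the render endpoint whose container id matches the HID device's.
    #include <windows.h>
    #include <initguid.h>
    #include <mmdeviceapi.h>
    #include <devpkey.h>
    #include <propidl.h>

    IMMDevice* FindEndpointByContainerId(const GUID& hidContainerId)
    {
        IMMDeviceEnumerator* enumr = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&enumr);

        IMMDeviceCollection* endpoints = nullptr;
        enumr->EnumAudioEndpoints(eRender, DEVICE_STATE_ACTIVE, &endpoints);

        UINT count = 0;
        endpoints->GetCount(&count);

        IMMDevice* match = nullptr;
        for (UINT i = 0; i < count && !match; ++i) {
            IMMDevice* device = nullptr;
            endpoints->Item(i, &device);

            IPropertyStore* props = nullptr;
            device->OpenPropertyStore(STGM_READ, &props);

            // DEVPROPKEY and PROPERTYKEY share the same layout.
            PROPVARIANT var;
            PropVariantInit(&var);
            props->GetValue(*reinterpret_cast<const PROPERTYKEY*>(&DEVPKEY_Device_ContainerId), &var);

            if (var.vt == VT_CLSID && IsEqualGUID(*var.puuid, hidContainerId)) {
                match = device; // caller releases
            } else {
                device->Release();
            }
            PropVariantClear(&var);
            props->Release();
        }

        endpoints->Release();
        enumr->Release();
        return match; // nullptr if no endpoint shares the container id
    }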

@petersvp

When I run the gist code, the gamepad plays haptics only. I changed the sine wave to something more recognizable as a tone and heard it from my headphones when I sent it to the first 2 channels.

If I assign the value to the first two channels, however, nothing happens: nothing comes out of the gamepad speaker. It seems the controller only responds to channels 3 and 4, via the haptics.

If I use the official devnet library, the gamepad itself plays audio from its speaker... but it's complicated, for obvious reasons.

@DJm00n

DJm00n commented Oct 31, 2023

@petersvp yes, this audio device is quadraphonic. Channels 1-2 are responsible for the gamepad speaker (or attached headphones), channels 3-4 for haptic feedback. This is by design for the USB connection to the PC.
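
A tiny illustration of that layout with interleaved float frames (buffer management and sample-rate handling omitted; the 40 Hz "rumble" tone is just an arbitrary example):

    // Channels 0-1: speaker / headphones, channels 2-3: left / right haptics.
    #include <cmath>
    #include <cstddef>

    void fillQuadFrames(float* interleaved, std::size_t frameCount,
                        std::size_t sampleRate, double toneHz, float hapticLevel)
    {
        const double twoPi = 6.283185307179586;
        for (std::size_t i = 0; i < frameCount; ++i) {
            float tone   = 0.2f * (float)std::sin(twoPi * toneHz * i / sampleRate);
            float rumble = hapticLevel * (float)std::sin(twoPi * 40.0 * i / sampleRate);

            interleaved[i * 4 + 0] = tone;   // ch 1: speaker / headphone left
            interleaved[i * 4 + 1] = tone;   // ch 2: speaker / headphone right
            interleaved[i * 4 + 2] = rumble; // ch 3: left haptic actuator
            interleaved[i * 4 + 3] = rumble; // ch 4: right haptic actuator
        }
    }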

@petersvp

My issue is that whatever I write to channels 1 and 2 seems to be ignored. I checked in mmsys.cpl that the volume isn't turned down to zero or anything like that. I can't get any audio from the speaker, only from the haptics.

@DJm00n

DJm00n commented Oct 31, 2023

AFAIK an additional HID output report has to be sent to switch between speaker output, headphone output, or no output.

I have found some related code. See command_speaker() and command_volume().

@petersvp

Hello, looks like this Linux tool actually knows much more...
I added this to void __DS5W::Output::createHidOutputBuffer:

	hidOutBuffer[0x07] = 3 << 4; // DS_OUTPUT_AUDIO_OUTPUT_PATH_SHIFT; presumably routes output to the built-in speaker
	hidOutBuffer[0x05] = 256;    // note: 256 does not fit in a byte and wraps to 0 if the buffer is 8-bit

And suddenly everything on Windows - everything that comes through MME and WASAPI, every other application - started to play through the built-in speaker (I did set the gamepad as the default playback device).

Thanks, now it's just a bit of work to get a simple audio mixer working as part of the library so I can play small waveforms and haptics through it.

(I am basically writing a C# wrapper for use with Unity without the new Input System and will soon create a fork / new repo here on GitHub (and on DevNet too))
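
For the mixer mentioned above, something along these lines would probably be enough (a sketch only; Voice and mixInto are made-up names, not part of the library):

    // Minimal mixer sketch: sum mono voices into one channel of the
    // interleaved 4-channel stream and clamp to [-1, 1].
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Voice {
        const float* samples;      // mono source data
        std::size_t  length;
        std::size_t  cursor = 0;
        int          channel;      // 0-1 speaker/headphones, 2-3 haptics
        float        gain = 1.0f;
    };

    void mixInto(float* interleaved, std::size_t frameCount, std::vector<Voice>& voices)
    {
        for (std::size_t i = 0; i < frameCount; ++i) {
            for (Voice& v : voices) {
                if (v.cursor >= v.length) continue; // voice finished
                float& out = interleaved[i * 4 + v.channel];
                out = std::clamp(out + v.gain * v.samples[v.cursor++], -1.0f, 1.0f);
            }
        }
    }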
