Framework Laptop 16 hybrid GPU support #101
The Framework laptop has two M.2 SSD slots; I plan to install different Linux distros on one of them and use the second slot as storage for builds. I'm not sure whether I could also install some distros on USB keys and boot from there. So far I have tested the 7700S functionality with Fedora 40.
I will be very interested to see what you find. My 7840U (780M, gfx1103) operates properly with PyTorch on any gfx11xx build, but it randomly halts. Right now I work around this by restarting the Python script whenever it exits without the expected return value. Not ideal, but it gets me by.
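The restart-on-failure workaround described above can be sketched as a small shell wrapper; the demo command below is a stand-in for the actual training script, which is not shown in this thread.

```shell
#!/bin/sh
# Hedged sketch of the workaround above: re-run a command until it
# exits successfully, as a stopgap for the random gfx1103 halts.
run_until_ok() {
    attempts=1
    until "$@"; do
        echo "attempt $attempts failed, retrying..." >&2
        attempts=$((attempts + 1))
    done
    echo "succeeded on attempt $attempts"
}

# Demo with a stand-in command that fails twice before succeeding;
# in practice you would pass e.g. `python your_script.py` instead.
marker=$(mktemp)
echo 0 > "$marker"
run_until_ok sh -c 'n=$(($(cat "$1") + 1)); echo "$n" > "$1"; [ "$n" -ge 3 ]' bump "$marker"
rm -f "$marker"
```

Note that this blindly retries on any nonzero exit code; checking for the script's expected return value, as described above, would need an extra test inside the loop.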
- initial support for gfx1036 and gfx1103 as build targets
- also updated the gfx1010 configuration settings to be more similar in Composable Kernel and MIOpen

fixes: #101
fixes: #103

Signed-off-by: Mika Laitio <[email protected]>
Initial work is now done and both the integrated 780M (gfx1103) and the external 7700S (gfx1102) are working. More testing with the distro and the new Linux 6.10 kernel is still needed, however.
That's great! I have downloaded and installed it and am testing now. It seems I am unable to install the official Linux 6.10 kernel, but I am able to use the Linux 6.10-rc4 kernel. Importantly, since auto-allocation of shared memory is supported, I am getting this warning upon loading the pytorch_lightning module, but it doesn't seem to actually affect the processing: I am still randomly hitting a fatal error: Interestingly enough, this only occurs in one section of PyTorch code and not in another, so I'll have to investigate to see where exactly the differences may be triggering the error.
I received a Framework 16 laptop for testing and development with AMD's CPUs and GPUs.
So far tested:
This is the first time I am able to test with hybrid GPUs, and I would like to find ways to test all 3 scenarios:
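One way to exercise the hybrid scenarios is to restrict which device ROCm exposes before PyTorch loads. The sketch below is an assumption-laden example: device index `0` for the dGPU (and `1` for the iGPU) is a guess and should be confirmed with `rocminfo` on the actual machine.

```python
# Hedged sketch: probe which ROCm GPUs PyTorch can see in a hybrid
# iGPU (gfx1103) + dGPU (gfx1102) laptop. Device indices here are
# assumptions; confirm the mapping with `rocminfo`.
import os

# Restrict the ROCm runtime to one device BEFORE importing torch.
# "0" is assumed to be the dGPU; use "1" for the iGPU, or unset the
# variable to expose both devices at once.
os.environ.setdefault("HIP_VISIBLE_DEVICES", "0")

try:
    import torch  # ROCm builds of PyTorch reuse the torch.cuda API
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            print(i, torch.cuda.get_device_name(i))
    else:
        print("no ROCm device visible")
except ImportError:
    print("torch not installed")
```

Setting the environment variable before the `import torch` line matters, because the runtime enumerates devices at initialization.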