On Sat, 2014-11-15 at 03:58 +0100, Tobias Klausmann wrote:
On 15.11.2014 03:25, Roger Luedecke wrote:
On Fri, 2014-11-14 at 16:20 +0100, Tobias Klausmann wrote:
On 14.11.2014 08:01, Roger Luedecke wrote:
On Thu, 2014-11-13 at 22:25 +0100, Tobias Klausmann wrote:
On 13.11.2014 21:25, Roger Luedecke wrote:
On Thu, 2014-11-13 at 17:10 -0300, Cristian Rodríguez wrote:
> On 13/11/14 at 16:54, Roger Luedecke wrote:
>> I recently became aware of eudev while attempting to fix a botched
>> Bumblebee install. Many users were advised to install eudev to resolve
>> certain Bumblebee problems. It looks like the goal is to increase
>> hardware compatibility for older hardware and alternate init systems.
>> I'm wondering, though, based on the Bumblebee-related anecdotes, if it is
>> something we should look into.
>
> udev has nothing to do with this hack called bumblebee. The problem it
> attempts to address, "nvidia optimus support", has to be fixed first in
> the kernel and then in the X stack; everything else is in the wrong
> layer and will cause endless pain.

I understand the mess with Bumblebee. In my case, after having installed it, udev somehow was broken and would fail to load or initialize. In the course of researching "bumblebee udev" I stumbled across eudev. I thought that in general (not specifically with regard to Bumblebee) it could bear looking at.
If you are not using the proprietary driver(s), especially the NVIDIA one (not sure about AMD's), you can use DRI_PRIME={0,1} to control offloading to the secondary card. It works just fine for me, and with recent kernels that have render nodes (3.17+) we can use DRI3, which resolves some performance problems as well. So there is infrastructure in the kernel and in X; the binary driver(s) just don't make use of it.

Actually, from what I understand, NVIDIA has an official solution using XRandR that Ubuntu is using.
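(A minimal sketch of what the DRI_PRIME offloading mentioned above looks like in practice, assuming Mesa plus the glxinfo/glxgears tools from mesa-demos are installed; the renderer strings will of course differ per machine:)

  DRI_PRIME=0 glxinfo | grep "OpenGL renderer"   # integrated GPU (the default)
  DRI_PRIME=1 glxinfo | grep "OpenGL renderer"   # secondary/discrete GPU
  DRI_PRIME=1 glxgears                           # run a client offloaded to the secondary card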
Oh, if the "DRI2/xrandr" method works for you, you may follow the instructions in the DRI2 section here: http://nouveau.freedesktop.org/wiki/Optimus/ Just exchange nouveau with whatever the Nvidia driver is called.
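(Roughly, the DRI2 variant on that wiki page boils down to the commands below; the provider names "nouveau" and "Intel" are just examples, and xrandr --listproviders shows the real ones on a given machine:)

  xrandr --listproviders
  xrandr --setprovideroffloadsink nouveau Intel
  DRI_PRIME=1 glxinfo | grep "OpenGL renderer"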
There does seem to be an official solution provided by NVIDIA which, given how Bumblebee killed udev for me, would be a good idea to ship instead of Bumblebee. The relevant links follow.
http://us.download.nvidia.com/XFree86/Linux-x86/319.12/README/optimus.html
http://us.download.nvidia.com/XFree86/Linux-x86/319.12/README/commonproblems...
http://us.download.nvidia.com/XFree86/Linux-x86/319.12/README/randr14.html
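(For context, the RandR 1.4 setup described in the randr14 README comes down to roughly the two commands below, run once X is up; "modesetting" and "NVIDIA-0" are the provider names the README uses, and the actual names can be checked with xrandr --listproviders:)

  xrandr --setprovideroutputsource modesetting NVIDIA-0
  xrandr --auto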
(After a bit of digging:) If I see it right, it has the downside that you use the nvidia card all the time, letting the modesetting driver manage your outputs (another driver has to, if the nvidia card has no output connected to it). This just wastes energy in my eyes, and in its current state I like to call it a hack ;-) . I think they have to do it that way because they overwrite libGLX.so; with that done there can only be one working OpenGL stack: Mesa vs. Nvidia. Maybe the vendor-independent libs will finally solve this: http://www.x.org/wiki/Events/XDC2013/XDC2013AndyRitgerVendorNeutralOpenGL/li...
Greetings, Tobias
If that is the case, then that is certainly suboptimal. It seems odd that they'd create a switching solution which doesn't turn the NVIDIA card off when not in use. I wonder if it would be particularly difficult to add something to switch it off when idle. Perhaps Ubuntu has already done this, since they've implemented some kind of applet to manage the switching. The current state of things with Bumblebee is awful; I've never had anything break udev before.

--
Roger Luedecke
openSUSE Project Member and Advocate since 2011
http://www.opensuseadventures.blogspot.com