I've been playing with OBS for a few days now. My goal is to set up a Linux machine that receives 4K RTMP streams as pull input (through some RTMP server like nginx or srs) and outputs them in 4K using NVENC.

So far, the NVENC part works well and I am able to encode my output stream using the video engine of my GTX 1070. But I get dropped frames even when not streaming; the decoding / filtering / rendering parts alone seem to put my GPU on its knees. As a result, when streaming on Linux I get ~22 FPS instead of 30.

Oddly enough, everything works well on Windows. Looking at GPU usage ("nvidia-smi dmon" on Linux, GPU-Z on Windows), I see 100% usage on Linux without even streaming, while on Windows it is only around 50%, and ~80% when streaming. I'm using 0.19.0.3 on both OSes, with nvidia drivers 183 on Linux and 184 on Windows.

Are there differences between the OpenGL / Xorg rendering path and the Direct3D one that could explain this huge gap? Is it an OBS-related optimization issue? I've seen a two-year-old post on Stack Overflow where MadCactus relates his early Xorg problems; maybe things have changed since then? I would really like to automate everything, and Linux is much better suited than Windows for server-like purposes.

Relevant excerpt from the OBS log:

Attempted path: /data/obs-studio/themes/Dark.qss
Attempted path: /data/obs-studio/locale.ini
Attempted path: /data/obs-studio/locale/en-US.ini
Info: Kernel Version: Linux 4.4.0-83-generic
Info: Processor: Intel(R) Core(TM) i7-3770 CPU 3.40GHz
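For reference, the ingest side I have in mind is the nginx-rtmp-module pulling the upstream feed; a minimal sketch, where the upstream URL and the stream name are just placeholders:

```
rtmp {
    server {
        listen 1935;

        application live {
            live on;
            # Pull the 4K feed from the upstream source as soon as nginx
            # starts ("static" pull). URL and name are placeholders.
            pull rtmp://upstream.example.com/live name=stream4k static;
        }
    }
}
```

OBS would then read the relayed stream through a Media Source pointed at rtmp://localhost/live/stream4k.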
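And this is the kind of nvidia-smi invocation I mean; `-s u` selects the utilization columns, so the NVENC/NVDEC load (enc/dec) shows up separately from the 3D/render load (sm):

```
# One sample per second: sm = 3D/render engine, enc/dec = NVENC/NVDEC.
nvidia-smi dmon -s u -d 1
```

Since NVENC is idle when I'm not streaming, the 100% I'm seeing in that case is the sm (render) column, not the encoder.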