CUDA and OpenGL work together very well; there are several examples in the CUDA SDK. I don't know about DirectX, as I am not using it, but CUDA should have access to the DirectX capture surface and could perform scale operations there, potentially writing the output to a new surface that is then saved. That would save some CPU time.

The short summary answer is: yes, you can do screen capture when using CUDA, although not in CUDA itself.

Below I attach some Python code (an extended translation of one of the CUDA SDK examples). It uses OpenGL (via my own python-ogl bindings; you can substitute PyOpenGL) together with Qt's QtOpenGL widget. Note that if you use Qt, life is very simple, and if you are not doing this commercially, you can use the GPL versions of Qt and PyQt, plus my CUDA bindings for Python, which you can find via http ( /sources/rpms/python-cuda ). You can probably substitute your own favorite programming language (C/C++/Fortran), although you may then have to reproduce what grabFrameBuffer does, presumably with glReadPixels, as others have pointed out.

The example uses vertex buffer objects to store the result of the CUDA computation. You could also read that buffer back yourself with a cudaMemcpy, I guess, minus all color and lighting information. The "screen capture" is a single line of code, image = self.grabFrameBuffer(), within this code, and the image is then saved as a PNG. An excerpt of the relevant parts:

    device = cuda_CUDA()
    Kernel2.argtypes = ...

    class CudaGLWidget(QtOpenGL.QGLWidget):
        def __init__(self, parent=None, name=None):
            QtOpenGL.QGLWidget.__init__(self, parent, name)
            # self.initializeGL gets called automatically
            # and implicitly creates the OpenGL context
            self.setGeometry(QtCore.QRect(0, 0, self.width, self.height))

        def initializeGL(self):
            glDisable(GL_DEPTH_TEST)

        def paintGL(self):
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
            # hand the VBO to CUDA and run the kernel on it
            cudaGLMapBufferObject(byref(vptr), self.vbo)
            block = dim3(16, 16, 1)
            grid = dim3(sh_w / block.x, sh_h / block.y, 1)
            Kernel(vptr, sh_w, sh_h, self.anim)

        def keyPressEvent(self, event):
            key = event.key()
            ...
            elif key == Qt.Key_P:
                # the actual "screen capture": grab the widget's
                # framebuffer as a QImage, ready to save as PNG
                image = self.grabFrameBuffer()
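If you do reproduce grabFrameBuffer with raw glReadPixels, one detail matters: OpenGL returns pixel rows bottom-up, while image files store them top-down, so the buffer has to be flipped before saving. A minimal sketch of that row flip; flip_rows is my own illustrative helper, not part of the code above:

```python
def flip_rows(pixels: bytes, width: int, height: int, bpp: int = 4) -> bytes:
    """Reorder the bottom-up rows returned by glReadPixels into
    top-down image order (what PNG writers expect).

    pixels: raw framebuffer bytes, height rows of width * bpp bytes each.
    """
    stride = width * bpp
    assert len(pixels) == height * stride, "buffer size mismatch"
    rows = [pixels[i * stride:(i + 1) * stride] for i in range(height)]
    return b"".join(reversed(rows))
```

You would call this on the bytes from glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE) before handing them to your PNG writer.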
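One caveat on the launch configuration: grid = dim3(sh_w / block.x, ...) only covers the window exactly when its dimensions are multiples of the block size (16 here), since integer division truncates. A common way to handle arbitrary sizes is ceiling division; grid_dim below is a sketch of mine, not from the original code (the kernel then has to bounds-check its coordinates):

```python
def grid_dim(size: int, block: int) -> int:
    """Number of blocks needed to cover `size` elements with
    `block` threads per block (ceiling division)."""
    return (size + block - 1) // block
```

For example, a 500-pixel-wide window with 16-thread blocks needs grid_dim(500, 16) blocks in x rather than the 31 that truncating division would give.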