On Tuesday, 09.10.2007, 12:58 +0200, Matthias Hopf wrote:
On Oct 08, 07 23:53:12 +0200, Syren Baran wrote:
On Monday, 08.10.2007, 15:08 +0200, Matthias Hopf wrote:
I don't see the limitation here. Remember, we are only talking about the command processor, not the SPUs. And how do you prevent an executing program from accessing system memory? The set_inst_fmt sets a base address. The description refers to the MMU. That's what I said: AFAIK the graphics card includes an MMU. I think it can only be programmed from the command processor. set_inst_fmt seems to program a virtual address. Given that the addresses the chip has to be programmed to use already differ from those of the PCIe bridge, I tend to believe that it has a read MMU. It also states that it uses a 32-bit address range. This 32-bit range is virtual, so knowing where $address points to physically gives no sure knowledge as to where $address+1 points. So 0x00000000-0x0fffffff could point to graphics memory while addresses above that point to system memory. Once this has been set up, how do you prevent a program from accessing these memory locations?
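To make that split concrete, here is a tiny Python sketch of the kind of check being described. The 0x0fffffff boundary comes from the example above; the names and the idea that a simple range test decides graphics vs. system memory are purely hypothetical illustration, not anything from ATI documentation:

```python
# Hypothetical 32-bit virtual address split, as in the example above:
# low 256 MiB -> graphics memory, everything above -> system memory.
GFX_APERTURE_END = 0x0FFFFFFF

def is_graphics_address(addr: int) -> bool:
    """Return True if addr falls inside the assumed graphics aperture."""
    return 0x00000000 <= addr <= GFX_APERTURE_END

print(is_graphics_address(0x08000000))  # inside the assumed aperture
print(is_graphics_address(0x20000000))  # above it, i.e. system memory
```

The point of the virtual mapping is exactly that such a test only works on virtual addresses: physical adjacency cannot be inferred from virtual adjacency.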
Only a very tiny bit: output programming, and even there some bits are still missing. I think it's a bit more.
Not really. Read it. :-( Even pixel clock PLL docs are missing in public ATM. Actually, I was referring to the CTM guide. And yes, after some grepping I could not find any registers responsible for loading code.
This is only initialization (which you want to do with AtomBIOS anyways, because it very much differs from chip to chip, and there are literally hundreds of them) and output programming.
No information about command tables, no information about memory access, no information about assembler-to-memory conversion, no information on how to initialize the chip.
True, some things are missing. The most relevant part being the initialisation.
Some things? Sorry, I read this differently. You don't even know whether this is the actual command language understood by the graphics card. It could very well be transformed by the CTM layer. The CTM units are described, including the instructions and their numerical values for the PE, MC and the CP. The ALU opcodes don't have their numerical values noted. Strangely enough, the fglrx package does not include any unusual ELF binaries. Given the information in the guides, it's easy enough to find the opcodes' numerical values.
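One hedged way to hunt for those values: treat a compiled blob as a stream of 32-bit words and histogram an assumed opcode field, then match the frequent values against the instructions the guide documents. The endianness, field position and width below are guesses for illustration only, not values from any ATI document:

```python
import struct
from collections import Counter

def opcode_histogram(blob: bytes, shift: int = 24, width: int = 8) -> Counter:
    """Count values of an assumed opcode field across a binary blob.

    Treats the blob as little-endian 32-bit words; shift/width describe
    where the hypothetical opcode field sits inside each word.
    """
    mask = (1 << width) - 1
    n = len(blob) // 4
    words = struct.unpack("<%dI" % n, blob[: n * 4])
    return Counter((w >> shift) & mask for w in words)

# Two fake instruction words sharing the same (made-up) top-byte opcode 0x01:
blob = struct.pack("<2I", 0x01000000, 0x010000A4)
print(opcode_histogram(blob))
```

A field position that yields a small set of values recurring as often as the documented instruction mix would suggest is a candidate opcode field.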
Given your statements I just have to ask: did you sign an NDA covering the no-longer-publicly-available CTM SDK?
Nope. In that case your knowledge could very well be better than mine. We have an NDA, but I don't really know the CTM SDK, except from SigGraph and public docs.
I found some code samples on SourceForge, no more. But here's what's strange: they state the CTM SDK is no longer publicly available since September 1st. Less than one week prior to releasing the 2D specs. Of course that may just mean that AMD is polishing their new SDK before releasing it to a broader public, but I don't know.
The assembler is the most trivial aspect, and actually unnecessary. Mesa already has an ARB_fragment_program and _shader assembler, and an OpenGL 2.1 GLSL compiler. Unnecessary? And what are those things supposed to output?
Binary blob. That's what the card can actually run. This is to be created by the backend, which is provided by the driver. As I said, we currently have no clue how this command string actually has to look, just some hints how the commands in this command string *probably* look.
This "backend" IS the assembler. The ELF header is described in the CTM guide. Even a simple binary should be sufficient to figure out the numerical values for the opcodes. Having access to a compiler would be ideal: just compile a single command and look at the executable's hexdump.
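That "compile one command, then look at the hexdump" approach gets even easier if you compile two programs that differ in a single instruction and diff the results. A minimal Python sketch of such a diff; the blobs below are made-up stand-ins, not real CTM output:

```python
def diff_blobs(a: bytes, b: bytes) -> list:
    """Return (offset, byte_a, byte_b) triples where two blobs differ.

    Compiling two programs that differ in a single instruction and
    diffing the outputs isolates the bytes encoding that instruction.
    """
    diffs = [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(a) != len(b):
        # Mark where the common prefix ends if the blobs differ in length.
        diffs.append((min(len(a), len(b)), None, None))
    return diffs

# Fake blobs standing in for two compiled programs that differ only in
# one (hypothetical) opcode byte at offset 4:
blob_a = bytes.fromhex("7f454c46" "10" "000000")
blob_b = bytes.fromhex("7f454c46" "11" "000000")
print(diff_blobs(blob_a, blob_b))  # -> [(4, 16, 17)]
```

The `7f454c46` prefix is just the standard ELF magic (`\x7fELF`), matching the guide's statement that the container format is ELF.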
An assembler for their own CTM language (which I think is what you are proposing, but I might have misunderstood you)
See above. The assembler creates the binary (ELF) blob; you call it the backend.
would have to do the same, but CTM syntax has no relevance in graphics. Please define "graphics" in this context. Something along the lines of "using Mesa soft-rendering or a GPU has no relevance on the resulting image"? An open CTM implementation would be nice, of course, but even nicer would be an open CUDA implementation for AMD chips, which probably won't be done by AMD anyways (because it's NVIDIA), and which is much further along with respect to machine abstraction etc. Hmm, we are talking about the SDKs here? Having used neither, I don't know what the differences are.
But one thing is puzzling me now. If the fglrx package does not include an ATI-specific ELF, they must be doing something differently. Maybe embedded in some other file? I'm wondering if this CTM is the wrong track, or a way to serious performance improvements. Didn't someone mention the r300 module might be usable with minor modifications? Possibly there's a legacy layer to communicate with the GPU for now.
Matthias
Syren
--
To unsubscribe, e-mail: radeonhd+unsubscribe@opensuse.org
For additional commands, e-mail: radeonhd+help@opensuse.org