FPGA Recreation of the GPU/GTE/MDEC
Hi,
I have registered here, launching my little message in a bottle into the sea, hoping somebody will be able to answer my questions.
I have been working for a few months now, in my free time, to recreate the PSX chips in FPGA.
While full integration is not done yet, the core stuff is starting to take good shape (the MDEC computation is mostly done within the timing budget, most of the GTE has been reworked three times to hit the required computation performance, and I am finally seeing the light at the end of the tunnel).
MDEC and GTE will probably keep me busy for the rest of this year...
I am also looking at the GPU specs these days, but most likely I will have no time to attack the FPGA implementation of that chip for the next 10 to 12 months...
Still, I would like some help on the dev side: somebody able to test on the real HW and write/perform tests where the specification is unclear, or who can help me find answers to my questions. My time being limited, it is very costly when a detail that takes 5 minutes to design once I have the answer eats up hours of searching (of course I read the docs, but I am not an expert on the platform, I do not know the specs by heart, and you should consider me a beginner when it comes to coding on the PSX). So for now I am making 'educated guesses' when implementing, based on the specs, emulator source code, etc.
I do have a PS1, but no devkit for now, and I'd prefer to spend most of my time on the HW design/implementation rather than poking around the hardware...
So, is there anybody here tempted to join the project and work with me, bringing their PSX experience to this endeavor?
It is not like I will ask many, many things and require a lot of tests; I will just sometimes have a list of questions that could probably be answered very quickly by somebody with a bit of PSX experience. Or, at the least, somebody I could discuss with to get an opinion on what I am wondering about.
Of course, proper credit will be given in the HDL source code, and I would actually find it nice to release the source code of any tests written for this.
Discussion via email would be better than using the board, tbh; Discord for real time would be even better.
One example: the No$PSX specs say that graphic command coordinates are 11-bit signed values (-1024..+1023), and that a triangle is rejected if its vertical extent (the difference on the Y axis) is >= 512, and likewise if its horizontal extent is >= 1024...
If we add the position offset register, it seems that internally the coordinates are no longer 11 bits but one bit wider (because of the addition). Of course, as it is a pure translation by the offsets, the condition should still hold.
I would suspect that internally the difference test is the same, just performed on the wider value. But does the offset command (GP0(E5)) impact it or not? It is not really stated clearly in the spec. I would suspect it changes nothing, and that the chip behaves perfectly well by simply using the wider width internally to compute the differences / perform internal clipping.
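As a reference model, the guess above could be sketched like this in C. This is purely a hypothetical sketch based on the No$PSX description as I read it; `sign_extend_11()` and `primitive_rejected()` are names I made up, and note that since the offset is a pure translation, it cancels out in the differences anyway:

```c
#include <stdint.h>

/* Hypothetical sketch, NOT verified against real hardware: apply the
 * drawing offset (GP0(E5)) and the size-rejection test as described. */

/* Mirror the sign-extended vertex fields (keep bits 0..10, extend bit 10). */
static int32_t sign_extend_11(uint32_t v) {
    return (int32_t)(v << 21) >> 21;
}

/* Adding the offset widens the value by one bit; the guess in the post
 * is that the difference test is simply carried out at that wider width.
 * Because the offset translates both vertices, it cancels in dx/dy.     */
static int primitive_rejected(int32_t x0, int32_t y0,
                              int32_t x1, int32_t y1,
                              int32_t ofs_x, int32_t ofs_y) {
    int32_t ax = x0 + ofs_x, ay = y0 + ofs_y;
    int32_t bx = x1 + ofs_x, by = y1 + ofs_y;
    int32_t dx = ax > bx ? ax - bx : bx - ax;
    int32_t dy = ay > by ? ay - by : by - ay;
    return dx >= 1024 || dy >= 512;   /* reject if too wide or too tall */
}
```

Whether the rejection test really happens after the offset addition, and at what width, is exactly the kind of thing that would need a hardware test.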
I also have questions about DMA, MDEC timing, etc... Anyway, I hope somebody will be interested in helping and will pick up the bottle.
Hey,
collaboration in PSX hardware projects happens sporadically and randomly.
It would be best if you post your tasks / questions as publicly as possible, and then hope that someone takes them up ;p
Your project requires knowledge from the most dedicated: Emu devs with hardware research focus.
I bet your best chance to reach them will be exactly here on this forum.
Good luck!
@rama3
Thanks for taking the time to answer me.
What do you think would be best:
1/ A separate post per tech question?
2/ Bumping this thread? (A lot easier for me...)
All my questions are from a HW designer's perspective (and particularly about my own HW implementation, as I try to make the design as small and compact as possible), so they may not make sense, or may seem stupid, from a PSX dev's point of view. But anyway, here is my first question:
[Question 1]
In the GPU specs (No$PSX), it says that if a graphic primitive is wider than 1023 or taller than 511 pixels, the primitive is rejected. So, inside a SINGLE command:
A - In the case of a polyline: if ONE segment is out of range, are all the line segments *AFTER* it also rejected?
Or is only this SINGLE segment rejected?
B - Same for a quad: if one of the triangles is too big, is only ONE of the triangles rendered, or is the whole command simply skipped? (i.e. if the first triangle is too big, is the second triangle rendered or are both skipped? If the second triangle is too big, are both skipped or only the second? From a HW state-machine point of view, those cases MAY be different.)
(In the case of a Rectangle, no problem: W/H will reject all internal triangles at once.)
(It seems to me that it should try to push primitives through as much as possible, but multiple state-machine implementations are possible.)
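The two polyline behaviours the question distinguishes can be written down as a small C reference model. Everything here is hypothetical (names, structure, policies); which policy the real GPU implements is exactly what would need testing on hardware:

```c
#include <stdlib.h>

/* Two candidate behaviours for the polyline question above. */
enum reject_policy { SKIP_SEGMENT_ONLY, ABORT_REMAINDER };

/* Same size limits as for other primitives, per the No$PSX description. */
static int segment_too_big(int x0, int y0, int x1, int y1) {
    int dx = abs(x1 - x0), dy = abs(y1 - y0);
    return dx >= 1024 || dy >= 512;
}

/* Walk the n vertices of a polyline; returns how many of the n-1
 * segments would actually be drawn under the chosen policy.        */
static int draw_polyline(const int *x, const int *y, int n,
                         enum reject_policy policy) {
    int drawn = 0;
    for (int i = 0; i + 1 < n; i++) {
        if (segment_too_big(x[i], y[i], x[i + 1], y[i + 1])) {
            if (policy == ABORT_REMAINDER)
                break;                 /* whole remainder dropped   */
            continue;                  /* only this segment dropped */
        }
        drawn++;                       /* rasterize segment i here  */
    }
    return drawn;
}
```

A hardware test that sends one oversized segment followed by valid ones, then checks whether the later segments appear in VRAM, would distinguish the two cases.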
-
Administrator Verified
- Admin / PSXDEV
- Posts: 2689
- Joined: Dec 31, 2012
- I am a: Shadow
- PlayStation Model: H2000/5502
Someone was already making a PSX FPGA core; there was a guy from Japan doing it. Run a Google search and you'll find him.
Development Console: SCPH-5502 with 8MB RAM, MM3 Modchip, PAL 60 Colour Modification (for NTSC), PSIO Switch Board, DB-9 breakout headers for both RGB and Serial output and an Xplorer with CAETLA 0.34.
PlayStation Development PC: Windows 98 SE, Pentium 3 at 400MHz, 128MB SDRAM, DTL-H2000, DTL-H2010, DTL-H201A, DTL-S2020 (with 4GB SCSI-2 HDD), 21" Sony G420, CD-R burner, 3.25" and 5.25" Floppy Diskette Drives, ZIP 100 Diskette Drive and an IBM Model M keyboard.
-
Squaresoft74 Verified
- /// PSXDEV | ELITE ///
- Posts: 310
- Joined: Jan 07, 2016
- PlayStation Model: SCPH-7502
- Location: France
- Contact:
Video: https://www.youtube.com/watch?v=KJlxOIi_hQA
Hi.
I know pgate1, and I actually met him IRL a few months ago.
I even convinced him to make his work open source! But he uses a very obscure HDL language.
To show my commitment to the project, you can already look at my GitHub (https://github.com/Laxer3a/MDEC).
I will update it in a few days with new stuff (particularly documentation about the implementation, and small fixes here and there).
I'd say that at least 90% (95% if I'm optimistic) of the MDEC implementation itself is done.
The remaining work is not about computation or logic but about verification/testing and control signals (stopping this or that unit when the FIFO is full/empty, spec details related to DMA, flagging when this or that unit is busy, etc.).
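The full/empty gating mentioned here is the usual FIFO backpressure pattern: the producing unit stalls while the FIFO is full, the consuming unit while it is empty. As a rough C model of the idea (illustrative only; depth and names are mine, not the MDEC's actual parameters):

```c
#include <stdint.h>

#define FIFO_DEPTH 8

/* Tiny ring-buffer model of a hardware FIFO with full/empty flags. */
typedef struct {
    uint32_t data[FIFO_DEPTH];
    unsigned rd, wr, count;
} fifo_t;

static int fifo_full (const fifo_t *f) { return f->count == FIFO_DEPTH; }
static int fifo_empty(const fifo_t *f) { return f->count == 0; }

/* Producer side: returns 0 when full, i.e. the upstream unit must stall. */
static int fifo_push(fifo_t *f, uint32_t v) {
    if (fifo_full(f)) return 0;
    f->data[f->wr] = v;
    f->wr = (f->wr + 1) % FIFO_DEPTH;
    f->count++;
    return 1;
}

/* Consumer side: returns 0 when empty, i.e. the downstream unit must stall. */
static int fifo_pop(fifo_t *f, uint32_t *v) {
    if (fifo_empty(f)) return 0;
    *v = f->data[f->rd];
    f->rd = (f->rd + 1) % FIFO_DEPTH;
    f->count--;
    return 1;
}
```

In RTL this becomes the ready/valid handshake between units; the tricky part the post alludes to is wiring those stall conditions into every producing and consuming state machine.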
For now I am a bit more busy with other PSX chips but I did have to come back to the MDEC for a few days and wrote the documentation. I plan to update the github soon with the latest doc.
These topics, from here: http://www.psxdev.net/forum/viewtopic.php?f=70&t=551
and here: http://board.psxdev.ru/topic/9/
have been very helpful indeed (while I am not doing exactly the same logic, I do the same computation, as you'll see in the doc).
One more comment :
"Your project requires knowledge from the most dedicated: Emu devs with hardware research focus.
I bet your best chance to reach them will be exactly here on this forum."
Actually, no, it doesn't. I would take help from any PSX dev who knows how to use the hardware properly (DMA, registers, timing) and is willing to help by writing a few lines of C code, and maybe a few lines of assembly in the worst case.
Most of my questions would probably require from a few minutes to a few hours of work.
I can write the specs of the tests I want to run, but I have no experience with, nor access to, a devkit on real HW. It doesn't require the person willing to help me to have a super set of skills, actually...
But yes, it requires a will to help gather knowledge, in the hope of preserving the PSX hardware for the future.
"Your project requires knowledge from the most dedicated: Emu devs with hardware research focus.
I bet your best chance to reach them will be exactly here on this forum."
Actually no it isn't, I would take any help from any psx dev knowing how to use properly the hardware (DMA, registers, timing) and willing to help by writing a few lines of C code and may be a few line in assembly in the worst case.
Most of my question would probably requires from a few minutes to a few hours of work.
I can write the specs of the tests I want to do, but I have no experience nor access to a devkit on a real HW, it doesn't require the person willing to help me to have super subset of skills actually...
But, yes, it requires a will to help getting knowledge, hoping to preserve PSX hardware for the future.
What you are doing sounds quite inefficient - it really would make more sense for you to pick up some psxdev. There are plenty of resources for getting started on here. And a console/xplorer cartridge or serial cable is much cheaper than an FPGA.
Sorry? My GOAL is to recreate the hardware in an FPGA! Why would I not use an FPGA board!?
I'm just asking whether some people have info, or would like to join and help me when I am stuck on some specification detail that is not well defined, by writing a piece of code, checking it on real HW, and helping the project along...
I simply do not want to do everything myself (including learning the PSX toolchain, learning how to dev on the thing, then writing the code without being sure I did things correctly, etc.), given that the HW design alone runs into the hundreds of hours VERY VERY quickly... (It took me 250~300 hours, I think, just for the MDEC stuff, and it is not fully complete yet; the equivalent C code would probably take one big day or two.)
So if a question can't be answered, the best I'll do is probably 'guesstimate' the most logical answer from the limitations of the specs and the possible HW designs I can think of, then implement it, move on to the next thing, check that things mostly work, and call it a day.
(PS: I actually do not use an FPGA board yet. I design the circuit in Verilog and then use Verilator (a Verilog-to-C++ converter), so I can easily check my computation and logic against a reference C implementation, and could also integrate it inside an emulator later on if needed. But when it comes to interrupts, the bus and DMA, emulators are far from accurate.)
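The Verilator workflow described here boils down to driving a reference C function and the Verilated model with the same stimuli and comparing outputs. A minimal self-checking loop might look like the following; in the real flow `sat_dut` would be replaced by calls into the Verilated RTL (`top->eval()` etc.), and the saturation function is just a stand-in example of my own, not actual GTE behaviour:

```c
#include <stdint.h>
#include <stdlib.h>

/* Plain-C reference: clamp v into [lo, hi]. */
static int32_t sat_ref(int32_t v, int32_t lo, int32_t hi) {
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

/* "Device under test": a second implementation standing in for the
 * Verilated model (here a branch-reordered variant of the same clamp). */
static int32_t sat_dut(int32_t v, int32_t lo, int32_t hi) {
    v = v > hi ? hi : v;
    return v < lo ? lo : v;
}

/* Drive both models with the same random stimuli and compare outputs.
 * Returns 1 if every stimulus matched, 0 on the first mismatch.       */
static int selfcheck(unsigned iterations, unsigned seed) {
    srand(seed);
    for (unsigned i = 0; i < iterations; i++) {
        int32_t v = (int32_t)(rand() - RAND_MAX / 2);
        if (sat_ref(v, -32768, 32767) != sat_dut(v, -32768, 32767))
            return 0;                 /* mismatch: dump and debug here */
    }
    return 1;
}
```

The same harness shape works for the MDEC IDCT or GTE ops: one golden C model, one Verilated model, identical stimuli, and a diff on the outputs.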
Paulm probably meant it as a cost comparison, essentially saying that if you can afford an FPGA board, the price of an Xploder wouldn't be too bad.
I agree with him on the psxdev side.
Once you have the required hardware, getting homebrew to run isn't very time consuming.
You need a virtual machine with the SDK of your choice installed (takes 10 minutes).
Most of the hardware can be accessed directly and nocash tells you exactly where to look and what you can do.
There's even a real hardware debugger effort going on recently.
Yes, it is yet another chunk of work to do, I understand that.
It would be great to share the workload with others.
But if you consider the collaboration effort, sometimes it's really just easier to fire up a test yourself.
Consider also that you could share your code and ask specifically when some results are weird
- Dedok179
- Serious PSXDEV User
- Posts: 86
- Joined: Jun 11, 2015
- I am a: Programmer, Beginning reverser
- PlayStation Model: SCPH-5502
- Discord: Dedok179#2632
- Location: Tula,RU
Maybe here you will find some necessary information. What you are doing is a very interesting project.
https://github.com/led02/psx_fpga
Hi!
Oh, I see. Well, it is not a financial issue.
I have spent quite some money on HW stuff for various projects.
The thing is that I am married, have a kid and a family to take care of, a quite busy SW engineering job, and then, finally, my free time. My most precious resource is time.
Yeah, I considered buying an old PSX and getting a devkit. But I know from experience that with a new environment, what is supposed to take 5 hours actually takes 25 or 30. And then I would face all the problems of a beginner dev, getting stuck sometimes...
(Just this sentence: "You need a virtual machine with the SDK of your choice installed (takes 10 minutes)." Uh oh... with the experience I have had with virtual machines, does it mean Linux?! Then nope, it is not 10 minutes; it is going to be 5 hours if I am lucky... The Unix shell and I mix like oil and water. I think you kindly overestimate a skill that I do not have, and underestimate its entry cost.)
I am really not sure I want to go this way.
Maybe I will when it comes to the DMA and the bus, and to connecting all the pieces together. At that point, I will most likely write tests and use a logic analyzer to check the bus activity and timing between the GPU and CPU chips on a motherboard.
For now, I am much more focused on giving each component the proper computation/behavior according to the specs, with the right internal timing.
For HW, designing the logic and the HW itself is one thing, but it also sometimes requires writing C code and creating a model to verify the mathematics, check the results, etc. There are lots of potential implementations to consider, too.
All in all, to be honest, it is already too time-consuming an endeavor for me alone. As I want to be able to keep this project going, I do not plan to put more burden on my shoulders than I carry now.
MDEC is done at 90~95%, I would say. It took me nearly 2 months of work at a full-time pace (outside of my job, at night until 3 AM, and I spent all my weekends on it; I think I worked as many hours on it as at my real job during the same period).
http://jpsxdec.blogspot.com/2010/09/psx ... wdown.html
There was a nice zip there with all the different MDEC implementations from various emulators. I modified the MAME source code, put all my stuff in there instead, and did all my R&D with it.
Anyway, I released my latest version of the MDEC implementation this weekend; you may enjoy the PDF in the doc folder and the dev log. (The GTE stuff is outdated; it will probably change on GitHub in a few months.)
( https://github.com/Laxer3a/MDEC )
The GTE, while probably completely different from the original design, is getting there too, but there is still a lot of work to do on it. I finally fixed the design after a 3rd attempt; I now need to update the Verilog sources to the latest design and have my tool generate the proper microcode for the instructions. (Of course it will respect the original computation, bugs (!) and timing.)
I will probably put my GTE (and if lucky my GPU) implementation into a custom build of PCSXR to test the other chips.
I hope to complete both MDEC and GTE 100% by the end of this year anyway. Then I will work on the GPU, if I still have the fire in me...
Dedok179 wrote: ↑September 2nd, 2019, 5:14 pm
Maybe here you will find some necessary information. What you are doing is a very interesting project.
https://github.com/led02/psx_fpga
Oh yeah, I have seen this too. One of my friends is trying to get it running on a board...
But their design has many flaws, and the implementation is incomplete too, I think.
I think you can't ask university students to complete a project when the time allocated to it is too small or their ambition too big. They should have worked on a single chip (or sub-unit) and made it a lot smaller and cleaner.
Their GPU is too huge (I did not check the timing). Same with the computation in the GTE: I don't know how many multiplier units or how much logic they use, but it is huge. Definitely not how an FPGA implementation should be, and not like a real PSX either. But I can't blame them; they probably had very limited time. Moreover, they are there (at university) to learn, not to make retro chips.
-
- What is PSXDEV?
- Posts: 1
- Joined: Dec 05, 2019
laxer3a wrote: ↑August 25th, 2019, 6:08 am
Hi, I have registered here, launching my little bottle at the sea... (...)
Is there a GitHub repo link for this project if people want to contribute to it? I'm interested in making contributions to it.
-
- Interested PSXDEV User
- Posts: 5
- Joined: Sep 09, 2013
laxer3a wrote: ↑September 2nd, 2019, 6:43 pm
But their design has many flaws, implementation is incomplete too I think. (...)
Hey man, don't be hating :p
I'm mostly kidding; there were three of us working on it, and we had too much other work that semester to complete the project. The GPU works fine, but by itself it will take up half of the FPGA you're targeting. I could regale you with the story of how we spent a ton of time doing silly things and ended up with working pieces but not a working whole, thanks to a small integration bug; some of that is in our group report, also in the repo. The point was a learning experience anyway, and I did learn a lot. I don't think any of the three of us would mind talking with you about it if you have any interest in our experience. I know you've been working on this for a while, whereas we haven't really touched it in 6 years, so good luck lol
And of course, since it looks like you posted in August and it's now December, that's how long we had for the whole project.
Hey, we are in March now!
Sure I'd like to hear your stuff, and be glad to be in touch !
For the repos, here we are :
SPU :
https://github.com/Laxer3a/MDEC/tree/SPU_master
MDEC :
https://github.com/Laxer3a/MDEC/tree/MDEC_master
GPU :
https://github.com/Laxer3a/MDEC/commits/SIMPLE_PALETTE
Branches are a bit messy for now... I guess I should split the project into 3 different projects on GitHub later and avoid this mess. Originally I planned only to do the MDEC.
Please do not hesitate to send a private message, I guess it will send me a mail telling me that something happened.
I have also opened a Patreon ( https://www.patreon.com/laxer3a ). Most posts are now public, so you can see the progress made. Also, if you are a dev with some time and know the PSX, I'd be interested to get some help.
Please send me a PM (I will give my mail / Discord ID after that). As you can now see how far the project has come, I guess people can start to see how serious I was about the project.
For information, the GPU/SPU and MDEC should take about 30% (maybe less?) of the FPGA on the MiSTer/DE10-Nano board for now. The other big part being the GTE/CPU/CDRom. After that, things should be quite a lot smaller...
There were some people working on something similar - http://www.psxdev.net/forum/viewtopic.php?t=429 - however the site where their project report was hosted is now offline.
Here it is in the Wayback Machine:
https://web.archive.org/web/20180607111 ... ports.html
https://web.archive.org/web/20180205071 ... 13_PSX.pdf
Oh, I see that one of the people from that project posted above! Great to see that! That's such an amazing project!
BTW, I have some info about the peripheral bus interface, SIO0, maybe a bit of the CDROM regs, but it is mainly like the one in psx-spx, with some bits added here and there.
EDIT: I hope you do record *all* your findings about how the hardware really behaves and was intended to behave, because they can be very valuable to others. Also have a look at the code of PS1 emulators, as they most likely emulate the hardware well enough to at least run games.
Also (for yourself) it would be good if you take separate notes on how the hardware *should* work, and then separately about how it is optimized to do that, as commonly optimization doesn't handle all initially intended use-cases and is also more complex to understand. I mean to say - be careful not to do too much optimization too early, as that can make things very confusing and result in complex bugs to solve.
I have no idea about this, but I would guess that this check is per-primitive, and the next primitive should still get rendered.
Regarding testing on the actual hardware, I have mostly done that on a PS2 for PS2 stuff in the past, and even once used the PS2 IOP compiler to compile code for the PS1 CPU (as they are basically the same CPU core) to test stuff on it. So as long as you have a compiler you don't even need many libraries to test basic stuff - as long as you write all the test code yourself. That was under a 'pre-built' MinGW environment on Windows, which also runs on modern Windows 10, with no installation necessary. I guess there should be a similar option for PS1 dev as well.
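As an illustration of how small such test code can be: most GPU tests boil down to packing command words and pushing them at the GP0 port. Here is a hedged sketch of the word packing for GP0(02h) "Fill Rectangle in VRAM" (encoding per psx-spx; the helper names are mine, and the actual FIFO write is left out since it only makes sense on the target):

```c
/* Sketch of host-testable helpers for building GP0 command words, the kind
   of thing a minimal hardware test program would push to the GPU FIFO.
   Encoding per psx-spx GP0(02h) "Fill Rectangle in VRAM"; names are mine. */
#include <stdint.h>

/* Word 0: command byte 02h in bits 24-31, 24-bit BGR colour below it. */
static uint32_t gp0_fill_cmd(uint8_t r, uint8_t g, uint8_t b) {
    return (0x02u << 24) | ((uint32_t)b << 16) | ((uint32_t)g << 8) | r;
}

/* Words 1 and 2: Y in the upper halfword, X (or width/height) in the lower. */
static uint32_t gp0_xy(uint16_t x, uint16_t y) {
    return ((uint32_t)y << 16) | x;
}
```

On the real machine you would then write these three words, in order, to the GP0 port and compare the resulting VRAM contents against what your FPGA core produces.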