Sunday, May 27, 2007
I've been following the steps outlined on Patrice Mandin's site to set up a cross-compiling version of gcc-3.3.3 that will produce outputs for the m68k-atari-mint target. I'm not running MiNT, but this seems to be the closest thing to building an Atari TOS-compatible binary.
What I don't understand is the amount of work necessary to do so. I first have to compile an oldish binutils with a native x86 Linux target, then use that binutils to build the same version of binutils again, this time for the 68k Atari target... then build the entire gcc-3.3.3 for x86, and then again for 68k. Jesus.
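For the record, each of those four builds is basically the same configure/make dance; here's a rough sketch of the two cross rounds (the /usr/local/cross-mint prefix and the binutils version are just placeholders of mine, not something taken from Mandin's instructions):

    # (after the plain native ./configure && make rounds for binutils and gcc)
    cd binutils-2.x/build
    ../configure --target=m68k-atari-mint --prefix=/usr/local/cross-mint
    make && make install

    # the cross gcc build needs the new m68k-atari-mint-as/ld on the PATH
    export PATH=/usr/local/cross-mint/bin:$PATH

    cd ../../gcc-3.3.3/build
    ../configure --target=m68k-atari-mint --prefix=/usr/local/cross-mint --enable-languages=c
    make && make install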
I thought the whole idea of a modular compiler design was that it wouldn't cost too much to implement a new target machine - just a separate backend that plugs into your existing compiler, accepts intermediate representation code from the frontend, and produces assembly specific to the target machine, which can then be assembled and linked.
So why do I have to recompile binutils and the entire gcc twice each? This doesn't seem right.
But then, I don't know much at all about gcc - perhaps there are other, much easier ways of accomplishing the same goal.
Please comment if you know anything about adding new backends to gcc!
Saturday, May 12, 2007
Postscript looking manky?
Since I got my nice new cheapo network laser printer set up in Linux and Windows, I've noticed a problem when printing some PostScript documents from Linux. The pages look fine on screen, but some fonts don't get sent to the printer (or something), and it ends up printing in Courier (a fixed-width font) while keeping the character spacing of a proportional font like Helvetica, which results in horizontally uneven and sometimes overlapping characters. I read somewhere that Ghostscript ships with many more fonts than a PostScript printer has, so perhaps not all the font data is being sent to the printer and that's the problem.
I scratched my head for a while, trying different viewers and the like, but eventually stumbled, completely accidentally, on a fix which works for me, although it results in massive bloat:
Run ps2ps on the PostScript file. This is supposed to 'optimise' the PostScript file somehow, but so far it has only made my files bigger, and they look weird on screen... however, the printer output is absolutely spot on. It seems to just convert everything into image data... which is a lame fix, but suits me for printing.
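In case it helps anyone, the whole 'fix' is literally one command (ps2ps is the little wrapper script that comes with Ghostscript; the file names are obviously just examples):

    ps2ps manky.ps printable.ps   # round-trip the file through Ghostscript
    lpr printable.ps              # then send the rewritten file to the printer as usual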
I'd appreciate it if anyone else knows more about this and maybe a better solution!
Tuesday, May 08, 2007
more exams
Another set of exams coming up. This time, I think I'll fall back on the tactics I used in 2nd and 3rd year - just reading as much as possible, then using mnemonics to memorise lists of keywords to remind me of notable elements in each subject.
I'm not sure if I'll use Pauker at all for this - my exam performance in semester one this year was much worse than last year's (68% average compared to 81%...??), probably because I was biting off more than I could chew. The flashcards route needs more experimentation for general study.
Anyway, this time there are only three exams, one of which (Compiler Construction 2) is almost a gimme (although I thought its predecessor was a great exam and only ended up getting 63% or something on the paper, so I'll have to be wary), one of which should be alright given a bit of care and attention (Real-time Embedded) and one of which will require a bit more work since it's so bloody abstract (Z).
That said, JM seems really decent so I don't think he'll fuck us over on the exam.
Just have to stop procrastinating now...
Friday, May 04, 2007
birthday +/-
Had a nice birthday yesterday, to a large extent because I finished and submitted my project documentation (on time). Looked pretty good too.
Then balanced it out by having a massive row today... logically.
Also, when I got home yesterday evening, my Firefox 2 had somehow lost the session with about 20 tabs open... which was frustrating, so I gave Opera a go... it seems so much better than Firefox - much smaller memory/CPU footprint and faster response times, as well as some handy mouse gestures and that speed dial thing. It's pretty impressive! And it looks like it works fine with SCIM...: 我想吃一点人东西。 (roughly: "I want to eat a little something.")
Wednesday, May 02, 2007
finally some improvement!
I had an epiphany (love that word) last night while trying to sleep, and came upon an idea that seems incredibly obvious now, so I guess I was stuck in a mental rut for a while. Need to raise the temperature when that happens, or something...
Anyway, my agent was spending all its time choosing illegal moves and being punished by the environment, which trivially rejected those moves until it returned a valid one. However, since I'm using a neural network as a function approximator for the TD(λ) agent, making 1000 illegal moves followed by one valid one isn't a great way to learn - the backpropagation for all those illegal moves not only slows the whole thing down but, much worse, it generates so much network noise that the reward for a good move gets drowned out, as one of my lecturers in DCU helpfully pointed out a few days ago.
So last night I realised that not only was training on illegal moves a waste of time, but more importantly, since illegal moves are well-defined, I could just have the environment tell the learning agent which moves are legal (by passing in an array of booleans, one for each action... false for illegal and true for legal). Then the agent devalues the illegal moves and pays attention to the valid moves instead... although this could have been implemented better. In fact, everything could do with a lot of refactoring - it's too coupled to add proper unit tests for some of the behaviour.
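To make that concrete, here's a minimal sketch of the move-selection side of the idea in Python - the names and the evaluate() call are made up for illustration, it's not the actual project code:

    import random

    def choose_move(evaluate, state, legal, epsilon=0.1):
        """Pick an action, only ever considering moves the environment marked legal.

        evaluate(state, action) -> the network's estimated value of the move
        legal                   -> list of booleans, one per action (True = legal)
        """
        legal_actions = [a for a, ok in enumerate(legal) if ok]
        if random.random() < epsilon:
            return random.choice(legal_actions)  # occasional exploration
        # greedy choice over legal moves only: illegal moves are never evaluated,
        # so they never trigger a backpropagation pass or drown out real rewards
        return max(legal_actions, key=lambda a: evaluate(state, a))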
Anyway, I implemented this and now the agents train against each other at a rate of almost one game per second on my machine. Not blazingly fast, but enough to get the following modest result in enough time to do a demo hopefully:
score before training: 76%
score after training: 88%
Success: 22 tests passed.
Test time: 886.09 seconds.
That was for 100 evaluation games against two random players, followed by 500 training games against other TD(λ) players, followed by another 100 evaluation games against randomers. It's probably not totally accurate as it's learning even against the first random player, and also because random players don't play to win, so it's almost a different game!
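For what it's worth, the shape of that experiment is roughly the following (every name here is invented for illustration; play_game() is assumed to return 1 if my agent won the game and 0 otherwise):

    def run_experiment(make_td_agent, make_random_agent, play_game):
        agent = make_td_agent()

        def score_vs_random(n=100):
            # the agent keeps learning during these games too, which is part of
            # why the before/after percentages shouldn't be taken too literally
            wins = sum(play_game(agent, [make_random_agent(), make_random_agent()])
                       for _ in range(n))
            return 100.0 * wins / n

        before = score_vs_random()            # 100 games vs. two random players
        opponents = [make_td_agent(), make_td_agent()]
        for _ in range(500):                  # 500 training games vs. other TD(λ) players
            play_game(agent, opponents)
        after = score_vs_random()             # another 100 games vs. random players
        print("score before training: %d%%" % before)
        print("score after training: %d%%" % after)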
Still, it's an encouraging result... actually the first encouraging result. Bit of a happy birthday present really...