A recent post has got me thinking about things I used to do and either cannot or don't do any more.
I think it would be a good idea to document some of them to let the younger generations see what we had to do.
When I first started I didn't have an assembler. Hell, I hadn't even heard of an assembler; I worked in machine code. Most of the time this was fine, since you didn't have a whole lot of instructions to memorise; it was just relative branches that were a problem. You had to count the number of bytes by hand. I can still remember some instructions: 0xA0 = LDY #, 0xA2 = LDX #, 0xA9 = LDA #.
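For anyone who never had to do this: a 6502-style relative branch stores a signed one-byte offset measured from the address of the instruction *after* the branch, which is exactly why you counted bytes by hand. A rough sketch of the arithmetic in C (the function name is mine, not anything from the era):

```c
/* Hypothetical helper: compute the signed offset byte for a 6502-style
 * relative branch. The offset is relative to the address of the NEXT
 * instruction (branch opcode + operand = 2 bytes), and must fit in
 * the range -128..127 -- get the byte count wrong and you branch into
 * the middle of some other instruction. */
int branch_offset(int branch_addr, int target_addr)
{
    return target_addr - (branch_addr + 2);
}
```

So a BNE at $0800 looping back to $07F0 needs an offset of -18, i.e. the byte 0xEE in two's complement.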
Self-modifying code
When I started working on the Atari ST I hated the layout of the graphics display. Drawing lines etc. was slow because of it, so my line draw used self-modifying code. The idea was simple: use the line colour to create the draw-pixel code inside the inner loop, so you do a few extra instructions outside the loop, then the inner loop is as fast as it can be. Oh, and yes, we had to write code to draw things. We didn't just call a function. We had to know things like Bresenham's line-draw and circle-draw algorithms and implement them ourselves.
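For readers who never had to roll their own: here is a minimal integer-only Bresenham line draw into a toy framebuffer, sketched in C. The flat byte-per-pixel framebuffer is a stand-in for illustration, not the ST's interleaved bitplane layout.

```c
#include <stdlib.h>

#define W 16
#define H 16
static unsigned char fb[W * H]; /* toy framebuffer: one byte per pixel */

static void put_pixel(int x, int y, unsigned char c)
{
    if (x >= 0 && x < W && y >= 0 && y < H)
        fb[y * W + x] = c;
}

/* Classic Bresenham: the error term tracks how far the ideal line has
 * drifted from the pixel grid, so the inner loop needs only additions
 * and comparisons -- no multiplies, divides, or floating point. */
void draw_line(int x0, int y0, int x1, int y1, unsigned char c)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;

    for (;;) {
        put_pixel(x0, y0, c);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```

The self-modifying trick described above would then patch the colour constant (or the whole pixel-write sequence) directly into the code of that inner loop before running it.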
Yes, real programmers use goto's :> My AI routines were very complex and I found that calling separate routines was slow and cumbersome. I came up with a way of modularising the code into code blocks of the same length. Using a bit pattern I could then create complex routines as a series of pseudo-opcodes encoded into the pattern: if a bit was set, add one value to the PC; if it was unset, add another. This was particularly good on the early x86 machines, as stack operations on those chips were awfully slow.
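The dispatch idea can be sketched in C, though on the original hardware each bit adjusted the PC directly into one of the equal-length code blocks instead of going through calls. The handler names and values here are made up for illustration:

```c
/* Each "code block" is one small step; on the real machine these were
 * equal-length chunks of machine code selected by PC arithmetic. */
static int acc;
static void step_attack(void)  { acc += 10; }
static void step_retreat(void) { acc -= 1;  }

/* Walk an 8-bit pattern MSB-first: a set bit selects one block, a
 * clear bit selects the other. One byte encodes an 8-step routine
 * with no per-step call/return through the stack. */
int run_pattern(unsigned char pattern, int start)
{
    acc = start;
    for (int bit = 7; bit >= 0; --bit) {
        if (pattern & (1u << bit))
            step_attack();
        else
            step_retreat();
    }
    return acc;
}
```

For example, the pattern 0xF0 runs four "attack" steps then four "retreat" steps.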
Multiplies and divides were slow... really slow. So we did everything we could to avoid them. We would use relationships like sin(A + B) = sin(A)cos(B) + cos(A)sin(B) to reduce the number of multiplies in a matrix operation. When the matrix was finally fully formed we could use other tricks to remove multiplies.
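As a concrete instance of the angle-sum trick: once the sin and cos of two angles are already to hand (from a lookup table, say), the combined angle's sin and cos cost four multiplies and two adds, with no trig evaluation at all. A sketch (the function name is mine):

```c
/* Given precomputed sin/cos of angles A and B, the angle-sum
 * identities give sin/cos of (A+B) with four multiplies and two
 * adds -- no trig calls in the hot path. */
void rotate_sum(double sa, double ca, double sb, double cb,
                double *s_out, double *c_out)
{
    *s_out = sa * cb + ca * sb; /* sin(A+B) */
    *c_out = ca * cb - sa * sb; /* cos(A+B) */
}
```

In practice the same identity lets you compose two rotation matrices, or step an angle by a fixed increment each frame, without ever recomputing sin or cos.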
Take each term in the matrix, shift it down (divide it by two), store it in an array, and repeat. Then 12 * matrix[0,0] reduces to matrix[0,0,8] + matrix[0,0,4], replacing a multiply with an addition.
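The same trick in modern-C form, assuming integer (or fixed-point) terms: 12 is 8 + 4 in binary, so multiplying by 12 becomes two shifts and an add. The pre-shifted table described above just does the shifts once per matrix instead of once per vertex.

```c
/* Constant multiply via shift-and-add: 12 = 8 + 4 in binary, so
 * 12*x = (x<<3) + (x<<2). Keeping a table of pre-shifted matrix
 * terms amortises the shifts across every vertex transformed.
 * (Shown for non-negative x; left-shifting negative ints is
 * undefined in C, where the 68000-era code just used arithmetic
 * shifts.) */
int mul12(int x)
{
    return (x << 3) + (x << 2);
}
```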
There were so many maths cheats we would need a whole book to list them.
A friend of mine hated subroutines, as far as he was concerned pushing an address onto the stack, modifying pc, popping an address, modifying pc...... hell what a waste of time. He wrote macros and strung them together. At the end of the program there was a single jump instruction. The only one in his code. Which went back to the start of the program.
Early chips often had opcodes that simply weren't listed in the docs. You would notice a gap in the instruction set and wonder why, so you found out. You wrote a test to see what the instruction did. Often they were useful, but it did take a lot of experimentation to tie down EXACTLY what they did. The Z80 was the undisputed king for undocumented features. There were more undocumented opcodes than documented ones!
A good programmer always had a couple of pieces of tape on the side of his TV. When you were working on a routine you would change the border colour just before it, then change it back after the routine had finished. Since you knew how long a single scan line on the TV took to display, the height of the coloured band told you how long the routine took, and the tape marked where the band started and ended.
Handy stuff, tape.
Anybody else got any stories?
I remember those horrible goto's. I never did machine code, but back when I had my Radio Shack Color Computer they were part of BASIC, before someone came up with the idea of functions. I think that's where the phrase "spaghetti code" really came from, because that's what code looked like. I was kind of afraid of computers back then because of the science fiction stories, but the programming part sounded so interesting I had to try it.
Those are days best forgotten as far as I'm concerned. I hate low level programming. When I see attempts at visual programming, though, I think, that's going too far. It's all right as an editor, but the real glue has to be verbal code as far as I'm concerned.
I love low level coding.
A few years ago I worked for a company called Tao. We worked in a language called VP, which was a macro-assembler-like language. It was awesome, really loved it. You wrote "tools" which, once loaded into memory, were available to any program.
Really great to work on.
If you love low level coding, learn Verilog/VHDL and buy a cheap FPGA kit. The cheap ones are crazy large these days, you can actually do something interesting with them. (Here's hoping high-level synthesis tools will become more accessible to the general public... using VHDL/Verilog is kinda like writing in assembler in this day and age.)
Funny you should say that :>
I and a friend have been working on a super computer for a while.
Can't give away details, but it's targeted at a £300 retail price and has enough power to fully ray trace a 1080 HD scene full of objects at more than 60 Hz.
He's doing the hardware, I'm going to be doing the software.
Stainless this is awesome, can you give specs ??? cores, memory ? i want to try something different from polygons, and i am right now exploring voxels and sparse octrees, will your super-pc be able to run this kind of heavy computational load in a good fashion ?
Can't give away too many details, but think along the lines of an array of fast processors with a self-programming topology and ultra-fast chip-to-chip communication.
Each chip will have its own cache. The memory-access speed for this cache is stupidly fast.
A single chip has the bandwidth to generate a single 1080p frame at 60 Hz, and you can have any power-of-two number of chips in the array.
Chris is doing all the work on the hardware, I'll ask him if he is ready to publish anything
"A single chip has the bandwidth to generate a single 1080p frame at 60hz, and you can have any power of 2 chips in the array." That alone isn't sufficient; I can have a blank screen and run the main loop much higher than 60 Hz. How are you measuring stuff? Rays/intersections per sec, triangles/sec, GFLOPS? Can I ray trace in real time with your PC?
Real-time ray tracing is the baseline we use to see if the technology works.
When we get the whole network design finished I'll give you more details, but at the moment a single chip can resolve 1920*1080 rays at 60 Hz with a simple scene. We are assuming 32 chips as a base machine for £300.
Can i be put in a preorder list or something ??
Heh, gotos. Reminds me of libjpeg, where you have to use a longjmp to return flow control after handling a JPEG error. I was like (o¿O)
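For anyone who hasn't met the pattern: libjpeg's error handler is expected to longjmp back to a setjmp point in the caller, since the library has no other way to abandon a half-finished decode. A stripped-down sketch of that control flow, with made-up function names standing in for the real libjpeg API:

```c
#include <setjmp.h>
#include <string.h>

/* Sketch of the libjpeg-style pattern: the library's error callback
 * longjmp()s out of the failing call, and the caller's setjmp() point
 * becomes the recovery path. decode_image/fail are illustrative names,
 * not real libjpeg functions. */
static jmp_buf recover;
static char errmsg[64];

static void fail(const char *msg)
{
    strncpy(errmsg, msg, sizeof errmsg - 1);
    longjmp(recover, 1); /* never returns */
}

static void decode_image(int corrupt)
{
    if (corrupt)
        fail("bad marker");
    /* ... normal decoding would continue here ... */
}

int try_decode(int corrupt)
{
    if (setjmp(recover))
        return -1; /* longjmp landed here: decoding failed */
    decode_image(corrupt);
    return 0;
}
```

It really is a non-local goto: the stack frames between fail() and try_decode() are simply discarded.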
I'm not old school enough to score brownie points with the old boys. Back in my day, playing with the BEEP command in QBasic was "teh kool". If only I'd arrived on the scene a couple of years earlier... I'd be a millionaire hyping Y2K doomsday nonsense and building ungodly expensive banking systems in Fortran and COBOL. That money wagon is long gone though.
Not yet... sorry .
I'll prod Chris and spur him on. He's nearly as old as me and can be a little .... strange ...
Fortran was nice, I liked it. COBOL was a pain.
I remember having to write a page of code just to tell the compiler what to generate. You had to tell the compiler what machine it was running on.
AUTHOR. MARY DOE.
ASSIGN TO DISK1.
ASSIGN TO PRINTER.
I worked on Y2K stuff and it was a nightmare. It was common practice to patch the binaries rather than going through the whole edit, compile, install path. And there were never any docs. This was called "job security". So you had to decompile the binary back to code, fix it, then compile and test.
I've been nearly rich. I lost £40,000,000.00 overnight, just don't know where I put it. Lost £3,000,000.00 when the CEO of a company gambled on a flotation and lost. Been made redundant five times when startups ran out of money. Interesting times.
I remember on the old Commodore PETs you could type in a POKE that destroyed the machine. Caused a short circuit and blew the motherboard. (AFAIR)
This heavily depends on the scene - do you have some info on how simple the scene is? Is it Sponza? (Because doing 1920*1080 @ 60 for Sponza on 1/30th of a machine that costs 300 GBP would be really revolutionary... otherwise you're matching GPUs, which isn't bad performance at all, but...)