There are three approaches I normally see.
1) Fixed scale pipeline.
You code your entire game to work on a single screen size. Then you scale the entire game view to the real screen size.
If you are doing software rendering, you work in an off-screen buffer and blit the result onto the real screen.
If you are working with OpenGL et al., you put a scaling matrix in your projection.
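The scale step above boils down to one uniform scale factor plus letterbox offsets. A minimal sketch (the 800x480 virtual resolution is just an illustrative value):

```python
# Fixed-scale approach: the game renders to a fixed virtual canvas,
# and we compute one uniform scale factor (plus letterbox offsets)
# to map that canvas onto the real screen.

def fit_virtual_canvas(virtual_w, virtual_h, screen_w, screen_h):
    """Return (scale, offset_x, offset_y) that centers the scaled
    virtual canvas on the real screen, preserving aspect ratio."""
    scale = min(screen_w / virtual_w, screen_h / virtual_h)
    offset_x = (screen_w - virtual_w * scale) / 2
    offset_y = (screen_h - virtual_h * scale) / 2
    return scale, offset_x, offset_y

# e.g. an 800x480 game on a 1920x1080 screen:
scale, ox, oy = fit_virtual_canvas(800, 480, 1920, 1080)
# scale == 2.25, ox == 60.0 (letterbox bars left/right), oy == 0.0
```

The same numbers feed either a scaled blit (software) or the projection matrix (OpenGL).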
2) Variable scale pipeline.
You calculate scaled values for all coordinates and draw into the on screen buffer with scaled blits.
Often you have to store multiple copies of the graphics designed for different screen resolutions.
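A sketch of that idea, assuming hypothetical asset folders named after Android-style density buckets (the density values and paths are invented for illustration):

```python
# Variable-scale approach: coordinates are stored in design units and
# converted to pixels at draw time, and a prebuilt asset set is picked
# to best match the current scale factor.

ASSET_SETS = {1.0: "gfx/mdpi", 1.5: "gfx/hdpi", 2.0: "gfx/xhdpi"}

def pick_asset_set(scale):
    """Choose the prebuilt asset set closest to the current scale."""
    return ASSET_SETS[min(ASSET_SETS, key=lambda d: abs(d - scale))]

def to_pixels(design_x, design_y, scale):
    """Convert design-unit coordinates to real screen pixels."""
    return round(design_x * scale), round(design_y * scale)

# On a screen 1.4x the design resolution:
print(pick_asset_set(1.4))      # "gfx/hdpi" is the closest match
print(to_pixels(100, 50, 1.4))  # (140, 70)
```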
3) Variable screen architecture.
You use layouts to control the placement of the game elements; the layouts reposition the elements on screen based on the available pixels.
Java's layout managers use this approach a lot.
Graphics tend to be unscaled.
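The layout idea can be sketched as anchor-based placement: elements declare where they sit relative to the screen edges rather than absolute positions, and a layout pass resolves that against whatever screen is available. The names here are invented for illustration:

```python
# Layout-driven placement: resolve an anchor name to a top-left (x, y)
# position for a given element size and screen size. Graphics stay
# unscaled; only positions change with the resolution.

def place(anchor, elem_w, elem_h, screen_w, screen_h, margin=8):
    """Resolve an anchor name to a top-left (x, y) position."""
    positions = {
        "top_left":     (margin, margin),
        "top_right":    (screen_w - elem_w - margin, margin),
        "bottom_left":  (margin, screen_h - elem_h - margin),
        "bottom_right": (screen_w - elem_w - margin,
                         screen_h - elem_h - margin),
        "center":       ((screen_w - elem_w) // 2,
                         (screen_h - elem_h) // 2),
    }
    return positions[anchor]

# A 64x64 pause button stays in the top-right corner on any screen:
print(place("top_right", 64, 64, 800, 480))    # (728, 8)
print(place("top_right", 64, 64, 1920, 1080))  # (1848, 8)
```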
All approaches have their strengths and weaknesses. I tend to look at the game design and decide whether I can tweak the design to be resolution independent; if not, I pick one approach and run with it.
For example, a chess board: it has to be 8 by 8. You cannot say "my screen is X pixels wide, so I'm going to make the board 10 by 10."
In an Angry Birds-style game, though, you can say "I need a minimum of X pixels on the screen; if the screen is smaller, I have to scale the view, and if it is bigger, I can display more of the world."
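That rule is simple to state in code. A minimal sketch, assuming an invented 800x480 minimum world size:

```python
# "Minimum world on screen" rule: the game requires at least
# MIN_W x MIN_H pixels of world to be visible. Smaller screens scale
# the view down; bigger screens keep 1:1 pixels and show more world.

MIN_W, MIN_H = 800, 480

def view_params(screen_w, screen_h):
    """Return (zoom, visible_world_w, visible_world_h) for a screen."""
    if screen_w < MIN_W or screen_h < MIN_H:
        # Screen too small: shrink the view so the minimum world fits.
        zoom = min(screen_w / MIN_W, screen_h / MIN_H)
        return zoom, MIN_W, MIN_H
    # Screen big enough: keep 1:1 pixels and reveal more of the world.
    return 1.0, screen_w, screen_h

print(view_params(640, 360))    # (0.75, 800, 480) - scaled down view
print(view_params(1920, 1080))  # (1.0, 1920, 1080) - more world shown
```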
Scaling artifacts are just a fact of life once you start writing resolution-independent code; the only way I have found to avoid them is to procedurally generate content for the current screen resolution.