dk2 at October 27th, 2013 22:43 — #1
Handling asset sizes on multiple devices is increasingly becoming an issue with the proliferation of mobile and tablet devices. This is especially true on Android, where you have to expect your game to run on devices you never even knew existed. The typical approach I see is to ship different sets of textures for each supported resolution and then choose the appropriate set at runtime. However, that approach doesn't scale well on mobile/tablet, since the number of possible resolutions keeps growing, and it also inflates the size of the game on disk. In one game I developed with a friend, we instead shipped one set of high-resolution textures and ran a one-time pre-processor that generates texture sizes appropriate for the device the game is running on. Like the author, I found the Lanczos filter gave the best results. Although this slowed the game's first launch, it greatly simplified development and let us support virtually any device resolution.
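Roughly, the resampling idea looks like this — a minimal pure-Python sketch of 1D Lanczos filtering, not our actual pre-processor (which worked per channel, horizontally then vertically, on the raw texture data):

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos window: a * sinc(pi*x) * sinc(pi*x/a) for |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_row(src, dst_len, a=3):
    """Resample a 1D list of samples to dst_len using Lanczos weights.
    A real texture pre-processor applies this per colour channel,
    first along rows, then along columns."""
    scale = len(src) / dst_len
    out = []
    for i in range(dst_len):
        # Centre of this destination sample in source coordinates.
        centre = (i + 0.5) * scale - 0.5
        lo = math.floor(centre) - a + 1
        hi = math.floor(centre) + a
        total = weight_sum = 0.0
        for j in range(lo, hi + 1):
            w = lanczos_kernel(centre - j, a)
            # Clamp to the edge so border pixels don't darken.
            total += w * src[min(max(j, 0), len(src) - 1)]
            weight_sum += w
        out.append(total / weight_sum)  # normalise the window
    return out
```

Normalising by `weight_sum` keeps flat regions flat, which matters when you downscale UI textures with solid fills.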
fireside at October 27th, 2013 23:00 — #2
That was really interesting. I didn't know there was that big a difference.
thenut at October 28th, 2013 03:03 — #3
Reading this makes me think of the ZSNES emulator. I believe it implemented all of those scaling algorithms. I think the Eagle filter (and its HQx descendants) was by far the best for upscaling, at least for SNES games, because it preserves that cartoon look, although that's not always desirable.
Another interesting algorithm is fractal scaling, which exploits the self-similarity properties of images. Genuine Fractals is a commercial product built on this idea, and it produces some interesting results: generally very sharp, and shapes are well preserved, although upscaling is never free, so image quality still suffers.
My belief is that if you want to support multiple screens, vector graphics is the way to go. Even with 3D APIs like OpenGL, it's easy to rasterize everything and forget about vector graphics. However, if you have a solid framework in place (such as what Microsoft did with WPF / Silverlight / Windows Phones and tablets), vector-based graphics is easy and it works. It does require a complete development shift, though. Tools are improving; I believe someone on Kickstarter was working on letting artists build 2D skeletons and animations for vector art, which should open a lot of doors. Of course, there's also Blender.
fireside at October 28th, 2013 06:05 — #4
Even if you just use vector art for GUI stuff, it helps a lot; otherwise the text is all over the place and all different sizes. I still like rasterized art, though, alongside vector art; I wouldn't want only one. As computing gets faster, the cost of that scaling will become less and less noticeable. All these devices will eventually push standardization and faster ways of doing it, some even built into the chips.
stainless at October 29th, 2013 05:21 — #5
There are lots of techniques for getting around variable display resolutions; vector graphics is IMHO the worst.
Flash is a prime example of a vector graphics engine, and it's dying. One of the main reasons is that vector graphics are so hard to hardware-accelerate.
I use things like a layout class, signed distance fields for fonts, and clever scaling algorithms myself.
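For anyone unfamiliar with signed distance field fonts: each texel stores the distance to the nearest glyph edge, positive inside and negative outside, and the fragment shader just thresholds that value to get crisp edges at any scale. A brute-force sketch in Python (illustrative only — real tools generate the field from a much higher-resolution glyph bitmap and then downsample):

```python
import math

def signed_distance_field(bitmap):
    """Brute-force SDF over a 0/1 glyph bitmap: for each cell, the
    distance to the nearest cell of the opposite value, signed
    positive inside the glyph and negative outside. O(n^2) per cell,
    so strictly an offline/pedagogical version."""
    h, w = len(bitmap), len(bitmap[0])
    field = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            inside = bitmap[y][x]
            best = float("inf")
            for yy in range(h):
                for xx in range(w):
                    if bitmap[yy][xx] != inside:
                        best = min(best, math.hypot(xx - x, yy - y))
            field[y][x] = best if inside else -best
    return field
```

The payoff is that one small distance texture replaces a whole mipmap chain of pre-rendered glyph sizes.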
thenut at October 31st, 2013 11:09 — #6
OpenGL and DirectX are vector graphics APIs. Flash is poor because it wasn't designed properly and it's overly bloated: a simple Flex app is 300 KB without doing anything, and that has to be transferred over the wire to an equally bloated player.
Techniques like distance field fonts are improvements over traditional bitmap fonts, but they still have their limitations. Supporting TTFs and true vector fonts offers the greatest compression and flexibility. For instance, you could generate 3D glyphs, or add visual effects like metaballs and give the glyphs cool geometric blending animations. These are just random thoughts; the possibilities are limitless.
stainless at November 2nd, 2013 05:21 — #7
OpenGL and DirectX are rasterisers. At the very lowest level they just move pixels into a memory buffer one at a time.
Things like Tempest in the arcades were vector games. You supplied a list of lines to the hardware and it drew them to the screen, not a memory buffer.
Flash was very clever when it first came out. At the very lowest level it had a 256 colour (byte per pixel) render buffer. All draws were converted into a list of horizontal lines. These lines were then kind of depth sorted until only the visible segments were actually drawn into the memory buffer.
This 256 colour array was then passed to the hardware in the most sensible way.
So to render a filled shape, Flash scales the control points by the camera, calculates every point on the curve, finds the top and bottom points, builds left and right point lists, and adds horizontal lines between them (it's a little more complicated than that, as a glyph can need multiple lists to handle peaks). It then depth-sorts all the glyph line lists, builds a draw list, and parses the draw list into memory.
A hell of a lot of work.
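The span-building step alone looks something like this — a simplified even-odd scanline fill over a closed polygon (a rough sketch of the technique, not Flash's actual code, which also handles curves, peaks, and depth sorting):

```python
def scanline_fill(polygon, height):
    """Return horizontal spans [(y, x_left, x_right), ...] covering a
    polygon given as a list of (x, y) vertices, using the even-odd
    rule: the same kind of horizontal-line list described above,
    ready to be depth-sorted and blitted into a pixel buffer."""
    spans = []
    n = len(polygon)
    for y in range(height):
        cy = y + 0.5  # sample at pixel centres to dodge vertex ties
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            # Count the edge only if it crosses this scanline
            # (horizontal edges fail both tests and are skipped).
            if (y0 <= cy < y1) or (y1 <= cy < y0):
                xs.append(x0 + (cy - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        # Even-odd rule: consecutive pairs of crossings bound spans.
        for left, right in zip(xs[::2], xs[1::2]):
            spans.append((y, left, right))
    return spans
```

Even in this stripped-down form you can see why it burns CPU: every shape touches every scanline it covers, every frame.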
Using vector fonts is more flexible than any form of bitmapped fonts, but is it worth it?
The amount of code required, and the CPU cycles burned, to render a single glyph... well, it's prohibitive. Unless you really need it, the quality of the rendering just isn't worth the overhead.
Especially if you are working on mobile devices. You may have a 1920 by 1080 display, but it's six inches big. Is it worthwhile having vector fonts there?
reedbeta at November 2nd, 2013 12:34 — #8
Well, when you run your web browser or a PDF viewer or whatever, it's using vector fonts for everything there, and they manage to do smooth scrolling, animation etc., even on mobile. A game has tighter performance requirements, but it also has much less text than that.
Also, the scanline algorithm isn't the only way to render vector stuff. There are GPU-accelerated vector renderers like NV_path_rendering that convert the vector primitives into triangles and rasterize them on the GPU, which can do it much more efficiently than the CPU, in both performance and power consumption. NV_path_rendering doesn't work on mobile yet (I think it uses CUDA internally) but it won't be long until mobile GPUs are capable of doing equivalent things, if they aren't already.
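The first step of a triangle-based path renderer — flattening the outline curves into line segments before triangulation — can be sketched like this (illustrative only; I'm assuming quadratic Béziers as in TrueType outlines, and NV_path_rendering's internals differ):

```python
def flatten_quadratic(p0, p1, p2, tolerance=0.25, depth=0):
    """Recursively subdivide a quadratic Bezier curve (p0, p1, p2 are
    (x, y) tuples) into a polyline; a path renderer would then fan
    the segments into triangles for the GPU. `tolerance` caps how far
    the control point may sit from the chord before we subdivide."""
    # Cheap flatness estimate: control point's offset from the
    # chord midpoint (zero when the curve is a straight line).
    mx, my = (p0[0] + p2[0]) / 2, (p0[1] + p2[1]) / 2
    err = abs(p1[0] - mx) + abs(p1[1] - my)
    if err <= tolerance or depth >= 16:
        return [p0, p2]
    # de Casteljau split at t = 0.5 into two smaller quadratics.
    a = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
    b = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    left = flatten_quadratic(p0, a, m, tolerance, depth + 1)
    right = flatten_quadratic(m, b, p2, tolerance, depth + 1)
    return left + right[1:]  # drop the duplicated join point
```

The tolerance can be tied to the current scale, so a glyph flattened once at 12 px stays cheap while a full-screen zoom gets more segments — which is exactly the kind of work a GPU path renderer amortises.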
stainless at November 3rd, 2013 04:44 — #9
True, but browsers use huge amounts of memory and are stand-alone apps.
Blocks of text come out of the layout engine and are rendered into textures ready for display.
Even with the best path rendering in the world, you are still talking about rendering a hell of a lot more than two tris per glyph.
When it comes to something like a web browser it may be worthwhile, since the most important thing it does is display text, but for a game? God no. I would much rather use signed distance field fonts and save the rest of my triangle budget for things that really matter.