Have people tried novel approaches to CPU voxel rendering? There are many solutions that can't quite be found anywhere.
Here's one: http://goo.gl/txUlSl (Unlimited Detail) & another being implemented now below.
Please, post your experiments.
I'm sort of only interested in voxels because they are cool to raytrace with, like for global illumination, not much other reason at this time. But it has to be a GPU implementation; the CPU is too slow.
So basically my opinion is that voxels, unless procedurally generated, are a bit of a waste of time when it comes to representing detailed things. Just my opinion.
The problem is, except on parallel computers (GPUs), ray casting / tracing is hopeless. One has to find something else, an object-order algorithm. The links point to such approaches. Unfortunately one is ridiculously slow & the other isn't promising.
For Unlimited Detail investigators: http://goo.gl/sdjXVG Are there such investigators here?
Ray tracing is not as hopeless as it seems: you can get a fairly good approach for standard triangle rendering (using BVHs or KD-trees); you probably know the Arauna ray tracer, for example. Of course it uses packet tracing and frustum tracing (to quickly determine which parts of the trees are going to be ray traced).
You could use a similar approach for voxel tracing, which could be fast enough.
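For illustration, here is a minimal sketch of the classic uniform-grid ray walk (an Amanatides-Woo style 3-D DDA) that such a voxel tracer would build on; representing the grid as a set of solid coordinates is my own simplification:

```python
import math

def first_hit(solid, origin, direction, max_steps=64):
    """Walk a ray through an integer voxel grid (Amanatides-Woo style
    3-D DDA); return the first voxel found in `solid`, else None."""
    pos = [int(math.floor(c)) for c in origin]
    step, t_max, t_delta = [], [], []
    for o, d in zip(origin, direction):
        if d > 0:
            step.append(1)
            t_max.append((math.floor(o) + 1 - o) / d)
            t_delta.append(1 / d)
        elif d < 0:
            step.append(-1)
            t_max.append((o - math.floor(o)) / -d)
            t_delta.append(-1 / d)
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    for _ in range(max_steps):
        if tuple(pos) in solid:
            return tuple(pos)
        a = t_max.index(min(t_max))   # cross the nearest voxel boundary
        pos[a] += step[a]
        t_max[a] += t_delta[a]
    return None
```

A packetized tracer amortizes the tree/grid descent over bundles of such rays rather than walking them one at a time.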
Why so interested in CPU implementations? GPU is where the juice is at, I think.
I'm sure everybody, though tacitly, is of the opinion that a fast CPU thing is much more satisfactory. People think sequentially, time is the consciousness' frame. Greedy algorithms are inherently sequential. Btw. rouncer, it's your voxel renderer that compelled me to post my experiments.
@17 Gen r
I'm sure everybody, though tacitly, is of the opinion that a fast CPU thing is much more satisfactory.
Why would "everybody" be satisfied with a fast CPU renderer? I don't think this is true at all. If you have an embarrassingly parallel workload then it's embarrassing not to be doing it on a parallel computer, i.e. a GPU!
A KD-tree partition should be fast enough for you to find voxel chunks to raytrace. The bigger issue with voxels is memory. If you want any decent render, you have to stream large swaths of voxel data from the disk. Since that's your slowest component, you're limited in how fast your CPU algorithm can go.
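The disk-streaming bottleneck described above can be sketched with a toy LRU chunk cache; the chunk size, class name, and `load` callback are my own illustrative choices, not anyone's actual engine:

```python
from collections import OrderedDict

CHUNK = 32  # voxels per chunk side; illustrative size

class ChunkCache:
    """Toy LRU cache standing in for disk streaming of voxel chunks.
    `load` is a stand-in for the real (slow) disk read."""
    def __init__(self, capacity, load):
        self.capacity, self.load = capacity, load
        self.cache = OrderedDict()
        self.misses = 0

    def chunk_at(self, x, y, z):
        key = (x // CHUNK, y // CHUNK, z // CHUNK)
        if key in self.cache:
            self.cache.move_to_end(key)      # mark as recently used
        else:
            self.misses += 1                 # this is the slow disk path
            self.cache[key] = self.load(key)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recent
        return self.cache[key]
```

However fast the CPU traversal is, `misses` is what actually sets the frame time once the world outgrows RAM.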
I personally convert my voxel data into polygons, apply a Catmull-Clark subdivision to smooth the edges, and then pass it into the GPU for rendering. It's quick, simple, and gets the job done even on older hardware.
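The first step of that pipeline, extracting boundary polygons from voxels, can be sketched as follows (the Catmull-Clark pass and GPU upload are omitted; the set-of-coordinates representation is my simplification):

```python
def boundary_faces(solid):
    """One quad per solid-voxel face whose neighbour is empty: the naive
    mesh that a smoothing pass (e.g. Catmull-Clark) would then refine."""
    normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [(v, n) for v in solid for n in normals
            if (v[0] + n[0], v[1] + n[1], v[2] + n[2]) not in solid]
```

Interior faces cancel automatically, so the polygon count scales with the surface of the model rather than its volume.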
Unlimited Detail is an old-school front-to-back octree traversal. An octree node escapes rejection only if some remaining view subpyramid intersects it. It's a synthesis of http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.1295 & http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.397
UD does this, but poorly. It advances & quadrisects transverse sections of view subpyramids while traversing the octree front to back. This approach minimizes the working set & is sequential. The number of octree accesses is proportional to the viewport area, by the quadtree complexity theorem. I implemented the other link: a catastrophe (but a novel "mass 3-D to 2-D conversion").
Really, my work?
The interesting thing in my mind wasn't the renderer at all, it was the editing method that I was struggling to get together; it took me ages and a GPU flood fill to get my voxel CSG going really fast. (For operating on GIANT sparse voxel worlds, which complicates the algorithm.)
Hand editing millions and millions of voxels (with a CSG engine) to make a world is like entering a million data pieces into the Cyc database: it takes ages, and I ended up disbanding the project because I didn't think hand editing was the way to make these voxel worlds; procedural generation makes more sense.
Just think: a density function (a three-dimensional equation of the matter of space) relates to voxel space without any conversion, so that's what I think the coolest way to make something "big" voxel-wise would be.
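A minimal sketch of that idea, sampling a density function straight into voxel space with no conversion step; the sphere density is my own example:

```python
def voxelise(density, n):
    """Fill an n*n*n voxel set directly from a density function:
    solid wherever density > 0, no mesh conversion in between."""
    return {(x, y, z)
            for x in range(n) for y in range(n) for z in range(n)
            if density(x, y, z) > 0}

def sphere(cx, cy, cz, r):
    """Example density: positive inside a sphere of radius r."""
    return lambda x, y, z: r * r - ((x - cx) ** 2
                                    + (y - cy) ** 2 + (z - cz) ** 2)
```

Swapping `sphere` for any noise-based terrain function gives a "big" procedural world at whatever resolution you can afford to sample.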
Making voxel characters is easier (and hand editing then actually makes more sense), and you can even store the voxels in a brute-force lot of data, space and matter. If you were making characters: leave the world/environment to procedural generation and actually use brute volume stores for the characters themselves. I'd imagine you could get to 2048x2048x2048 voxels??? (with the latest equipment, mind you) Not bad... Doing it GPU-wise is what I'd do.
If you're just using brute-force volume stores (in lots of textures) then your boolean comparisons to do CSG are the easiest thing imaginable to code: just flip bits on and off, and it's that simple.
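The bit-flipping CSG described above can be sketched with rows of voxels packed into integers (one bit per voxel; the row layout is my own toy choice, a GPU version would do the same ops per texel):

```python
def csg_union(a, b):
    """Solid where either volume is solid."""
    return [x | y for x, y in zip(a, b)]

def csg_intersect(a, b):
    """Solid only where both volumes are solid."""
    return [x & y for x, y in zip(a, b)]

def csg_subtract(a, b):
    """Carve volume b out of volume a."""
    return [x & ~y for x, y in zip(a, b)]
```

That is the entire CSG engine for dense storage: three bitwise operators, no geometry code at all.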
You convert to an SVO after the modelling is complete and stored to disk.
The really cool GPU voxel programs are the realtime liquid/smoke generators; they are the coolest thing I saw. They involve some kind of feedback loop on a finite volume. But I'm doing everything GPU, of course.
Remember, there's no point having a voxel renderer with nothing to render, so that's what my main effort into voxels was for: modelling.
3D Coat is a cool modeller, which I recommend and also purchased the student license for. (And it's a Christian company, so it's like buying a pass into heaven hehe)
Really, my work?
A very talented person's Unlimited Detail. (The guy was into emulators in the past).
I see. It is very clever, especially to do it without a turbo card.
Here's the last attempt: http://goo.gl/eK1SYG
It's a novel octree splatter involving a "mass 3-D to 2-D conversion". Slow. No hateful thing (rays, floats, / or *) occurs. If one were to properly combine it with Teller, Yagel & Meagher's front-to-back octree traversal, then success would be his.
Haven't tried this yet, but I do agree that GPU rendering will be much faster than CPU renderers.
It is fast now (the archive was kept up to date until UD floored). UD is not a secret anymore.
For there is nothing covered, that shall not be revealed; neither hid, that shall not be known. (Luke 12.2)
Imagine a view pyramid & a cube (not necessarily in the pyramid). Narrow the pyramid about the cube, e.g. by dichotomy. If the intersection of a pyramid with the view plane (a rectangle) is occluded then we're done, i.e. the narrowing process is pursued no further & the cube is rejected. Otherwise, if the rectangle is small enough or the cube is a leaf, output the rectangle in the color of the cube. Otherwise proceed to the branches (octree, or BVH for familiar animation), preferably in front-to-back fashion (no Z-buffer): from cube to rectangle by way of pyramid; or, rendering as lossy data compression.
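The narrow-splat-or-recurse loop above can be sketched as a toy front-to-back octree splatter. This is my own simplification: an orthographic camera and a per-pixel coverage test stand in for the pyramid narrowing and occlusion rectangle of the real scheme:

```python
def render(node, lo, side, buf, size):
    """Toy front-to-back octree splatter; orthographic camera on +z
    looking down -z. node: a colour, a list of 8 children (None =
    empty), or None. lo: (x, y, z) min corner; side: cube edge length.
    buf: size x size framebuffer, None meaning 'not yet covered'."""
    if node is None:
        return
    x0, y0 = max(lo[0], 0), max(lo[1], 0)
    x1, y1 = min(lo[0] + side, size), min(lo[1] + side, size)
    if x0 >= x1 or y0 >= y1:
        return                                  # outside the view rectangle
    uncovered = [(x, y) for y in range(y0, y1) for x in range(x0, x1)
                 if buf[y][x] is None]
    if not uncovered:
        return                                  # occluded: reject the subtree
    if not isinstance(node, list):
        for x, y in uncovered:                  # leaf / small enough: splat
            buf[y][x] = node
        return
    h = side // 2
    # visit the 4 near-z octants before the 4 far ones (front to back)
    for i in sorted(range(8), key=lambda i: -(i >> 2 & 1)):
        child = (lo[0] + (i & 1) * h,
                 lo[1] + (i >> 1 & 1) * h,
                 lo[2] + (i >> 2 & 1) * h)
        render(node[i], child, h, buf, size)
```

Because cover is written front to back, a pixel is touched at most once and whole occluded subtrees are pruned, which is exactly why no Z-buffer is needed.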
/, * and floats are avoided by working with the intersections of the pyramid with 9 parallel planes in view space, one of which is the view plane & the others of which pass through the vertices of the cube: 9 rectangles. The rectangles of an octant of the cube are constructed from those of said cube by taking appropriate midsides, i.e. suppose rectangles with sides Lj, Uj, Rj & Dj (0 <= j < 8) for the father's vertices. We would like the rectangle Lij, Uij, Rij & Dij for vertex j of the ith child: Xij = (Xi + Xj) / 2 where X = L, U, R or D.
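A sketch of that midside construction, assuming sides are stored as integers in even fixed-point units so that every halving is an exact shift (that storage convention is my assumption, made to honour the no-float constraint):

```python
def child_rects(X):
    """X[j]: the (L, U, R, D) rectangle sides for vertex j of the father
    cube. Returns C[i][j], the sides for vertex j of child i, via the
    midside rule Xij = (Xi + Xj) / 2. Integer halving only."""
    return [[tuple((a + b) // 2 for a, b in zip(X[i], X[j]))
             for j in range(8)]
            for i in range(8)]
```

Note that C[i][i] == X[i]: the child's own corner inherits the father's rectangle unchanged, so descent costs only additions and shifts.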
One is, of a certainty, astonished at the apparent disinterest of the Internet in trying to understand how that which said Internet so much criticizes actually works. Except for Russians, nobody seems to pursue the reverse engineering of UD. This is made all the more uncanny by there being a soon-to-be-published patent: is there some evil design, forcing the mass of mediocre programmers / Carmack worshipers to go the parallel octree raycasting route, impractical except on GPUs? UD is a /, * & float-free serial (non-parallel) volume renderer. Such features should be compelling to reasonable men, in view of the fact that one can no longer do inelegant programming on pollutant GPUs with impunity.
The patent is being (liberally) implemented. M3D2D (see a previous link) may well be faster. The UD in Geoverse is fast because it relies on parallelism (many cores, occlusion mask dealt with many bits at a time, SIMD etc.).
An unpolished kludge of UD: http://goo.gl/r7p7e9
It projects the 4 or 6 vertices that suffice to construct the rectangular splat (not all 8). There is an incremental technique that does this. Observe moreover that our cubes all have the same orientation; this causes the indices of the vertices with the min/max signed depth, or of those which are the most in the view volume, to be invariant from cube to cube. Here, out-of-sight culling is done with 1 vertex per view-volume bounding plane.
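The invariance noted above can be sketched for axis-aligned cubes; the bit layout of vertex indices is my own assumption:

```python
def min_depth_vertex(view_dir):
    """Index (bits x | y<<1 | z<<2) of the cube vertex minimising
    dot(vertex, view_dir). All cubes share one orientation, so this
    index is computed once per frame, never per cube."""
    return sum(1 << a for a in range(3) if view_dir[a] < 0)
```

The max-depth vertex is simply the opposite corner (index `7 ^ min_depth_vertex(d)`), so the per-cube depth extremes come for free.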
Orthogonal mode consists of front-to-back splatting of a replica of the cube scaled by 1 / midz.