In a forward shading renderer (that is, the "typical" kind of renderer, in which material properties and lighting are all evaluated in one pixel shader), for correct results you should identify which lights affect which objects, using bounding-volume intersection tests and the like. For instance, for a point light with a spherical region of effect, you'd push the bounding sphere through your scene graph and see which objects it intersects. Each of those objects must then be rendered with that light. You can't impose an arbitrary cutoff like 10 lights, or you risk artifacts from objects missing lights that should affect them. Additive blending can be used to accumulate light, with each object being rendered as many times as necessary to accumulate all the lights acting on it. As an optimization you might include shaders that compute two or more lights at once, to cut down on the total number of draws.
Another approach is deferred shading. If you haven't heard of this, google it; there are many, many articles on the subject. Briefly, the idea is to render material properties (color, normal, specular intensity and power, etc.) into an offscreen buffer (called a "G-buffer," short for "geometry buffer"), then do lighting in image space by writing a pixel shader that fetches the material properties from the G-buffer and evaluates the lighting equation. This decouples material shaders from lighting shaders, greatly reduces the number of draw calls, and doesn't require you to track which lights hit which objects; but it also costs more memory bandwidth, and places restrictions on the shading model, since all of its parameters have to fit into the G-buffer.
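To make the decoupling concrete, here's a toy C++ sketch of the lighting pass for one pixel (the `GBufferTexel` layout is invented for illustration; real G-buffers pack these fields into a few render targets, and the lighting would run in a pixel shader, not on the CPU):

```cpp
#include <cassert>
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical unpacked G-buffer texel: everything the lighting equation
// needs, written by the geometry pass's material shaders.
struct GBufferTexel {
    Vec3  albedo;    // material color
    Vec3  normal;    // world-space unit normal
    float specPower; // specular exponent (unused in this diffuse-only sketch)
};

static float dot3(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Lighting pass for one pixel and one light. Note it knows nothing about the
// object that produced this pixel -- only the G-buffer contents and the light.
Vec3 shadePixel(const GBufferTexel& g, const Vec3& lightDir, const Vec3& lightColor) {
    float ndotl = std::max(0.0f, dot3(g.normal, lightDir)); // Lambert diffuse
    return { g.albedo.x * lightColor.x * ndotl,
             g.albedo.y * lightColor.y * ndotl,
             g.albedo.z * lightColor.z * ndotl };
}
```

The point is that `shadePixel` takes only G-buffer data and light parameters; adding a new material or a new light type no longer multiplies your shader count.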
In either of these approaches, good frustum/occlusion culling is essential for good performance; it pays to spend some effort making sure you don't draw things that aren't visible. And none of this is really specific to D3D10; these are generic rendering approaches applicable to any API. Finally, since you mentioned the per-frame workload of figuring out what to draw, note that it's often possible to exploit temporal coherence here: what you need to draw this frame is usually very similar to what you drew last frame. Try to build a data structure that lets you hang on to some of last frame's information rather than starting from scratch; it must also be able to adapt to changing circumstances, of course, but you can still save a lot of work this way.
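One small, classic example of temporal coherence in culling is "plane coherence": for each object, cache the index of the frustum plane that rejected it last frame, and test that plane first next frame. If the object and camera haven't moved much, the same plane usually rejects it again after a single dot product. A minimal sketch, with placeholder math types:

```cpp
#include <cassert>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; }; // point p is outside when dot(n,p) + d < -radius

// Per-object cache carried across frames.
struct CullCache { int lastFailedPlane = 0; };

static float signedDist(const Plane& pl, const Vec3& p) {
    return pl.n.x*p.x + pl.n.y*p.y + pl.n.z*p.z + pl.d;
}

// Sphere-vs-frustum test that starts at the cached plane and wraps around.
// Returns true if the sphere is not fully outside any plane (i.e. potentially
// visible); on rejection, remembers which plane rejected it for next frame.
bool sphereInFrustum(const Vec3& center, float radius,
                     const Plane planes[6], CullCache& cache) {
    for (int i = 0; i < 6; ++i) {
        int p = (cache.lastFailedPlane + i) % 6;
        if (signedDist(planes[p], center) < -radius) {
            cache.lastFailedPlane = p; // exploit coherence next frame
            return false;
        }
    }
    return true; // conservatively visible
}
```

The same idea extends further: keep last frame's visible set and test it first, timestamp occlusion-query results, and so on. The cache is purely an optimization, so a stale entry costs a few extra plane tests but never produces a wrong answer.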