Given an input data set (x, y, z, Kmin, Kmax) that satisfies the following inequalities:
Ax + By + Cz > Kmin
Ax + By + Cz < Kmax
is there a way to find A, B, and C?
I understand that there is no unique solution to this problem, but what I am looking for is possibly the best solution (within some tolerance).
You weren't kidding when you said "there is no unique solution to this problem". In fact, there is a 3-dimensional space of possible solutions.
Consider this. Even if you drop the (kmin, kmax) business and simply require Ax + By + Cz = K, then for ANY choice of A and B you can find a C satisfying the equation exactly. A trivial example: suppose A and B equal 0; then C = K / z.
In order to narrow down the search further, you must specify some restrictions on the values of A, B, and C, and/or tell us more about what you mean by "best".
Here is the bigger picture of the whole problem I have in hand.
I have to select between "Object1" and "Object2"
Each of these objects has 3 unique properties, namely x, y, and z.
Each property's value is in the closed range [0.0f, 1.0f]
I would select Object1 over Object2 if
R(Object1) > R(Object2)
where R(Object) = (Ax + By + Cz) / (A + B + C)
R(Object) is directly proportional to x, y, and z
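The selection rule above is easy to sketch in code. Everything here is made up for illustration: the weights A, B, C and the two objects' property values are assumed, not from the original post.

```python
# Hypothetical weights for x, y, z (assumed values, not from the thread).
A, B, C = 2.0, 1.0, 0.5

def R(x, y, z):
    """Weighted score (Ax + By + Cz) / (A + B + C); stays in [0, 1]
    when x, y, z are each in [0, 1]."""
    return (A * x + B * y + C * z) / (A + B + C)

# Two objects, each described by (x, y, z) in [0, 1].
object1 = (0.9, 0.4, 0.2)
object2 = (0.5, 0.8, 0.9)

# Select Object1 over Object2 iff R(Object1) > R(Object2).
print(R(*object1) > R(*object2))
```

Note that dividing by (A + B + C) normalizes the score, so only the relative sizes of the weights matter for the comparison.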
My approach to the problem was to prepare a discrete data set with input values x, y, and z. Then use the data set to find the solution for A, B and C that satisfies the necessary conditions.
Any better ideas ???
Okay... are you saying that you have multiple (x, y, z) triples and need to find A, B, C that satisfy certain conditions over the whole set of (x, y, z) data? This is more tractable than your previous problem, but still needs more explanation.
Is this a homework problem?
Ax + By + Cz > Kmin
Ax + By + Cz < Kmax
is there a way to find A, B, and C.
You can probably solve such optimization problems, but they are beyond my mathematical horizon.
However, what you can do, and where I'm able to help, is to find a solution for:
Ax + By + Cz = K
for an arbitrarily large dataset of x, y, z, and k. With a bit of matrix math you can get the least-squares approximation of A, B, and C.
Since the approximation is usually not an exact solution, your resulting k-values will vary somewhat. I think the least-squares fit of the dataset for k = (kmin + kmax)/2 will be a very good candidate to satisfy your problem.
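A minimal sketch of this midpoint idea, with entirely made-up data (the (x, y, z) rows and the [kmin, kmax] bands are assumptions for illustration). `np.linalg.lstsq` does the least-squares fit directly:

```python
import numpy as np

# Made-up dataset: each row is one (x, y, z) triple.
xyz = np.array([
    [0.2, 0.5, 0.9],
    [0.8, 0.1, 0.4],
    [0.6, 0.7, 0.3],
    [0.1, 0.9, 0.5],
])
# Made-up tolerance bands; the fit targets their midpoints.
kmin = np.array([0.3, 0.4, 0.4, 0.4])
kmax = np.array([0.5, 0.6, 0.6, 0.6])
k = (kmin + kmax) / 2.0

# Least-squares fit: finds abc = (A, B, C) minimizing ||xyz @ abc - k||^2.
abc, residual, rank, sv = np.linalg.lstsq(xyz, k, rcond=None)
print("A, B, C =", abc)

# Inspect how close the fitted k-values land to the [kmin, kmax] bands.
print("fitted k =", xyz @ abc)
```

With more data points than unknowns the fit is generally not exact, which is exactly the "your resulting k-values will vary somewhat" caveat above.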
You could also use a genetic algorithm and breed a "best" solution.
There is no 'best' solution unless you have an objective function. If the objective is also linear, you can solve this using linear programming. There is actually a very beautiful algorithm for solving these kinds of problems called the simplex method. Google is your friend...
I don't really understand what you're trying to do. Are A, B, and C per object, or are they global?
I put the problem into matrix form. e.g.
A * b = x
(uppercase is a matrix, lowercase a vector).
And solve it with the pseudo-inverse (multiply both sides with A transposed):
A * b * A^T = x * A^T
The multiplication with the transposed matrix turns an x*y matrix into a square matrix, and the problem becomes solvable (kinda... you'll get the least-squares fit of the answer).
Okay guys. It looks like my initial problem and later explanation were quite misleading. Let me try again.
First, this is NOT a homework problem. I am a graduate and don't really wish to go back to school so soon.
Here is the complete problem. I need to prioritize a set of objects based on the following equation.
R(Object) = (Ax + By + Cz) / (A + B + C)
where x, y, and z are properties of this object, and A, B, and C are their corresponding weights (how important is this particular property?).
My approach to the problem was to prepare a finite data set for different values of x, y, and z, and then fill this data set with approximately close values for R(Object) (or better, a range [kmin, kmax] in which R(Object) should fall).
As an example, consider this. I am a seller and I have 4 apples. Apples have 3 properties, namely ripeness, redness, and size. Every buyer will have different criteria for selecting one from the 4 possible apples. Some might prioritize the size of the apple above its redness, and others might consider the ripeness over everything else. These "selection criteria" will act as weights (namely, the A, B, and C) for the individual properties of the apple. Depending on these weights, R(apple) will be different for each of the 4 apples for any given customer (A, B, and C are the same for any particular customer), and will result in a different apple chosen by different buyers. Also, as time passes, the values of these properties might change. NOTE: The weights chosen for the individual properties by any customer always remain the same (even over time).
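The apple example can be sketched as a small ranking program. All numbers here are invented: the four apples' (ripeness, redness, size) values and the customer's weights are assumptions for illustration.

```python
# Four apples, each with made-up (ripeness, redness, size) values in [0, 1].
apples = {
    "apple1": (0.9, 0.3, 0.5),
    "apple2": (0.4, 0.9, 0.6),
    "apple3": (0.7, 0.7, 0.7),
    "apple4": (0.2, 0.5, 0.95),
}

# One customer's fixed weights: this buyer cares most about size (C),
# then ripeness (A), then redness (B).
A, B, C = 1.0, 0.5, 3.0

def R(props):
    """R(apple) = (Ax + By + Cz) / (A + B + C)."""
    x, y, z = props
    return (A * x + B * y + C * z) / (A + B + C)

# Rank apples by score, best first.
ranking = sorted(apples, key=lambda name: R(apples[name]), reverse=True)
print(ranking)  # ['apple4', 'apple3', 'apple2', 'apple1']
```

A different customer just means a different (A, B, C), and the same four apples can come out in a different order.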
So, yes, ReedBeta is right. I have a whole set of (x, y, z) data. I fill in an R(Object) value for a given set of x, y, and z. I need to find A, B, and C so that they satisfy the complete data set.
Until yesterday, I used to tweak these weights myself to get a decent selection behavior, but as the number of properties per object and the number of customers increase, this is becoming too time-consuming a process.
Hope the example helps explain the problem.
To answer SigKILL's question in terms of my example above:
A, B, and C are constants per customer. Not sure, if that makes them "global" or not but for sure they are not different per "object".
Ahh, I understand now. In that case, a least-squares solver is probably the best way to go about this, as Nils mentioned.
You should probably look the algorithm up in a linear algebra book (if you don't know it already), since Nils' attempt to describe it failed...
Ah, I didn't realize that Nils' explanation was incorrect =D
Let M be the matrix that contains the x, y, z values for an object in each row (so the matrix will have n rows and 3 columns). Let a be the column vector containing A, B, C, and then let k be the column vector containing all the k values corresponding to the x, y, z values.
Then you are trying to solve the equation M * a = k. While this is not directly solvable, you can form the least-squares solution by multiplying both sides by M^T (on the left, rather than on the right as Nils tried):
M^T * M * a = M^T * k
This equation should be solvable exactly, since M^T * M is a square matrix. Hence:
a = (M^T * M)^-1 * M^T * k
This is called the 'normal equation', and the matrix (M^T * M)^-1 * M^T is called the 'pseudo-inverse' of M. For more details on why this procedure works, see a linear algebra text as SigKILL suggested, or google it.
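A direct translation of the normal equation into code, with made-up M and k (five data points, three unknowns). Rather than forming the inverse explicitly, it solves the square system (M^T M) a = M^T k; the result is cross-checked against numpy's built-in least-squares solver, which is the more numerically robust choice in practice:

```python
import numpy as np

# Made-up data: each row of M is one (x, y, z) triple, k the target values.
M = np.array([
    [0.2, 0.5, 0.9],
    [0.8, 0.1, 0.4],
    [0.6, 0.7, 0.3],
    [0.1, 0.9, 0.5],
    [0.5, 0.5, 0.5],
])
k = np.array([0.55, 0.45, 0.50, 0.60, 0.50])

# Normal equation: (M^T M) a = M^T k. Solving is preferable to inverting.
a = np.linalg.solve(M.T @ M, M.T @ k)
print("A, B, C =", a)

# Sanity check: must agree with numpy's least-squares solver.
a_lstsq, *_ = np.linalg.lstsq(M, k, rcond=None)
print(np.allclose(a, a_lstsq))  # True
```

This only works when M^T * M is invertible, i.e. the (x, y, z) rows span all three dimensions; degenerate data would need the pseudo-inverse proper (e.g. via SVD).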
Uh... did I write it the wrong way around?
Shame on me, and sorry for that, I typed it out of my head...
Thanks guys. I used what ReedBeta described and it all worked like magic.