
What can we know of reality?



I find it quite sad that the only person with sufficient understanding of the problem I have attacked to have made an attempt to follow my deduction has such a limited education in mathematics. This is going to slow things down greatly particularly when we get to solving that fundamental equation.

 

I know exactly what you are thinking; just your luck, right? :) Anyway, this sure slows things down. Qfwfq seems to understand what the topic is as well. If we could just convince him that it is worthwhile to put in the time and effort to understand these definitions... So let me just mention to the group that it seems like this procedure should be useful as a component for an AI system. (just in case that makes this worthwhile to some people :)

 

Anyway, everybody feel free to chip in to explain to me the math concepts that I have questions about or that I have clearly misunderstood. My math knowledge is right next to none :)

 

Another thing that is slowing us down is that the LaTeX doesn't work in the quotes. Makes it more time consuming to reply to some posts :I

 

For this post, I just place all the LaTeX outside of quotes.

 

Sorry Anssi, but you happen to be wrong on this point. It is no more than the fact that your math background is limited. Let me do the following algebra for you. The definition of a derivative is

[math] \frac{d}{dx}f(x)=\lim_{\Delta x \rightarrow 0}{\frac{f(x+\Delta x)- f(x)}{\Delta x}}.[/math]

---end of quote

 

I took a look at Wikipedia yesterday and figured out where that definition comes from. We are one small step forward.

 

The question is then, given that definition, what is the derivative of f(x) multiplied by g(x).

 

But I couldn't figure out where that g(x) came from. Why are we interested in multiplying the derivative of f(x) by it?

Anyhow...

 

It should be clear from the above that the correct expression is:

[math] \frac{d}{dx}f(x)g(x)=\lim_{\Delta x \rightarrow 0}{\frac{f(x+\Delta x)g(x+\Delta x) - f(x)g(x)}{\Delta x}}.[/math]

---end of quote

 

...with my limited math knowledge, I can't find a fault in that.

 

But with the following there are two things that puzzle me:

 

---QUOTE Doctordick---

Against that we can be confident that [imath]f(x+\Delta x)g(x+\Delta x) - f(x+\Delta x)g(x+\Delta x)[/imath] is exactly zero and furthermore, in the limit as [imath]\Delta x[/imath] goes to zero, [imath]f(x)g(x+\Delta x) - f(x)g(x+\Delta x)[/imath] is also exactly zero: i.e., in that particular limit, f(x)g(x)-f(x)g(x)=0.

---end of quote

 

The part that I painted orange is puzzling because that math expression would be zero not only in the limit but with any [math]\Delta x[/math]. Wonder if you made a mistake there since both sides of the minus sign are identical. But then perhaps that is exactly what you wanted to say?

 

In any case, another thing that troubles me is that it seems invalid to actually pull the limit to 0, since in that case the concept of a derivative is kind of nonsensical (since you can't get an answer from your calculation).

 

Do I understand the concepts of the limit and derivative correctly in that with non-linear functions it is not possible to get the exact derivative for any particular value of "x", but instead it is only possible to get an approximation with any arbitrary accuracy? Doesn't that mean that as the limit [math]\Delta x[/math] is actually pulled all the way to zero, we are no longer talking about a derivative? That "definition of derivative" seems to come out as nonsensical also if [math]\Delta x[/math] is actually pulled to zero. (It would just come out as 0/0)

 

For that reason it seems invalid to just remove [math]\Delta x[/math] from the equations, as you seem to have done. Perhaps this concern would disappear if I understood the rest of the post better...

 

Perhaps it's because of my above confusions, but I can't understand how the equation turns into:

---QUOTE Doctordick---

This means that

 

[math] \frac{d}{dx}f(x)g(x)=\lim_{\Delta x \rightarrow 0}{\frac{f(x+\Delta x)g(x+\Delta x) - f(x)g(x)+f(x)g(x+\Delta x) - f(x)g(x+\Delta x)}{\Delta x}}.[/math]

 

or, reordering terms,

 

[math] \frac{d}{dx}f(x)g(x)=\lim_{\Delta x \rightarrow 0}\left\{{\frac{f(x+\Delta x)g(x+\Delta x)- f(x)g(x+\Delta x)}{\Delta x}+\frac{f(x)g(x+\Delta x) - f(x)g(x)}{\Delta x}}\right\}.[/math]

---end of quote

 

And the following I just have to take on faith:

 

---quote Doctordick---

Which is identical to

 

[math] \frac{d}{dx}f(x)g(x)=\lim_{\Delta x \rightarrow 0}\left\{\left\{\frac{d}{dx}f(x)\right\}g(x+\Delta x)+f(x)\left\{\frac{d}{dx}g(x)\right\}\right\}.[/math]

---end of quote

 

because I don't know what those brackets mean. Unless their function is the same as the ordinary ( and )? In any case I don't know how to figure out whether those two above expressions are identical or not.

 

With the following I have the same reservation about removing [math]\Delta x[/math] as before.

 

---quote Doctordick---

That limit should be clear to you. It is exactly

 

[math] \frac{d}{dx}f(x)g(x)=\left\{\frac{d}{dx}f(x)\right\}g(x)+f(x)\left\{\frac{d}{dx}g(x)\right\}.[/math]

---end of quote
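
As an aside for anyone who wants to check that last identity mechanically, here is a small sympy sketch; the choices f = sin and g = exp are purely illustrative (any smooth functions would do), not anything from the thread.

from sympy import symbols, sin, exp, diff, limit, simplify

x, dx = symbols('x dx')
f = lambda t: sin(t)   # illustrative stand-in for f
g = lambda t: exp(t)   # illustrative stand-in for g

# the limit definition of the derivative of the product f(x)g(x), as quoted above
quotient = (f(x + dx)*g(x + dx) - f(x)*g(x)) / dx
lhs = limit(quotient, dx, 0)

# the product rule result quoted above
rhs = diff(f(x), x)*g(x) + f(x)*diff(g(x), x)

print(simplify(lhs - rhs))   # prints 0, so the two agree for this example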

 

With all the rest I am struggling even more. I have to spend more time with it and I'll try to get back to it soon, but in the meantime it would be helpful if you can answer the above.

 

I can mention though that what threw me off was that I just remembered the shift symmetry equations from the PF thread post #464, but indeed they were not differentiation equations for [math]\psi[/math] but for P.

 

I wonder if I should try and pick up some math textbook and learn the required areas a little bit...

 

-Anssi


Qfwfq seems to understand what the topic is as well. If we could just convince him that it is worthwhile to put in the time and effort to understand these definitions...
Actually it's a lack of time, and I'm also a very exhausted gondolier.

 

Anyway, everybody feel free to chip in to explain to me the math concepts that I have questions about or that I have clearly misunderstood. My math knowledge is right next to none :)
The definition of the derivative and the rule for differentiating a product are standard in basic calculus. Dick is trying to give you these but they aren't specific to his work; I think what makes it a bit confusing is that he's going a bit into the demonstration of them instead of just stating them. You can find them in a textbook and lots of folk here can confirm them.

 

Essentially he's saying that the derivative of the modulus squared:

 

[math]\frac{d}{dx}P(x)=\frac{d}{dx}\left(\psi(x)\psi^{*}(x)\right)[/math]

 

will be zero if:

 

[math]\frac{d}{dx}\psi(x)=iK\psi(x)[/math] with [math]K\in\mathbb{R}[/math]

 

Now it is essential for K to be real (meaning [math]K=K^*[/math] i. e. it equals its complex conjugate). This actually is enough for the above first derivative of P to be zero (K could even be x-dependent, as this would only change the second and higher derivatives of [imath]\psi(x)[/imath]). I suppose Dick is requiring the first derivative to be zero for all x, which implies its further derivatives being zero as well; this could be his reason for saying K must be constant. However, I'm not so sure K needs to be constant (x-independent), it seems to me that a real-valued K(x) wouldn't break the symmetry as [imath]\psi(x)[/imath] would have an x-independent modulus anyway (indeed, I get the second derivative of P being zero too). :confused:
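
To spell out the step being used here, the product rule applied to [math]P=\psi\psi^*[/math] together with the stated relation gives

[math]\frac{d}{dx}P(x)=\frac{d\psi}{dx}\psi^{*}+\psi\frac{d\psi^{*}}{dx}=iK\psi\psi^{*}-iK^{*}\psi\psi^{*}=i\left(K-K^{*}\right)P(x),[/math]

which vanishes exactly when [math]K=K^{*}[/math], i.e. when K is real.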


Actually it's a lack of time, and I'm also a very exhausted gondolier.

 

:)

 

The definition of the derivative and the rule for differentiating a product are standard in basic calculus. Dick is trying to give you these but they aren't specific to his work; I think what makes it a bit confusing is that he's going a bit into the demonstration of them instead of just stating them. You can find them in a textbook and lots of folk here can confirm them.

 

Okay. Looks like there are a lot of free textbooks around the internet too, which is going to be helpful for me. (If you know any that covers this stuff well, feel free to point me to it)

 

Anyway, in-depth demonstrations are probably good because my knowledge about this stuff is just basically nil. I just know one concept here and another thing there because of having run into them in random occasions. But as I'm sure you can see from my reply, I clearly need to get a more comprehensive idea about certain areas of math. Right now I need to make a lot of shaky assumptions to understand what is being said, and the chance for error is HUGE.

 

Anyway, what you said was actually very helpful;

 

Essentially he's saying that the derivative of the modulus squared:

[math]\frac{d}{dx}P(x)=\frac{d}{dx}\left(\psi(x)\psi^{*}(x)\right)[/math]

 

will be zero if:

 

[math]\frac{d}{dx}\psi(x)=iK\psi(x)[/math] with [math]K\in\mathbb{R}[/math]

 

Now it is essential for K to be real (meaning [math]K=K^*[/math] i. e. it equals its complex conjugate).

 

I assume "modulus squared" means that the "absolute value" of something is squared?

 

I assume that [math]\psi^{*}[/math] means the complex conjugate of a function psi (Doctordick used a dagger earlier)

 

I assume [math]K\in\mathbb{R}[/math] means "K" is a real number. (I just ran into the meaning of [math]\in[/math] a moment ago by pure chance... it's hard to google symbols! :)

 

Anyway, I'm starting to see the point of this little bit. In the PF posts Doctordick explained the shift symmetry without mentioning [math]iK\psi(x)[/math] in that context directly (and I never picked it up because I still haven't been able to understand all the follow-up math where it first appeared). (On top of this I confused the derivative of "P" for that of [math]\psi[/math])

 

So let me get back to post #47 where I asked what the right side of the symmetry equation means.

 

Before I even get to trying to comprehend why:

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots , x_n, \tau_n,t) = iK_x\vec{\psi},[/math]

 

I really need to understand what the symbols [math]iK_x\vec{\psi}[/math] mean.

 

Now I suppose "K" means some number (and in this case it needs to be a real number).

 

What about "i"?

 

Why is there an "x" in [math]K_x[/math]?

 

The psi with the arrow I understand.

 

So, maybe every now and then it seems like I know something, but it really is just an illusion :) Usually I don't find logical procedures too difficult, it's just uncovering the meanings of all these symbols that's tricky. I have a feeling this iK stuff is something fairly trivial and something that Doctordick has already stated in some other form to me, but it's hard to figure things out when I have to assume WAY too many meanings on the symbols... (Funny, the topic kind of is about making logical assumptions about unknown symbols & patterns :)

 

-Anssi


i is the imaginary unit and its square is -1. A generic complex number may be written as [imath]z = a + ib[/imath] with a and b both being real valued. Multiplying any real value by i gives an imaginary value; if the above a is 0 and b isn't, then z is a "pure imaginary" value.

 

The complex conjugate of the above is [imath]z^* = a - ib[/imath] (regard this as a definition) and its modulus squared is [imath]|z|^2 = a^2 + b^2=(a + ib)(a - ib)[/imath]. The dagger is used for various kinds of conjugation so it can also be used for this kind, I'm just accustomed to using the asterisk when it's the simple case of a single complex value.
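
A tiny numeric illustration of those two facts, with a = 3 and b = 4 picked arbitrarily:

a, b = 3.0, 4.0
z = complex(a, b)            # z = a + ib
print(z * z.conjugate())     # (25+0j), i.e. a**2 + b**2
print(abs(z)**2)             # 25.0, the modulus squared
print((1j)**2)               # (-1+0j): i squared is -1
print(1j * 5.0)              # 5j: a real value times i is pure imaginary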

 

Why is there an "x" in [math]K_x[/math]?
That's just how Dick indicates the K associated with differentiation with respect to x; there is also the one with respect to t. Now these relations (between these derivatives and the K values) appear akin to the de Broglie-Einstein relations, except there are things I'm unsure of, such as what I said this morning. I was hoping you would help in getting Dick to clarify these things, but if your math is that basic it'll prolly be a slow process. You seem to have the patience though. :)

Now it is essential for K to be real (meaning [math]K=K^*[/math] i. e. it equals its complex conjugate). This actually is enough for the above first derivative of P to be zero (K could even be x-dependent, as this would only change the second and higher derivatives of [imath]\psi(x)[/imath]). I suppose Dick is requiring the first derivative to be zero for all x, which implies its further derivatives being zero as well; this could be his reason for saying K must be constant. However, I'm not so sure K needs to be constant (x-independent), it seems to me that a real-valued K(x) wouldn't break the symmetry as [imath]\psi(x)[/imath] would have an x-independent modulus anyway (indeed, I get the second derivative of P being zero too). :confused:
Perhaps I misunderstand something here. The initial statement I made was that the solution (the explanation or the epistemological construct) could not depend upon the actual labeling procedure used to refer to the underlying ontological elements. I used numerical labeling only because the number of labels available was infinite, not because of numerical associations with those ontological elements. The idea that one's expectations could be seen as a function of those labels (as numbers) was no more than a statement that, if a method of obtaining expectations existed (i.e., an explanation under my definition of the same), a method of getting from those labels to the expectations existed and that is almost the definition of a mathematical function. A specific explanation is represented by a specific set of numerical labels; the meanings of the labels must be learned from the “what is” is “what is” table representing that specific explanation (in all of its glory).

 

The application of shift symmetry is no more than the statement that adding a specific number (one single given number) to every numerical label changes nothing: if the original set of numbers was sufficient to decipher that explanation, the altered set is just as sufficient to accomplish the same result. That means that the mathematical function defined above must yield exactly the same expectations.

[math]P(x_1,x_2,\cdots,x_n,t) = P(x_1+a,x_2+a,\cdots,x_n+a,t)[/math]

 

I used this fact to show that the “shift symmetry” implied [imath]\frac{d}{da}P=0[/imath] and this relationship implies (to anyone who understands partial derivatives) that,

[math]\sum_i \frac{\partial}{\partial x_i}P(x_1,x_2,\cdots,x_n,t) =0.[/math]
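
(For readers following along, the partial-derivative step being invoked here is just the chain rule applied to the shifted arguments,

[math]\frac{d}{da}P(x_1+a,x_2+a,\cdots,x_n+a,t)=\sum_i \frac{\partial P}{\partial x_i}\frac{d}{da}(x_i+a)=\sum_i \frac{\partial}{\partial x_i}P,[/math]

evaluated at a = 0; shift symmetry makes the left side zero, so the sum on the right must vanish.)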

 

But, if [imath]P=\vec{\psi}^\dagger \cdot \vec{\psi}[/imath], the fundamental requirement on [imath]\vec{\psi}[/imath] was not that the sum over partials vanish, but rather that

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t) = iK\vec{\psi}(x_1,x_2,\cdots,x_n,t).[/math]

 

where, as you say, K is a real number.

 

Now you say that K can be a function of x as “it seems to [you] that a real-valued K(x) wouldn't break the symmetry as [imath]\psi(x)[/imath] would have an x-independent modulus anyway”. I do not understand what you mean by “an x-independent modulus” (note that there exists no information not contained within the complete set of numerical references in that “what is” is “what is” table). I also do not understand exactly what “x” you are referring to: the specific [imath]x_i[/imath] in each explicit term in the sum over [imath]\frac{\partial}{\partial x_i}\vec{\psi}[/imath], or do you mean for that “x” to stand for the entire collection of x arguments [imath](x_1,x_2,\cdots,x_n)[/imath] in [imath]\vec{\psi}[/imath]?

 

Well Anssi, I knew your mathematics background was limited but I didn't know just how limited. I had presumed you had at least an introduction to calculus. Perhaps some of the others here can be helpful. I have googled “calculus” and found nothing yielding direct understanding of what it is all about. They all seem to be organized along the lines of standard courses in calculus. I don't feel that the “rote” details are as significant as a clear understanding of the basics. If one understands the basics, a little logic will allow deduction of the more subtle aspects.

 

In my head, the place to start is with a graphed function (any function). The slope of the function is usually defined as the “rise” over the “run”. If the function is not straight, this becomes something which depends on where you look. If we are looking at a function graphed on an x,y coordinate system (and that function is what we call “single valued”: we can get into multi-valued functions later) then a specification of x yields a specification of y. In fact, the “specification of y” is exactly what we mean by f(x). The slope of the curve (the “rise” over the “run”) at x would be established by a tangent to that curve but, analytically (with mathematics) we can approximate that slope by looking at two points very close to each other (the x reference of those points will be called x and [imath]x+\Delta x[/imath] where [imath]\Delta x[/imath] stands for a small change in x). It should be clear to you that the “run” is then given by [imath]\Delta x[/imath]. Likewise, the “rise” must be [imath]f(x+\Delta x) - f(x)[/imath]. This leads immediately to the fact that our “approximate” slope is given by

Slope = [math]\frac{f(x+\Delta x) - f(x)}{\Delta x}[/math]

 

and, just as clearly, the fact that the “approximate” slope would be the correct slope if [imath]\Delta x[/imath] were zero. On the other hand, division by zero is strictly forbidden in mathematics; that is why the derivative is defined to be “the limit” as [imath]\Delta x[/imath] goes to zero. As you said, the numerator here goes to zero and that led you to the idea that the result would be zero; however, the denominator is also going to zero, so the correct result is a function of the rate at which they both go to zero. As an example, let's look at the case [imath]f(x) = x^2 [/imath]. In that case,

[math]f(x+\Delta x) = (x+\Delta x)^2 =(x+\Delta x)(x+\Delta x)=x(x+\Delta x)+\Delta x(x+\Delta x)=x^2+2x\Delta x+(\Delta x)^2.[/math]

 

Given that deduced three-term representation of [imath]f(x+\Delta x)[/imath] and the definition of a derivative, our first step would be to subtract f(x), which is of course [imath]x^2[/imath] and removes the first term. The second step is to divide by [imath]\Delta x[/imath] and, since [imath]\Delta x[/imath] appears in both of the remaining terms, the final result is [imath]2x+\Delta x[/imath]. Now the limit as [imath]\Delta x[/imath] goes to zero is quite clear, allowing us to immediately write down the derivative of the function [imath]f(x) = x^2[/imath]:

[math]\frac{d}{dx}x^2 = 2x[/math]
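
As a quick numeric illustration of that limit, assuming nothing beyond f(x) = x^2 and an arbitrarily chosen point x = 3, the difference quotient closes in on 2x = 6 as Delta x shrinks:

def f(x):
    return x * x

x = 3.0
for dx in (1.0, 0.1, 0.01, 0.001, 1e-6):
    slope = (f(x + dx) - f(x)) / dx   # the "rise over run" for a finite Delta x
    print(dx, slope)                  # 7.0, 6.1, 6.01, 6.001, ... approaching 6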

The part that I painted orange is puzzling because that math expression would be zero not only in the limit but with any [math]\Delta x[/math]. Wonder if you made a mistake there since both sides of the minus sign are identical. But then perhaps that is exactly what you wanted to say?
I was intentionally adding zero in the middle in order to develop a set of terms which I could reorder, yielding the sum of what amounts to the definition of the derivatives with respect to x of f(x) and g(x).
In any case, another thing that troubles me is that it seems invalid to actually pull the limit to 0, since in that case the concept of a derivative is kind of nonsensical (since you can't get an answer from your calculation).
My derivation of the derivative of x squared above should clear that up.
For that reason it seems invalid to just remove [math]\Delta x[/math] from the equations, as you seem to have done. Perhaps this concern would disappear if I understood the rest of the post better...
It is the actual act of dividing by zero which must be avoided. So long as delta x is not zero, the division still yields a meaningful result. If no division by delta x is required, you can simply remove the delta x term as, in the limit, it will be zero. That means that the various expressions I wrote down are all zero (in the limit) and thus any one of them can be inserted without affecting the final result in any way.
I don't know what those brackets mean. Unless their function is the same as the ordinary ( and )? In any case I don't know how to figure out whether those two above expressions are identical or not.
They are quite ordinary usage of brackets. The outside brackets merely set off the fact that the limit applies to the whole expression and not just the first term, and the other two sets are to indicate that the first is the derivative of f(x) and not of the product f(x)g(x) (being what is meant on the left side of the equation). In the first term, I have factored out [imath]g(x+\Delta x)[/imath] and in the second, I have factored out f(x). The limit as delta x goes to zero of what remains after the respective factor is removed is exactly the definition of the derivative with respect to x of f(x) and g(x). This is no more than a deduction of the rule for finding the derivative of a product of two functions. It is the rule I am using when I write out the partial derivatives of [imath]\vec{\psi}^\dagger \cdot \vec{\psi}[/imath].
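
For what it's worth, the regrouping step that puzzled Anssi can be checked symbolically without taking any limit at all; in the sympy sketch below f and g are arbitrary undefined functions, and the difference between the two forms simplifies to zero for any nonzero Delta x.

from sympy import symbols, Function, simplify

x, dx = symbols('x dx')
f, g = Function('f'), Function('g')

original  = (f(x + dx)*g(x + dx) - f(x)*g(x)) / dx
regrouped = (f(x + dx)*g(x + dx) - f(x)*g(x + dx)) / dx \
          + (f(x)*g(x + dx) - f(x)*g(x)) / dx

print(simplify(original - regrouped))   # 0: adding and subtracting f(x)g(x+dx) changes nothing
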
But as I'm sure you can see from my reply, I clearly need to get a more comprehensive idea about certain areas of math. Right now I need to make a lot of shaky assumptions to understand what is being said, and the chance for error is HUGE.
I want you to understand every step well enough to catch any error on my part. I know that is asking a lot but that is the only way that I will be willing to proceed. I have no interest at all in convincing anyone that I have made no errors. If I have, I want to know about it.

I have a feeling this iK stuff is something fairly trivial and something that Doctordick has already stated in some other form to me, but it's hard to figure things out when I have to assume WAY too many meanings on the symbols... (Funny, the topic kind of is about making logical assumptions about unknown symbols & patterns :)
Sorry about that; I had presumed that you knew enough calculus to deduce that result on your own. For the moment, it is possible that I have made an error and that [imath]K_x[/imath] could be a function of the set of arguments [imath](x_1,x_2,\cdots, x_n)[/imath]. I will leave it to Qfwfq to demonstrate that to me as I do not presently see it. If it is true, then it would mean there would have to be additional constraints my attack does not take into account (more mathematical relationships to be satisfied). If he is right, I would very much like to see what those consequences would be.

 

As I have said many times in the past, I take mathematics as a given which I do not need to defend: i.e., I leave that defense to others much brighter than I.

That's just how Dick indicates the K associated with differentiation with respect to x; there is also the one with respect to t. Now these relations (between these derivatives and the K values) appear akin to the de Broglie-Einstein relations, except there are things I'm unsure of, such as what I said this morning. I was hoping you would help in getting Dick to clarify these things, but if your math is that basic it'll prolly be a slow process. You seem to have the patience though. :)
All I meant to say was that the sum over partials need not be zero (though it could be zero) but rather could be a constant. Being “akin to the de Broglie-Einstein relations” is an issue to be settled after relating the equation to reality, a question far down the line at this moment. And yes, it will certainly be a slow process but any progress is better than none. I sincerely believe that most knowledgeable people simply bring too much baggage to the discussion. What I am saying is actually almost unbelievably simple. It is the consequences which become complex.

 

By the way, Buffy always keeps harping about the insights which arise from my work. There is one which should be quite evident even at this lowly level. If I am right, my fundamental equation requires that a paradigm exists (for every flaw-free explanation of anything) where the expectations (the probability of defined possibilities) are given by a first order linear differential equation. Looking at the complexity of modern physics, that alone would be a rather astounding result. Almost equivalent to the simplification Newton introduced to cosmology with his laws.

 

Have fun -- Dick


By the way, Buffy always keeps harping about the insights which arise from my work. There is one which should be quite evident even at this lowly level. If I am right, my fundamental equation requires that a paradigm exists (for every flaw-free explanation of anything) where the expectations (the probability of defined possibilities) are given by a first order linear differential equation. Looking at the complexity of modern physics, that alone would be a rather astounding result. Almost equivalent to the simplification Newton introduced to cosmology with his laws.

See? That wasn't that hard was it?

 

Even I with my puny little brain--that obviously is incapable of understanding the three years of advanced Calculus and Abstract Algebra that it tried to absorb in one of the top math departments in the world--can understand! That would indeed be earth-shattering, and I'll be following this one closely, trying not to be a Noying along the way...

 

Until the sun comes up over Santa Monica Boulevard, :phones:

Buffy


If I am right, my fundamental equation requires that a paradigm exists (for every flaw-free explanation of anything) where the expectations (the probability of defined possibilities) are given by a first order linear differential equation. Looking at the complexity of modern physics, that alone would be a rather astounding result. Almost equivalent to the simplification Newton introduced to cosmology with his laws.

 

So to turn it around, if there exists some phenomenon which can be shown to be incapable of being modeled with a linear differential equation, would that mean there is a flaw in your argument?

-Will


Sorry Dick, I'm afraid I can't find sufficient clarification in your replies to me.

 

Perhaps I misunderstand something here. The initial statement I made was that the solution (the explanation or the epistemological construct) could not depend upon the actual labeling procedure used to refer to the underlying ontological elements. I used numerical labeling only because the number of labels available was infinite, not because of numerical associations with those ontological elements.
I had gathered this, though the details aren't clear because I unfortunately haven't been able to follow the whole of your discussion with Anssi.

 

I do not understand what you mean by “an x-independent modulus”

...........

I also do not understand exactly what “x” you are referring to:

Well, OK Dick, I was simplifying the calculus to a single variable x in lieu of the many ones with index i. One derivative, instead of summation over the many partials. Now by your definitions, P is the modulus squared of psi (leaving out parameters and the like):

 

[math]P=\psi^*\psi=|\psi|^2[/math]

 

Do you disagree that K is just the dependence of the phase on the variable of differentiation? A constant K is certainly a possibility, but try giving it a non-zero x-derivative and checking that the derivative of P will nonetheless be zero. This is valid for any single [imath]x_i[/imath] and just as much for the summation; [imath]\psi[/imath] could simply have a factor of the type [imath]e^{if_i(x_i)}[/imath] for each value of the index (or equivalently one factor with the summation in the exponent).
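
A symbolic sketch of the situation being described, assuming for illustration a psi of the form A*exp(i*(f1(x1)+f2(x2))) with A a real constant and f1, f2 arbitrary real-valued functions (the names are placeholders, not anything defined in the thread):

from sympy import symbols, Function, exp, I, conjugate, diff, simplify

x1, x2, A = symbols('x1 x2 A', real=True)
f1 = Function('f1', real=True)
f2 = Function('f2', real=True)

psi = A * exp(I * (f1(x1) + f2(x2)))
P = conjugate(psi) * psi

# the "K" here is x-dependent: (sum of partials of psi)/psi comes out as I*(f1'(x1) + f2'(x2))
print(simplify((diff(psi, x1) + diff(psi, x2)) / psi))

# yet the sum of partials of P is still zero
print(simplify(diff(P, x1) + diff(P, x2)))   # 0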

 

I agree this might not be an actual objection to your whole argument, it could simply require a more extensive analysis. Surely though it widens possibilities from the linear differential equations you mention.

 

All I meant to say was that the sum over partials need not be zero (though it could be zero) but rather could be a constant. Being “akin to the de Broglie-Einstein relations” is an issue to be settled after relating the equation to reality, a question far down the line at this moment.
The way I meant "akin to" or "related to" is just that I can see the nexus, despite not getting the details straight. I've no trouble envisioning that quantum physics could be viewed as a specific case of your musings, just that I can't yet figure out to what extent it can be exactly determined as consequential, or what further constraints might remain to be added.

See? That wasn't that hard was it?
No, but it isn't really forthright either. The real truth is that the complexity of the circumstance lies in the solutions and not in the equation itself. I know you think that I am trying to foist something on everyone, and that my conclusions are ridiculous, but, unless someone points out an error in my deductions, I cannot help but think my conclusions are correct. Please believe me; if you see any error in my deductions, I want to hear about it. I certainly know I can be stupid at times and I may very well have missed some trivial but important point.
So to turn it around, if there exists some phenomenon which can be shown to be incapable of being modeled with a linear differential equation, would that mean there is a flaw in your argument?
You are making an error in your logic here. The fact that you can come up with an interpretation of reality which requires that some specific probability must satisfy some non-linear differential equation does not constitute proof that no interpretation exists which yields those same probabilities as a solution to a linear differential equation. Of course, as I said to Buffy, I may very well have missed some important point in my supposed deduction. If you see such an error, please bring it to my attention. I also point out that my assertion of "no assumptions" is essentially the avoidance of inductive logic (which is the very essence of assumption).

 

Finally, Qfwfq, I do not understand what you are trying to say. I start with the fact that

[math]\sum_i \frac{\partial}{\partial x_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t) = 0 [/math]

 

and then assert that there must exist an [imath]\vec{\psi}[/imath] with the same arguments such that [imath]P=\vec{\psi}^\dagger \cdot \vec{\psi}[/imath]. Certainly any [imath]\vec{\psi}_0[/imath] which satisfies the equation

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}_0(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t) = 0 [/math]

 

will satisfy the original constraint. Furthermore, over and above that, if I know [imath]\vec{\psi}_0[/imath], I immediately know that the function [imath]\vec{\psi}_1[/imath] defined as

[math]\vec{\psi}_1 = e^{iK \sum_i^n \frac{x_i}{n}}\vec{\psi}_0[/math]

 

satisfies the equation

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}_1(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t) = iK\vec{\psi}_1[/math]

 

and also satisfies the same original constraint on P. You assert that “[imath]\psi[/imath] could simply have a factor of the type [imath]e^{if_i(x_i)}[/imath] for each value of the index (or equivalently one factor with summation in the exponent)” and, by such a means, you conclude that K need not be a constant. I personally have been unable to find such a representation which satisfies the original constraint.
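
A two-variable sketch of this construction, taking for illustration psi0 = g(x1 - x2) with g an arbitrary real-valued function (so the two partials cancel and the K = 0 equation holds); multiplying by the phase factor then produces a psi1 satisfying the iK version while leaving P untouched. The names are placeholders only.

from sympy import symbols, Function, exp, I, conjugate, diff, simplify

x1, x2, K = symbols('x1 x2 K', real=True)
g = Function('g', real=True)

psi0 = g(x1 - x2)
print(simplify(diff(psi0, x1) + diff(psi0, x2)))              # 0: psi0 solves the K = 0 equation

psi1 = exp(I * K * (x1 + x2) / 2) * psi0
lhs = diff(psi1, x1) + diff(psi1, x2)
print(simplify(lhs - I * K * psi1))                           # 0: psi1 solves the iK version

print(simplify(conjugate(psi1)*psi1 - conjugate(psi0)*psi0))  # 0: both yield the same P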

 

You may have overlooked the fact that [imath]\vec{\psi}[/imath] is absolutely undefined except for the fact that [imath]\vec{\psi}^\dagger \cdot \vec{\psi}[/imath] must be the probability we are searching for. This means the actual form of [imath]\vec{\psi}[/imath] is an open issue and that the kinds of factors associated with each argument are open to be anything which serves the need to satisfy the constraints. The only reason I put forth the transformation I did was to point out how the shift symmetry yields a conserved quantity. If you can show me another term (other than a constant) which can be factored out of absolutely any definable [imath]\vec{\psi}[/imath] such that

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}_2 = iK(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)\vec{\psi}_2[/math]

 

is guaranteed to satisfy the differential constraint on P, I would be happy to look at it; however, it appears to me that the equation above directly fails to satisfy the shift symmetry unless the function K itself satisfies that constraint. I certainly don't know, as the issue is far too complex for me to conclude that no other possibilities exist, but I am nonetheless dubious.

 

If you are right, and there is another general solution to the constraint that I have overlooked, it would imply that there is another mathematical relationship available which would in essence allow additional acceptable flaw-free epistemological constructs to exist under my paradigm.

I agree this might not be an actual objection to your whole argument, it could simply require a more extensive analysis. Surely though it widens possibilities from the linear differential equations you mention.
I don't really think so, as what it really does, perhaps, is to allow additional solutions to that selfsame equation.
The way I meant "akin to" or "related to" is just that I can see the nexus, despite not getting the details straight. I've no trouble envisioning that quantum physics could be viewed as a specific case of your musings, just that I can't yet figure out to what extent it can be exactly determined as consequential, or what further constraints might remain to be added.
Essentially, post #42 on this thread completes the derivation of my fundamental equation which expresses the fundamental constraints on the possible functions [imath]\vec{\psi}[/imath] which are to yield the probabilities of those ontological elements referred to by the numerical indices. There are still three subtle aspects of the situation which need to be pointed out but, beyond that, the issue is simple: find the solutions to that equation.

 

Have fun -- Dick


Dr. Dick,

 

Could you please explain where the "m" in this equation is presented in previous posts and what it stands for. It just seems to appear out of thin air here at this step. Thank you.

 

[math]

\left\{\sum_i \vec{\alpha}_i \cdot \nabla_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\psi} = K\frac{\partial}{\partial t}\vec{\psi} = iKm\vec{\psi}.

[/math]


Removing the equation from the quote (Anssi's solution to the LaTeX problem seems to be the best):

[math]

\left\{\sum_i \vec{\alpha}_i \cdot \nabla_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\psi} = K\frac{\partial}{\partial t}\vec{\psi} = iKm\vec{\psi}.

[/math]

Could you please explain where the "m" in this equation is presented in previous posts and what it stands for. It just seems to appear out of thin air here at this step.
It arises directly from sloppy proofreading and I apologize sincerely. The best way to understand how it got there is to first look back at appendix 1 of my paper “A Universal Analytical Model of Explanation Itself” (the link to the appendix can be found at the end of the paragraph entitled “The first important consequences of this model:”). In that appendix, I show how the following constraints on [imath]\vec{\psi}[/imath] arose from shift symmetry. (Please note that, in that paper, I use [imath]\vec{\Psi}[/imath] in place of the [imath]\vec{\psi}[/imath] and k in place of the K which I have been using in this thread.)

[math]\sum_i^n \frac{\partial}{\partial x_i}\vec{\psi}= iK_x \vec{\psi}\;,\;\;\sum_i^n \frac{\partial}{\partial \tau_i}\vec{\psi}= iK_\tau \vec{\psi}\;\;and\;\;\frac{\partial}{\partial t}\vec{\psi}= im\vec{\psi}[/math]

 

These three relationships constitute the constraints on [imath]\vec{\psi}[/imath] required to satisfy the shift symmetry embedded in the problem.
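
Treated purely as an ordinary differential equation in t alone (holding the other arguments fixed), the third constraint pins the t-dependence of [imath]\vec{\psi}[/imath] down to a phase factor; a minimal sympy check, with psi and m as placeholder names:

from sympy import symbols, Function, Eq, I, dsolve

t, m = symbols('t m', real=True)
psi = Function('psi')

# d(psi)/dt = i*m*psi  ->  psi(t) = C1*exp(i*m*t), a pure phase in t
print(dsolve(Eq(psi(t).diff(t), I*m*psi(t)), psi(t)))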

 

To this, I added a further constraint:

[math]\sum_{i \neq j} \delta(x_i -x_j)\delta(\tau_i -\tau_j) \vec{\psi}= 0[/math]

 

Then I simply asserted that the equation

[math]

\left\{\sum_i \vec{\alpha}_i \cdot \nabla_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\psi} = K\frac{\partial}{\partial t}\vec{\psi} = iKm\vec{\psi}.

[/math]

 

would enforce exactly those constraints. I explicitly defined and constrained the non-commuting elements in that expression (see the physicsforums post #471 which I referenced earlier in this thread) and showed that, with those definitions, all solutions of this equation did indeed fulfill exactly the constraints stated above and that any [imath]\vec{\psi}[/imath] satisfying those constraints (in the frame defined by K=0) also solved the equation (this is also proved in appendix 3 of the “explanation paper”). The difficulty you refer to arises essentially because I only included the constraint on the partials with respect to x in the physicsforums post #471. I had thought I had included all three expressions there.

 

The actual fact is that K, k, and Km are just constants in all of the expressions, the issue being that they are constants (conserved quantities). The final equation in the expression you quote essentially enforces the shift symmetry constraint on t. So, as you say, K, k and m just appear out of thin air. The only important part is the fact that the equation provides exactly the constraints I have deduced are required and no more.

 

I hope that clears things up for you -- Dick


You are making an error in your logic here. The fact that you can come up with an interpretation of reality which requires that some specific probability must satisfy some non-linear differential equation does not constitute proof that no interpretation exists which yields those same probabilities as a solution to a linear differential equation.

 

Indeed it does not. However, if one can rigorously show that NO linear equation can provide those probabilities (i.e. show that there exist non-linear equations that cannot map to linear equations), does that imply your equation is wrong?

-Will


If you can show me another term (other than a constant) which can be factored out of absolutely any definable [imath]\vec{\psi}[/imath] such that

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}_2 = iK(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)\vec{\psi}_2[/math]

 

is guaranteed to satisfy the differential constraint on P, I would be happy to look at it; however, it appears to me that the equation above directly fails to satisfy the shift symmetry unless the function K itself satisfies that constraint. I certainly don't know, as the issue is far too complex for me to conclude that no other possibilities exist, but I am nonetheless dubious.

I wasn't expecting I would have to spell it out to you, Dick. Consider [imath]\psi[/imath] of the type:

 

[math]\psi(x)=Ae^{if(x)}[/math]

 

[math]\frac{d\psi(x)}{dx}=iAf^{\prime}(x)e^{if(x)}=if^{\prime}(x)\psi(x)[/math]

 

and define: [math]f^{\prime}(x)=K(x)[/math] so that, if [math]f(x)=Kx[/math] (derivative of K is zero and hence also the second derivative of f), then we get the case you contemplate. However, it is easy to see that:

 

[math]\frac{d}{dx}P(x)=\frac{d}{dx}(\psi^*(x)\psi(x))=0[/math]

 

does not require that form of f(x); the derivative of P(x) will still be zero. It is trivial to see because [imath]P(x) = A^2[/imath], which doesn't depend on x. You can even compute the derivative from that of [imath]\psi(x)[/imath] with an arbitrary [imath]K^{\prime}(x)[/imath], but it's obvious that "if there be justice", as one of my professors liked to say, the result must be zero.
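
The same computation can be run through sympy, with A a real constant and f an arbitrary real-valued function (both placeholders):

from sympy import symbols, Function, exp, I, conjugate, diff, simplify

x, A = symbols('x A', real=True)
f = Function('f', real=True)

psi = A * exp(I * f(x))
P = conjugate(psi) * psi

print(simplify(P))             # A**2, no x dependence left
print(simplify(diff(P, x)))    # 0, with no restriction on f'(x) = K(x)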

 

It beats me that you have doubts about it.


Indeed it does not. However, if one can rigorously show that NO linear equation can provide those probabilities (i.e. show that there exist non-linear equations that cannot map to linear equations), does that imply your equation is wrong?
The existence of non-linear equations which do not map to linear equations would not be sufficient as the concept totally neglects the issue of interpretation. You have apparently failed to notice that issue in my comment, “... does not constitute proof that no interpretation exists which yields those same probabilities ...”. I suspect you are presuming that your personal representation of reality is the only representation possible and it is that perception which is on very tenuous ground (quite equivalent to religious insistence on the existence of god). Secondly, I think you are vastly underestimating the complexity of the solutions to my equation.

 

This whole thing began almost fifty years ago when, as a graduate student in physics, it became evident to me that the whole scientific community based their “science” on presumptions which were only defendable by assertions like “it cannot be otherwise, we have universal agreement on it or how else would you explain it anyway”. To me that sounded almost exactly the same as the common religious defenses of their position. It seemed to me that the scientific community was essentially no more than the priests of a new religion. What I tried to understand was exactly what one could be confident of: i.e., what were the real fundamental constraints on logical explanations.

 

If you look at my presentation carefully, you will find only two constraints expressed. One is the direct consequence of shift symmetry (in the symbolism used to represent the fundamentals, simple numerical labels, one can add any specific number to the entire set without altering the solution based on that symbolic representation: i.e., language is an open issue) and the other is, so long as all the elements obey the rules of the explanation, there can be as many fictitious elements as the explanation requires. The second is the source of my rule, [imath]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j)\vec{\psi}=0[/imath], which I proved can enforce any set of valid ontological elements in the “what is”, is “what is” representation.

 

That's it; there is no more. Those constraints are indeed enforced by my equation. There could also exist other constraints which might limit the possibilities but that issue I will leave to others. Certainly one could have no expectations of anything fundamental coming out of such trivial constraints but I found it interesting that those constraints could be put in such a succinct form and tried for years to develop some solution to that equation. It seemed to me that a solution could not help but say something significant but it never occurred to me just how significant the solutions were until I managed my first solution some ten years later. In simple terms, unless you can solve that equation, you have no evidence that it is wrong.

Consider [imath]\psi[/imath] of the type:
[math]\psi(x)=Ae^{if(x)}[/math]

 

What you seem to miss is the fact that I am not looking for solutions, I am looking for ways of expressing fundamental constraints (quite a different issue). The function [imath]\vec{\psi}[/imath] is defined only by the fact that it must yield the proper probabilities; it must otherwise be left totally arbitrary. If you make any attempt to limit [imath]\vec{\psi}[/imath] in any other way, you are violating the central theme: you are failing to include all possibilities. That is to say, suppose [imath]\vec{\psi}[/imath] is not of the form [math]\psi(x)=Ae^{if(x)}[/math]?

It beats me that you have doubts about it.
I think that is because you don't understand what I am doing.

 

Have fun -- Dick


DoctorDick:

 

Thank you for your explanation of the "m" term in one of your constraints. I have another question. Would you agree with this statement:

 

An explanation is a set of constraints justifying an action of the solver

 

I know the above differs from your definition of explanation--but, given that your equation of explanation derives from fundamental constraints, and that a constraint is a relationship between two sets, it just seems the above definition captures the essence of what you claim an explanation to be.

 

Would you agree that explanation itself could be reduced to a dialectic between the two sets of constraints given below (which of course are your constraints), with the first being a type of necessary (~invariant) constraint and the second being a type of contingent (~variant) constraint, and that thus it is the dialectical synthesis of these two sets of constraints (the necessary and the contingent) that justifies the action of those that attempt explanation of anything? Thank you in advance for putting up with my constraint in understanding of what you are claiming. You see, to me, all reality, all that is true, is synthesis of dialectic, thus I look for the dialectic in your claim, and the best I can do is find possible synthesis in sets of constraints that may underlie the explanation of explanation itself.

 

Does EXPLANATION = dialectic between:

 

[math]

[\sum_i^n \frac{\partial}{\partial x_i}\vec{\psi}= iK_x \vec{\psi}\;,\;\;\sum_i^n \frac{\partial}{\partial \tau_i}\vec{\psi}= iK_\tau \vec{\psi}\;\;,\;\;\frac{\partial}{\partial t}\vec{\psi}= im\vec{\psi}]

[/math]

 

+

 

[math][\sum_{i \neq j} \delta(x_i -x_j)\delta(\tau_i -\tau_j) \vec{\psi}= 0][/math]

 

?


What you seem to miss is the fact that I am not looking for solutions, I am looking for ways of expressing fundamental constraints (quite a different issue). The function [imath]\vec{\psi}[/imath] is defined only by the fact that it must yield the proper probabilities; it must otherwise be left totally arbitrary. If you make any attempt to limit [imath]\vec{\psi}[/imath] in any other way, you are violating the central theme: you are failing to include all possibilities. That is to say, suppose [imath]\vec{\psi}[/imath] is not of the form [math]\psi(x)=Ae^{if(x)}[/math]?
What you seem to miss is the fact that I am not singling out solutions; I've successfully shown that the set of them is more ample than you thought. I showed that those are a subset of the form I started with, which you quoted, and which does meet the requirement of the shift symmetry. You are looking for ways of expressing fundamental constraints but claim one which is too restrictive (unless there is some requirement further to shift symmetry). I'll be waiting to hear how the less restrictive constraint changes the conclusions that can be drawn from your requirements.

 

I think that is because you don't understand what I am doing.
I'm beginning to think you are muddled about it yourself. If you can sort out the point I raised and address it seriously, I'll be willing to continue my participation in this discussion.

 

:shrug:


An explanation is a set of constraints justifying an action of the solver
No, I would not agree. In my view, an explanation is a definition, the definition of a procedure, “how to get from here to there”. The concept of a constraint is not sufficient to embody an explanation. Now that method may include constraints on the allowed steps but such constraints are essentially part of the definition.
I know the above differs from your definition of explanation--but, given that your equation of explanation derives from fundamental constraints, and that a constraint is a relationship between two sets, it just seems the above definition captures the essence of what you claim an explanation to be.
I don't think so. The essence of my definition is that an explanation can be seen as a mathematical function which converts data (numerical references to information) to answers (numerical references to information). My constraints arise directly from the fact that information not in the problem (what is to be explained) cannot exist in the solution (the explanation). In many respects, it is a statement of the freedom of interpretation which must propagate through the process of developing the explanation (conservation of ignorance so to speak).
You see, to me, all reality, all that is true, is synthesis of dialectic, thus I look for the dialectic in your claim, and the best I can do is find possible synthesis in sets of constraints that may underlie the explanation of explanation itself.
I looked up “dialectic” and got the impression that what was intended was the back and forth exchange underlying the development of an explanation and, for the most part, concerned the refinement of inductive conclusions. You should note that what I have presented is a deduction and no inductive conclusions (other than the definition of an explanation itself) play any role.

 

If I ever get to the process of showing the solutions to my equation which I have discovered, I will define additional things in terms of the data: i.e., concepts or relationships which invariably appear in any data set and deserve being defined purely for the convenience of discussion (your dialectic). My position is that your dialectic would be much more logically secure if posed in those concepts (being well defined concepts). Now that is only an opinion but I think I can bring forth a lot of reasons to try it (that process would be a discussion or a dialectic, not this endeavor; this is just pure logical analysis).

I'm beginning to think you are muddled about it yourself. If you can sort out the point I raised and address it seriously, I'll be willing to continue my participation in this discussion.
I am sorry to disappoint you but I really do not understand your presentation of your complaints. You invariably put your examples in terms of functions of one variable. Without multiple arguments, shift symmetry is utterly meaningless. Everything you bring up seems to be with regard to functions of one single argument and that simply is not what we are dealing with here. Perhaps there is another way to look at what I am saying which could be clearer to you. You said,
However, it is easy to see that:

 

[math]\frac{d}{dx}P(x)=\frac{d}{dx}(\psi^*(x)\psi(x))=0[/math]

 

does not require that form of f(x); the derivative of P(x) will still be zero. It is trivial to see because [imath]P(x) = A^2[/imath], which doesn't depend on x.

I do not comprehend how you come to the conclusion that [imath]P(x) = A^2[/imath] doesn't depend on x. (I am presuming that the “A” you are using here is supposed to be something like the amplitude of your [imath]\psi[/imath].) The argument of P is at best a finite number of x's and at worst, an infinite number. It is the derivative with respect to the shift parameter “a” which must vanish (not the derivative with respect to any given x) and that fact can be used to show that

[math]\sum_i \frac{\partial}{\partial x_i}P(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = 0.[/math]

 

That certainly does not require that [imath]P(x)[/imath] cannot depend on the collection of arguments [imath]x_i[/imath]. Rather, it requires the sum to be zero whenever the arguments constitute a set consistent with the explanation which is to yield a non-zero P and furthermore that the same P will be obtained if all [imath]x_i[/imath] are entirely replaced with [imath]x_i + a[/imath] (the shift symmetry implicit in the problem). Each term in that sum can have wildly different dependence on the collection of arguments; all that is required is that the sum over all of them vanishes.

 

I showed that, if one defined [imath]\vec{\psi}[/imath] to be a function such that [imath]P =\vec{\psi}^\dagger \cdot \vec{\psi}[/imath], then the above constraint can trivially be replaced with,

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = 0.[/math]

 

I also pointed out that

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = -iK\vec{\psi}[/math]

 

would also solve the identical equation. In general, additional equations imply additional constraints; however, this result is actually not another constraint as, if I have a [imath]\vec{\psi}_0[/imath] which solves the first equation, I can generate a [imath]\vec{\psi}_1[/imath] which will solve the second. It is that fact which creates the conservation property required by shift symmetry.

 

But, in the interest of clearing up this issue concerning factors of [imath]e^{if(x)}[/imath] with regard to specific [imath]x_i[/imath] arguments within [imath]\vec{\psi}[/imath], notice that I have said that the function is constrained in no way other than that it yield the probabilities identical to the explanation it represents. Just as the individual terms in that sum over partials taken on P can all have wildly different dependence on the collection of arguments, so can the individual terms of those partials taken of [imath]\vec{\psi}[/imath]. This variability includes all of the possible variations (phase effects) brought up by you.

You can even compute the derivative from that of [imath]\psi(x)[/imath] with an arbitrary [imath]K^{\prime}(x)[/imath], but it's obvious that "if there be justice", as one of my professors liked to say, the result must be zero.
Again, your example is a function of one single argument and that simply is not what we are dealing with here. Go ahead, take higher derivatives of the correct [imath]\vec{\psi}[/imath] if you wish, what you will get (presuming you have the correct [imath]\vec{\psi}[/imath]: i.e., the one which yields the probabilities engendered by your explanation) may be interesting; however, if you write down an equation which you are going to require them to satisfy, you would then be adding additional constraints, not removing them.

 

Or another way to look at it. This equation contains only terms which are partials with respect to a specific [imath]x_i[/imath].

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = 0.[/math]

 

Start by creating [imath]\vec{\psi}[/imath] with any sort of dependence on those arguments you wish. Each term in that sum acquires its value via the derivative of [imath]\vec{\psi}[/imath] with respect to the ith variable, treating all the other variables as constants. The only constraint which exists will rear its ugly head when you go to design the dependence on [imath]x_n[/imath], the last term: i.e., the sum of all the terms must vanish. The partial with respect to [imath]x_n[/imath] of [imath]\vec{\psi}[/imath] (the term of interest) in that equation is absolutely free in the sense that the argument [imath]x_n[/imath] is essentially handled as a constant in all other partials. The problem could clearly be quite difficult but I suspect one could design the [imath]x_n[/imath] dependence of [imath]\vec{\psi}[/imath] such that the desired derivative would be the negative of the sum of the terms already designed. If that can be done then the shift symmetry would be implicit. As I said to Erasmus00, the complexity allowed here is almost unimaginable.

 

Have fun -- Dick

