Science Forums

What can we know of reality?



Sorry it has taken me a while to reply. Haven't had much time lately, plus my PC already crashed once while writing a reply :P

 

i is the imaginary unit

 

Thank you, that would not have been my first guess :)

I'm not exactly fluent with the properties of complex numbers, but with the help of wikipedia I can understand something about why such a concept exists.

 

And more to the point, now at least the [math]iK_x\vec{\psi}[/math] part of the "symmetry equations" is not completely alien to me, and I can understand something about how the aforementioned definitions could lead to the constraint:

 

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi} = iK_x \vec{\psi}[/math]

 

But I still need to wrestle with the details in post #51.

 

btw, this stuff was explained in some more detail in PF post #471; I just hadn't yet wrestled through that post.

 

That's just how Dick indicates the K associated with differentiation with respect to x; there is also one associated with t. Now these relations (between these derivatives and the K values) appear akin to the de Broglie-Einstein relations, except there are things I'm unsure of, such as what I said this morning. I was hoping you would help in getting Dick to clarify these things, but if your math is that basic it'll probably be a slow process. You seem to have the patience though. :)

 

I have the patience and there's no doubt in my mind that I can learn the necessary concepts here. It's just that time is a bit of a problem sometimes and slows things down. The faster a route I can find to figuring out all the necessary concepts, the better.

 

As for clarifying something between you and Dick, I don't think I can be of much help too soon. I am not even sure what you are asking about, but I just thought it might be helpful if I commented on the context of the probability function, since you were not participating in the earlier PF thread.

 

Here's a somewhat blunt, but logically equivalent thought experiment.

 

Suppose you are receiving a stream of data whose meaning is completely unknown to you. You are given a task to make predictions of that data.

 

Suppose you spot/define certain features of that data and label them with arbitrary names. In this context we choose to label them with arbitrary numbers just so we can exploit established mathematical concepts.

 

Suppose you come up with some probability function that correctly predicts probabilities for the future data.

 

That function has its roots in the so-far accumulated data, but obviously those arbitrarily chosen labels (numbers that referred to certain features of the data) could have been anything, and still the probability function would behave the same way. I.e. you could arbitrarily swap some labels (over the entire data), and nothing would change (= start referring to "apples" as "oranges" and vice versa).

 

And that's what the sum over partial derivatives of the probability function refers to (each "X" is an arbitrary label for some feature of the data, and adding a number to each X is the same as changing the name of each feature over the entire data).

 

As Doctordick often says, one can conceive of the probability function as a function that "fits the points of the data". I.e. if you just plotted all the data onto some Euclidean space (broken into discrete points), you could look at the probability function as a function that could plot those points. In that case, if you just shifted all the points in that space in one direction or another, the probability function would not get invalidated.

 

Anyway, like I said, it could be you had picked all that stuff up already, since I cannot really understand what you were asking :)

 

-Anssi


Well Anssi, I knew your mathematics background was limited but I didn't know just how limited. I had presumed you had at least an introduction to calculus. Perhaps some of the others here can be helpful. I have googled “calculus” and found nothing yielding direct understanding of what it is all about. They all seem to be organized along the lines of standard courses in calculus. I don't feel that the “rote” details are as significant as a clear understanding of the basics. If one understands the basics, a little logic will allow deduction of the more subtle aspects.

 

Yes I totally agree, as that has been my attitude when it comes to educating myself. I.e. I try not to bother with the trivial information but rather try to figure out the mechanisms that allow me to come up with the trivia when needed.

 

For that reason it takes me a lot of time to wrestle with the concepts that you are using and I am unfamiliar with. I still want to spend some time trying to understand better the constraint [math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi} = iK_x \vec{\psi}[/math]. But already I can say I have a much better idea about where it's coming from than before. I think part of the problem is that a lot of the information that I need is still contained in some of the PF posts that are still on my "to do" list. (And I guess there's no way for you to remember what I have already responded to and what I have not.)

 

Also I understand it can be problematic for you to figure out what I know and what I don't know... That's why I tend to write down how I've interpreted things.

 

As you said the numerator here goes to zero and that led you to the idea that the result would be zero; however, the denominator is also going to zero so that the correct result is a function of the rate at which they both go to zero.

 

Ah, okay, that seems to make sense. Yeah, the idea of functions as graphs was already familiar to me, and the idea of the limit I had figured out from the Wikipedia page for "derivative" (and the way you explained it confirmed that I had gotten it right). So of course I wondered what it could mean to have "rise" when there's no "run" (i.e. if you zoom in "infinitely close" to the graph, can you say there's a slope there). There's no doubt in my mind that that's well defined stuff and just standard math convention, but those are the questions I would ask my math teacher :)

 

Anyway, I'll still wrestle with the specific example you gave to make sure I understand it properly, but right now I'm running out of time again... I'll try to get back to it soon.

 

-Anssi


As an example, let's look at the case f(x) = x^2. In that case,

[math]f(x+\Delta x) = (x+\Delta x)^2 =(x+\Delta x)(x+\Delta x)=x(x+\Delta x)+\Delta x(x+\Delta x)=x^2+2x\Delta x+(\Delta x)^2.[/math]

 

--end of quote

 

Okay, I walked through that and I think I understand it now. After factoring out terms we can get to the exact results. Now I can get back to post #51.
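(Just to check my own understanding, and this is only my own working so hopefully I've got it right: subtracting f(x) = x^2 from the above and dividing by [imath]\Delta x[/imath] gives

[math]\frac{f(x+\Delta x)-f(x)}{\Delta x}=\frac{2x\Delta x+(\Delta x)^2}{\Delta x}=2x+\Delta x,[/math]

which goes to 2x as [imath]\Delta x \rightarrow 0[/imath], so the derivative of [imath]x^2[/imath] is exactly 2x.)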

 

I understand a lot more about it already, but there are still some details that I couldn't figure out. (btw, if this is standard stuff, there must be standard explanations in some online textbook from which I could just educate myself?)

 

---QUOTE post #51---

[math]

\frac{d}{dx}f(x)g(x)=\lim_{\Delta x \rightarrow 0}\left\{\left\{\frac{d}{dx}f(x)\right\}g(x+\Delta x)+f(x)\left\{\frac{d}{dx}g(x)\right\}\right\}.

[/math]

---end of quote

 

I couldn't figure out where that middle part [math]g(x+\Delta x)+f(x)[/math] comes from. I suppose it has got something to do with the terms you added two steps earlier, and I suppose the step in between (where you just reordered terms) was supposed to clarify something, but I'm at a loss here anyway.

 

The rest of the post seems pretty clear to me; it's just that, obviously, I don't know the meaning of that middle part in the subsequent steps either.

 

-Anssi


I'm not exactly fluent with the properties of complex numbers, but with the help of wikipedia I can understand something about why such a concept exists.
In many respects, the concept of a complex number is exactly the same as the concept of enumerating the points in a two dimensional plane. If you lay out points in a two dimensional plane (call the dimensions x and y) then x=a and y=b specify a point in that plane. In the same sense, any such point can be seen as representing the complex number a + bi or, in a sense, the vector pointing from the origin to that point. If you look at it from that perspective, addition is identical to vector addition. Multiplication of two complex numbers (in the vector perspective) is analogous to adding the angle of each vector from the real axis together and multiplying the magnitude of the two vectors.

 

If we look at that vector in two dimensions represented in polar coordinates (identifying x with the real axis) r and [imath]\theta[/imath], a would be r cos [imath]\theta[/imath] and b would be r sin [imath]\theta[/imath]. The product of two such vectors would be a difficult trigonometry problem. The angle of the resultant vector would be simply [imath]\theta_1 + \theta_2[/imath], but to prove that the magnitude of the vector corresponding to [imath](a_1+b_1i)(a_2+b_2i)[/imath] is indeed the same as the product [imath]r_1 r_2[/imath] requires a little knowledge of trigonometry.
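Written out with the standard angle-addition identities (just to make that bit of trigonometry explicit), the product is

[math](a_1+b_1i)(a_2+b_2i)=r_1r_2\left[(\cos\theta_1\cos\theta_2-\sin\theta_1\sin\theta_2)+i(\sin\theta_1\cos\theta_2+\cos\theta_1\sin\theta_2)\right]=r_1r_2\left[\cos(\theta_1+\theta_2)+i\sin(\theta_1+\theta_2)\right],[/math]

a vector of magnitude [imath]r_1r_2[/imath] at angle [imath]\theta_1+\theta_2[/imath].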

 

The only reason I brought that up is to point out that complex numbers add nothing new to functional relationships expressible by the form [imath]\vec{\psi}(x_1,x_2,\cdots,x_n)[/imath]; they merely provide us with the ability to express a specific relationship between selected pairs of output in a simple manner: an advantage in mathematical manipulation of interest. As I have said many times, I define mathematics as the invention and study of abstract internally consistent systems, and I am quite sure that any system which could be shown to be internally consistent would be readily accepted as a mathematical system (or the mathematicians would point out that it was equivalent to one they already had).

 

One of my major complaints about the reactions of others trying to understand what I say is that they bring too much baggage to the problem. I suspect that is exactly what is going on with Qfwfq and it is what is beginning to occur in you. You express an interest in what this expression means.

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi} = iK_x \vec{\psi}[/math]

 

It merely means that the existence of shift symmetry in the arguments of P is totally equivalent to [imath]\vec{\psi}(x_1,x_2,\cdots,x_n)[/imath] satisfying that equation. The first is equivalent to the second and the second is equivalent to the first. To “understand” such a thing is to “understand” the steps of the deduction, one to the other. The problem is that mathematics brings logical deduction to a level beyond common intuitive grasp and often fails to throw that little switch in your head which says, “oh, I understand that”.

I have the patience and there's no doubt in my mind that I can learn the necessary concepts here. It's just that time is a bit of a problem sometimes and slows things down. The faster a route I can find to figuring out all the necessary concepts, the better.
This is what I mean by the “baggage” issue. It is easy to get bogged down in the details and fail to comprehend the essence of the thing. For example, Qfwfq, Erasmus00 and I suspect Buffy are all bothered by the idea that they don't see the above linear differential relationship as allowing sufficient complexity to fulfill the needs of the physical problems they can conceive of and thus presume it cannot possibly be a valid constraint (at least not on their mental picture of reality). Fundamentally, that expression is almost exactly the statement of conservation of momentum in classical quantum mechanics (just multiply by i h bar and the left hand side is the sum of what is usually referred to as the total momentum operator in a photon gas). If one had the correct wave function for the most complex collection of massless entities conceivable (and that wave function could include complexities beyond belief), then (from the perspective of classical quantum mechanics) application of that operator would yield, term by term, the momentum of each individual entity going to make up that collection. That the sum equals a constant is no more than a statement of conservation of momentum. The constraint is almost trivial and yet they all want to insist that it cannot possibly be valid. The only explanation I can conceive of is they just are thinking about too many things and not looking at the equation itself: too much baggage.
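To spell out that momentum statement in the standard notation (taking the usual convention in which the momentum operator is [imath]\hat{p}_i=-i\hbar\frac{\partial}{\partial x_i}[/imath]), multiplying the constraint through by [imath]-i\hbar[/imath] gives

[math]\sum_i \hat{p}_i\vec{\psi}=\sum_i\left(-i\hbar\frac{\partial}{\partial x_i}\right)\vec{\psi}=\hbar K_x\vec{\psi},[/math]

i.e., the total momentum of the collection is the fixed constant [imath]\hbar K_x[/imath], which is no more and no less than conservation of momentum.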
In this context we choose to label them with arbitrary numbers just so we can exploit established mathematical concepts.
I would rather say, “so we can communicate issues expressible in well defined terms together with logical conclusions beyond the capability of common intuitive logic”.
... you could arbitrarily swap some labels (over the entire data), and nothing would change. (=start referring to "apples" as "oranges" and vice versa).
This is a good example of what I am talking about. The “swap” you refer to could certainly be done; however, that possibility leads to few logical conclusions concerning the overall nature of the problem. Now, if you could set up an internally consistent system of changing labels and, from that system, deduce some logical consequences (we would be talking mathematics) then that kind of “swapping” could be a valuable thing to look at. Adding a constant to every numerical label is just such a “well defined” change of labels and it has some important logical consequences.
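Concretely, the consequence used below is this: if shift symmetry means

[math]P(x_1+a,x_2+a,\cdots,x_n+a,t)=P(x_1,x_2,\cdots,x_n,t)[/math]

for every value of the shift parameter a, then differentiating both sides with respect to a and setting a=0 gives, by the chain rule,

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}P(x_1,x_2,\cdots,x_n,t)=0.[/math]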
I still want to spend some time trying to understand better the constraint

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi} = iK_x \vec{\psi}[/math].

 

As I said above it is no more than a statement that shift symmetry is a valid symmetry of the problem.

 

---quote Anssi

I couldn't figure out where that middle part [math]g(x+\Delta x)+f(x)[/math] comes from. I suppose it has got something to do with the terms you added two steps earlier, and I suppose the step in between (where you just reordered terms) was supposed to clarify something, but I'm at a loss here anyway.

---end quote

 

Look at the equation one line above after I reorder the terms:

[math] \frac{d}{dx}f(x)g(x)=\lim_{\Delta x \rightarrow 0}\left\{{\frac{f(x+\Delta x)g(x+\Delta x)- f(x)g(x+\Delta x)}{\Delta x}+\frac{f(x)g(x+\Delta x) - f(x)g(x)}{\Delta x}}\right\}.[/math]

 

Notice that inside the curly brackets there are two fractions. Examine the numerators of those fractions. Notice that [imath]g(x+\Delta x)[/imath] can be factored from the left hand numerator and that, after that term is factored out, what remains is exactly the definition of the derivative of f(x) before the limit is taken. Notice also that f(x) can be factored from the numerator of the right hand fraction and, after that term is factored out, what remains is exactly the definition of the derivative of g(x) before the limit is taken. What I have done is replaced the left fraction

[math]\lim_{\Delta x \rightarrow 0}\frac{f(x+\Delta x)- f(x)}{\Delta x}g(x+\Delta x)[/math] with [math]\lim_{\Delta x \rightarrow 0}\left\{\frac{d}{dx}f(x)\right\}g(x+\Delta x)[/math]

 

and the right fraction

[math]\lim_{\Delta x \rightarrow 0}\frac{g(x+\Delta x)- g(x)}{\Delta x}f(x)[/math] with [math]\lim_{\Delta x \rightarrow 0}f(x)\left\{\frac{d}{dx}g(x)\right\}[/math]

 

yielding exactly the result

[math] \frac{d}{dx}f(x)g(x)=\lim_{\Delta x \rightarrow 0}\left\{\left\{\frac{d}{dx}f(x)\right\}g(x+\Delta x)+f(x)\left\{\frac{d}{dx}g(x)\right\}\right\}.[/math]

 

Where only one [imath]\Delta x[/imath] remains and taking the limit is trivial.
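Carrying out that limit (since [imath]g(x+\Delta x)\rightarrow g(x)[/imath] as [imath]\Delta x\rightarrow 0[/imath]) leaves the familiar product rule,

[math]\frac{d}{dx}f(x)g(x)=\left\{\frac{d}{dx}f(x)\right\}g(x)+f(x)\left\{\frac{d}{dx}g(x)\right\}.[/math]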

 

With regard to post #51, I would presume at this point that you understand the four constraints I deduced and that the only difficulty remaining is to understand why that implies my equation is valid. In describing that process, I said, “All one need do is multiply the fundamental equation (I had the word solution here in error, which I have now fixed) through by the term [imath]\alpha_{qx}[/imath], commute it through the various alpha and beta elements in the equation and then sum the result over q.” I am not sure of your familiarity with commutation so I thought I might point out the following.

[imath][\alpha_{ix} , \alpha_{jx}] \equiv \alpha_{ix} \alpha_{jx} + \alpha_{jx}\alpha_{ix} = \delta_{ij}[/imath]

 

can be rearranged to show that [imath]\alpha_{ix}\alpha_{jx} = \delta_{ij} -\alpha_{jx}\alpha_{ix}[/imath] which implies

[imath]\alpha_{qx}\alpha_{ix} = \delta_{iq} -\alpha_{ix}\alpha_{qx}[/imath] and [imath]\alpha_{qx}\beta_{ij} = -\beta_{ij}\alpha_{qx}[/imath]

 

(look at the defined commutation of alpha with beta).
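For example, applied to a single term of the sum (taking the alphas to commute with the partial derivatives, since they carry no dependence on those arguments), the first of the relations above gives

[math]\alpha_{qx}\left\{\alpha_{ix}\frac{\partial}{\partial x_i}\vec{\psi}\right\}=\left(\delta_{iq}-\alpha_{ix}\alpha_{qx}\right)\frac{\partial}{\partial x_i}\vec{\psi}=\delta_{iq}\frac{\partial}{\partial x_i}\vec{\psi}-\alpha_{ix}\frac{\partial}{\partial x_i}\left\{\alpha_{qx}\vec{\psi}\right\}.[/math]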

 

Thus all that happens as [imath]\alpha_{qx}[/imath] is commuted through an alpha or a beta is a sign change, except when q=i. In that case, the [imath]\delta_{iq}[/imath] picks up one additional term with no alpha or beta. The term being picked up is exactly

[math] \frac{\partial}{\partial x_q}\vec{\psi}[/math]

 

When the result is summed over q, all terms with [imath]\sum_q \alpha_{qx}\vec{\psi}[/imath] vanish and we are only left with

[math] \sum_q\frac{\partial}{\partial x_q}\vec{\psi}=0[/math]

 

The other sums work out exactly the same way, reproducing the original constraints except for that constant which should be cleared up by my last post to Qfwfq. In essence, this equation poses a very trivial constraint on [imath]\vec{\psi}[/imath]: first that shift symmetry must exist and second that there always exists a set of “invalid” ontological elements such that the rule

[math]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j)[/math]

 

will yield the “valid” ontological elements to be explained. Any competent scientist should properly say, “so what, that is no constraint worth worrying about!” And that is exactly why I find the solutions so astounding.

 

Have fun -- Dick


I've been racking my brains to think how to get it across to you; now I find you refuse to examine my baggage because you think I'm bringing too much of it, and you write it off as being due only to my disbelief. :banghead:

 

You invariably put your examples in terms of functions of one variable. Without multiple arguments, shift symmetry is utterly meaningless. Everything you bring up seems to be with regard to functions of one single argument and that simply is not what we are dealing with here.
But reasoning with more than one variable simply does not narrow things down to the constant K as you claim, even less does it keep a single value of K for all variables. I used the single variable case for simplicity, and it would have to work just as well there if it did for a summation over many variables. The fact that P can depend on each single x, although it is constant along the curve parametrized by a, doesn't narrow down the possibilities, it only widens them.

 

Let's start by saying something simple that Anssi should have no difficulty following: you are talking about a sum of variations, and we don't need calculus to get that far. Let's just call each one of them [imath]\delta_i[/imath] and consider that:

 

[math]\left(\delta_i=0\;\forall i\right)\Rightarrow\sum_i\delta_i=0[/math]

 

but the converse implication doesn't hold (for instance [imath]\delta_1 = 1[/imath] and [imath]\delta_2 = -1[/imath] sum to zero without each being zero). Your objection to me is as if it were the other way around. The sum being zero is definitely a less tight constraint than the above implicant; it is satisfied by a much broader class of possible sets [imath]\left\{\delta_i\right\}[/imath] while the implicant is only one single n-tuple. Any doubts? Now what you actually start with is:

 

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}P(x_1,x_2,\cdots,x_n,t) =0[/math]

 

with [imath]P=\vec{\psi}^\dagger \cdot \vec{\psi}[/imath] and claim it to imply:

 

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t) = iK\vec{\psi}(x_1,x_2,\cdots,x_n,t)[/math]

 

but even the constraint on each partial [imath]\left(\forall i\right)[/imath] would effectively imply:

 

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t) = iK_i(x_1,x_2,\cdots,x_n,t)\vec{\psi}(x_1,x_2,\cdots,x_n,t)[/math]

 

with each [imath]K_i[/imath] being real-valued, while the constraint that you pose, with summation, is even less restrictive. In terms of solutions expressed in exponential form, the variation of the exponent needs to be pure imaginary (and hence a phase) only for the variations along the curve parametrized by a. So not only needn't K be constant, or even be the same value for all values of the index, but each [imath]K_i[/imath] needn't be real valued as long as [imath]\sum_i K_i[/imath] is; it's enough for the imaginary parts to cancel out. This should be the broadest class of possibilities, if I haven't screwed it up; it is more complicated than in the single variable case but doesn't lead to the tighter constraint as you claim it does.
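(As a concrete illustration of the cancellation point: with just two variables, [imath]K_1=k+i\mu[/imath] and [imath]K_2=k'-i\mu[/imath] are not individually real, yet their sum [imath]k+k'[/imath] is.)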


In many respects, the concept of a complex number is exactly the same as the concept of enumerating the points in a two dimensional plane. If you lay out points in a two dimensional plane (call the dimensions x and y) then x=a and y=b specify a point in that plane. In the same sense, any such point can be seen as representing the complex number a + bi or, in a sense, the vector pointing from the origin to that point. If you look at it from that perspective, addition is identical to vector addition. Multiplication of two complex numbers (in the vector perspective) is analogous to adding the angle of each vector from the real axis together and multiplying the magnitude of the two vectors.

 

I've been looking at this geometrical perspective of complex numbers (and complex numbers in general) this evening and it's starting to seem fairly simple... I don't have time to reply to the rest of the post yet, but I thought I'd drop a few lines just in case you are starting to get too bummed out by the extra "specific ontology baggage" that you are seeing once again :)

 

No need to worry; when I said I want to understand [math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi} = iK_x \vec{\psi}[/math] better, I was in fact referring exactly to understanding its logical deduction from the definitions you have given. Some (mathematical) things that are blindingly trivial to you will require me to struggle a little bit before they become intuitive to me too. (In fact I've even had to try and recall some very basic algebra, since I haven't done any in the past 10 years.)

 

So, exactly like you said, I'm trying to get that "little switch in my head" to throw :) I know these things become trivial once you spend some time with them, and the oversights I'm making now will seem embarrassing at best. But no worries, I've been in a similar position before :)

 

Also the "learning of necessary concepts" was referring to mathematical concepts.

 

I figure some of the math looks similar to some common notation of something in theoretical physics (since I've seen comments like that from other people), but I certainly fail to recognize such things, and even if I did, I'm pretty sure I'd be capable of understanding that the math is one thing and its interpretation is a whole other issue.

 

I'll get back to the rest of the post as soon as I can (it doesn't seem too tricky).

 

-Anssi


I figure some of the math looks similar to some common notation of something in theoretical physics (since I've seen comments like that from other people), but I certainly fail to recognize such things, and even if I did, I'm pretty sure I'd be capable of understanding that the math is one thing and its interpretation is a whole other issue.
Certainly, Dick deliberately uses notation akin to some things fundamental in theoretical physics, because he then uses his fundamental equation as a basis for these things. This is a reason why I was interested in understanding the derivation of his equation; it would be very interesting if his argument were correct.

 

I realized I was sloppy in my last post, yeah I felt I was screwing something up but time shortage is always my problem. :doh: I left a summation in the last equation, where it should be [imath]\forall i[/imath].

 

Erratum:

....but even the constraint on each partial [imath]\left(\forall i\right)[/imath] would effectively imply:

 

[math]\frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t) = iK_i(x_1,x_2,\cdots,x_n,t)\vec{\psi}(x_1,x_2,\cdots,x_n,t)[/math]

 

with each [imath]K_i[/imath] being real-valued while the constraint that you pose, with summation, is even less restrictive.

 

At that point, with the summation, we get that Dick's unique K is the sum [imath]K=\sum_{i=0}^{n}K_i[/imath], constrained to be real-valued:

 

[math]\sum_{i=0}^{n}\frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t) = iK(x_1,x_2,\cdots,x_n,t)\vec{\psi}(x_1,x_2,\cdots,x_n,t)[/math]

 

and it remains that it can depend on all the variables and whatever. The constant value of K is an extra assumption, not a consequence of the initial ones.


I couldn't figure out where that middle part [math]g(x+\Delta x)+f(x)[/math] comes from. I suppose it has got something to do with the terms you added two steps earlier, and I suppose the step in between (where you just reordered terms) was supposed to clarify something, but I'm at a loss here anyway.

Look at the equation one line above after I reorder the terms:

[math] \frac{d}{dx}f(x)g(x)=\lim_{\Delta x \rightarrow 0}\left\{{\frac{f(x+\Delta x)g(x+\Delta x)- f(x)g(x+\Delta x)}{\Delta x}+\frac{f(x)g(x+\Delta x) - f(x)g(x)}{\Delta x}}\right\}.[/math]

 

 

Notice that inside the curly brackets there are two fractions. Examine the numerators of those fractions. Notice that [imath]g(x+\Delta x)[/imath] can be factored from the left hand numerator...

---end of quote

 

I had completely forgotten that rule for factoring out terms that you were using here. Doh! So one little switch thrown... (I literally feel like I'm walking in a minefield trying to make sure I understand the deductions correctly... but at least I feel like I'm making progress every time I have time to look at this :)

 

With regard to post #51, I would presume at this point that you understand the four constraints I deduced and that the only difficulty remaining is to understand why that implies my equation is valid.

 

I thought so too, but reading some posts forward, I noticed it was mentioned that K is a function! I was being told "it is a constant", so I guess what was meant was that its value needs to be a constant according to the definitions and their deductions? I suspected it is some function with a standard definition, and indeed found: K-function - Wikipedia, the free encyclopedia

So that's what "K" refers to here too? (If so I will look at its definition more closely)

 

About the conundrum between you and Qfwfq, I haven't had time to try and figure out what's going on too closely, but I don't think it's an extra baggage issue as he seems to be just asking about pure logic. It could well be that there's just some small but critical definition (or unobvious deduction) from our past discussions that's been missed.

 

In terms of our earlier discussions, he seems to be asking simply, why couldn't the value of K be dependent on the shift parameter (referred to as "a" in earlier posts), as it seems that it still would not violate the definition

[imath]P=\vec{\psi}^\dagger \cdot \vec{\psi}[/imath]

w/ earlier deduced constraint

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}P(x_1,x_2,\cdots,x_n,t) =0[/math]

 

I am very much unsure of the answer myself as I cannot readily see the necessary deductions... but then I did not have time to read all the posts with thought yet so I don't know if you already gave an answer :P

 

I should already be in bed, so no time for more thinking.... later!

 

-Anssi


Sorry but I haven't had access to the internet for the last four days, been off visiting relatives with no access. I will try to catch up.

Ive been racking my brains to think how to get it across to you, now I find you refuse to examine my baggage because you think I'm bringing too much of it and write it off as being due only to my disbelief. :banghead:
I am sorry. It was not my intention to dismiss your complaint. When I refer to “bringing too much baggage to the discussion” my complaint is entirely directed to the issue of assumptions. Whenever one uses something they have already decided is correct, there is a very great danger that the decision was based on an assumption. The more complex the circumstance behind that decision, the more likely assumptions have been made. And I am often as guilty of that error as is anyone. On reviewing what I have said, I think I may very well have gotten the horse on the wrong side of the cart a few times: i.e., my presentation was sometimes based on the conviction that the conclusions were correct. This is all stuff I did many many years ago and, at the time, I felt pretty confident in my attack and very well prepared to defend any detail on a moment's notice; however, my actual memory of those defenses is limited at best. In the last few years, a number of people have expressed difficulty with some of my assertions and I have, to date, had no real problems with defending them, although that defense seems to come to me much much slower than it did forty years ago.

 

Qfwfq, I am an old man and I am seriously aware of the fact that my mental faculties are not at all what they were fifty years ago; it is entirely possible that you are right and my concept of the situation is flawed; however, I personally just can't see it. I do believe that the problem here is that I am indeed somewhat befuddled and senility is probably not far away, but I don't think it is really as bad at the moment as you seem to think. Nevertheless, I think you are right with regard to one issue anyway. When I started with,

 

[imath]\sum \frac{\partial}{\partial x_i}P(x_1,x_2,\cdots,x_n,t) =0[/imath]

 

with [imath]P=\vec{\psi}^\dagger \cdot \vec{\psi}[/imath], and claimed it to imply

 

[math]\sum \frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t) = iK\vec{\psi}(x_1,x_2,\cdots,x_n,t)[/math],

 

I was in error. The error in presentation arose because of baggage I had brought to the table: i.e., I was putting forth something which I had “already decided was correct”. I think things might have gone much better if I had not introduced complex numbers this early as it introduces a problem which would have been easier to avoid had I stuck with the real representation. I think you will agree that [imath]\sum \frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t) = 0[/imath] satisfies the constraint on P. What is important here is that I introduced representing probability via [imath]\vec{\psi}[/imath] in order to assure that no possible procedure for getting from the arguments [imath](x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)[/imath] to the correct P was omitted: i.e., the notation [imath]\vec{\psi}[/imath] can represent any function (that is one of the reasons I pointed out the vector representation of complex numbers to Anssi).

 

What one needs to remember is that the set [imath]\vec{\psi}[/imath] can represent any probability distribution via the normalized inner product with itself, including both correct results and incorrect results. When it comes to eliminating some of those incorrect possibilities (not all, just a certain set), I use only constraints implied by our ignorance (the solution to our problem cannot contain information not available to us). If you do not understand the foundations of my approach, you need to read three posts I made earlier to physicsforums.com.

My perspective on symmetry is quite alien to the norm and it would be valuable for anyone interested to take a quick look at my post to savior machine (post #696 on the “Is time just an illusion?” thread on physicsforums), selfAdjoint’s response to it (post number 697 immediately below that post) and my response to selfAdjoint’s (post number 703 on that same page).
We are presently talking about shift symmetry but I will bring up some additional important symmetries down the road a bit. At the moment, there is another symmetry which really needs to be pointed out here and now.

 

We are talking about symmetries which are present because the numerical labels (i.e., any symbols) used to represent the ontological elements can be rearranged without affecting the correct solution at all. Anssi seems to be the only one to have a good grasp of that fact. I have already brought up shift symmetry and I would now like to bring up mirror symmetry: i.e., [imath](x_1,x_2,\cdots,x_n,t)\Rightarrow(-x_1,-x_2,\cdots,-x_n,t)[/imath]. Again, this shift in representation cannot yield a different solution, as knowing that a specific such assignment of labels is correct is not a possibility. With this in mind, let us look at our [imath]\vec{\psi}[/imath] again. In particular, let us look at the possibility where [imath]\vec{\psi}[/imath] has only two components, [imath]\psi_1[/imath] and [imath]\psi_2[/imath]. Just for the fun of it, let us suppose [imath]\psi_1=G(x_1,x_2,\cdots,x_n,t)[/imath] where G can be any possible function of those n arguments. Then let [imath]\psi_2=G(-x_1,-x_2,\cdots,-x_n,t)[/imath]. Note that [imath]P= 2G^2[/imath] since, by mirror symmetry, both functions yield the same result; however, the partial of [imath]\psi_2[/imath] with respect to [imath]x_i[/imath] is exactly the negative of the partial of [imath]\psi_1[/imath], so that the sum over partials becomes exactly zero. Now, G may not be the correct solution (i.e., may not yield the probabilities of the explanation it is to represent) but it certainly will be totally consistent with shift symmetry.

 

My point being that, if

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t) = 0[/math],

 

we can certainly be sure that P will satisfy shift symmetry and, at the same time, it imposes no constraints whatsoever on the actual functional form of the implied probabilities. It is then only a matter of simple observation that

[math]\vec{\phi}=e^{iK(x_1+x_2+\cdots+x_n)}\vec{\psi}[/math]

 

is a solution of,

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\phi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t) = iK\vec{\phi}[/math].

 

This approach avoids the problem with your analysis which I have been trying to point out. The central problem with a solution of the form

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\phi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t) = iK(x_1,x_2,\cdots,x_n)\vec{\phi}[/math],

 

is that the expression itself does not satisfy shift symmetry (note that the linear nature of [imath]e^{iK(x_1+x_2+\cdots+x_n)}[/imath] does). Now I will agree that there may exist such a solution (one where [imath]K(x_1,x_2,\cdots,x_n)[/imath] does satisfy shift symmetry); however, I insist that specific solution is included in the possibilities expressible by the form [imath]\vec{\psi}[/imath]. If one were to include your expression as a valid expression of the constraint required by shift symmetry, the function K would have to be specifically limited in some way; otherwise, if it is left totally general, the expression no longer enforces shift symmetry on the epistemological construct but does nevertheless enforce some kind of constraint on [imath]\vec{\psi}[/imath]. The exact source and/or defense of that constraint is not at all clear to me. In fact, the case [imath]e^{iK(x_1+x_2+\cdots+x_n)^2}[/imath] does seem to yield results consistent with shift symmetry; however, that case can also be generated from the [imath]\vec{\psi}[/imath] (the zero solution) via

[math]\vec{\Phi}=e^{iK(x_1+x_2+\cdots+x_n)^2}\vec{\psi}[/math].

 

The problem with this solution is that another required symmetry is violated.

edited correction: Perhaps not; it seems that these may simply be special representations which may or may not be of use to our explanation. Suppose we just lay the issue aside for a while with the simple assertion that the zero solution is sufficient to fulfil the need to satisfy shift symmetry.

That would be scale symmetry (a symmetry I would rather avoid talking about until I have some solutions to discuss). Scale symmetry indeed puts another constraint on the possible solutions but not one which is easily understood here. For the moment, let me presume that the central solution which is to be found satisfies [imath]\sum \frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t) = 0[/imath].

I figure some of the math looks similar to some common notation of something in theoretical physics (since I've seen comments like that from other people), but I certainly fail to recognize such things, and even if I did, I'm pretty sure I'd be capable of understanding that the math is one thing and its interpretation is a whole other issue.
Yes, I do put things in a notation very similar to that of standard theoretical physics and perhaps that is a confusing thing to do as it brings forth a lot of those assumptions I want people to avoid. I have always assumed that bright people could work with my definitions without being confused by the implications of common physics definitions but I could very well be wrong as I seem to manage to confuse myself on occasion. I do it because, in the final analysis, the concepts I introduce map directly into the concepts of common physics with the same names except that the concepts are somewhat shifted by the absence of some unnecessary assumptions.

 

My definition of time is one of the major things there. It is quite different from the scientific norm but it nonetheless will map directly into their concepts of time; however, that mapping will lack the assumed continuity or specific ordering of reality assumed by the scientific community. The continuity and ordering become part of the “invalid” ontological elements added in order to establish an explanation (don't worry if that doesn't make any sense to you now, I think it will later); however, the division between “past” and “future” (the present) is a very specifically defined thing largely ignored by the scientific community. (I only mention time as I think there is enough here now for any serious reader to begin to comprehend what I mean by “unnecessary assumptions”.)

The constant value of K is an extra assumption, not a consequence of the initial ones.
And Qfwfq is entirely correct there.
I noticed it was mentioned that K is a function! I was being told "it is a constant", so I guess what was meant was that its value needs to be a constant according to the definitions and their deductions? I suspected it is some function with a standard definition, and indeed found: K-function - Wikipedia, the free encyclopedia

So that's what "K" refers to here too? (If so I will look at its definition more closely)

No Anssi, it is most definitely not the K referred to in that Wikipedia entry.

About the conundrum between you and Qfwfq, I haven't had time to try and figure out what's going on too closely, but I don't think it's an extra baggage issue as he seems to be just asking about pure logic. It could well be that there's just some small but critical definition (or unobvious deduction) from our past discussions that's been missed.
I think the problem is that he is thinking in terms of P being constrained by shift symmetry, whereas I am thinking that the entire epistemological construct on which the explanation is built needs to be consistent with shift symmetry. The two views lead to subtle differences. Read what I have written above carefully and you might understand my position.

 

And, yes, you shouldn't waste your time when you need your sleep. Why don't you not worry about the complications and just ask whatever questions come to you. I will do my best to answer and I think Qfwfq will keep me honest. Actually, from your comments, I am getting the impression you are picking up quite well on most everything, including what Qfwfq and I have been arguing about. You seem to me to understand post #51 pretty well; the big issue now is whether you can follow the second half of post #72. What follows is the central point of what I am talking about.

With regard to post #51, I would presume at this point that you understand the four constraints I deduced and that the only difficulty remaining is to understand why that implies my equation is valid. In describing that process, I said, “All one need do is multiply the fundamental equation (I had the word solution here in error, which I have now fixed) through by the term
I set the following outside the "Quote" in order to display the mathematics.

 

[imath]\alpha_{qx}[/imath], commute it through the various alpha and beta elements in the equation and then sum the result over q.” I am not sure of your familiarity with commutation so I thought I might point out the following.

[math][\alpha_{ix} , \alpha_{jx}] \equiv \alpha_{ix} \alpha_{jx} + \alpha_{jx}\alpha_{ix} = \delta_{ij}[/math]

 

can be rearranged to show that [imath]\alpha_{ix}\alpha_{jx} = \delta_{ij} -\alpha_{jx}\alpha_{ix}[/imath] which implies

[imath]\alpha_{qx}\alpha_{ix} = \delta_{iq} -\alpha_{ix}\alpha_{qx}[/imath] and [imath]\alpha_{qx}\beta_{ij} = -\beta_{ij}\alpha_{qx}[/imath]

 

(look at the defined commutation of alpha with beta).

 

Thus all that happens as [imath]\alpha_{qx}[/imath] is commuted through an alpha or a beta is a sign change, except when q=i. In that case, the [imath]\delta_{iq}[/imath] picks up one additional term with no alpha or beta. The term being picked up is exactly

[math] \frac{\partial}{\partial x_q}\vec{\psi}[/math]

 

When the result is summed over q, all terms with [imath]\sum_q \alpha_{qx}\vec{\psi}[/imath] vanish and we are only left with

[math] \sum_q\frac{\partial}{\partial x_q}\vec{\psi}=0[/math]

 

The other sums work out exactly the same way, reproducing the original constraints except for that constant which should be cleared up by my last post to Qfwfq. In essence, this equation poses a very trivial constraint on [imath]\vec{\psi}[/imath]: first that shift symmetry must exist and second that there always exists a set of “invalid” ontological elements such that the rule

[math]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j)[/math]

will yield the “valid” ontological elements to be explained. Any competent scientist should properly say, “so what, that is no constraint worth worrying about!” And that is exactly why I find the solutions so astounding.
Essentially, if you understand the derivation of my equation and what kind of a constraint it imposes, you will find it rather simplistic. All it really says is that any solution to the problem (any explanation of the universe) must be interpretable as a collection of elemental entities which must conserve something analogous to momentum (that could mean they are at rest or in motion: i.e., they either change or don't change) and that there exists a set of hypothetical elemental entities for which the rule “no two are exactly the same” can yield the reality you are aware of (the valid ontological elements plus those hypothetical elements). As I said, my difficulty is that any competent scientist would say such a conclusion is trivial and not even worth examining even if you could prove it were valid. Well, I think I have proved it valid and I would like someone to explain why the solutions I obtain for that equation don't seem so trivial.

 

Have fun -- Dick


Qfwfq, after some thought I have come to the conclusion that you are absolutely correct; however, I also think my presentation is correct. I believe the confusion exists because both of us are overlooking the fact that all forms of the exponential relationship yielding phase relationships can be embedded in the vector notation of my [imath]\vec{\psi}[/imath]. Perhaps not as easily as in the case I used in my demonstration, but nonetheless the combination of that vector notation with

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/math]

 

can indeed include all the cases you have proposed. Certainly the statement is true if mirror symmetry is enforced. If you have any issues with that statement, let me know.

 

Sorry about my confusion -- Dick


Well, I still don't get how the combination of that vector notation with

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/math]

 

could include all the cases I have proposed.

 

One thing is sure to me. Any symmetry involving [imath]P=\psi^*\psi[/imath] will leave the phase of [imath]\psi[/imath] quite free, a quite incontrovertible fact of the math. As I said, all the cases I've been discussing must also satisfy the shift symmetry "if there be Justice", i.e. by force of the phase remaining arbitrary, and this goes for any symmetry constraint on P. Just consider any change in the phase of [imath]\psi[/imath] (the imaginary part of the exponent) and it's trivial that it doesn't change [imath]P=\psi^*\psi[/imath].
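Explicitly, for any real-valued phase [imath]\varphi[/imath] (which may itself depend on all the arguments):

[math]\left(e^{i\varphi}\psi\right)^*\left(e^{i\varphi}\psi\right)=e^{-i\varphi}e^{i\varphi}\,\psi^*\psi=\psi^*\psi.[/math]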

 

To the benefit of Anssi I'll further digress into complex values. Whether it's a single, given, fixed complex number or a value of whatever complex-valued kzabloogfle, it may be written in terms of real and imaginary parts, or of modulus (also called amplitude) and phase. These are commonly expressed as follows:

 

[math]a+ib=me^{ip}[/math]

 

Euler's formulae give the relation between (a, b) and (m, p). The above relies on the fact that an imaginary exponent only gives phase, whereas a real exponent gives amplitude. By the properties of exponentials, if we indicate the natural logarithm of m with s, so that [imath]m=e^s[/imath], then we can add to the above as follows:

 

[math]a+ib=me^{ip}=e^{s}e^{ip}=e^{s+ip}[/math]

 

It also holds that the complex conjugate of the above is:

 

[math]a-ib=me^{-ip}=e^{s}e^{-ip}=e^{s-ip}[/math]

 

and the product (as in [imath]P=\psi^*\psi[/imath]) can be so computed:

 

[math](a+ib)(a-ib)=a^2+b^2=e^{s+ip}e^{s-ip}=e^{s+ip+s-ip}=e^{2s}=\left(e^s\right)^2=m^2[/math]

 

which somewhat closes the circle. This may be of help in following my discussion of what [imath]\psi[/imath] and K may be, but we've also been discussing a specific type of differential equation (a complicated topic in general) so you need the rule for differentiating exponentials such as [imath]e^{f(x)}[/imath].

 

[math]\frac{d}{dx}e^{f(x)}=f^{\prime}(x)e^{f(x)}[/math] (the prime being shorthand for the derivative)

 

Now if f is a linear function of x: [imath]f(x)=kx+q[/imath], then the above becomes:

 

[math]\frac{d}{dx}e^{kx+q}=ke^{kx+q}[/math]

 

Now, in the discussions with an [imath]iK[/imath] in them, the imaginary unit simply "flips" the real and imaginary parts of K (and changes one sign but that's no great concern) so perhaps with all this you can drag yourself through most of the calculus discussed here. The key thing in my arguments is to see how real and imaginary parts affect modulus and phase.
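For reference, the Euler relation invoked above, written out, is

[math]e^{ip}=\cos p+i\sin p \qquad\Rightarrow\qquad a=m\cos p,\quad b=m\sin p,[/math]

and since [imath]\cos^2 p+\sin^2 p=1[/imath], a purely imaginary exponent changes only the phase and never the modulus.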


Hi Qfwfq, you are making the same mistake I often find myself making: using a backslash for an ending tag when you should be using the forward slash. You've done it nine times in your latest post.

 

But, back to the post itself. I think you are concerned with the issue of solutions and not the issue of constraints. We are dealing with a linear differential equation here. We both agree that the solution to

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/math]

 

is a solution to the constraint that

[math]\sum_{i=1}^n \frac{\partial}{\partial x_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/math]

 

which was the original constraint on P deduced from shift symmetry. As you say, “Any symmetry involving [imath]P=\psi^*\psi[/imath] will leave the phase of [imath]\psi[/imath] quite free [quite unconstrained?], a quite incontrovertible fact of the math.” Essentially, all we are talking about here is a shift in phase between two different solutions (a shift in phase is a rotation in the complex plane). Since we are dealing with a linear first order differential equation, any sum of solutions is also a solution. I am presuming that you are aware that the abstract space of the vector [imath]\vec{\psi}[/imath] is not the abstract space of [imath](x_1,x_2,\cdots,x_n)[/imath] nor that of [imath](x,\tau,t)[/imath]. It is another abstract space all to itself. Essentially its use allows one to be discussing a sum of solutions simultaneously (including all kinds of rotations in that space). In particular, if [imath]\vec{\psi}[/imath] is a solution to the equation, so is [imath]i\vec{\psi}[/imath]. It seems to me that the vector notation allows a second representation of the same phenomena introduced via expression [imath]e^{if(x)}[/imath]; however, a general proof of that issue seems to be presently outside my mental abilities. What do you say to laying the issue aside until after we have discussed the general solutions of the constraint expressed by my fundamental equations:

[math]

\left\{\sum_i \vec{\alpha}_i \cdot \nabla_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\psi} = K\frac{\partial}{\partial t}\vec{\psi} = iKm\vec{\psi}.

[/math]

 

Your position is that inclusion of your additional possibilities will expand the number of possible solutions. The above set of equations is not easy to solve and I think a discussion of your complaints would be much more meaningful after I elaborate on the attack I use to extract solutions. That is to say, you are not really complaining about my constraint but rather that its design seems to limit the possibilities beyond what you would think reasonable: i.e., you feel that there are more solutions to the problem than the set satisfying the constraints imposed by that relationship.

 

How about we try to get Anssi's questions concerning the representation cleared up so that I can proceed to attack the problem of finding solutions. All I think Anssi needs in order to comprehend the situation is to understand how the introduction of the anti-commuting factors alpha and beta allows recovery of the four essential constraints. Essentially, that solutions to the above equations satisfy the four constraints deduced and that all solutions which satisfy those four constraints are also solutions to the fundamental equation.

 

Thank you -- Dick


You've done it nine times in your latest post.
That's a form of laziness known as the copy-paste effect!

 

I think you are concerned with the issue of solutions and not the issue of constraints.
Actually I see both as ways of discussing more or less the same thing: the factor (K(...)) which multiplies [imath]i\psi(...)[/imath] is no more and no less than the rate of change of phase ([imath]\frac{d\varphi}{da}[/imath] in the case of the shift symmetry).

 

We are dealing with a linear differential equation here.
The one in P is; this does not imply that the one in [imath]\psi[/imath] must consequently be.

 

Essentially, all we are talking about here is a shift in phase between two different solutions (a shift in phase is a rotation in the complex plane).
Yes, but one nitpickin' detail: a constant K is the specific case of a global phase shift; the shift symmetry (or any one that's on P) doesn't imply this, as the phase shift may be local.

 

What do you say to laying the issue aside until after we have discussed the general solutions of the constraint expressed by my fundamental equations:
No problem discussing these, as they are a specific case and of course one of interest. Our disagreement is all a matter of necessary and sufficient conditions.

 

[math]{\rm A}\Rightarrow {\rm B}[/math] (A implies B)

 

A is a sufficient condition for B, B is a necessary condition for A.

 

[math]{\rm Dic}0\Rightarrow{\rm Dic}K\Rightarrow{\rm Q}\Rightarrow{\rm P}[/math]

 

Your tighter constraint implies your less stringent one, both imply mine, and all three imply the symmetry in terms of P. Of course, if assertions A and B are:

 

[math]\psi\in A[/math] and [math]\psi\in B[/math]

 

the implication [math]\psi\in A\Rightarrow\psi\in B[/math] is equivalent to:

 

[math]A\subset B[/math] (which shows how constraints are related to sets of solutions).

 

All I think Anssi needs in order to comprehend the situation is to understand how the introduction of the anti-commuting factors alpha and beta allows recovery of the four essential constraints. Essentially, that solutions to the above equations satisfy the four constraints deduced and that all solutions which satisfy those four constraints are also solutions to the fundamental equation.
Yes, fine, let's go. I'd like to see these steps detailed better, rather than working them out myself; I think it would be a quicker way to see if your argument is satisfactory in the case you consider, and then perhaps to understand the more general case.

 

I hope no wrong slashes have slipped into my equations. :D


The original question was, what can we know of reality? One thing we do know: science has not reached the end of the line, or else there would be no room for speculation and all scientists would be historians. That being said, what we perceive to be reality may not be reality at all. Or only part of what we perceive to be scientific reality is reality and will remain at steady state, while another part is out of touch with reality and will continue to evolve. But we can't tell one from the other, yet.

 

One of the litmus tests of scientific validity is math. But the math is only as reliable as the assumptions it is based on. One can make math do almost anything one wants, even if it is out of touch with reality. Let me give an example. Say I assumed that gravity was due to the repulsion of matter by space. This is not true, but it is used for a demonstration. Someone skillful in math could use this assumption and do a sort of reciprocal of the existing equations, to come up with a math model. The result will be able to make predictions and could be used to put a man on the moon. But it is not in touch with reality, yet supported with math. If we extrapolate from there and this advanced math made a good prediction, is this deeper relationship reality?

 

What this means is that, if reality is important, math is not the final judge. Math can be used to support both reality and conceptual illusions. What is more fundamental is the conceptual analysis, before the math. Good conceptual analysis has to be consistent with reality observation. At the same time, reality observation needs to be in the context of the largest system.

 

For example, say we had a photo of a pond in the back of someone's yard. We can see the pond, house and other identifying objects. If we zoom in to only the pond, this same observation is no longer certain, because one no longer has the context of the big picture. The singular reality observation can now be a lake, pond or ocean. It can be in the north, south, east or west, any time of the year, etc. This is the reality problem faced by conceptualizing from the point of view of specialization. It can see something closely, but not always in the context of the big picture. Yet the close view is so detailed, one assumes what we see is reality. Math is a willing accomplice in all this, with math able to support anything.

 

For example, say we determine this is a lake near the Adirondack Mountains, based on water color and transparency data. Based on that, one now has certain assumptions about what to expect to happen. If this doesn't happen as often as we expect, then we need to add statistical/chaotic theory. If the big picture were also seen, then this chaos may have an explanation. What I am saying is that randomness and chaos are a good litmus test for reality: the more they exist, the farther from reality our assumptions are getting.

 

Let me give an historical example. Before Newton was hit on the head with the apple that got his mental gears of gravity in motion, the behavior of falling objects was perceived to be far more random. Sometimes big things fell faster, and sometimes small things. The theory of chaos and randomness could have come in handy at that time to explain this. Once the rational relationships of gravity appeared, all that chaos went away. Before Newton, nobody had a big enough perspective to see a trend. But each theory could have had legs with the proper use of chaos.

 

To add to the confusion, the human mind is both rational and irrational. If both are working at the same time, the result can be rational explanations of the irrational, or irrational explanations for the rational.


Qfwfq, I am sorry for being so slow to respond. Your last post dismayed me quite a little. One of your comments made it quite clear that you do not understand the central problem under discussion. Anssi understands the problem but his mathematics is currently insufficient to understand my constraint mechanisms precisely. The line in your post which made your lack of understanding so clear was:

Yes, but one nitpickin' detail: constant K is the specific case of global phase shift, the shift symmetry (or any one that's on P) doesn't imply this, it may be local.
You simply don't seem to understand the fundamental question that I am attacking. Please don't feel bad; it seems that no one comprehends what I am talking about (except of course Anssi as he seems to have come to the same question I am concerned with completely on his own). Personally, I simply do not know how to make the issue clear to someone who has never thought of it. The circumstance reminds me of something I read many years ago. It was a paper written by some ancient Greek (I don't read Greek so what I read was an English translation of the original). Though it wasn't exactly mentioned, the paper concerned the definition of speed. The writer pointed out that, if one person was faster than another, he would cover the same distance in less time or, on the other hand, he would be able to cover a greater distance in the same time. The paper continued to make a number of different comparisons in an attempt to clarify what the writer was talking about. What became quite clear was that the idea of “speed” was not a concept the writer held as obvious. Today, everyone (except maybe some very primitive peoples out of touch with modern gadgets) understands exactly what one means by speed and it is difficult for us to comprehend that confusion could ever have existed.

 

The issue here is that the indices [imath](x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)[/imath] are nothing more than arbitrary numerical labels used to refer to ontological elements. They constitute an undefined language containing the information to be explained. In any logical deduction based upon those ontological elements, nothing can be introduced which violates the arbitrariness of those assignments (what I believe Anssi refers to as semantics) as, if you make an assumption which violates that arbitrariness, it amounts to asserting you know something about what the assignments mean. I have essentially defined only one basic concept; I have defined what I mean by time. Now if you were to define what you meant by a term, as I did with my definition of “time”, that would be another story: i.e., you could use the concept “local” but only after you had defined what you meant by the term. The issue is that you cannot talk about ontological elements being “local” without providing me with a method of determining which ones qualify as local, the point being that your definition must be applicable even under the arbitrary reassignment of all of those labels. You should understand here that I can talk about position on the x axis because that is nothing more than a representation of an arbitrary label: that is to say, the labels can be arbitrarily shuffled throughout the collection of ontological elements without violating that definition.

 

This brings up another issue which I am not sure you understand. I often comment that you are concerning yourself with the solutions and not the constraints on the solutions. The solution here is to find the function [imath]\vec{\psi}[/imath] which yields the probability distribution for those ontological elements identical to the probability distribution yielded by the explanation that [imath]\vec{\psi}[/imath] is to represent. That is why I use the vector notation I use. I refer to [imath]\vec{\psi}[/imath] as a function because, given any set of arguments, that index set [imath](x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)[/imath], it must yield another set (in a totally different abstract space by the way) consisting of [imath](\psi_0,\psi_1,\psi_2,\psi_3,\cdots,\psi_q)[/imath] where the number q is an open issue. This representation can represent an explicit mathematical function, an arbitrary computer program or even a simple look up table. The central issue being that no relationship capable of generating any specific collection of expectations is omitted. This representation is capable of yielding any result so your solution (that would be your epistemological solution), no matter what it is, is representable by the expression [imath]\vec{\psi}[/imath]. The issue of finding such a solution is an epistemological problem and is of no interest here. What I am concerned with is the fact that our ignorance places some subtle constraints on a flaw-free epistemological solution. I believe I have discovered a way of expressing some very specific constraints in a mathematically exact form.
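Purely as an illustration of how unconstrained that representation is allowed to be, here is a toy sketch of the “look up table” version (every label, the choice q = 2 and every number in it are arbitrary, and the probability rule attached to it is the [imath]\vec{\psi}^\dagger \cdot \vec{\psi}[/imath] rule used a few paragraphs below):

[code]
import numpy as np

# Toy "psi" as a bare look-up table: each key is a tuple of arbitrary numerical
# labels (x1, tau1, x2, tau2, t) and each value is a small complex vector
# (psi_0, ..., psi_q) with q = 2.  Every number here is invented for the demo.
psi_table = {
    (1.0, 0.5, 3.0, 0.5, 0.0): np.array([0.2 + 0.1j, 0.4j, 0.1 + 0.0j]),
    (2.0, 0.5, 4.0, 1.5, 0.0): np.array([0.3 + 0.0j, 0.1 - 0.2j, 0.5j]),
    (1.0, 1.5, 3.0, 2.5, 1.0): np.array([0.6 + 0.0j, 0.0j, 0.2 + 0.2j]),
}

def psi(labels):
    """Return the complex vector the table assigns to a tuple of labels."""
    return psi_table[labels]

def P(labels):
    """Expectation weight for a tuple of labels: psi-dagger dot psi."""
    v = psi(labels)
    return float(np.real(np.vdot(v, v)))

for key in psi_table:
    print(key, "->", round(P(key), 4))
[/code]

A computer program or an explicit formula would simply replace the dictionary look-up; nothing in the argument cares which of the three it is.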

 

I showed, in detail, that this required (under the fact that arbitrary reassignment of numerical labels can have no consequences on the solution) that shift symmetry demands that the following expressions are true.

[math]\frac{P(x_1+a,x_2+a,\cdots,x_n+a,t)-P(x_1,x_2,\cdots,x_n,t)}{a}=0[/math] ,

 

[math]\frac{P(\tau_1+a,\tau_2+a,\cdots,\tau_n+a,t)-P(\tau_1,\tau_2,\cdots,\tau_n,t)}{a}=0[/math]

 

and

 

[math]\frac{P(t+a)-P(t)}{a}=0.[/math]

 

The form of those expressions being exactly the fundamental nugget of the definition of a derivative leads directly (via some common mathematics) to the requirements that, if the range of possibilities available to those labels is extended to the entire continuum, the following relationships must be valid.

[math]\sum_{i=1}^n\frac{\partial}{\partial x_i}P(x_1,x_2,\cdots,x_n,t)=0[/math] , [math]\sum_{i=1}^n\frac{\partial}{\partial \tau_i}P(\tau_1,\tau_2,\cdots,\tau_n,t)=0[/math] and [math]\frac{\partial}{\partial t}P(t)=0:[/math]

 

i.e., no valid itemized data underlying any epistemological solution can invalidate those expressions.
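For completeness, the “common mathematics” in question is nothing but the chain rule: treating the common shift as a single parameter a (and assuming P is differentiable once the labels are extended to the continuum),

[math]\lim_{a\rightarrow 0}\frac{P(x_1+a,\cdots,x_n+a,t)-P(x_1,\cdots,x_n,t)}{a}=\frac{d}{da}P(x_1+a,\cdots,x_n+a,t)\Bigg|_{a=0}=\sum_{i=1}^n\frac{\partial}{\partial x_i}P(x_1,\cdots,x_n,t),[/math]

and since the left hand side is zero for every a, the sum of partials must vanish; the [imath]\tau_i[/imath] and t cases follow the identical pattern.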

 

If P, the method of obtaining expectation probabilities, is represented by [imath]\vec{\psi}^\dagger \cdot \vec{\psi}[/imath] (a representation used because it can represent absolutely any mechanism for obtaining P) then the above can be reduced to a constraint on that [imath]\vec{\psi}[/imath].

[math]\sum_{i=1}^n\frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t)=0[/math] , [math]\sum_{i=1}^n\frac{\partial}{\partial \tau_i}\vec{\psi}(\tau_1,\tau_2,\cdots,\tau_n,t)=0[/math] and [math]\frac{\partial}{\partial t}\vec{\psi}(t)=0:[/math]

 

together with an interesting collection of associated relationships expressing essentially the same constraint.

[math]\sum_{i=1}^n\frac{\partial}{\partial x_i}\vec{\phi}(x_1,x_2,\cdots,x_n,t)=iK_x\vec{\phi}[/math] , [math]\sum_{i=1}^n\frac{\partial}{\partial \tau_i}\vec{\phi}(\tau_1,\tau_2,\cdots,\tau_n,t)=iK_\tau \vec{\phi}[/math] and [math]\frac{\partial}{\partial t}\vec{\phi}(t)=im \vec{\phi}[/math]

 

via the simple mechanism of defining [math]\vec{\phi}=e^{iK_x(x_1+x_2+\cdots+x_n)}e^{iK_\tau(\tau_1+\tau_2+\cdots+\tau_n)}e^{imt}\vec{\psi}[/math]. As both of us realize, the introduction of the term [imath]e^{iK}[/imath] has utterly no impact upon the resultant P. Your position was that K could be any function of those indices without affecting the resultant P and I balked. At the time I was very disturbed by the nature of [imath]\vec{\phi}[/imath] so defined. To me it seemed quite obvious that the result violated shift symmetry. After a little examination of the details of possible [imath]\vec{\psi}[/imath] I concluded you were right (the nature of [imath]\vec{\phi}[/imath] was such that any relationship desired could be obtained). What I failed to pick up on was that you had deflected my interest from the fundamental issue. It is not just P which must satisfy shift symmetry; the requirement applies to every aspect of any epistemological solution. No valid procedure exists which can add information to that which is to be explained, and failure to accommodate shift symmetry on the reference labels to those ontological elements is sufficient cause to reject any epistemological solution.
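To make the “utterly no impact” comment explicit (nothing new, just the arithmetic, writing the whole exponent as a single real phase [imath]\theta[/imath]):

[math]\vec{\phi}^\dagger\cdot\vec{\phi}=\left(e^{i\theta}\vec{\psi}\right)^\dagger\cdot\left(e^{i\theta}\vec{\psi}\right)=e^{-i\theta}e^{i\theta}\,\vec{\psi}^\dagger\cdot\vec{\psi}=\vec{\psi}^\dagger\cdot\vec{\psi}=P;[/math]

so long as the phase is real, whatever P one had is exactly what one gets back.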

 

The final constraint (see my exposition on the function I call F) which I introduced involved using Dirac delta functions (which, together with a proper collection of invalid ontological elements, could eliminate all circumstances not in the base data): i.e., such an F is capable of constraining the valid ontological elements to whatever they were (that “what is, is what is” table to be explained) by simple inclusion of “invalid” ontological elements to eliminate the unwanted possibilities.

[math]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j)\vec{\psi}=0[/math]

 

I then asserted that, under my definitions of alpha and beta together with the vector representation of the partial derivatives (the meaning of the symbol [imath]\vec{\nabla}[/imath]),

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\psi} = K\frac{\partial}{\partial t}\vec{\psi} = iKm\vec{\psi}.[/math]

 

generates exactly the same constraints expressed above. You appear to want that assertion justified in detail.

Yes, fine, go ahead. I'd like to see these steps detailed more fully rather than work them out myself; I think that would be a quicker way to see whether your argument is satisfactory in the case you consider, and then perhaps to understand the more general case.
Suppose we have found a solution to the above equation which yields exactly the probability distribution required to match a specific flaw-free epistemological solution proposed as an explanation of reality (our “valid” ontological elements on which that explanation is to be based). Call that solution [imath]\vec{\Psi}[/imath]. The first point is that the above expression is actually two equations (note the two equal signs). The final relationship is exactly the relationship expressed by the shift symmetry constraint given above for [imath]\vec{\phi}[/imath] and its dependence on t. The [imath]\vec{\psi}[/imath] version (the one where the differential vanishes) is easily retrieved by multiplying [imath]\vec{\Psi}[/imath] by the factor [imath]e^{-iKmt}[/imath], which we know does not impact the probabilities yielded as P. So the shift constraint on the index t is exactly the constraint imposed by that expression.
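Just to spell out the mechanism: for any real constant c, the product rule gives

[math]\frac{\partial}{\partial t}\left(e^{-ict}\vec{\Psi}\right)=e^{-ict}\left(\frac{\partial}{\partial t}\vec{\Psi}-ic\vec{\Psi}\right),[/math]

so choosing c to be the constant appearing in [imath]\frac{\partial}{\partial t}\vec{\Psi}=ic\vec{\Psi}[/imath] makes the derivative vanish, while the factor, being a pure phase, does not touch P.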

 

So let us step on to the three other constraints which are expressed in the first equation. If [imath]\vec{\Psi}[/imath] is a solution to that equation then [imath]\vec{\Psi}[/imath] is also a solution to

[math]\left\{\sum_i \alpha_{qx}\vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\alpha_{qx}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = iKm\alpha_{qx}\vec{\Psi}[/math]

 

as all I have done is multiply through by [imath]\alpha_{qx}[/imath]. Now [imath]\alpha_{qx}[/imath] has been defined by two expressions; first by its commutation properties, [imath][\alpha_{ix} , \alpha_{jx}] \equiv \alpha_{ix} \alpha_{jx} + \alpha_{jx}\alpha_{ix} = \delta_{ij}[/imath], which demands that [imath]\alpha_{qx} \alpha_{ix}= -\alpha_{ix}\alpha_{qx} + \delta_{qi}[/imath]. Likewise, [imath]\alpha_{qx} \alpha_{i\tau}= -\alpha_{i\tau}\alpha_{qx}[/imath] and [imath]\alpha_{qx} \beta_{ij}= -\beta_{ij}\alpha_{qx}[/imath] (no delta element appears). As neither alpha nor beta is a function of x, [imath]\tau[/imath] or t, we may be assured that [imath]\vec{\Psi}[/imath] will also be a solution to

[math]\left\{\sum_i -\vec{\alpha}_i \cdot \vec{\nabla}_i \alpha_{qx}+ \sum_{i \neq j}-\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \alpha_{qx}\right\}\vec{\Psi} +\frac{\partial}{\partial x_q}\vec{\Psi}= iKm\alpha_{qx}\vec{\Psi}[/math]

 

as commutation merely changes the sign and adds one additional term (that partial with respect to [imath]x_q[/imath] sans alpha or beta) when i happens to be q. Finally, factoring out the term [imath]\alpha_{qx}[/imath] we can sum the above equation over q and we still have an equation which is satisfied by [imath]\vec{\Psi}[/imath]; however, when we perform that sum we get the following result

[math]\left\{\sum_i -\vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}-\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\sum_q\alpha_{qx}\vec{\Psi} +\sum_q\frac{\partial}{\partial x_q}\vec{\Psi}= iKm\sum_q\alpha_{qx}\vec{\Psi}.[/math]
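For the derivative term alone, the commutation of [imath]\alpha_{qx}[/imath] used above can be written out explicitly (assuming, as in the rest of this presentation, that [imath]\vec{\alpha}_i\cdot\vec{\nabla}_i=\alpha_{ix}\frac{\partial}{\partial x_i}+\alpha_{i\tau}\frac{\partial}{\partial \tau_i}[/imath]):

[math]\alpha_{qx}\sum_i\vec{\alpha}_i\cdot\vec{\nabla}_i=\sum_i\left[\left(\delta_{qi}-\alpha_{ix}\alpha_{qx}\right)\frac{\partial}{\partial x_i}-\alpha_{i\tau}\alpha_{qx}\frac{\partial}{\partial \tau_i}\right]=\frac{\partial}{\partial x_q}-\left(\sum_i\vec{\alpha}_i\cdot\vec{\nabla}_i\right)\alpha_{qx},[/math]

which is exactly the sign change plus the single extra [imath]\frac{\partial}{\partial x_q}[/imath] term; the beta term picks up only the sign change.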

 

At this point, the second portion of the definitions of alpha and beta comes into play.

[math]\sum_i \vec{\alpha}_i \vec{\Psi} = \sum_{i \neq j}\beta_{ij} \vec{\Psi} = 0.[/math]

 

Thus it is that only one term in the above expression is non zero:

[math]\sum_q\frac{\partial}{\partial x_q}\vec{\Psi}=0[/math]

 

and it follows that our solution [imath]\vec{\Psi}[/imath] obeys the shift symmetry constraint on the [imath]x_i[/imath] which was originally deduced as necessary. Exactly the same algebraic procedure, working with [imath]\alpha_{q\tau}[/imath], will yield the fact that [imath]\vec{\Psi}[/imath] obeys the shift symmetry constraint on the [imath]\tau_i[/imath] arguments. Finally, if one multiplies through by [imath]\beta_{qp}[/imath] and commutes it to the right, the act will yield nothing more than a negative sign except for two very specific cases (where q=i and p=j or q=j and p=i). Each of those cases will pull out a single term from the sum with the Dirac delta functions: that is, [imath]\delta(x_q -x_p)\delta(\tau_q -\tau_p)[/imath] and [imath]\delta(x_p -x_q)\delta(\tau_p -\tau_q)[/imath] (again, sans any beta). When the resulting equation is summed over [imath]q \neq p [/imath] all terms vanish except the ones just discussed and one is left with

[math]\sum_{q\neq p}\delta( x_q -x_p)\delta(\tau_q -\tau_p)\vec{\Psi} +\sum_{q\neq p}\delta( x_p -x_q)\delta(\tau_p -\tau_q)\vec{\Psi}=0[/math]

 

but the two sums given are exactly the same so the result is exactly twice the single sum and one can divide by two and obtain exactly the original constraint that the Dirac delta function was created to impose.

 

It follows immediately that any solution to my fundamental equation which yields exactly the same probability distribution as a specific flaw-free epistemological solution will exactly fulfill the shift symmetry constraints required by our ignorance of reality. And finally, since the equation is a first order linear differential equation, any sum of solutions is a solution.

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\psi} = K\frac{\partial}{\partial t}\vec{\psi}.[/math]

 

Except for the Dirac delta functions, the above equation is, for all practical purposes, a wave equation (if you have to have a mental picture, the delta function things can be seen as essentially having the same impact on the equation as infinitesimal massless dust motes): i.e., it has an infinite number of solutions which essentially can be seen as propagating waves constrained only by the boundary conditions which have not been specified. It follows that any particular desired solution can be constructed via a sum of those “propagating waves”. There exists no probability distribution which cannot be seen as a sum of solutions to that equation.
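If a concrete picture helps, here is a toy numerical check of that superposition claim, for a scalar, single-label caricature of the equation rather than the real thing (the constant K, the wave numbers and the weights below are all invented for the demonstration):

[code]
import numpy as np

# Toy 1-D analogue of the equation above, stripped of the alpha/beta machinery:
#     d(psi)/dx = K * d(psi)/dt.
# Each plane wave exp(i*(k*x + (k/K)*t)) solves it, and because the equation is
# linear and homogeneous, any weighted sum of such waves solves it as well.
K = 2.0
ks = np.array([1.0, 3.0, -2.5])              # arbitrary "wave numbers"
coeffs = np.array([0.7, -0.2, 1.1 + 0.5j])   # arbitrary complex weights

x = np.linspace(0.0, 10.0, 1001)
t = np.linspace(0.0, 10.0, 1001)
X, T = np.meshgrid(x, t, indexing="ij")

def psi(X, T):
    """A superposition of plane-wave solutions."""
    return sum(c * np.exp(1j * (k * X + (k / K) * T)) for c, k in zip(coeffs, ks))

PSI = psi(X, T)
dpsi_dx = np.gradient(PSI, x, axis=0, edge_order=2)   # numerical d/dx
dpsi_dt = np.gradient(PSI, t, axis=1, edge_order=2)   # numerical d/dt

# The residual is small (set only by the finite-difference step), so the sum of
# solutions is again a solution.
print("max |d/dx psi - K d/dt psi| =", np.abs(dpsi_dx - K * dpsi_dt).max())
[/code]

The point is only the linearity; which particular waves one adds together is fixed by whatever boundary conditions the problem supplies.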

 

At this point, the only constraints placed on one's epistemological solution are the constraints imposed by shift symmetry. What I have presented is no more than a different way of viewing the problem confronting us. In fact, the reaction "so what" would be very appropriate as there is nothing here to impose any constraint on one's epistemological solution except that of shift symmetry in the representation and the assertion that a sufficient number of invalid ontological elements can make the rule “no two identical elements exist” sufficient to define the “valid” ontological elements which are to be explained. Both of these are pretty obvious and certainly not to be taken seriously, as neither has any really serious consequences. Or maybe I should say, at least no apparent serious consequences; we certainly cannot know if we do not examine the consequences.

 

Now, if you let me know what part of what I have said you have difficulty with, I will do my best to clarify the issues.

 

To Anssi, you might be interested in the fact that last week's issue of “Science News” had an article called “Shifty Talk” concerning the process of word evolution. I loved one line: “our results indicate that languages can evolve in such an orderly fashion that simple mathematical descriptions capture their behavior.” One might say, they are simply working on another “explanation” of phenomena previously considered unfathomable.

 

Have fun – Dick


Hello. Sorry for the delays; I finally had time to read the new replies and to start figuring out the last steps to the "fundamental equation which [imath]\vec{\psi}[/imath] must obey".

 

You seem to me to understand post #51 pretty well; the big issue now is can you follow the second half of post #72. What follows is the central point of what I am talking about.

 

[imath]\alpha_{qx}[/imath], commute it through the various alpha and beta elements in the equation and then sum the result over q.” I am not sure of your familiarity with commutation so I thought I might point out the following.

 

[math][\alpha_{ix} , \alpha_{jx}] \equiv \alpha_{ix} \alpha_{jx} + \alpha_{jx}\alpha_{ix} = \delta_{ij}[/math]

 

 

can be rearranged to show that [imath]\alpha_{ix}\alpha_{jx} = \delta_{ij} -\alpha_{jx}\alpha_{ix}[/imath] which implies

 

[imath]\alpha_{qx}\alpha_{ix} = \delta_{iq} -\alpha_{ix}\alpha_{qx}[/imath] and [imath]\alpha_{qx}\beta_{ij} = -\beta_{ij}\alpha_{qx}[/imath]

 

(look at the defined commutation of alpha with beta).

---END OF QUOTE

 

Well I know what commutativity means as in being able to change the order of some elements without changing the end result. But I could not figure out what is going on above. I've been trying to figure out what all the math notation means by reading post #42, the end of #77, and now the further clarification at the end of #83. But once again my limited familiarity with math leaves me with far too many shaky assumptions :(

 

I don't even know what would be meaningful questions, so I'll just try to probe everything without even bothering to provide my own guesses:

 

What does "alpha element" or "beta element" refer to?

What does it mean that there is "ix" or "jx" suffix to such an element?

I notice the symbol [imath]\delta[/imath]; does it refer to the Dirac delta function here also? What does the ij refer to there, and how does it turn into "iq" later (i.e. what does iq mean)?

 

Actually I am so royally lost at this point already that I can't make any sense of the rest of the post yet either. I am probably missing knowledge about some standard mathematical definitions; I just have no idea what those are or how to find material about them :P

 

The circumstance reminds me of something I read many years ago. It was a paper written by some ancient Greek (I don't read Greek so what I read was an English translation of the original). Though it wasn't exactly mentioned, the paper concerned the definition of speed. The writer pointed out that, if one person was faster than another, he would cover the same distance in less time or, on the other hand, he would be able to cover a greater distance in the same time. The paper continued to make a number of different comparisons in an attempt to clarify what the writer was talking about. What became quite clear was that the idea of “speed” was not a concept the writer held as obvious. Today, everyone (except maybe some very primitive peoples out of touch with modern gadgets) understands exactly what one means by speed and it is difficult for us to comprehend that confusion could ever have existed.

 

The issue here is that the indices [imath](x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)[/imath] are nothing more than arbitrary numerical labels used to refer to ontological elements. They constitute an undefined language containing the information to be explained. In any logical deduction based upon those ontological elements, nothing can be introduced which violates the arbitrariness of those assignments (what I believe Anssi refers to as semantics)

 

In a sense, yes you could say that. Arbitrariness of assignments and the consequent arbitrariness in the defined behaviours and properties of things.

 

I would word it this way: what I sometimes refer to as a "semantical worldview" is a worldview where concepts such as "speed" can be sensical only by the way they relate to other concepts such as "distance" or "time", where those concepts are sensical only by how they relate to yet other concepts such as "location" or "change", and in the end the set of concepts only validates each of the others but not the ontological nature of reality. I.e. a worldview where concepts are understood through other concepts.

 

Referring to the gravity example that HydrogenBond mentioned: if such a worldview, where "gravity was due to the repulsion of matter by space", were able to provide us with all the same predictions as, say, GR, then it would be just as true or untrue as GR. We should just say the concepts/elements these views consist of are a handy way to map the behaviour of reality around us.

 

That reminds me of a disturbing comment I heard from some physicist in some documentary regarding string theory. The comment was something to the effect of "it appears that string theory can never be proven or disproven by observation, so is it physics, or just philosophy?" The first obvious point is of course that the question of whether an idea is physics or philosophy applies to any view which claims a specific ontology. (Why he supposed that some pre-existing view was the correct one and string theory was just a philosophical bastardization of that correct view is beyond me.)

 

The second and more relevant point is that any single idea of a "vibrating string" you can ever fathom in your mind is completely and utterly based on the semantical concepts you have built about reality, i.e. those things like "location" or "speed" or "acceleration" that you understand by the way you have defined them in terms of other semantical (= independently undefendable) concepts.

 

I have nothing against models like string theory, but when it goes so far that people actually start claiming that there must be ontologically real strings that vibrate in 11 dimensions (also taken as "ontologically real" dimensions, whatever "dimension" means! Get it?), that is exactly as naive as saying we are conscious because there is a conscious homunculus in our mind. The predictive side of it is pure science, but the ontological mental image of it is pure religion.

 

Anyhow, Qfwfq referred to "local shift symmetry" and Doctordick jumped at it, so I thought I'd try to clarify things on my own part as well and say that the shift symmetry is not referring to a shift inside some "semantically defined thing" (like "space"); it just refers to a shift among the labels used to refer to "ontological elements" (arbitrary features in raw data whose meaning is unknown). A shift symmetry of labels inside one's worldview, so to speak.
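To put that very concretely with a deliberately silly toy (nothing here has anything to do with the actual math above; the data and the rule are invented purely to show what "shifting the labels" means):

[code]
from collections import Counter

# The "data" is just a list of tuples of arbitrary numerical labels, and the
# "explanation" is a bare frequency count built from that data.
data = [(1, 4), (2, 7), (1, 4), (3, 5), (2, 7), (1, 4)]

def build_P(observations):
    counts = Counter(observations)
    total = len(observations)
    return lambda labels: counts[labels] / total

def shift(observations, a):
    """Add the same constant to every label over the entire data."""
    return [tuple(label + a for label in obs) for obs in observations]

P = build_P(data)
P_shifted = build_P(shift(data, 100))

# Shifting every label (in the data and in the question asked of it) by the
# same amount changes nothing about the expectations.
print(P((1, 4)), P_shifted((101, 104)))   # both 0.5
[/code]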

 

It could be you had realized this and had something specific in mind when you referred to locality though, but I can't be sure. I'm afraid I need to figure out the details of the math better myself before I can really say more :P Hopefully you can stick around, as your comments have been helpful. Oh, and thank you for the explanations in post #79 btw, I still need to go through them with thought though.

 

-Anssi


You simply don't seem to understand the fundamental question that I am attacking.
Actually I think I understand well enough, aside from details, but what I need isn't a lecture in modern mathematics. I fully expected you would be able to catch on to my use of the terms global and local, just as you are using the notion of symmetry. I simply meant a phase that depends on the coordinate values versus a single one for all of them. After all, it is the terminology of gauge symmetry, which is somewhat akin to the phase arbitrariness due to going from P to [imath]\psi[/imath].
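In one line, with the notation already in play: write [imath]\vec{\phi}=e^{iK(x_1,\cdots,x_n)}\vec{\psi}[/imath], so that the "global" case is simply K constant; then

[math]\sum_{i=1}^n\frac{\partial}{\partial x_i}\vec{\phi}=e^{iK}\sum_{i=1}^n\frac{\partial}{\partial x_i}\vec{\psi}+i\left(\sum_{i=1}^n\frac{\partial K}{\partial x_i}\right)\vec{\phi}.[/math]

Either way [imath]\vec{\phi}^\dagger\cdot\vec{\phi}=\vec{\psi}^\dagger\cdot\vec{\psi}[/imath], but the coefficient multiplying [imath]\vec{\phi}[/imath] on the right is a constant only in the global case; that is all the distinction amounts to.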

 

BTW I don't think that Greek guy was confused by the notion of velocity; more likely he was working to give the intuitive notion a precise definition for philosophical purposes. Your assumption that he was confused is somewhat like that of the many people who, reading your arguments, suppose you must have no connection with reality.

 

The rest of your arguments appear to mean that you are changing your assumption of shift symmetry from:

[math]\sum_{i=1}^n\frac{\partial}{\partial x_i}P(x_1,x_2,\cdots,x_n,t)=0[/math] , [math]\sum_{i=1}^n\frac{\partial}{\partial \tau_i}P(\tau_1,\tau_2,\cdots,\tau_n,t)=0[/math] and [math]\frac{\partial}{\partial t}P(t)=0:[/math]

 

to the tighter:

 

[math]\sum_{i=1}^n\frac{\partial}{\partial x_i}\vec{\psi}(x_1,x_2,\cdots,x_n,t)=0[/math] , [math]\sum_{i=1}^n\frac{\partial}{\partial \tau_i}\vec{\psi}(\tau_1,\tau_2,\cdots,\tau_n,t)=0[/math] and [math]\frac{\partial}{\partial t}\vec{\psi}(t)=0:[/math]

 

and:

 

[math]\sum_{i=1}^n\frac{\partial}{\partial x_i}\vec{\phi}(x_1,x_2,\cdots,x_n,t)=iK_x\vec{\phi}[/math] , [math]\sum_{i=1}^n\frac{\partial}{\partial \tau_i}\vec{\phi}(\tau_1,\tau_2,\cdots,\tau_n,t)=iK_\tau \vec{\phi}[/math] and [math]\frac{\partial}{\partial t}\vec{\phi}(t)=im \vec{\phi}[/math]

 

I'll examine your last post when I can; I don't think I'll be able to this weekend, so it depends on how hectic things will be next week.

