
What can we know of reality?



I.e. that it is not self-consistency alone that yields newtonian mechanics?
I tend to agree, there must be some choices along the way; at the very least, when Dick says "an approximation of", that implies the question of which approximation.

 

I haven't been able to follow the details of the math; since Dick revised the shift symmetry, I didn't even quite catch exactly how the changes bear on the rest. Did you get that straight, Anssi? I'm at a loss to see exactly which choices are hidden along the way.


Hi Anssi, it sure is nice to hear from you. Hope everything turned out well.

I've read all the posts and I have to say that everything you are saying makes a lot of sense to me, but I definitely need to get myself up to speed with all the math.
Just point out the first mathematical step which you don't follow and I will do my best to give you the details, doing the best I can to leave nothing to presumption.
... I would have just assumed that an important factor in coming up with a "newtonian worldview" is also our ability to define (/identify) ontological elements freely; i.e. that a very specific type of classification AND self-consistency yields newtonian behaviour for those defined elements (and that just happens to be among the simplest views prediction-wise... in some sense that I do not understand yet).
The question has to be, exactly what is it that we have to work with. I have used many terms for “whatever that might be”: “knowable data”, “ontological elements”, “noumena”... It really makes no difference what actual tag you place on the “????”; the only important issue to remember is that, sans an epistemological construct (“explanation”, “world view”, “understanding”, ... whatever), no matter what it is, it is totally undefined. The moment you lose sight of that fact, you have lost sight of the problem confronting you. A really significant issue resides exactly on this point: undefined means undefined: we do not know “what they are” or anything about “what they are”. When I complain about people bringing baggage to the discussion, I am talking about people bringing meaning to those reference tags: “knowable data”, “ontological elements”, “noumena”, etc.

 

Let me add another tag to that collection; a tag I have avoided, because of the tendency of people to bring meaning with tags, but a very nice tag none the less (if you can remember not to bring meaning with the tag). Suppose we give these unknown things the name “events”. It turns out that, after understanding my paradigm, this name corresponds quite well with what physicists commonly refer to as “events”. Thus the fundamental dichotomy upon which all explanations are built comes down to “real” events and “presumed” or “imagined” events (events which are required by the explanation).

What I'm asking is, isn't it possible to be self-consistent with more complicated worldview's as well? I.e. that it is not self-consistency alone that yields newtonian mechanics?
What you are missing is the fact that you (I would say “your brain” except for the fact that that concept itself brings in massive quantities of supposedly well defined “baggage”) have/has absolutely nothing to work with except a collection of totally undefined events.

 

Suppose you have a “more complicated” world-view. For me to understand that world-view, you would have to explain it to me; and exactly how do I come to understand your explanation? Your explanation arrives via “events” which I need to comprehend, usually by means of a language which uses reference tags to which I have already attached meaning: i.e., my problem (understanding your explanation) is, in total (including understanding that language and my own experiences on which that understanding is based), exactly the same as the problem of understanding any “totally undefined collection of events”.

 

Once an epistemological construct (“explanation”, “world view”, “understanding”, ... whatever) has been built, the builder can express the past (what he knows, or thinks he knows) in the form of a “what is”, is “what is” table via numeric reference tags. Likewise, his expectation can be represented by the probability he places on any future entry to that table. If that probability is to be represented by the squared magnitude of [imath]\vec{\Psi}[/imath] (which can clearly represent absolutely any procedure possible) then [imath]\vec{\Psi}[/imath] must obey my fundamental equation. It follows, as the night the day, that the numerical reference tags of those undefined “events” must approximately obey Newtonian mechanics. Absolutely no other assumptions need be made.

I tend to agree, there must be some choices along the way; at the very least, when Dick says "an approximation of", that implies the question of which approximation.
In my derivation of Schroedinger's equation (and thus of Newtonian mechanics) I had to make an approximation that [imath]mc^2[/imath] (essentially the energy associated with the momentum in the tau direction) had to be approximately E (the total energy of the system): i.e., the approximation that the energy associated with the standard three dimensional momentum had to be a negligible component of the total energy. In simple words, that means it is a non-relativistic solution, which is entirely consistent with the normal interpretation of Schroedinger's equation, well known to be an invalid representation of a relativistic situation. That in no way implies my fundamental equation is an approximation; my fundamental equation is a derived relationship which is required to be obeyed by any flaw-free explanation (quite a different matter).
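(For anyone who wants to see the size of that kind of approximation written out, here is the standard special-relativistic expansion; this is only an illustration of the same non-relativistic limit in conventional terms, not a step of my deduction:

[math]E=\sqrt{(mc^2)^2+(pc)^2}=mc^2\sqrt{1+\frac{p^2}{m^2c^2}}\approx mc^2+\frac{p^2}{2m}\qquad\text{when }pc\ll mc^2,[/math]

so setting E approximately equal to [imath]mc^2[/imath] amounts to neglecting the kinetic term [imath]p^2/2m[/imath] relative to the rest energy.)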
I haven't been able to follow the details of the math; since Dick revised the shift symmetry, I didn't even quite catch exactly how the changes bear on the rest. Did you get that straight, Anssi? I'm at a loss to see exactly which choices are hidden along the way.
I am at a loss to understand what you are unable to follow. I wish you would make it clear to me. As far as I am concerned, there are no “hidden choices” in my deduction. Every choice is spelled out in detail and seems rather obvious to me. Sorry if it doesn't seem obvious to you. I would try to help if I knew what was bothering you.

 

Have fun -- Dick


I haven't been able to follow the details of the math; since Dick revised the shift symmetry, I didn't even quite catch exactly how the changes bear on the rest. Did you get that straight, Anssi?

 

No, I've been away for a while and I'll basically continue from post #171 soon... :P I suspect that's not far from where you left off, so it would be good if you have time to follow the thread too, and possibly help me out with some math etc..

 

It follows, as the night the day, that the numerical reference tags of those undefined “events” must approximately obey Newtonian mechanics. Absolutely no other assumptions need be made.

 

I think my comment may have been misunderstood a bit (I'm not that lost with the topic :)... hmm, and I think I have also misinterpreted your comment at the end of #203

 

I mean, I don't find it too odd that self-consistency alone can allow us to build a newtonian worldview. While it's an astonishing finding, the reason I don't find it that incredibly odd is simply because we are free to define reality into just such elements as allow us to say they obey newtonian mechanics... :D)

 

But I first interpreted your comment as "self-consistency forces us to a newtonian worldview", which I see is not exactly what you said... (when I made that interpretation, it struck me as a bit odd, since surely it must always be possible to build a plethora of self-consistent worldviews that look nothing like newtonian mechanics... albeit most of them must be incredibly complex :P)

 

I still have a minute here so let me just say...

 

The central issue there is that [imath]e^{-A}e^A=1[/imath]. The complex conjugate of [imath]\vec{\phi}[/imath] is found by changing the sign of the imaginary parts (those parts multiplied by i). Thus it is that the resultant probabilities are not affected by these terms.

 

...Okay, I think I understand that.

 

In Schroedinger's expression of quantum mechanics, momentum of an object represented by a given wave function is defined to be given by some constants times [imath]\frac{\partial}{\partial x}\Psi(x)[/imath] or rather, the expectation of the momentum is given by [imath]\Psi^*(x) \frac{\partial}{\partial x}\Psi(x)[/imath] integrated over all x. In quantum mechanics, this relationship is essentially established by postulated axiom. If that definition is taken as a true expression of the classical idea of momentum, then the many body equation

 

[math]\sum_i \frac{\partial}{\partial x_i}\Psi(x_1,x_2,\cdots,x_n, t) =0[/math]

 

 

is no more than a statement that the sum of the momentum of all the bodies involved is zero (and that would be in the “rest position of the center of mass” of the system).

 

Okay I see... Heh, it's funny since that's how the rest position would be defined itself :)
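(Just to have the textbook form in front of me; this is standard quantum mechanics with the usual momentum operator and [imath]\hbar[/imath] taken as given, not something derived here:

[math]\langle p\rangle=\int \Psi^*(x)\left(-i\hbar\frac{\partial}{\partial x}\right)\Psi(x)\,dx,[/math]

so requiring that the sum of those derivative terms vanish really is just the statement that the total expected momentum is zero, which is what defines the rest frame of the center of mass in the first place.)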

 

Anyway, should I familiarize myself with Schroedinger's expression of quantum mechanics more? I don't understand it very well.

 

I'll continue from here, hopefully soon...

 

-Anssi


I suspect part of the problem is that I don't in fact know how much to try to understand now and how much of it is better left for a later time, and so I may be trying to understand some things that would be better left for later.

 

As for your post, I think that it is in fact quite interesting on several points; it answered some things that I was beginning to wonder about and gave me some new things to think about, although there are not many questions about it that seem worth asking at this point.

 

As for the form of the function f, this seems relatively straightforward and I have no problems with what you have done. I suspect that part of the problem I had with trying to solve for f is that there are some things in it that I don't understand (partly, I think, due to never having seen some of the things that you are using before). At this point I still have some questions about it, but I think that it is probably best just to leave this part where it is until you are ready to begin showing just what the effects of it are.

 

What I am getting at is the fact that I do not have any intentions of teaching physics and/or mathematics here. I will explain the exact logic behind any individual step in my deductions as that is a relatively short process; however, explaining physics and/or mathematics in general is a lifetime process. There are many physicists out there who have excellent comprehension of things like the Schroedinger Equation, Heisenberg uncertainty and how the factor hbar comes to be an important quantity.

 

Trying to have you do either of these here was not my intention; rather, my questions stepped outside of what you are presenting and into theory, because some of the things that you mention seem to be based on ideas from quantum mechanics and I was just wondering where they came from, without realizing that this was in fact outside of what you are presenting.

 

What I am wondering at this point is: would it be reasonable to say that whenever we make an approximation to the fundamental equation, or when we are working with an equation that approximates the fundamental equation, we are in fact working with a theory, and that there is in fact no deductive method of obtaining that equation, so it is only an inductive solution to the problem?

 

But I first interpreted your comment as "self-consistency forces us to a newtonian worldview", which I see is not exactly what you said... (when I made that interpretation, it struck me as a bit odd, since surely it must always be possible to build a plethora of self-consistent worldviews that look nothing like newtonian mechanics... albeit most of them must be incredibly complex :))

 

I suspect that what anssih has in mind is: aren't there a lot of different approximations that you could have made instead of the ones that you did, and won't some of them lead to equations that don't have Newtonian mechanics as a good approximation?


I mean, I don't find it too odd that self-consistency alone can allow us to build a newtonian worldview. While it's an astonishing finding, the reason I don't find it that incredibly odd is simply because we are free to define reality into just such elements as allow us to say they obey newtonian mechanics... :))
There is a lot of truth to that statement. Notice that the first things we identify are things that do not change; things that are the same all the time: “statics” so to speak and we expect them to continue to “stay the same”. The second thing is things that change in a simple way and we expect them to continue to change in that same way. Our model of reality is only one step away from Newtonian mechanics. All we need is a reason to explain why things do not continue in the same way as they did. Our explanation is, “something happened” and we give the name “a force” to that which “happened”.
Anyway, should I familiarize myself with Schroedinger's expression of quantum mechanics more? I don't understand it very well.
Not unless you want to understand physics. As I said to Bombadil, I am not much concerned with either physics or mathematics. Mathematics has a plethora of internally consistent constructs and one could easily spend a lifetime trying to learn them all. Physics is also a very complex field and I really doubt anyone could actually master every aspect of physics. That is why the field is chock full of specialists.

 

When I was young, I wanted to understand the world I found myself in and that is why I went into physics. Physicists seemed to be actually interested in explaining why things were the way they were. Until I got into graduate school, I really thought they understood the fundamental issues of their field. I never had any interest in “doing physics”. That is one reason I essentially dropped out of the field almost the day I got my Ph.D. By that time I had realized that they had taught me all they had to say about the foundations of the field. All that really interested me was understanding why it all worked the way it did and they simply couldn't answer that question. I think I have actually discovered the answer. Anyway, it is certainly not necessary to understand physics in order to understand why it works. Besides that, there are plenty of people out there who can do the required mathematics given the starting point. It is that starting point which concerns me and I suspect it is that starting point which interests you also.

 

That starting point is in fact my fundamental equation. What is most important is that you understand exactly why that equation is valid under all possible circumstances. Showing that the rest of physics follows from that fact is actually quite trivial and one needs little more than a cursory comprehension of integration and algebra to show that fact. My real problem is that no one takes the trouble to look. There are lots of people who could follow the logic if they really cared to. The problem is that their real interest is in showing that I couldn't possibly be correct. Their income depends upon the fact that they “know the truth”. Just as religionists' income depends on people believing they “know the truth” or, for that matter, astrologists' income depends upon the world believing they could have a real handle on the truth. It's all the same the world over. Knowledge is Power and people guard that power with their life.

 

Suggesting that the authorities are wrong is always the “wrong” thing to do if you want recognition. The authorities won't recognize you and their minions will fight you tooth and nail.

 

Have fun -- Dick


Suggesting that the authorities are wrong is always the “wrong” thing to do if you want recognition. The authorities won't recognize you and their minions will fight you tooth and nail.

 

One of the conceptual problems I have with the aspects of theoretical physics, which address the foundations of reality, is the observation that there are many alternatives for reality. We have quantum, wave, strings, etc. If you look at this logically, reality should only be one way and not suffer from a multiple personality disorder. In other words, reality should be one thing and not ten things that all appear valid in their own way. Since all can be supported with math, this multiple personality disorder universe approach is considered valid science.

 

If we stand outside the debate, and look at this logically, if any one of these theories is correct, then that would imply the others are illusions. It would also imply that math can be used to make illusions appear real. Another way to look at it is, each theory is part of the truth of reality, but none of the theories is the whole truth, or else there would be no need for many. This implies math can also be used to support partial truth so it can appear to be the entire truth to some people. It is not easy to fight such an irrational state of affairs, when physics can't even see there is a problem. It is self policing. The math and theories get quite complicated, exempting common sense from adding to the discussion. In my opinion, the divergence problem needs a convention where they iron it out and reduce reality to one. But none of the theories are solid enough to be the core.

 

One way to integrate would be to create the requirement that the divergence needs to interface with the preponderance of the data and not just pet data. The preponderance data is connected to an adjacent area of science that is integrated. This is chemistry. In other words, if we stack all the physics reality data and all the chemistry reality data, the one with the biggest pile should be closer to reality. If we hook up with the little pile there can be problems leading to divergence. Ironically, the big pile of physical-chemical data is not built upon divergence, even though there is far more data. This area of science is integrated. Physical chemistry is one place where both physics and chemistry agree and integrate with the big pile of data. After that the divergence begins to get worse and worse because it is no longer attached to the big pile, but to a little pile that is self serving. If we combine this with math allowing half truth to look like full truth.....

 

The entire effect may have begun due to creating synthetic matter, not found in nature, and using this as the basis for defining reality. If you think of it logically, where in nature are particles accelerated and then collided? It is not BB, since they were not colliding but expanding, so only the pre-collision data is valid for early BB. This data does not show the same level of diversity but looks more like the chemical data with motion.

 

Maybe we can get this material within a collapsing star for a brief fraction of its life, yet this 1% is called 100%. This is detached from the big pile, but can be supported with math, so it appears to be real. While the divergence is due to others sensing this is not quite right. But because each can be supported with math, the multiple personality disorder universe is considered a valid form of science. This topic is about what we can know of reality. I would conclude it is not based on a multiple personality disorder, even if this condition can be supported with math. This condition needs therapy which can be done by requiring it to interface with the big data pile.


...A really significant issue resides exactly on this point: undefined means undefined: we do not know “what they are” or anything about “what they are” ...
But, is it not true that we then know that "they" (whatever they may be) are "undefined"? Thus we do know some"thing" about "they": we know that "they" are undefined. Which of course makes perfect sense, for prior to the definition of a thing there must first be a thing, and then a concept of the thing. Only after the mind forms a concept do we begin the process of definition. There is no such process in the mind of forming a definition directly from a thing without first going through the filter of concept formation.

 

It would seem to me that your fundamental equation represents this fundamental human mental process--the formation of definition(s) from concepts, that is, you have put this process into the language of mathematics.

 

If so, then I can see the value of your equation, for I am not aware that anyone has proposed a fundamental equation for the mental process <form definition(s) from concepts>.

 

But, if I am correct, we then need another equation, one more fundamental than yours, we need the equation to explain the process <form concept from thing (or ontologic element, etc.)>.


... represents this fundamental human mental process...
What you don't seem to comprehend is that a “fundamental human mental process” is, in your head, a defined thing. Thus your use of the phrase implies the presumption of a “valid explanation” of some sort: i.e., you have presumed the existence of this thing you call a “human mental process”.

 

Against this, all I have presumed is “logic/mathematics” which I have defined to be the result of “the invention and study of internally self consistent systems”. As you have commented, we are interested in “explanations”, and just what would an explanation be if it were not “internally self consistent”? That is absolutely the only issue of interest to me.

But, if I am correct, we then need another equation, one more fundamental than yours, we need the equation to explain the process <form concept from thing (or ontologic element, etc.)>.
What seems to be entirely beyond your comprehension, is the fact that I am not offering an explanation of anything! All I am doing is pointing out the fact that, if you allow me to use “numbers” as reference labels to these “undefined things” on which "your" explanation depends and accept the fact that your expectations, which are defined by your explanation, are probabilities expressible in terms of these same “undefined things”, then your explanation can be seen as a mathematical function [imath]\vec{\Psi}[/imath] which must obey my fundamental equation.

 

This is no more than a requirement of “internal self consistency”. As Buffy has commented, it tells us nothing about reality at all: i.e., it explains nothing! It is no more than a convenient way of looking at things, a paradigm, which automatically includes “self consistency” and, otherwise, makes no constraints whatsoever on your explanation. The fact that it requires Newtonian mechanics to be a rough rule of thumb (which the fundamental elements on which your explanation rests must obey) is a deep and profound philosophic truth.

 

Have fun -- Dick


I tried to understand what you said to Bombadil;

 

=== QUOTE Page 18, post #171 ====

The relationship being imposed by shift symmetry in the index t is,

 

[math] \frac{\partial}{\partial t}\vec{\Psi}=0 [/math]

 

The central issue of my generalization to

 

[math] \frac{\partial}{\partial t}\vec{\psi}=-im\vec{\psi} [/math]

 

has to do with algebraic manipulation convenient to solving differential equations. That is to say, if I have a solution to the first equation above, I know that [imath]\vec{\psi}=e^{-imt}\vec{\Psi}[/imath] is a solution to the second. In exactly the same vein, if I happen to have a solution to the second equation, I immediately have a solution to the first: [imath]\vec{\Psi}=e^{imt}\vec{\psi}[/imath].

=======================

 

I am unable to understand that completely. I tried to figure out the properties of exponential functions from Wikipedia but there's a lot of information there and I wasn't able to fish out the relevant bits :(

 

Also I am unsure if this was explained already in some different terms, just for now I feel uneasy because I don't understand what you are saying there...

 

Also, trying to look for more information from the past posts led me to post #167;

 

==== QUOTE post #167 ==========

At this point we can take advantage of that complex phase shifting exponential function I talked about just above here. We know that the general solution [imath]e^{-imt}[/imath] (which itself is consistent with shift symmetry: i.e., it does nothing except generate a phase shift in the “(real, imaginary)” plane) will yield no change in probabilities generated by [imath]\vec{\Psi}[/imath]. Replacing [imath]\frac{\partial}{\partial t}\vec{\Psi}[/imath] with [imath]-im \vec{\Psi}[/imath] (K essentially amounts to nothing more than a scale factor associated with the definition of time.)

======================

 

I have a feeling that that last sentence is cut short...? Maybe it was about to say exactly what I needed to know to understand that little issue correctly? I'm a bit lost. The only thing I understood is that you are laying things down that way for some algebraic convenience, but I suppose I need to scratch my head more to get a proper grasp of this (and I REALLY want to get a proper grasp of this).

 

Anyhow, taking some of these details on faith, where we stand is that I am unable to see a flaw in your fundamental equation, i.e. as far as I can tell it does include all the necessary constraints properly. (Of course with my current mathematical knowledge, the probability of spotting any real error is about 0 :)

 

=== QUOTE post #171 =====

The equation to be solved is

 

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]

 

Anssi, you should take careful note that, if a given [imath]\vec{\Psi}[/imath] is a solution to that equation then so is [imath]A\vec{\Psi}[/imath] for any arbitrary value of A. The constant A can be directly factored from the differential equation. This fact goes directly to the issue of normalization.

=========================

 

Hmmm, okay I see, because normalization is essentially the same as a multiplication with some A... I cannot pick up any further implications of this yet though.
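(A hedged way for me to see why that works: the equation is linear in [imath]\vec{\Psi}[/imath], so if I write its left-hand side as a linear operator L acting on [imath]\vec{\Psi}[/imath] (the symbol L is just my own shorthand here), then

[math]L\left(A\vec{\Psi}\right)=A\,L\vec{\Psi}=A\,K\frac{\partial}{\partial t}\vec{\Psi}=K\frac{\partial}{\partial t}\left(A\vec{\Psi}\right),[/math]

so [imath]A\vec{\Psi}[/imath] satisfies exactly the same equation and the overall scale of [imath]\vec{\Psi}[/imath] is not fixed by the equation itself; that is where normalization has to come in.)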

 

=== QUOTE post #171 =====

Furthermore, please note that the fourth underlying constraint was

 

[math]\sum_i \frac{\partial}{\partial t}\vec{\Psi}=0[/math]

 

(deduced from the shift symmetry in t) which I have already discussed, superficially, in my earlier post.

=========================

 

Is the "sum over i" supposed to be there?

Funny, the "earlier post" links to #167, and probably exactly to the points I am confused about :/ (Sorry I broke the link; I'm editing this in a text editor)

 

=== QUOTE post #171 =====

I might further comment that, as the equation is a linear first order equation, the full general solution is a sum over all possible solutions of the form

 

[math]\sqrt{A}e^{-imt}[/math].

=========================

 

I don't understand what that says. It doesn't help that my knowledge of exponential functions is very shaky. And of imaginary numbers. And how they work as exponents... :)
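(From what I've been able to gather so far, the single property that seems to carry most of the weight is Euler's formula; this is standard complex arithmetic, nothing specific to this treatment:

[math]e^{i\theta}=\cos\theta+i\sin\theta,\qquad \left|e^{i\theta}\right|^2=\cos^2\theta+\sin^2\theta=1,[/math]

so, if I have that much right, a factor like [imath]e^{-imt}[/imath] is a complex number of magnitude one that merely rotates in the (real, imaginary) plane as t changes, and it never changes the magnitude of whatever it multiplies.)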

 

=== QUOTE post #171 =====

I say my discussion was superficial because, it turns out that the actual solution is not the central issue.

=========================

 

...fair to say "phew!!" :D

 

Meanwhile, there are some subtle issues to be discussed here. All of my proofs depended very much on the finite nature of that collection of variables. I hope no one minds my referring to those reference indices as variables as they certainly merge over into variables as the [imath](x,\tau,t)[/imath] space used to reference them becomes continuous and we take the limits on to infinity (that limit being necessary to obtaining the differential representation).

 

I am not completely sure what that means. I'm assuming "they merge over into variables as the x, tau, t space becomes continuous" essentially means the math will be allowed to refer not only to whole numbers but to fractions too, when referring to an "x, tau, t" point? Is that the definition of "variable" here, that it can take fractional values?

 

"We take the limits on to infinity", does that essentially mean we allow the possible numerical labels to be anything at all (i.e. the x,tau,t space is infinitely large).

 

The first difficulty which occurs is the definition of normalization. Normalization was supposedly achieved by summing over all possibilities for the proposed [imath]\vec{\Psi}[/imath] associated with a given t and setting the amplitude (that A in the equation above for example) such that the sum is one. It should be clear to everyone here that, if the number of possibilities goes to infinity, the final value for A must go to exactly zero. The fact that it goes to exactly zero and not just some very small number is the problem introduced by continuity.

 

Actually, normalization is no real difficulty as, if the number of possibilities is infinite, one isn't concerned about a single possibility anyway (that obviously has to be zero). Rather, one is concerned about the expectation of a certain finite array of possibilities as compared to an alternate collection. In such a case, our sums over those collections go into integrals over some continuous range and we are concerned with the ratios between those integrals: i.e. normalization is not required at all in order to view the results as probabilistic in nature and that is the central issue of laying out expectations consistent with an explanation. All that is important here is that [imath]\vec{\Psi}[/imath] can be seen as yielding the appropriate distribution of expectations and normalization isn't really a serious issue at all.

 

Okay, that makes sense but I am not able to understand all the details. I think I can almost understand it, but just to be sure, can you elaborate the operation you refer to as "our sums over those collections go into integrals over some continuous range and we are concerned with the ratios between those integrals."
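(Here is my tentative reading of that operation, written out; the sets [imath]S_1[/imath] and [imath]S_2[/imath] are just labels I am introducing for two finite ranges of possibilities:

[math]\frac{P(S_1)}{P(S_2)}=\frac{\int_{S_1}\left|\vec{\Psi}\right|^2dx}{\int_{S_2}\left|\vec{\Psi}\right|^2dx},[/math]

and any overall constant multiplying [imath]\vec{\Psi}[/imath] cancels in that ratio, which would be why the exact normalization stops mattering. Please correct me if that is not what you mean.)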

 

A lot of head scratching to do still... I will continue from here soon. And also I'll comment the last posts later. Btw, I'll have the whole week off, so I should have a lot of time to really look at this issue now... Will be interesting to see how far one week can take me :)

 

-Anssi

 

Ps. for the love of god, whom do I need to contact to kindly ask to make LaTeX work in the quotes? I lose a lot of time trying to keep things clear by hand, it's not even funny anymore...


Btw, I don't have a very good grasp of Noether's theorem... Should I take the time to figure it out?

 

However, the result is a little more serious when it comes to our third constraint:

[math]F\vec{\Psi}=\sum_{i\neq j}\delta(x_i-x_j)\delta(\tau_i-\tau_j)\vec{\Psi}=0.[/math]

That constraint was introduced as a mechanism which required that no two indices be the same; however, if [imath]\vec{\Psi}[/imath] becomes exactly zero (not just extremely small) that expression ceases to enforce such a constraint. Since [imath]\vec{\Psi}[/imath] vanishes even when the argument represents a valid possibility

 

Hmmm... You refer to [imath]\vec{\Psi}[/imath] vanishing when the argument represents just single possibility?

 

The tau axis (admittedly invalid ontological information) was introduced for the sole purpose of allowing multiple occurrences of valid ontological elements in a possible explanation to be represented by points in our space. You should comprehend that, without tau, identical x indices (which might be required by some specific explanation) would reduce to a representation by one point and the fact of multiple occurrences would be lost in the representation: i.e., the representation consisting of points in the x tau t space fails to correctly represent the known information. Continuity of the tau dimension has clearly defeated that essential purpose of tau. Somehow we must assure that such a circumstance can not occur.

 

At this point I get little bit confused because I don't understand how the continuity of tau loses the information... As I'm not entirely sure what is meant by its continuity exactly :P

 

EDIT: Oh, you were still referring to [imath]F\vec{\Psi}[/imath] failing when the x,tau,t-table is infinitely large...? I think I got it

 

-Anssi


Not unless you want to understand physics. As I said to Bombadil, I am not much concerned with either physics or mathematics. Mathematics has a plethora of internally consistent constructs and one could easily spend a lifetime trying to learn them all. Physics is also a very complex field and I really doubt anyone could actually master every aspect of physics. That is why the field is chock full of specialists....

 

...it is certainly not necessary to understand physics in order to understand why it works. Besides that. there are plenty of people out there who can do the required mathematics given the starting point. It is that starting point which concerns me and I suspect it is that starting point which interests you also.

 

Well yeah, but then knowing some existing models makes it far easier to discuss these issues with people, when you know their terminology and how they think. Like, understanding the concepts that run relativity and quantum mechanics has been helpful in these discussions, for example in providing examples of semantically different but logically equal frameworks.

 

Anyhow, seems like Schroedinger's expression is something that would be valuable to know since it is brought up in your treatment, and it's a well defined concept that a lot of people already understand. But certainly I have no desire to try and understand every nook and cranny of physics.

 

There are lots of people who could follow the logic if they really cared to. The problem is that their real interest is in showing that I couldn't possibly be correct. Their income depends upon the fact that they “know the truth”. Just as religionists' income depends on people believing they “know the truth” or, for that matter, astrologists' income depends upon the world believing they could have a real handle on the truth. It's all the same the world over. Knowledge is Power and people guard that power with their life.

 

Yeah, well, I'm not quite that cynical yet :hihi: I mean, yeah I get dismayed by close-minded people on a regular basis, but I also think that at the end of the day, everybody means well. It's just that most people seem to be quite happy in confusing ontology and mental (or physical) models of reality. Well, evidently it is hard to represent the important epistemological issues in such a way that people really understand what is being said, and don't feel like their personal ontological take on reality was offended... I guess the problem is that most people have an ontological take on reality when they shouldn't have any :doh: How long ago Kant said what needed to be said, and still people don't get it... :/

 

Um... Okay, since we are talking about this, I guess I could make an attempt to reach another well-meaning soul; HydrogenBond.

 

One of the conceptual problems I have with the aspects of theoretical physics, which address the foundations of reality, is the observation that there are many alternatives for reality. We have quantum, wave, strings, etc. If you look at this logically, reality should only be one way and not suffer from a multiple personality disorder.

 

No one was replying to you because you are essentially referring to issues that have been covered in this thread multiple times :hihi:

 

Basically, there is absolutely no need to take any of those models as ontologically correct. When you suppose that somewhere there exists - just waiting to be found - one absolutely ontologically correct model, you have missed what epistemological considerations will tell you about the issue, or why the question "which one is true" is non-sensical to begin with;

 

When you make a physical model of something, or when you comprehend something in your head, you have essentially defined some objects (i.e. tacked some features of reality with identity) whose behaviour you are then tracking. When you build different models, you essentially tack identity on different features, and consequently you will be perceiving different behaviour (since the behaviour is associated with whatever features you tacked as "entities" or "objects").

 

So, notice how your assumptions of "what things exist" and "how they behave" are completely married to each other; you are completely free to classify ANY features of reality as "objects with identity", as long as you associate an appropriate behaviour to those objects.

 

Those objects now exist in your comprehension of reality, including how you understand your sensory data. That is, you can see them, you can feel them, you can taste them... ...but you cannot say they exist ontologically; you cannot say they have ontological identity, because they were defined for predictive purposes by yourself and you could have come up with different definitions just as well! (when I say "by yourself", I don't mean to say it was a conscious process)

 

Doctordick's treatment can actually be seen as a mathematical proof of the fact that no matter how much raw (undefined) information about reality there is to work with, you will ALWAYS be able to build a plethora of self-coherent worldviews. I.e. whatever experiences you ever come across (whatever experiments you could ever conduct), you will never be able to end up with one model being "true" and others "false".

 

What really needs to be understood with clarity at this point is that it is inherent to our comprehension of reality to define it into a bunch of entities with identity, but when there's no one or nothing there to define things, why should you say reality is, in any sense at all, a set of "entities"? What sense is there to suppose any identity on anything without a human worldview; what on earth is "identity" when such a concept has not been defined by a human being?

 

Really think about that, and you should understand a quite satisfactory answer to your multi-personality problem (and why I said the question is non-sensical). The funny thing is that "when you look at it logically", I mean really look hard, the multi-personality problem should always exist, but I guess it's fair to say it only exists when you confuse models of reality with ontological reality (= reality is not multi-personal, but our way of comprehending reality always is; this is an unavoidable consequence of our very method of "understanding"). You really need to make an active attempt to not confuse the map with the territory.

 

For more discussion about this issue you can just look through my posts on this thread; quite a few of them have to do with the same exact issue. Including my first post to this thread.

 

I hope you find that satisfactory

 

-Anssi


Suppose we find a solution to my fundamental equation, [imath]\vec{\Psi}[/imath], which fails to display exchange symmetry: i.e., the function changes when two representative points (x tau arguments) are exchanged. It should be clear to the reader that exactly the same [imath]\vec{\Psi}[/imath] will solve that equation if any two points specified by given arguments ([imath](x,\tau)_q[/imath] and [imath](x,\tau)_p[/imath]) are exchanged (the equation doesn't care which is which as they all appear in a symmetric manner). Since we are dealing with a first order linear differential equation (only first order derivatives appear) we can be assured that any sum of solutions is also a solution. If we construct the collection of all possible exchanges and add them all together the result will still be a solution to the differential equation and will now be symmetric under all possible exchanges (any exchange of arguments is exactly equivalent to simply exchanging the specific terms in that sum which represent that exchange). This solution will be consistent with Bose Einstein statistics.

 

Okay, I'm not sure if I've interpreted this correctly;

Exchange of two points is the same as changing the order of arguments for [imath]\vec{\Psi}[/imath]?

 

[imath]\vec{\Psi}(a,b,c,d)[/imath]

[imath]\vec{\Psi}(a,c,b,d)[/imath]

?

 

If a given [imath]\vec{\Psi}[/imath] fails to display exchange symmetry, what does it mean that it will still "solve the equation". Does it mean that such a [imath]\vec{\Psi}[/imath] just gives different results?

 

I don't understand properly what it means that "any sum of solutions is also a solution".

 

Hmm... I think I can understand though that having a collection of all possible exchanges, and adding them together, must yield exchange symmetric results even if the [imath]\vec{\Psi}[/imath] itself doesn't, since with the addition procedure it'd be the same as just changing the order of terms in ordinary addition... right?

 

The procedure which yields a solution consistent with Fermi statistics is a little more complex to describe. If we begin with a specific solution [imath]\vec{\Psi}[/imath] and exchange two x tau points and then subtract that second function from the first, we still have a solution to the equation. If we then take that function and exchange a different pair and subtract that result from the result of the first step we again will have a solution to the equation. If we continue that process until all possible pairs have been exchanged, we again end up with a function where any exchange will yield back the same function but with a subtle difference: in this second case the function is antisymmetric: i.e., an exchange of any two points will yield a change in sign.

 

I'm not able to understand that completely either...
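(Let me at least try to write out the two-argument case for myself, just as a sketch of what I think the procedure is; the functions [imath]\vec{\Psi}_S[/imath] and [imath]\vec{\Psi}_A[/imath] are my own labels:

[math]\vec{\Psi}_S(a,b)=\vec{\Psi}(a,b)+\vec{\Psi}(b,a),\qquad \vec{\Psi}_A(a,b)=\vec{\Psi}(a,b)-\vec{\Psi}(b,a).[/math]

Swapping a and b leaves [imath]\vec{\Psi}_S[/imath] unchanged and flips the sign of [imath]\vec{\Psi}_A[/imath], and both are still solutions if any sum of solutions is a solution. Setting a = b in the antisymmetric one gives [imath]\vec{\Psi}_A(a,a)=-\vec{\Psi}_A(a,a)[/imath], which forces [imath]\vec{\Psi}_A(a,a)=0[/imath]; is that the essence of it?)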

 

Finally, the exchange properties of [imath]\vec{\Psi}[/imath] need not be entirely symmetric or entirely antisymmetric. The points referred to by the indices may be divided into two sets: one set symmetric under exchange and one set antisymmetric under exchange. The antisymmetric case has a very interesting property. If [imath]\vec{\Psi}[/imath] is antisymmetric under exchange of a specific pair of arguments [imath](x,\tau)_q[/imath] and [imath](x,\tau)_p[/imath], [imath]\vec{\Psi}[/imath] must change sign under that exchange (that is the definition of the antisymmetric case). Suppose the two points are exactly the same (the x, tau indices representing the two different ontological elements are exactly the same)? In such a case, exchanging the arguments is totally immaterial (if x and y are exactly the same number, f(x,y)=f(y,x)) and yet the function must change sign. There exists but one number which equals the negative of itself and that number is zero. Two different elements simply cannot be represented by the same point; the probability is zero even before normalization. Two identical Fermions can not be in exactly the same position at exactly the same time.

 

So, that seems to imply that the concept of "fermion" springs from those original constraints (for self-consistency) and some specific [imath]\vec{\Psi}[/imath]? Or something, I need to get a better grasp of these things.

 

Now this exchange issue was introduced here because my proof required no two valid noumena could be represented by the same point in that x tau space. Essentially the exchange symmetry can be seen as arising from the fact that these noumena are indistinguishable from one another. Since I have defined reality to be a collection of valid noumena, one could certainly interpret this circumstance as equivalent to defining reality to be a collection of indistinguishable “particles” moving in an x tau space (t being the parameter defining that motion); however, I would very much caution the use of the word “particle” (as the idea of a particle carries a lot of inductive baggage which really doesn't belong here; we are speaking of numerical indices with no evidence that two such identifications at different times are related). Furthermore, I am sure that the idea of “indistinguishable” noumena will raise a lot of objection; however, I challenge anyone to come up with a mechanism for distinguishing noumena in the absence of an explanation.

 

For Buffy and her interest in examples: at this point, my fundamental equation can be seen as analogous to a wave equation, a Schroedinger representation of a universe of massless infinitesimal dust motes (those dust motes being a collection of Fermions and Bosons) interacting via no interaction other than a contact interaction (that interaction being an infinite repulsion with a range of zero). The propagation of the probabilities (our expectations) is defined by that wave equation. The alpha operators attached to each index could be seen as spin operators; however, that would take a bit of a stretch in one's imagination as they don't quite follow the entire spin thesis. Nevertheless the fundamental equation can certainly be seen as a Schroedinger type representation of a many body problem constituting the entire universe and, at the same time, it can be seen as a mathematical representation of any conceivable circumstance (any collection of known data at all).

 

It would be nice to have a good enough hold of Schrödinger representation (and your treatment for that matter) to be able to properly understand that similarity between them...

 

At any moment of the past, the shape of that wave represented by [imath]\vec{\Psi}[/imath] can be seen as defined by our knowledge of that moment: i.e., the square of the magnitude of [imath]\vec{\Psi}[/imath] providing us with an estimate of the probability that the distribution of indices corresponds to our knowledge. Shift symmetry together with exchange symmetry yields a definition of how that wave will propagate between “observations”: i.e., additional information concerning unobserved behavior consistent with those symmetries. It should be clear that such a paradigm is entirely consistent with any possibilities as, the past (what we know) defines the shape of the function and the continuous nature of the solution disallows no possibility for any future event; one merely restarts the propagation when any new information is obtained. It makes no predictions not required by the associated symmetries and is thus entirely general, the paradigm itself being perfectly consistent with the known past.

 

So that seems to imply (and hopefully prove once and for all) that the wave function has no ontological reality to itself (or at least there's absolutely no reason to suppose it does), but it is rather an unavoidable consequence of probabilistic tracking of elements whose behaviour (and identity) is unknown to us, apart from how they have behaved in the past (which together with probability theory yields us a way to assign a probability to how they behave in the future).

 

I.e. since we always have a limited amount of knowledge about their past, no matter how we identified these elements, we always have a limited amount of knowledge about their probabilistic future.

 

I may be missing some conditions... And maybe I'm jumping to conclusions. But at any rate it would be nice to be able to show that wave function and its collapse are things that exist exclusively and inherently in our method of understanding reality. I think a lot of people would find that quite satisfactory, as oppose to some specific ontological fantasy that just has defined reality in yet another semantically different way :P

 

-Anssi


Hi Anssi, it is quite nice to hear from you again. Sorry I have been so slow to respond (there are reasons but we need not go into them here). Your comments are intelligent and you make your issues quite clear. I will do my best to explain the important points carefully. And, Qfwfq, if you would be so kind as to provide a little assistance here, it would be very much appreciated. All I am asking for is semi-alternate support on the mathematics issues. I referred to the “important” issues because learning all of either physics or mathematics is a project which cannot be completed in a lifetime; the total breadth of both fields is so extensive that specialization is rampant. Even to teach the general stuff presented to all who major in the subject is well beyond my intention here. However, there are a number of mathematical relationships which are definitely essential to my deductions and these I want to be very clear to you.

 

For the moment, it appears to me that you might be mixing two very different issues. First, there is the derivation of my fundamental equation and, second, there is the issue of showing that the solutions to that equation are, in fact, exactly what is commonly referred to as “the fundamental underpinnings of modern physics”. Neither of these requires any real knowledge of physics as they are essentially no more than straightforward, and rather simple, mathematics. Now some of these mathematical relationships are very important to modern physics but that isn't the real issue here; their importance to my derivation is the only issue I should be presently concerned with. I am afraid that I myself often lose sight of that fact and can easily be led aside into physics or math issues which are really somewhat apart from the actual proof. As I have said a few times, my interest is not in teaching anyone physics but rather to show that all of the fundamental equations of physics are, in fact, approximations to my fundamental equation, and I apologize for getting into side issues.

 

Except for one subtle point, I am going to presume you understand the derivation of my equation because you really have not mentioned any issue there except for the factor [imath]e^{imx}[/imath]. The important point is that the factor has no impact upon the calculations of probability distribution yielded from [imath]\vec{\Psi}[/imath] and its use can be quite convenient to the algebra; in fact, my conversion of the four fundamental constraints originally deduced into my fundamental equation makes use of that fact. The point being that any solution to my fundamental equation must satisfy those constraints and that any [imath]\vec{\Psi}[/imath] which satisfies those constraints also satisfies my equation. As I said, there is a subtle point embedded in that [imath]e^{imx}[/imath] conversion which seems to bother almost everyone.

 

First, it is very important that one comprehend the definition of [imath]\vec{\Psi}[/imath]. The function [imath]\vec{\Psi}[/imath] is a vector in an abstract space: i.e., it is a function whose output (result if you wish) is a collection of independent numbers which can be seen as components in that abstract space. Since the argument of [imath]\vec{\Psi}[/imath] is a set of numbers (the indices which we are using to refer to the ontological elements or noumenons or events; whatever you wish to call them), [imath]\vec{\Psi}[/imath] is nothing more than a conversion from one set of numbers to another. The resultant of this conversion is a set of numbers which when squared and summed become a number which is the probability with which you expect that argument: i.e., that set of ontological elements.
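(In symbols, if the components of that abstract vector are written [imath]\Psi_1,\Psi_2,\cdots[/imath] (the subscript is merely my label for the components), the number I am referring to is

[math]P=\sum_k \Psi_k^*\Psi_k=\sum_k\left|\Psi_k\right|^2,[/math]

which is exactly what is meant above by “squared and summed”.)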

 

The simple fact, that you have expectations, directly implies that such a function exists: i.e., there exists a procedure for getting from that set of numerical labels which define the circumstance under consideration to a number which expresses your expectations. My point being that the very meaning of “an explanation” is that it gives some information about what should be expected. The issue is the completeness of the representation [imath]\vec{\Psi}[/imath]. If a method to get from A to B exists, it can be thought of as a conversion of one set of numbers to another: i.e., there exists no procedure which cannot be seen as such a function (think of a computer program as a conversion from one set of numbers, the input, to a second set of numbers, the output). By my use of that notation, I have put no limitations whatsoever on the method of getting from A to B.

 

With that in mind, let us look at the standard mathematical function [imath]e^{imx}[/imath]. The first thing you need to understand is the idea of “i”, what is generally called “the imaginary number” defined to be the square root of minus one: i.e., that [imath]i^2=-1[/imath] by definition. It is essential that you understand the operational nature of this invention.

 

I have commented many times about the fact that mathematics is the invention and study of internally consistent systems. What you need to understand is that there exist systems which may appear to be quite different which are, in fact, operationally identical to one another. The only important fact in such a case is that the defined operations in the two different systems yield consequences which are totally consistent with one another. In that case, the two different representations are totally equivalent. In order to discuss the issue I want to bring up the word “orthogonal”. If two axes in a graph are orthogonal, changing the value of an x coordinate yields no change in the y coordinate and, likewise, changing the value of a y coordinate yields no change in the x coordinate. The x and y values being represented in such a graph are then what one calls “independent variables”.

 

Complex numbers are numbers which include both “imaginary” components and “real” components. The adjectives “real” and “imaginary” here refer only to numbers proportional to “1” (which are called real numbers) and numbers proportional to “i” (which are called imaginary numbers). A combination is called a “complex” number: A = [imath]A_r +iA_i[/imath] is a “complex number”. Essentially, real and imaginary numbers are “orthogonal” to one another. That is, changing the imaginary part does not change the real part and vice-versa. In essence, a complex number is analogous to a vector in a two dimensional space and this analog is used quite often; the real component is usually associated with the x axis and the imaginary component with the y axis.

 

These two representations are in fact operationally identical to one another. It should be clear to you that vector addition and complex number addition are operationally identical: i.e., the results are totally analogous. Multiplication is a little more complex. You should be aware of the fact that multiplication is uniquely defined for numbers and that multiplication of complex numbers is defined exactly the same way (requiring only the knowledge that i*i=-1); however, multiplication of vectors is not a unique operation. Two common definitions of multiplication arise from geometry: the “dot” or scalar product and the “cross” or vector product. It turns out that it is quite easy to define another “multiplicative operation” on two dimensional vectors which is perfectly analogous to numerical multiplication of complex numbers.

 

Under this definition, two vectors multiplied together yield a new vector. The magnitude (length) of the new vector will be equal to the ordinary product of the magnitudes (lengths) of the two original vectors. The direction of the new vector (which will be specified as an angle with respect to the “real” axis) will be the simple sum of those angles. This maps directly into multiplication of real numbers as, for real numbers, that angle is always either zero degrees (it is a positive real number) or it is one hundred and eighty degrees (it is a negative real number). Multiplication of positive numbers always yields positive numbers (zero plus zero is zero) and multiplication of two negative numbers yields a positive number (one hundred and eighty plus one hundred and eighty is three hundred and sixty degrees; which on the graph is a direction identical to zero degrees). I will leave mixed (one negative and one positive number) multiplication to your thoughts.

 

All that is left is to show that the multiplication of imaginary numbers is also consistent with that definition. In the case of imaginary numbers (which are pointing in the direction of the y axis), the angle is either ninety degrees (for a positive imaginary number) or two hundred and seventy degrees (for a negative imaginary number). Since ninety plus ninety is one hundred and eighty, and two hundred and seventy doubled is five hundred and forty (which is exactly three hundred and sixty plus one hundred and eighty), both positive and negative imaginary numbers yield exactly the same result as standard multiplication. If you know trigonometry it is quite simple to show that the two representations are absolutely identical for any complex number.

 

All that is necessary is to comprehend that the factor [imath]e^{i\theta}\equiv \cos\theta + i\sin\theta[/imath] is a complex number with a magnitude of unity and a direction given by theta (note that theta must be defined in radians and not degrees). It follows that multiplication by that factor is always absolutely identical to a simple rotation of the multiplied vector in the associated two dimensional space used to represent complex numbers. Since we have defined our probabilities to be given by the squared magnitude of an abstract vector, such rotations (or multiplications) yield utterly no change in the resultant probability but merely constitute an abstract rotation of that vector in some plane in that abstract space. Essentially, as I said to Qfwfq earlier, all of his more complex phase rotations can be simply handled by the structure of [imath]\vec{\Psi}[/imath] itself. There is no need to pull them out as a separate factor unless some convenient algebraic service can be provided by that extraction. I use the simple expression [imath]e^{iKt}[/imath] in order to connect the several functions of the shift symmetry differentials into one equation. If it isn't actually required in [imath]\vec{\Psi}[/imath] in order to develop your expectations, it yields no real consequences in P anyway, so it does not present a real constraint on the possibilities. As Qfwfq says, “it generates a mere phase factor in the solution” with no real consequences beyond relating variables which are out of phase with one another. Anyone familiar with physics is well aware of the convenience of including phase shifted effects together with direct effects in a single equation.
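
To make that last point concrete, here is a minimal Python sketch (the vector below is an arbitrary stand-in, not any particular [imath]\vec{\Psi}[/imath]) showing that multiplication by [imath]e^{i\theta}[/imath] leaves the squared magnitude, and hence the probability, untouched.

[code]
import numpy as np

psi = np.array([0.6 + 0.2j, -0.3 + 0.7j])   # an arbitrary abstract complex vector
prob = np.sum(np.abs(psi)**2)               # the squared magnitude we call the probability
theta = 1.234                               # any angle at all
rotated = np.exp(1j * theta) * psi          # multiplication by e^{i*theta}: a pure rotation
print(prob, np.sum(np.abs(rotated)**2))     # same value (up to float rounding): the rotation changes nothing
[/code]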

 

Use of that relationship makes some calculations of voltages and currents in AC circuits quite easy (I don't think we need to go into it). It has to do with the fact that, under sinusoidal driving, the voltages across inductors and capacitors are “out of phase” with the current. When Qfwfq talks about “phase” effects he is thinking in terms of this kind of phenomenon. The whole idea is brought into quantum mechanics in a wholesale manner. In my approach, I see it as nothing except another representation of a “rotational” component in the functional translation from “a specific set of elements being referred to” (i.e., the argument of [imath]\vec{\Psi}[/imath]) to the expectations of the explanation being represented by that [imath]\vec{\Psi}[/imath].

 

That is, it is absolutely nothing except an algebraic operation which simplifies analysis of the general solutions of my fundamental constraints. What is important here is that you can comprehend it as some kind of defined rotation in some plane in that abstract space of the vector, [imath]\vec{\Psi}[/imath].

I have a feeling that that last sentence is cut short...?
All I am doing is pointing out that the consequences of multiplying by [imath]e^{-imt}[/imath] are essentially equivalent to replacing [imath]\frac{\partial}{\partial t}\vec{\Psi}[/imath] (which was zero) with [imath]-im \vec{\Psi}[/imath] (which is not zero). In the eventual physics which is deduced, this is equivalent to redefining the “zero point” of our energy reference. Just as the symmetry of the problem asserts that the answer must be independent of our selection of zero momentum, it must also be independent of our selection of zero energy. That relationship will become more interesting later.
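
A quick symbolic check of that replacement, if it helps (just a sketch, with a constant standing in for any solution whose time derivative is zero):

[code]
import sympy as sp

t, m, C = sp.symbols('t m C', real=True)
Phi = C                               # stands in for a solution with d(Phi)/dt = 0
Psi = sp.exp(-sp.I * m * t) * Phi     # multiply that solution by e^{-imt}
# d(Psi)/dt is no longer zero; it is exactly -i*m*Psi:
print(sp.simplify(sp.diff(Psi, t) + sp.I * m * Psi))   # prints 0
[/code]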

Anyhow, taking some of these details on faith, where we stand is that I am unable to see a flaw in your fundamental equation, i.e. as far as I can tell it does include all the necessary constraints properly. (Of course with my current mathematical knowledge, the probability of spotting any real error is about 0 :)
I am not really expecting you to spot any errors. What I am hoping for is that some of the other people who read the thing will begin to comprehend what I am saying and realize that it is worth the effort to check it out carefully. In order to accomplish that, they have to have some idea of what I am doing and I am hoping that my conversation with you will begin to perk their interest.

 

I would love to have some serious questions from them so that I might have a means of clarifying the issues they find troubling. To date, I have had no confirmation that they understood any of my answers to their earlier problems. What I have just said to you above, I have essentially said to Qfwfq several times without receiving any response. And Buffy has never responded to my post #129 of this thread. Neither has anyone else commented on the argument given there; I would love to hear from Erasmus00 or hallenrm, both of whom seem to have an excellent grasp of physics. Using Anssi's method of quoting (so as to include LaTeX expressions) I will quote the end of that post to Buffy:

 

****Start of QUOTE****

 

So, if you can explain the “noumenons” in your left hand, you can produce a table of expectations in your right hand. Since we have used numerical labels for the “noumenons” in your left hand, the procedure for producing that table in your right hand can be seen as some specific mathematical function defined by [imath]\vec{\psi}[/imath] as a function of the specific collection of “noumenons” about which the probability is desired. Essentially, based on the collection of “noumenons” in our left hand, we can produce a table of expectations given by

[math]\vec{\psi}(x_1,\tau_1,x_2,\tau_2,\cdots, x_n,\tau_n,t)[/math]

 

in our right hand (note that this is the very definition of [imath]\vec{\psi}[/imath]). You should understand that I have put this in this “left hand”, “right hand” representation because I want to remove any reference at all concerning how this is to be accomplished. All I am concerned with here is the fact that given one, you can achieve the other. At that point I make the assertion that

[math]\vec{\psi}(x_1+a,\tau_1,x_2+a,\tau_2,\cdots, x_n+a,\tau_n,t)=\vec{\psi}(x_1,\tau_1,x_2,\tau_2,\cdots, x_n,\tau_n,t),[/math]

 

which is, of course, a statement of shift symmetry itself. You baulk; so let us look at the possibilities. Essentially, your claim is an assertion that the equal sign does not belong there. If I presume your assertion is factual, it implies that, if I know the procedure for getting from the information in my left hand (the numerically labeled noumenons) to the expectations in my right hand (the mathematical function [imath]\vec{\psi}[/imath] of those noumenon labels) and I perform the relabeling (adding a to each and every noumenon label in both hands) then the procedure no longer yields the same probabilistic table.

 

If simple relabeling of the noumenon argument labels destroys the solution procedure, then that solution procedure must depend on how these noumenons are labeled and that means the labeling certainly isn't arbitrary.

 

Essentially, what you are saying is that the problem of explaining the noumenons in your left hand depends on how you label the things you are explaining. If that is the case, where are you to get the information as to the proper labeling procedure? Supposedly, your solution is based upon those undefined noumenons in your left hand and nothing else.

 

Think about it -- Dick

 

****End of QUOTE****
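
As an aside on that quote, one way to see how innocuous that equal sign is: any procedure which uses only the differences between the numerical labels is automatically shift symmetric. A toy Python sketch (the function below is purely hypothetical and is not my [imath]\vec{\psi}[/imath]; it merely illustrates the relabeling):

[code]
def expectations(labels):
    # a made-up procedure that depends only on differences between the labels
    return sum((labels[k + 1] - labels[k]) ** 2 for k in range(len(labels) - 1))

x = [3, 47, 2, 99]       # some arbitrary numerical labels
a = 1000                 # add the same "a" to every label
print(expectations(x), expectations([xi + a for xi in x]))   # identical: the relabeling changes nothing
[/code]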

 

So, after railing on the authorities' unwillingness to confront my assertions, I will return to Anssi's questions.

 

Anssi, all I am asking of you is that you follow my algebra. Don't concern yourself with the whys or wherefores of the process. There is actually a lot of physics behind those whys and wherefores but we really do not have the time to seriously get into that. Only two things are really important: first, that there exists no procedure for generating expectations which cannot be represented by [imath]\vec{\Psi}[/imath] and, second, that the algebra I use to reproduce the equations of modern physics makes no approximations which are not explicitly expressed and analyzed as to their impact on the deduction.

Hmmm, okay I see, because normalization is essentially the same as a multiplication with some A... I cannot pick up any further implications of this yet though.
For the most part, I don't expect you to pick up any implications. My only interest is in your realizing that the results are consistently absolutely general and actually imply practically nothing. When I see an implication I will talk about it. If you see an implication I will talk about it.

Is the "sum over i" supposed to be there?
No, it isn't! That is a typographical error I have made a number of times (I have now corrected that one by editing the post). Things like that really blow the professionals away as they conclude I am an idiot. Well, life is tough all over. By the way, that reference should be to post #171.
I don't understand what that says. It doesn't help that my knowledge of exponential functions is very shaky. And of imaginary numbers. And how they work as exponents... :)
Differential equations usually do not provide specific solutions. If a differential equation can be algebraically transformed into an expression of the form

[math] \left\{\right.[/math] Some complex algebraic expression of operators [math]\left.\right\}\vec{\Psi}=0,[/math]

 

then any solution will yield that zero at the end. The fact that the solution, [imath]\vec{\Psi}[/imath] can be factored out means that any sum of solutions is also a solution (a sum of zeros is zero and the equation will be satisfied). It is this fact which leads to superposition of eigenstates in quantum mechanics. Think of the surface of water. The motion of the water is definable via a differential equation but the solution of that equation does not define the shape of the surface but rather tells you how a specific shape will change with time. I bring up a water surface because the solutions of the differential equation are, approximately, sinusoidal “waves”. Those waves propagate in quite a nice way but the shape of any real surface is a sum of many different such waves. In fact, any function may be expressed as a sum of sinusoidal waves. Please don't worry about these things as life can get complex quickly. The process of finding a set of such sine functions given a specific surface shape is called a “Fourier transform”. If you want to see how quickly things can get out of hand, check out this reference. About half way down the page is a table of Fourier transforms.
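
If you want a concrete check of the statement that any sum of solutions is also a solution, here is a small numerical sketch using the simplest wave-like equation I can think of (this illustrates linearity only; it is not my fundamental equation):

[code]
import numpy as np

# the linear equation {d^2/dx^2 + 1} y = 0, checked on a grid with finite differences
x = np.linspace(0.0, 10.0, 2001)
dx = x[1] - x[0]

def residual(y):
    # discrete version of y'' + y; a solution makes this (approximately) zero everywhere
    return (y[2:] - 2 * y[1:-1] + y[:-2]) / dx**2 + y[1:-1]

y1, y2 = np.sin(x), np.cos(x)                     # two independent solutions
print(np.max(np.abs(residual(y1))))               # essentially zero
print(np.max(np.abs(residual(y2))))               # essentially zero
print(np.max(np.abs(residual(3*y1 - 2*y2))))      # essentially zero: a sum of solutions is a solution
[/code]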

 

Please don't worry about these issues at all. My only goal is to convince you that the basic axiomatic physics equations are, almost to the last one, approximations to my equation.

Is that the definition of "variable" here, that it can be fractions?
“Variable” means the value can change; the term “variables” can refer to integers if the writer so defines them. The issue of fractions (and particularly irrational fractions) occurs when the variables are taken from a “continuous” set. Continuous means that, no matter how small the difference between two numbers gets, there are always numbers between the two. That means that the number of such numbers is infinite, and infinity brings in some issues which must be handled carefully. What you need to remember is that it is our explanation which allows those indices to become continuous. Your knowledge of reality can never include a continuous set of indices as you cannot “know” an infinite amount of information.
"We take the limits on to infinity", does that essentially mean we allow the possible numerical labels to be anything at all (i.e. the x,tau,t space is infinitely large).
Not really, it is not necessary that the x,tau,t space be infinite in extent for the number of possible values to be infinite. On a continuous line from zero to one, there are an infinite number of points.

Okay, that makes sense but I am not able to understand all the details. I think I can almost understand it, but just to be sure, can you elaborate the operation you refer to as "our sums over those collections go into integrals over some continuous range and we are concerned with the ratios between those integrals."
We are getting into mathematics here. Let me say that the curved integral sign was originally a big “S” which stood for “sum”. An integral is fundamentally defined to be a sum over a weighted collection of elements in the limit where the number of elements goes to infinity. Now the sum of an infinite number of elements would of course be infinite except for the fact that, as the number of elements rises, the value of the elements goes down such that the sum over the “unweighted” elements is constant. The function being integrated is the weighting function and that little “dx” you see in the integral is the element being summed. All you really need to know is that sums go over to integrals as the collection of elements being summed goes from a finite set to a continuous (and thus infinite) set.
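
A short numerical sketch of that statement, if it helps (the sine function here is an arbitrary choice of weighting function):

[code]
import numpy as np

def riemann_sum(f, a, b, n):
    # the weighted sum: f is the weighting function, dx is the element being summed
    x = np.linspace(a, b, n, endpoint=False)
    dx = (b - a) / n
    return np.sum(f(x) * dx)

for n in (10, 100, 1000, 100000):
    print(n, riemann_sum(np.sin, 0.0, np.pi, n))   # approaches 2, the exact integral of sin from 0 to pi
[/code]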
Btw, I don't have a very good grasp of Noether's theorem... Should I take the time to figure it out?
No, there is no need of that at all. I only mentioned it because it is a fundamental issue of modern physics.
Hmmm... You refer to [imath]\vec{\Psi}[/imath] vanishing when the argument represents just single possibility?
I am suspicious that you are confusing “a single possibility” with “a single index”. The single possibility I am referring to here is a specific set of indices (the entire set one through n) in the limit where n goes to infinity. In actual fact, that case cannot occur (the valid data on which our explanation is based cannot be infinite); however, in the mathematical representation it can certainly occur. We need this because our known information (the past) is always increasing and infinite means that, no matter how many you have, there are more. This implies the mathematical representation must go through to infinity. Otherwise, it would have to have an ending: i.e., a specific number of elements which could not be increased. It is purely an issue of the mathematical representation. I can not use the representation I am developing without maintaining continuity all the way up to infinity.

At this point I get little bit confused because I don't understand how the continuity of tau loses the information... As I'm not entirely sure what is meant by its continuity exactly :P
The information being lost is the fact that two indexed elements can end up being plotted to the same point in that x,tau,t space (which can only happen as the number of indices goes over to infinity). The data our explanation is based upon must include the possible existence of multiple occurrences of the same x index. The tau index was introduced to allow these multiple occurrences to be represented (a point can only represent one “point”). In order to assure that such multiple occurrences can be represented even in the limit of infinite number of indices, we need another mechanism to maintain these points as separate points.
EDIT: Oh, you were still referring to [imath]F\vec{\Psi}[/imath] failing when the x,tau,t-table is infinitely large...? I think I got it
I suspect you have. If that is true, what I have just said should make sense to you. Let me know if that is indeed the case.

 

I don't think there is any need to comment on your next post; that would be post #216 so I will go on to #217.

Okay, I'm not sure if I've interpreted this correctly;

Exchange of two points is the same as changing the order of arguments for [imath]\vec{\Psi}[/imath]?

[imath]\vec{\Psi}(a,b,c,d)[/imath] goes into [imath]\vec{\Psi}(a,c,b,d)[/imath] is absolutely correct. You have explicitly “exchanged” two of the arguments of [imath]\vec{\Psi}[/imath].
If a given [imath]\vec{\Psi}[/imath] fails to display exchange symmetry, what does it mean that it will still "solve the equation". Does it mean that such a [imath]\vec{\Psi}[/imath] just gives different results?
Yes, that is exactly what it means. We have here a large number of events (ontological elements, noumenons, knowable data -- whatever you want to call them) and there is no necessity that your expectations for the behavior of any given pair will be the same. Using your example, [imath]\vec{\Psi}(a,b,c,d)[/imath] could be [imath]a+b^2+c^3+d^4[/imath]; then [imath]\vec{\Psi}(a,c,b,d)[/imath] would be [imath]a+b^3+c^2+d^4 \equiv a+c^2+b^3+d^4[/imath]. These are certainly different functions but if one is a solution to my fundamental equation, then so is the other as there is nothing in that equation which treats any specific element differently; the fact that two elements may behave differently is a consequence of the context: that is, essentially, how the other elements are behaving. Remember, the definition of [imath]\vec{\Psi}[/imath] is that it reproduces the expectations of your explanation and I have no intentions of putting any constraints on that.

 

At any rate, again with the suggested example above, the function [imath](a+b^2+c^3+d^4)+(a+c^2+b^3+d^4)[/imath] is symmetric under exchange of b and c (i.e., gives exactly the same result) and [imath](a+b^2+c^3+d^4)-(a+c^2+b^3+d^4)[/imath] is “antisymmetric” under exchange of b and c (i.e., the sign of the function changes under exchange but the function is otherwise identical). The important point being the fact that the “antisymmetric” solution exactly vanishes if the exchanged indices are exactly the same (if b=c). It follows that your expectation that they will be “in the same place” in that x,tau,t space is exactly zero: i.e., if tau and t indices of these elements are exactly the same, the x index (which would be your b and c) cannot be exactly the same. This is valid even in the limit of continuity: your expectation for the difference between b and c can be as small as you wish but it can not be zero.
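
The same thing in a few lines of Python, using exactly the example above (nothing here but arithmetic):

[code]
def f(a, b, c, d):
    return a + b**2 + c**3 + d**4

def symmetric(a, b, c, d):
    return f(a, b, c, d) + f(a, c, b, d)

def antisymmetric(a, b, c, d):
    return f(a, b, c, d) - f(a, c, b, d)

print(symmetric(1, 2, 3, 4), symmetric(1, 3, 2, 4))           # identical under exchange of b and c
print(antisymmetric(1, 2, 3, 4), antisymmetric(1, 3, 2, 4))   # same size, opposite sign
print(antisymmetric(1, 5, 5, 4))                              # exactly zero when b = c
[/code]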

I don't understand properly what it means that "any sum of solutions is also a solution".
Hopefully I made that clear in the beginning of this post.

So, that seems to imply that the concept of "fermion" springs from those original constraints (for self-consistency) and some specific [imath]\vec{\Psi}[/imath]? Or something, I need to get a better grasp of these things.
Well, yes and no. In my paradigm, that statement is true but it is certainly not true from the conventional perspective of modern physics. In conventional quantum mechanics, the perspective is a little different.

 

Just as an aside (and don't worry about this at all) conventional quantum mechanics began out of the analysis of solution space in what was called Hamiltonian mechanics (these were mathematical relationships developed in attempts to find general solutions to Newton's equations). The early form was called “wave mechanics” (essentially solutions to Schroedinger's equation). A competitive alternative was introduced by Heisenberg called “matrix mechanics”. Somewhere in the late twenties, Dirac proved that the two were actually mathematically identical structures and introduced a new representation for the results. Schroedinger had used wave functions as the basic representation and Heisenberg had used matrices; Dirac introduced what is called the “bra”, “ket” notation which is now the standard of modern physics.

 

I don't particularly like most “shorthand” notation because I feel it essentially hides all the physics behind the development of whatever is being expressed. I have run into too many physicists who are fluent in some shorthand representation but actually have no understanding at all of what is being represented by that shorthand. Now, if your purpose is to tell others what you are doing or just keeping personal track of your progress, a convenient shorthand representation of the important elements can be very useful: i.e., if you want to “do physics” you would certainly want to use shorthand. But, if you want to understand what you are doing, using shorthand can be dangerous.

 

The only reason I brought that up was to point out the fact that, in all forms of quantum mechanics, the final expectations are always achieved via a final squaring of something (or a vector inner product, which is the same thing). I think it was Wolfgang Pauli who first pointed out that an overall sign would vanish when this squaring took place. For any one body equation (which is what most everyone concerned themselves with) there is no consequence of this fact; however, for any multi-body problem with identical particles, this leads to a very profound difference. First, if the particles are identical, no physical experiment can differentiate between a given case and one where the positions of the particles are exchanged. What happens to the wave function when these particles are exchanged turns out to have very significant consequences. If the function is antisymmetric under exchange, no two particles can be in the same position; if the function is symmetric under exchange, the solution can have any number of particles in the same position.

 

Einstein and Bose worked out the statistics applicable to a collection of particles described by symmetric wave functions and Fermi and Dirac worked out the statistics applicable to a collection of particles described by antisymmetric wave functions. This is the source of the division of all particles into Bosons and Fermions. There are other subtleties I could go into but that is too much like trying to teach you physics. All I am trying to point out is that the classification into Boson and Fermion in conventional modern physics arises in a considerably different manner: it is very dependent upon the assumption that the axioms of quantum mechanics are correct.

It would be nice to have a good enough hold of Schrödinger representation (and your treatment for that matter) to be able to properly understand that similarity between them...
Yeah, that would be nice but it isn't necessary. I only put that paragraph in for the benefit of physicists who happen to be reading this, trying to convey to them that the equations I am coming up with are not so different from conventional physics problems as they seem to think; think “examples” for Buffy.
So that seems to imply (and hopefully prove once and for all) that the wave function has no ontological reality to itself (or at least there's absolutely no reason to suppose it does), but it is rather an unavoidable consequence of probabilistic tracking of elements whose behaviour (and identity) is unknown to us, apart from how they have behaved in the past (which together with probability theory yields us a way to assign a probability to how they behave in the future).

 

I.e. since we always have a limited amount of knowledge about their past, no matter how we identified these elements, we always have a limited amount of knowledge about their probabilistic future.

Exactly correct. I couldn't put it any clearer.
I may be missing some conditions... And maybe I'm jumping to conclusions. But at any rate it would be nice to be able to show that the wave function and its collapse are things that exist exclusively and inherently in our method of understanding reality. I think a lot of people would find that quite satisfactory, as opposed to some specific ontological fantasy that just has defined reality in yet another semantically different way :P
No, I don't think you are jumping to conclusions at all. But you are sort of side stepping the proof itself. A proof that it is true is much more significant than, “Gee that would be nice!” I don't think the proof is hard to follow and, as I said, I will help you through any points you find difficult. Go back and read my source paper “A Universal Analytical Model of Explanation Itself” and see if you find any steps there which are unclear to you. I really would like you to understand the proof.

 

Have fun -- Dick

Link to comment
Share on other sites

Hi Anssi, it is quite nice to hear from you again. Sorry I have been so slow to respond (there are reasons but we need not go into them here).

 

No worries, I've myself been reading post #194 and your exchange with Bombadil from there onwards quite carefully, but needless to say I am quite lost with the mathematics you present there. But yeah, I think I still have things to munch on regarding the derivation of that fundamental constraint.

 

For the moment, it appears to me that you might be mixing two very different issues. First, there is the derivation of my fundamental equation and, second, there is the issue of showing that the solutions to that equation are, in fact, exactly what is commonly referred to as “the fundamental underpinnings of modern physics”.

 

Yup, that much I gathered.

 

Multiplication of positive numbers always yields positive numbers (zero plus zero is zero) and multiplication of two negative numbers yields a positive number (one hundred and eighty plus one hundred and eighty is three hundred and sixty degrees; which on the graph is a direction identical to zero degrees). I will leave mixed (one negative and one positive number) multiplication to your thoughts.

 

Haha, your faith in me shows no boundaries :D Yeah, I think I can just push myself to figure that one out.

Thank you about the whole representation of complex number multiplication, I think I figured it out.

 

All that is necessary is to comprehend that the factor [imath]e^{i\theta}\equiv \cos\theta + i\sin\theta[/imath] is a complex number with a magnitude of unity and a direction given by theta

 

Okay, and that is why the resulting probability is not affected by that term, I think I got it.

 

I am not really expecting you to spot any errors. What I am hoping for is that some of the other people who read the thing will begin to comprehend what I am saying and realize that it is worth the effort to check it out carefully. In order to accomplish that, they have to have some idea of what I am doing and I am hoping that my conversation with you will begin to perk their interest.

 

Yeah, I think I can possibly help with that. I think I have got an idea what Buffy is thinking about when she doesn't think shift symmetry is required by the explanation. I'll see if I can clarify that issue from a new perspective... And while I'm at it I'll try and clarify further what this analysis is about.

 

No, it isn't! That is a typographical error I have made a number of times (I have now corrected that one by editing the post).

 

Heh, so I did spot an error after all, even if it was just a typo :P

 

Differential equations usually do not provide specific solutions. If a differential equation can be algebraically transformed into an expression of the form

 

[math] \left\{\right.[/math] Some complex algebraic expression of operators [math]\left.\right\}\vec{\Psi}=0,[/math]

 

then any solution will yield that zero at the end. The fact that the solution, [imath]\vec{\Psi}[/imath] can be factored out means that any sum of solutions is also a solution (a sum of zeros is zero and the equation will be satisfied).

 

Oh, so that's;

[math]\left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi} + \left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi}=0[/math]

 

But not

[math]\left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi} + \vec{\Psi} = 0[/math]

 

?

 

We are getting into mathematics here. Let me say that the curved integral sign was originally a big “S” which stood for “sum”. An integral is fundamentally defined to be a sum over a weighted collection of elements in the limit where the number of elements goes to infinity. Now the sum of an infinite number of elements would of course be infinite except for the fact that, as the number of elements rises, the value of the elements goes down such that the sum over the “unweighted” elements is constant. The function being integrated is the weighting function and that little “dx” you see in the integral is the element being summed. All you really need to know is that sums go over to integrals as the collection of elements being summed goes from a finite set to a continuous (and thus infinite) set.

 

Okay I think I got that.

 

I am suspicious that you are confusing “a single possibility” with “a single index”. The single possibility I am referring to here is a specific set of indices (the entire set one through n) in the limit where n goes to infinity.

 

Yeah that's what I thought, so I think I understood that bit correctly.

 

I suspect you have. If that is true, what I have just said should make sense to you. Let me know if that is indeed the case.

 

Yup, it makes sense to me.

 

At any rate, again with the suggested example above, the function [imath](a+b^2+c^3+d^4)+(a+c^2+b^3+d^4)[/imath] is symmetric under exchange of b and c (i.e., gives exactly the same result) and [imath](a+b^2+c^3+d^4)-(a+c^2+b^3+d^4)[/imath] is “antisymmetric” under exchange of b and c (i.e., the sign of the function changes under exchange but the function is otherwise identical).

 

So, that was my next question and I could not figure it out from your example; what does it mean exactly that the "sign of the function changes under exchange"? I was thinking that has to do with something changing from positive to negative, but... what? There's something here I don't get :P

 

The important point being the fact that the “antisymmetric” solution exactly vanishes if the exchanged indices are exactly the same (if b=c).

 

That part I do get.

 

Exactly correct. I couldn't put it any clearer.

No, I don't think you are jumping to conclusions at all. But you are sort of side stepping the proof itself. A proof that it is true is much more significant than, “Gee that would be nice!”

 

Yes definitely, I get that. I basically just meant it would be nice to be able to convince people to take a good look at this and see if everything holds water.

 

I guess most people shouldn't find it surprising at this day and age that the wave function and its collapse are things that exist in our worldview exclusively, but it certainly is something to be able to comprehend exactly the logical circumstances that bring forth such a concept. (How's that for a useful end Buffy?)

 

I don't think the proof is hard to follow and, as I said, I will help you through any points you find difficult. Go back and read my source paper “A Universal Analytical Model of Explanation Itself” and see if you find any steps there which are unclear to you. I really would like you to understand the proof.

 

I think I can just barely follow that whole thing through now, at least in so far that I cannot spot an error. The most difficult part for me to follow is obviously the transformation from the four equations to the single one, but I did walk that through once already, just barely...

 

Actually the first part of your presentation where I now go "wait, wha-..?" is when you point out that the fundamental equation can be seen as a wave equation or as the probability waves of "infinitesimal dust motes" (or something akin to that). I mean, I just see a difficult looking equation, and it was a constraint to an unordered set of labels to ontological elements... I cannot see that view in my head that you can, and I don't know how it can be seen that way really... Help?

 

Oh, and about that web page otherwise, I certainly think there are few ambiguous (english) statements that are easy to misunderstand (and consequently misunderstand the meaning of the equations), and paragraphs that at least I find little bit hard to follow. I'm saying it could be clearer (yeah it's always incredibly hard to be simple, clear and unambiguous at the same time... and we are not talking about the simplest of subjects when it comes to ambiguity! :D)

 

Well, math is my weak point currently, but I think I might be able to explain the justification behind the analysis and the logical constraints from a new perspective, see if people find that clearer. Also I can see why people might see this as a risky thing to pursue, as it's easy to make the misconception that some ontology is being implied by the work. I'll see what I can do to help explain the issue clearly.

 

And one more thing, since you mentioned physics and mathematics are chock-full of niche specialists, then certainly mathematically exact analysis of the (common or otherwise) characteristics of epistemological constructs is a valid field to specialize in (which, fair to say, you are pioneering here).

 

I'm just saying this to point out to everyone that even if your work has got a fatal flaw to it somewhere, such analyses can certainly be performed and it can be a valuable thing to do. If your work has got no fatal flaws to it, then this thing right here is clearly a valuable concept for a number of things, and certainly can be developed further, for many useful applications in different fields.

 

So either way, very nice work, even if this comes just from me :shrug:

 

ps, just as a side-question, does this analysis imply something about delayed choice experiments of QM? Or about probability correlations between space-like separated events? (I mean Bell experiments) Obviously there are many specific ontologies that can explain those things in many different ways, just wondering if this makes any implications to any of those explanations?

 

-Anssi

Link to comment
Share on other sites

Heh, so I did spot an error after all, even if it was just a typo :P
You certainly did!
Oh, so that's;

[math]\left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi} + \left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi}=0[/math]

 

But not

[math]\left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi} + \vec{\Psi} = 0[/math]

 

?

Not quite; you have displayed a little confusion here. You ought to denote the fact that you are talking about two different solutions.

[math]\left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi}_1 + \left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi}_2=0[/math]

which is absolutely identical to

[math]\left\{\right.[/math] operators [math]\left.\right\}(\vec{\Psi}_1 + \vec{\Psi}_2) = 0[/math]

 

i.e., a sum of solutions is a solution.

So, that was my next question and I could not figure it out from your example; what does it mean exactly that the "sign of the function changes under exchange"? I was thinking that has to do with something changing from positive to negative, but... what? There's something here I don't get :P
I suspect you are expecting something subtle when the issue is actually quite straightforward. You need to look carefully. The first function I gave you was

[math] (a+b^2+c^3+d^4)+(a+c^2+b^3+d^4) [/math]

 

which is the sum of two terms which are identical except that the second has c where the first has b and b where the first has c. The two arguments b and c have been exchanged. Now take that sum and exchange b with c. The result is explicitly

[math] (a+c^2+b^3+d^4)+(a+b^2+c^3+d^4) [/math]

 

which is absolutely identical to the original function (the terms within the parentheses have simply reversed order). That is the symmetric case. The antisymmetric case is given by

[math] (a+b^2+c^3+d^4)-(a+c^2+b^3+d^4) [/math]

 

Look at that function and see what it looks like if you exchange c and b. If you exchange c and b, that function becomes

[math] (a+c^2+b^3+d^4)-(a+b^2+c^3+d^4) [/math]

 

which is identical to

[math] -\left\{(a+b^2+c^3+d^4)-(a+c^2+b^3+d^4)\right\} [/math]

 

the exact negative of what we started with. Notice that if b=c the result is exactly zero. Any probability which is obtained from a function antisymmetric under exchange of two arguments will be exactly zero if those two arguments are the same. The point here is that, given any explicit solution, it is quite easy to generate either a symmetric or antisymmetric version of that solution. (Please also note that the expressions given are symmetric with respect to exchange of the other variables; creating a function antisymmetric with respect to all arguments is a rather long and complex construct.)

I guess most people shouldn't find it surprising at this day and age that the wave function and its collapse are things that exist in our worldview exclusively, but it certainly is something to be able to comprehend exactly the logical circumstances that bring forth such a concept. (How's that for a useful end Buffy?)
This is exactly correct. In my paradigm, every time you get new information, you need to recalculate your expectations. Though the “wave function” appears to “propagate” from the last collection of information (yielding your expectations for the current present), when you actually get the information which constitutes that present (in the world view you use to analyze your expectations) you need to recalculate your expectations. The “propagation” of those expectations is a totally and completely fictitious phenomenon required by your explanation.
Actually the first part of your presentation where I now go "wait, wha-..?" is when you point out that the fundamental equation can be seen as a wave equation or as the probability waves of "infinitesimal dust motes" (or something akin to that). I mean, I just see a difficult looking equation, and it was a constraint to an unordered set of labels to ontological elements... I cannot see that view in my head that you can, and I don't know how it can be seen that way really... Help?
That is entirely due to your lack of experience with differential equations. Look at my fundamental equation. It is composed of two different kinds of terms. First there is a “momentum” term, that would be the term with the differential with respect to the index [imath]\frac{\partial}{\partial x}[/imath] (defined after I deduce the Schroedinger equation) and, on the other side of the equal sign, is an “energy” term (also defined after I deduce the Schroedinger equation). The remaining term could be called an “interaction” term. It arose entirely from the “invalid ontological elements” I introduced to limit one's expectations to what actually happened. (I proved that such a term was always capable of providing such a constraint. We can go back into that if you wish.)

 

In the absence of the “interaction” term, the fundamental equation is exactly

[math]\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]

 

which is eminently separable. The right hand side has the simple solution [imath]\vec{\Psi}= e^{\frac{t}{K}}[/imath]: i.e., Energy is conserved and is proportional to K (K being defined by your definition of time). It follows that the left hand side is also a constant. This is easily accommodated by solving the equation

[math]\frac{\partial}{\partial x_i}\vec{\Psi} = K_i\vec{\Psi}.[/math]

 

which also has a trivial solution: [imath] \vec{\Psi}=e^{K_i x_i}[/imath]. The only requirement is that the sum over the momentum associated with all indices must be K (again, the only issue to be settled is the definition of the relationship between momentum and energy). Essentially every event (ontological element, noumenon, knowable data – whatever you wish to call it) just changes at a constant rate (which of course could be zero -- it just stays the same). These things are quite analogous to dust motes of zero size which simply do not interact.
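
For what it is worth, here is a tiny symbolic check of the two separated pieces, with a single x index and ordinary constants standing in for the full vector equation (a sketch under those simplifying assumptions only):

[code]
import sympy as sp

x, t, k, K = sp.symbols('x t k K', real=True)
Psi = sp.exp(k * x) * sp.exp(t / K)             # a separable, non-interacting solution
print(sp.simplify(sp.diff(Psi, x) - k * Psi))   # 0: the index just changes at a constant rate
print(sp.simplify(K * sp.diff(Psi, t) - Psi))   # 0: the time side is the same kind of statement
[/code]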

... and we are not talking about the simplest of subjects when it comes to ambiguity! :D
If you haven't already read it, you should read my post to saviormachine concerning the value of ambiguity.

ps, just as a side-question, does this analysis imply something about delayed choice experiments of QM? Or about probability correlations between space-like separated events? (I mean Bell experiments) Obviously there are many specific ontologies that can explain those things in many different ways, just wondering if this makes any implications to any of those explanations?
Essentially, by viewing wave function collapse as no more than an occasion to recalculate your expectations, the Bell experiments are fundamental confirmation of the paradigm. They only introduce problems if you believe those wave functions are ontologically real phenomena and that idea introduces problems not easy to solve.

 

Have fun -- Dick

Link to comment
Share on other sites

You ought to denote the fact that you are talking about two different solutions.

[math]\left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi}_1 + \left\{\right.[/math] operators [math]\left.\right\}\vec{\Psi}_2=0[/math]

which is absolutely identical to

[math]\left\{\right.[/math] operators [math]\left.\right\}(\vec{\Psi}_1 + \vec{\Psi}_2) = 0[/math]

i.e., a sum of solutions is a solution.

 

Okay, got that.

 

Look at that function and see what it looks like if you exchange c and b. If you exchange c and b, that function becomes

[math] (a+c^2+b^3+d^4)-(a+b^2+c^3+d^4) [/math]

which is identical to

[math] -\left\{(a+b^2+c^3+d^4)-(a+c^2+b^3+d^4)\right\} [/math]

the exact negative of what we started with.

 

Okay I think that last LaTex revealed my misconception, I didn't realize the negative of a function means the negative is applied to each term of that function. (With that assumption I understand why the last two are identical)

 

That is entirely due to your lack of experience with differential equations. Look at my fundamental equation. It is composed of two different kinds of terms. First there is a “momentum” term, that would be the term with the differential with respect to the index [imath]\frac{\partial}{\partial x}[/imath] (defined after I deduce the Schroedinger equation) and, on the other side of the equal sign, is an “energy” term (also defined after I deduce the Schroedinger equation). The remaining term could be called an “interaction” term. It arose entirely from the “invalid ontological elements” I introduced to limit one's expectations to what actually happened. (I proved that such a term was always capable of providing such a constraint. We can go back into that if you wish.)

In the absence of the “interaction” term, the fundamental equation is exactly

[math]\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]

which is eminently separable. The right hand side has the simple solution [imath]\vec{\Psi}= e^{\frac{t}{K}}[/imath]: i.e., Energy is conserved and is proportional to K (K being defined by your definition of time). It follows that the left hand side is also a constant. This is easily accommodated by solving the equation

[math]\frac{\partial}{\partial x_i}\vec{\Psi} = K_i\vec{\Psi}.[/math]

which also has a trivial solution: [imath] \vec{\Psi}=e^{K_i x_i}[/imath]. The only requirement is that the sum over the momentum associated with all indices must be K (again, the only issue to be settled is the definition of the relationship between momentum and energy). Essentially every event (ontological element, noumenon, knowable data – whatever you wish to call it) just changes at a constant rate (which of course could be zero -- it just stays the same). These things are quite analogous to dust motes of zero size which simply do not interact.

 

So, I've been really scratching my head on this part, and it's hard to pick it up. Initially a "present" was an unordered set of labels, and to be able to see the change in between "presents" as "momentum" requires one to interpret the situation in some context sensitive way I suppose.

 

I mean I'm not sure how the difference between two sets of "unordered labels" can be seen as containing the information of momentum without interpreting or assuming some sort of identity for some elements (which are the "x,tau"-points or specific patterns?)... I.e. I'm looking at the "momentum" term, but I'm not sure what it means to say that a specific element is "changing at a constant rate" (or what that would look like in the x,tau,t mapping)

 

Or are you making such a shift in perspective in this interpretation that the x,tau,t point is not seen as a label but as a position, and if so, what's the mechanism that says which element in one "present" is which in another "present"?

 

I hope you can pick up what I'm trying to ask here :P

 

Essentially, by viewing wave function collapse as no more than an occasion to recalculate your expectations, the Bell experiments are fundamental confirmation of the paradigm. They only introduce problems if you believe those wave functions are ontologically real phenomena and that idea introduces problems not easy to solve.

 

Yeah I have a pretty okay handle of what sorts of problems it causes and I certainly think it makes perfect sense to see the wave function as purely epistemological, especially since you don't need to worry about what does it mean in reality to "observe".

 

But it seems there's just one irritating complication; what I was referring to was any experiment where Bell inequality is violated.

 

There seems to be a nice simplified interpretation of such an experiment here: Bell's theorem - Wikipedia, the free encyclopedia, and speaking in terms of that web page, if you are measuring much higher correlation between some measurements than what you'd expect under the assumption that the measured properties were already chosen before the measurement, then that complicates the view that wave function collapse is entirely epistemological.

 

Essentially, if your paradigm doesn't seem to make any commentary about "higher than expected correlations", and if it is logically valid, then that pretty much means the explanation for those higher correlations must lie in each specific worldview, but not in those differential constraints. Obviously there already exist many specific explanations and surely people will keep building many more (as you perhaps remember, I built one myself to a point, and that was accidentally almost identical to the transactional explanation :D)

 

Anyhow, if the differential constraints don't imply anything of the issue, it seems to me that a more accurate assessment of the situation would be that the "epistemological wave function" of your paradigm and the common conception of the wave function are not 100% directly exchangeable; something else in the common physical view needs to be changed alongside... ...and we'd probably always be free (for a large part) to choose what.

 

-Anssi

Link to comment
Share on other sites
