Doctordick Posted January 29, 2014 Report Share Posted January 29, 2014 (edited)

There is an important scientific issue avoided by every scientist I have ever known. The issue I have in mind was brought up by Sir Arthur Eddington in his book, "New Pathways in Science", published in 1934 by the Cambridge University Press. The critical issue is essentially that one cannot study the recurrence of signs and indications without first identifying what those signs are. Clearly, the identification of these signs and indications is an opening assumption, justified only by the success of the theories deduced from it. As Eddington saw the problem, identifying those signs and indications constitutes the necessary opening assumptions of any scientific analysis and simply cannot be avoided. After considerable thought, he essentially defends placing the problem arising from these assumptions into philosophy, outside the interest of physical scientists. In essence he defends the professional scientist's avoidance of the issue as the only rational approach.

I have discovered a rational attack on that difficulty which should be carefully examined. If we must accept that those opening assumptions are absolutely unavoidable, it should be seen as beneficial to consider what constraints might exist which have no dependence on what those assumptions are. That is, do there exist any consequences of merely limiting the collection of possible explanations to those which are internally self-consistent? Is it possible to perform an analysis which explicitly avoids any dependence on either what is being explained or on what is assumed? This selfsame issue can be put in a slightly different way: our mental image of reality is constructed from data received through mechanisms (our senses) which are themselves part of that image.
I strongly suspect that any scientist in the world would hold it as obvious that one could not possibly model the universe until after some information about that universe were first obtained. The underlying problem is the fact that we cannot possibly model our senses without presuming some model of reality. This is the underlying essence of the old chicken-and-egg paradox and the reason all physical scientists leave it totally out of their analyses. It should be clear that identifying what it is that we sense is indeed the opening assumption and is thus inherently impossible to avoid; however, there is an underlying aspect of that problem which can be examined. Identifying what it is that we sense is the central issue of inventing a language. The entire scientific community presumes the language required to think about the problem is a known thing. That underlying assumption blocks objective thought on the whole subject. The critical issue is that coming up with a representation of the things to be considered (essentially inventing a language) is part of the underlying problem itself and cannot be seen as given information.

There exists an alternate approach to the difficulty. Suppose we have discovered an explanation of reality, including those assumptions and the language necessary to represent that explanation. If it is possible to represent that solution without imposing any constraints whatsoever as to what that solution is, then we can examine the constraints imposed by that representation itself. A solution to the underlying difficulty then lies in providing an accurate and succinct definition of language so that a universal representation can be constructed. The standard definition of the word "language" most often includes something similar to the phrase, “communication of thoughts and feelings through a system of arbitrary signals, such as voice sounds, gestures, or written symbols”.
It is that word “arbitrary” which allows one to step across an otherwise implacable obstacle to objective analysis. Note that the only absolute requirement of any language is that it must be capable of representing the information of interest. What is important is the realization that a numeric labelling system can serve as a representation of any communicable concept and that the nature of the representation has absolutely no dependence whatsoever on either what is being explained or on what is being assumed. My attack is to lay out a specific logical representation capable of representing absolutely any information to be communicated without placing any constraints whatsoever on what it is that is being represented. In essence, I totally lay aside the problem to be solved and instead work only with an absolutely universal representation of all possible solutions. My presentation is essentially an analysis of the question, “can one find any constraints on the collection of possible explanations without making any constraints whatsoever on the assumptions embedded in those explanations?” This question is entirely different from the question of what those assumptions are to be. What I am trying to point out is that the problem I am examining is not the problem of finding solutions but rather the problem of defining a mechanism for representing those solutions after they are found.

To begin with, any language consists of nothing more than labels for the concepts viewed as necessary to express the explanations of interest. Since they are nothing more than labels, no matter what that language might be, once it exists those concepts can clearly be represented by an ordered list of numerical labels [math](x_1,x_2,\cdots,x_n)[/math], where "x" stands for a specific numerical label and the subscript denotes its position in the list.
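The labelling idea can be sketched as a toy example. To be clear, the concept names and the label values below are purely illustrative inventions of this example; nothing in the representation depends on either choice, which is exactly the point.

```python
# Toy illustration of the numeric-labelling idea. The concept names
# ("tree", "red", "falls") and the label values are purely illustrative;
# the representation depends on neither.

# Two equally valid, entirely arbitrary assignments of numerical labels
# to the same finite collection of concepts.
dictionary_1 = {"tree": 1, "red": 2, "falls": 3}
dictionary_2 = {"tree": 101, "red": 102, "falls": 103}

def circumstance(concepts, dictionary):
    """Represent an ordered list of concepts as (x_1, x_2, ..., x_n)."""
    return tuple(dictionary[c] for c in concepts)

# The same circumstance under the two labellings: only the labels differ;
# the referenced concepts, and their order in the list, are identical.
c1 = circumstance(["red", "tree", "falls"], dictionary_1)  # (2, 1, 3)
c2 = circumstance(["red", "tree", "falls"], dictionary_2)  # (102, 101, 103)
```

Note that the two label assignments differ by a uniform shift, a fact that becomes central further on.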
Since any description of any "circumstance" of interest to any explanation must consist of a finite collection of concepts of interest, any assertion whatsoever can be expressed as such a circumstance. It follows that any explanation can be represented by a collection of such circumstances together with an estimate as to the truth of those circumstances: i.e., having an explanation of such a set of circumstances is totally equivalent to knowing the probability that [math](x_1,x_2,\cdots,x_n)[/math] is a valid representation (per that explanation). It follows directly that understanding an explanation may be seen as being able to reproduce a one-to-one mapping into a numerical representation [math]P(x_1,x_2,\cdots,x_n)[/math], where "P" is a numerical representation of the validity of the specific circumstances being explained: i.e., the collection of circumstances represented by the notation [math](x_1,x_2,\cdots,x_n)[/math]. In essence, an "explanation" can be seen as "the process by which those answers are achieved". Thus I propose to let "an understanding" be specifically defined by the specified answers themselves: i.e., if any specific answers defined by [math]P(x_1,x_2,\cdots,x_n)[/math] differ, the explanations (the processes by which [math]P(x_1,x_2,\cdots,x_n)[/math] is determined) must constitute different understandings of what is being represented. What is really significant here is that the actual definitions of those numerical labels denoted by "x" are totally immaterial. All that is actually required is that the appropriate numerical label is consistently used for the same referenced concept. Essentially, the only issue of significance is that the information being represented can be transformed into what is essentially the specific language required to express the explanation of interest: i.e., as I said earlier, coming up with a language is part of the solution and not given information.
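The definition above can be sketched concretely. The circumstances and probabilities below are invented for illustration only; the point is merely that an "explanation" amounts to an assignment of a validity number P to each circumstance, and two "understandings" differ exactly when some assigned P differs.

```python
# Toy sketch of an "explanation" as a mapping from circumstances
# (x_1, ..., x_n) to a numerical validity estimate P. All values here
# are invented purely for illustration.

explanation = {
    (2, 1, 3): 0.9,  # P(x_1, x_2, x_3) for the circumstance (2, 1, 3)
    (3, 1, 2): 0.1,
}

def P(circ, expl):
    """Return the explanation's validity estimate for a circumstance."""
    return expl[circ]

# A second explanation covering the same circumstances but assigning a
# different probability to one of them.
other_explanation = {(2, 1, 3): 0.9, (3, 1, 2): 0.2}

# Per the definition above, these constitute different understandings,
# because some specific answer defined by P differs.
same_understanding = all(
    P(c, explanation) == P(c, other_explanation) for c in explanation
)
```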
At this point, the chosen notation (and meaning) looks identical to the common mathematical notation for a function of many variables. It is interesting to note that, in the English language, one could say that the truth of a statement is "a function" of what that statement says. That provides some evidence that perhaps the two concepts are indeed quite similar; however, presuming they are identical is a patently indefensible assumption and is, at this point, totally unwarranted, as there are some serious problems embedded in such a suggestion. One problem is the finite nature of vocabulary. Clearly the required language cannot consist of an infinite vocabulary. If [math]x_i[/math] were viewed as an argument of a mathematical function, it would be seen as representing a continuous variable. That would imply the need of an infinite vocabulary.

I set "understanding" off as an independent concept in order to specifically include different understandings which yield exactly the same "known" answers. Under this definition, there exists no way of guaranteeing that two understandings are actually identical. It is always possible that there could exist additional questions not being considered, the answers to which might differentiate between these supposedly identical understandings. This is a subtle but important issue. On the other hand, the idea that actually identical understandings could exist (which certainly must be included and accepted as a possibility) raises a very interesting thought experiment. Consider two different intelligent entities who communicate with one another via exactly the same language: i.e., possess exactly the same finite collection of concepts. For the sake of argument, suppose these two happen to have discovered exactly the same "explanation" and that this discovery was based on exactly the same set of known circumstances.
It follows, from the definition of understanding put forth here, that they have arrived at exactly the same understanding of those circumstances. Thus their explanations must yield exactly the same answers to all possible questions as, under the definitions being used here, they have discovered exactly the same collection of probabilities represented by [math]P(x_1,x_2,\cdots,x_n)[/math]. What is important in this thought experiment is that we have made no constraints whatsoever on the actual numerical labels used to represent those indicated circumstances. Neither have we made any assumptions as to the concepts that will be represented, nor as to the universe within which these entities exist. Thus I assert that the following deduction is totally without assumption of any kind but follows directly from the very definition of this universal representation. The critical issue here is that, no matter what explanation is being expressed, one is still free to use absolutely any collection of numerical labels of convenience. Remember, the language itself is no more than an aspect of the understanding being explained: i.e., the meanings of the concepts used in that language must be deduced from the collection of relevant circumstances being explained. If it is true that the language the entities use must be deducible from the collection of circumstances on which their understanding is based, that fact implies a very curious constraint on the internal probability relationships represented by [math]P(x_1,x_2,\cdots,x_n)[/math].
So long as the numerical labels used to label the language elements used by individual #1 correspond exactly, one-to-one, to the same language elements referred to by the numerical labels used by individual #2, it must be that [math]P(x_1+a, x_2+a,\cdots, x_n+a)\equiv P(x_1+ b, x_2+ b,\cdots, x_n+ b )[/math], as all a and b do is relabel all the pertinent elements (think of them as no more than the order of the listing of those concepts in each individual's personal dictionary of concepts, the information from which they deduced the language). This relabelling requires no change whatsoever to the specific elements being labelled. Of issue here is that the actual numerical labels to be used are entirely arbitrary. The only important issue is that the concept being labelled by a specific numerical label is exactly the same in both cases, even though the actual numerical label being used is different. It follows directly that, if we define [math]b=a+\Delta a[/math] (remember, these are mere numerical shifts in all relevant labels), we can assert that the following relationship is absolutely valid for all conceivable explanations: [math]\frac{P(x_1+a+\Delta a,x_2+a+\Delta a,\cdots,x_n+a+\Delta a)-P(x_1+a,x_2+a,\cdots,x_n+a)}{\Delta a}\equiv 0.[/math] Note that the value of [math]\Delta a[/math] is of utterly no consequence here. That is, the equation is valid for absolutely any and all possible non-zero values of [math] \Delta a[/math]. It follows that, if one defines a third collection of numerical labels as [math] z_i=x_i+a[/math], the following differential representation must be true by definition: [math]\frac{d}{da}P(z_1,z_2,\cdots,z_n)\equiv 0. [/math] This expression is valid because, in the definition of a derivative, the division by [math] \Delta a[/math] is expressed via the limit as [math]\Delta a[/math] approaches zero and not the actual value of [math]\Delta a[/math]. Note further that this differential expression must be valid for all explanations.
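The shift-invariance argument can be checked numerically. The function P below is an arbitrary illustrative choice (not anything derived from the presentation): it is built to depend only on relationships between labels (their differences), so a uniform shift of every label cannot change its value, and the finite-difference quotient above comes out zero for any a and any Δa.

```python
# Numerical sketch of the shift-invariance argument. P is an arbitrary
# illustrative "probability-like" function that depends only on the
# differences between adjacent labels, hence is shift-invariant.

def P(labels):
    diffs = [labels[i + 1] - labels[i] for i in range(len(labels) - 1)]
    return 1.0 / (1.0 + sum(d * d for d in diffs))

def shifted(labels, a):
    """Relabel every element by the same additive constant a."""
    return tuple(x + a for x in labels)

x = (1.0, 4.0, 2.0, 7.0)  # illustrative labels
a, da = 3.0, 0.5          # the particular values are of no consequence

# The quotient [P(x + a + Δa) - P(x + a)] / Δa is identically zero.
quotient = (P(shifted(x, a + da)) - P(shifted(x, a))) / da
```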
If one assumes that there exists an understanding represented by [math]P(x_1,x_2,\cdots,x_n)[/math] which fails to satisfy that expression, one has essentially asserted that there exist "identical explanations" which are not included in the representation of all explanations. That would clearly be a totally unjustified assumption. Note further that multiplying every numerical label by some constant is another operation which must maintain the value of [math]P(x_1,x_2,\cdots,x_n)[/math]. This is another important aspect of an explanation: it implies that scale adjustments in these numerical labels are another universal aspect of internal consistency. If it were possible to interpret [math]P(x_1,x_2,\cdots,x_n)[/math] as a mathematical function, there would exist another "mathematical" step which would lead to a relationship of fundamental importance to modern physics. A trivial understanding of partial differentiation, together with the fact that [math] z_i=x_i+a[/math] (which, in the chain rule of differentiation, requires the partial of [math]z_i[/math] with respect to a to be unity), leads to the following: [math]\frac{d}{da}P(z_1,z_2,\cdots,z_n) = \sum_{i=1}^n \frac{\partial}{\partial z_i}P(z_1,z_2,\cdots,z_n)\frac{\partial z_i}{\partial a}[/math] The only conclusion is that [math] \sum_{i=1}^n \frac{\partial}{\partial z_i}P(z_1,z_2,\cdots,z_n)=0[/math] would be an absolutely required constraint. The problem is that [math]P(x_1,x_2,\cdots,x_n)[/math], as currently defined, cannot possibly be interpreted as a mathematical function. The numerical labels were defined to represent concepts essential to the explanation of reality. If the numerical labels are interpreted as continuous variables, the representation cannot possibly represent a knowable language: i.e., it is clearly impossible to know an infinite vocabulary. On the other hand, there is a rather simple and straightforward solution to this specific difficulty.
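The chain-rule consequence can likewise be checked numerically. For the purposes of this illustration only, P is treated as a differentiable function of its labels (exactly the interpretation the paragraph above cautions about), again chosen shift-invariant as a function of differences; the sum of its estimated partial derivatives then vanishes.

```python
# Numerical sketch of the chain-rule consequence: for a shift-invariant,
# differentiable P, the sum over i of dP/dz_i vanishes. P and the labels
# below are illustrative choices, not part of the presentation itself.

def P(z):
    # Shift-invariant: depends only on differences between adjacent labels.
    diffs = [z[i + 1] - z[i] for i in range(len(z) - 1)]
    return sum(d * d for d in diffs)

def partial(f, z, i, h=1e-6):
    """Central-difference estimate of the partial derivative of f w.r.t. z_i."""
    zp = list(z); zp[i] += h
    zm = list(z); zm[i] -= h
    return (f(zp) - f(zm)) / (2.0 * h)

z = [1.0, 4.0, 2.0, 7.0]  # illustrative labels

# Sum over i of the partials of P vanishes, as the chain-rule argument requires.
total = sum(partial(P, z, i) for i in range(len(z)))
```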
If anyone here has any understanding of what I have just expressed, the results can be carried far beyond what I have expressed above and I would be happy to discuss the issues further. Have fun -- Dick Edited February 12, 2014 by Doctordick

alionalizoti Posted September 23, 2014 Report Share Posted September 23, 2014 Foundations of Reality Exactly, but we also need to consider other views! For example, Dirac wasted much of his time trying to find magnetic monopoles in his micro-world fantasies. What Dirac was trying to do with the foundations of magnetic reality can now be easily achieved by any student or pupil, simply by glue-sticking two natural magnetic bars NS - NS.

HydrogenBond Posted September 23, 2014 Report Share Posted September 23, 2014 (edited) A visual analogy of Eddington's concern can be seen with an example. Say we start with a large picture of a landscape that is as big as a wall. This is analogous to reality all integrated together. Science is based on specialization and therefore does not look at this biggest picture. Rather, science looks at a specialized portion of the biggest picture, such as biology. This is analogous to zooming into one area of the picture so we can see all the details. Although the zoom of specialization is very useful for details, it is often detached from the context of the biggest picture. We may zoom into a particular tree in the landscape and see the texture of the bark and the leaves. This tells us much about the tree, but it is not enough to know the full context of the tree, since beyond the zoom the image gets fuzzy; a biologist is not an astronomer. One might ask the zoom specialists: is this tree in the forest, in the park, in someone's back yard, or in an arboretum? The biggest picture has an impact on the theory. From the zoom point of view you can see the details but not always the context of the tree in terms of the biggest picture. Philosophy deals with the bigger picture but does not always interface with science, all due to specialization. One can see this fuzzy boundary in science, for example, where particle physics has little to do with chemistry even though they are side by side in the bigger picture of physical reality. The fuzzy gaps between zoom points need a set of random assumptions. This is not because reality is random, but rather because the zoom approach has gaps within the bigger reality and needs filler. The philosophical extrapolation of Einstein's relative reference into speciality science validated the assumption that any zoom area was as good as any other when it comes to defining the whole, as long as you add random filler.
It sort of works for an empirical model of reality, but reality is about the logic of the biggest picture without the need of randomness. If we go back to our biggest picture of the landscape and begin there, we notice this is a picture of a valley between mountain ranges that appears to be in the western USA, as an example. Knowing that, the tree theory needs to take that into account as its reality context. The biggest picture funnels the useful details that the zoom brings to the table in the same direction, so all the centers converge. Edited September 23, 2014 by HydrogenBond

Doctordick Posted October 24, 2014 Author Report Share Posted October 24, 2014 The philosophical extrapolation of Einstein's relative reference into speciality science validated the assumption that any zoom area was as good as any other when it comes to defining the whole, as long as you add random filler. It sort of works for an empirical model of reality, but reality is about the logic of the biggest picture without the need of randomness. What you are failing to specify is the fact that any such specialization is in essence an approximation and is thus clearly not the correct answer. An example of what I am talking about is the fact that every common calculation of the orbit of the moon totally omits any gravitational interaction with Alpha Centauri. Thus the calculation is "wrong". That the calculation is useful is a totally different issue. If you read my book, you will find a comment on page 46 concerning a very important approximation. Essentially, either the rest of the universe can be ignored or (if it cannot be ignored) the consequences of its existence are known. This is the essence of equation 3.31. Another way to view the same issue is that "objects can exist" in your model. If objects cannot exist, then the problem cannot be solved. What you must always remember is that the existence of objects is always an approximation and thus an error. There are some very deep issues embedded in that realization which seem to be incomprehensible to most everyone on this forum, particularly those who presume to understand modern science. In that respect, modern science is very much becoming a religion and not a science. Have fun -- Dick

Rade Posted March 27, 2015 Report Share Posted March 27, 2015 DD, Well, it is not internally consistent to say that for the set of all possible objects "the existence of objects is always an approximation, thus an error". There must always be allowed the possibility that an object may exist external to an observer. For example, it is a fact that the letter "X" has some possibility to exist as an object on the computer screen of anyone reading this; there must be the possibility that it is not an approximation of anything. This conclusion logically derives from an attempt to answer this question: when we are conscious and looking at a computer screen (such as what occurs at this moment in time), what is it exactly that we are conscious of? What correctly could be said is that knowledge (mental grasp of "X") and understanding (ability to communicate your mental grasp of "X" to other humans) of the "X" on the computer screen is always an approximation. Does anyone claim to absolutely know the "X" object that exists external to them that they view on a computer screen? What does it mean to say that we know "X" as perceived on the screen? Are different forms of knowledge of "X" possible, and if yes, what are they? And what is the difference between saying we know "X" and we believe "X"? These are the critical questions that your presentation passes over as unimportant, a task for future revisions of your book.

PS: FYI, I made an attempt to post your 2013 book 'Foundations of Physical Reality' on Physics Forum as an Amazon link plus the link provided on Google Books. This is a forum read by professional scientists and mathematicians. After 24 hours there were more than 100 looks at the post, but then, I am sad to say, the book was banned. I strongly disagree with the action taken by Physics Forum. You deserve feedback from professional physicists on your thoughts, both pro and con.
Perhaps some of the 100 who did view the book will contact you with comments.

Doctordick Posted March 30, 2015 Author Report Share Posted March 30, 2015 (edited) Well Rade, that was nice of you. I was once an actual member on Physics Forum (back when it was first formed). After a few years (where it appeared to me that I was generating some interest) one of the new "managers" banned me as a troll. I suspect he also removed my posts. I do have a Ph.D. in theoretical physics, awarded January 19, 1971 by Vanderbilt University, and could be thought of as a professional myself; however, I earned my living via an invention I patented about the same time I left Vanderbilt. My earnings were an order of magnitude above the Vanderbilt faculty's and I think it upset them. When I tried to get help publishing in the 1980s, they would not even read my publication. They said no one would ever read my stuff because I hadn't paid my dues (I think they meant I hadn't published for over ten years). The physics journals said it was philosophy and of no interest to them, the philosophy journals said it was mathematics and of no interest to them, and the mathematicians told me there was no new math and it was thus of no interest to them. Thus I forgot about the stuff. Sometime after the turn of the century I ran across the original paper while cleaning the attic. I read it, made a few changes, and printed out about fifty copies bound with a plastic cover. I tried to get some interest and pretty well failed. So, a few years after I retired, I hired a professional printer to print an actual book: the third version (the one on amazon.com). I have managed to sell two. Believe me, I am well aware of the lack of interest by professionals. I suspect they don't want their reason for existence to be questioned. :surprise: Have fun -- Dick Edited April 1, 2015 by Doctordick
