Science Forums

Why does this work?


Pokeska


Hi, this is my first post. Hope I'm not breaking any rules. Anyway, I was messing around programming the other day, and I accidentally came up with an algorithm that approximates the cosine of a number with a great deal of precision. Here's the code:

 

double cosine(double n){
    double a = 1;
    double da = 0;
    double p = 1000; // Bigger numbers will give you a better value for cos(n)
    for(double i = 0; i <= n; i += pow(p, -1)){
        da += -a / pow(p, 2);
        a += da;
    }
    return a;
}

 

It's kinda hard to put this into words, and I imagine a lot of people here know a little C++, so for now I won't try to explain what it does. I can try, though, if you want. So, yeah, if anyone could tell me why this works, that would rock.


Pokeska presents a very interesting function, which I confirm appears to converge on the true value of cos(n) as p is increased.

 

I’m mystified as to why it works. The only help I can offer is to note its similarity to another well-known algorithm that also mystifies me, Bresenham’s (1/8 of a) circle algorithm (about which I’ve asked the “why does it work” question of many people and search engines for many years, with no satisfying answer yet):

x := 0
y := R
d := 1 - y
Repeat While x <= y
   Draw (x, y)
   If d < 0
       d := 2*x + 3 + d
   Else
       d := (x - y)*2 + 5 + d
       y := -1 + y
   x := x + 1

The two algorithms are similar in that both increment a coordinate along a circle: in Bresenham's case, decrementing y (or x) as x (or y) is incremented; in Pokeska's, decrementing a as the length of the arc along the circle's circumference (i) is incremented. Both algorithms work only for a section of the circle: in B's case, 1/8th of it; in P's, 1/4th.
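A sketch of the pseudocode above as a self-contained C++ function; the function name and the choice to collect the points in a vector rather than draw them are mine:

```cpp
#include <utility>
#include <vector>

// One octant of Bresenham's circle algorithm, translated from the
// pseudocode above. Returns the integer points (x, y) for a circle
// of radius r, with x running from 0 up to where x = y.
std::vector<std::pair<int, int>> circle_octant(int r) {
    std::vector<std::pair<int, int>> pts;
    int x = 0, y = r, d = 1 - y;
    while (x <= y) {
        pts.push_back({x, y});      // Draw(x, y)
        if (d < 0) {
            d += 2 * x + 3;         // midpoint still inside the circle
        } else {
            d += 2 * (x - y) + 5;   // midpoint outside: also step y down
            y -= 1;
        }
        x += 1;
    }
    return pts;
}
```

Running this with r = 100 reproduces the (x, y, d) table given later in this thread.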

 

P’s cosine algorithm is computationally much less efficient than the usual Taylor-series algorithm:

Input: n; p
A := 0
B := 1
C := 1
I := 1
Loop p times
   A := B/C + A
   B := -n*n * B
   C := (I+1) * I * C
   I := I + 2

which converges on a precise approximation in only about p=10 iterations. With such a well-known algorithm being so much more efficient, I’m not surprised that I’ve not encountered Pokeska’s remarkable algorithm before.
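The same pseudocode as a runnable C++ function, in case anyone wants to try it; the name taylor_cos and the loop structure are my translation:

```cpp
#include <cmath>

// The Taylor-series pseudocode above: sums the first p terms of
// cos(n) = 1 - n^2/2! + n^4/4! - ...
double taylor_cos(double n, int p) {
    double a = 0.0;   // running sum
    double b = 1.0;   // numerator: (-1)^k * n^(2k)
    double c = 1.0;   // denominator: (2k)!
    int i = 1;
    for (int k = 0; k < p; ++k) {
        a += b / c;          // add the next series term
        b *= -n * n;         // advance numerator by two powers of n
        c *= (i + 1) * i;    // advance factorial from (i-1)! to (i+1)!
        i += 2;
    }
    return a;
}
```

As noted above, for arguments of order 1 it converges to full precision in about ten iterations.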

 

PS: Welcome, Pokeska, to Hypography! Excellent first post! :cocktail:


Welcome Pokeska.

 

Your code is a numerical iterative calculation based on a simple differential equation whose solutions are combinations of sin(x) and cos(x), according to boundary conditions. The inverse of p is the iteration step: the smaller it is, the longer the calculation takes, but the more precise the result. Speed could be improved by storing the inverse of p and its square in variables instead of calling the pow function inside the for loop.

 

a=1 and da=0 are your boundary conditions, and they fix the solution as being cosine. Starting with a=0 and da=1 should give you sine, while removing the minus sign from a/pow(p,2) would give you exponential functions.

 

edit: The differential equation can be written as:

 

f''(x) = -f(x)

 

where f'' is the second derivative of f


I tried the things that Qfwfq said, and I found a couple of interesting things. If you start with a=0 and da=1, you end up with p*sin(n). Also, removing the minus sign from a/pow(p,2) gives a thing that looks a lot like e^x, but isn't exactly it. It looks like it might be e^x moved a little bit to the right.

 

And thanks for the other two algorithms, CraigD, I'm gonna have to check those out.


To me your algorithm just seems to be the first p terms of the series expansion of the cosine (following Taylor)...
It isn’t.

 

In Mathematical terms, the Taylor Series for cos(a), where a is in radians, is

Sum(i=0 to infinity)( (-1)^i * a^(2*i) / (2*i)! )

e.g.: cos(1) = 1 - 1/2 + 1/24 - 1/720 + 1/40320 ...

 

Pokeska’s is harder to express in summation form, but looks like this for p=1000:

cos(1) = 1 - 1/1000^2 - (1 - 1/1000^2)/1000^2 - (1 - 1/1000^2 - (1 - 1/1000^2)/1000^2 ...

 

It’s a whole different algorithm, converging much more slowly, and dependent on the choice of p for precision.

but I don't really know the C++ function "pow"...
pow(3,4);

in C is the same as

3^4

in BASIC, that is

3 to the 4th power, or 81.

 

C is a "mid-level" language, falling between assembler and a high-level language like BASIC. It isn't supposed to have intrinsic operations beyond what exists in most CPUs' instruction sets, so exponentiation isn't an operator, but has to be implemented as a library function.


Look closer, Sanctus! That's the first thing I checked for, but inspection immediately showed otherwise.

 

If you start with a=0 and da=1, you end up with p*sin(n).
Uuuhm... :( of course!!!! I hadn't thought of that!!!!!

 

The first derivative is p*da, not da!!! :( edit: therefore you get p*sin, not sin

 

it looks like it might be e^x moved a little bit to the right.
The shift would be due to boundary conditions. In order to have exactly e^x you want f(0)=1 and f'(0)=1, so: a=1 and da=1/p. :(

x := 0
y := R
d := 1 - y
Repeat While x <= y
   Draw (x, y)
   If d < 0
       d := 2*x + 3 + d
   Else
       d := (x - y)*2 + 5 + d
       y := -1 + y
   x := x + 1

I hadn't read your algorithm till now, Craig, but I've given it a look.

 

What mystifies me about it is that either I don't understand the indent notation, or the updated values of d don't get used in subsequent iterations except to switch the If condition, and y isn't updated as long as d<0. :( Can it be right?


What mystifies me about it [Bresenham’s circle algorithm] is that either I don't understand the indent notation, or the updated values of d don't get used in subsequent iterations except to switch the If condition, and y isn't updated as long as d<0. :cocktail: Can it be right?
That’s right. Here’s sample output for a single octant of a circle of radius 100:
   x   y   d
  0 100   -99
  1 100   -96
  2 100   -91
  3 100   -84
  4 100   -75
  5 100   -64
  6 100   -51
  7 100   -36
  8 100   -19
  9 100     0
 10  99  -177
 11  99  -154
 12  99  -129
 13  99  -102
 14  99   -73
 15  99   -42
 16  99    -9
 17  99    26
 18  98  -133
 19  98   -94
 20  98   -53
 21  98   -10
 22  98    35
 23  97  -112
 24  97   -63
 25  97   -12
 26  97    41
 27  96   -96
 28  96   -39
 29  96    20
 30  95  -109
 31  95   -46
 32  95    19
 33  94  -102
 34  94   -33
 35  94    38
 36  93   -75
 37  93     0
 38  92  -107
 39  92   -28
 40  92    53
 41  91   -46
 42  91    39
 43  90   -54
 44  90    35
 45  89   -52
 46  89    41
 47  88   -40
 48  88    57
 49  87   -18
 50  87    83
 51  86    14
 52  85   -51
 53  85    56
 54  84    -3
 55  84   108
 56  83    55
 57  82     6
 58  81   -39
 59  81    80
 60  80    41
 61  79     6
 62  78   -25
 63  78   102
 64  77    77
 65  76    56
 66  75    39
 67  74    26
 68  73    17
 69  72    12
 70  71    11


It [Bresenham’s circle algorithm] seems designed for large values, I guess.
It is designed for integer values – specifically, for pixels on a graphical display. To approximate a real circle well, the radius must be large, but it renders an accurate, though rough, circle even for a small radius, eg:
   X   Y     D
  0   1     0

  0   2    -1
  1   2     2

  0   3    -2
  1   3     1
  2   2     2

  0   4    -3
  1   4     0
  2   3    -1
  3   3     6

  0   5    -4
  1   5    -1
  2   5     4
  3   4     3

Satisfied that it isn't wrong, I'll resume giving it a thought.
Thanks, Qfwfq. I’ve long been frustrated at my lack of understanding of B’s algorithm, and hope you can shed some light on it!

 

Since suggesting it, I’ve come to suspect that Pokeska’s cosine algorithm may not have much in common with B’s; rather, it’s more related to this one for cos(N), with precision of approximately 1/P, which I just made up:

Input: N; P
X := 1
Y := 0
C := 0
Loop until C >= N
   LX := X
   LY := Y
   X := X - (1/P)
   Y := (1 - X^2)^.5
   C := C + ((Y-LY)^2 + (X-LX)^2)^.5
Output: X

Note that all this is doing is totaling (in the variable C) the distance between points on a circle of radius 1 until a specified total (N) is reached. As 1/P approaches 0, C approaches the length of an arc of the unit circle of angle N.
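For the record, here's that arc-length algorithm as a C++ sketch; the function name and the use of std::hypot are mine:

```cpp
#include <cmath>

// The arc-length pseudocode above: walk down the unit circle in steps
// of 1/p in x, summing chord lengths into c until the total reaches n.
// The final x is then approximately cos(n), for 0 < n < pi/2.
double arc_cos(double n, double p) {
    double x = 1.0, y = 0.0, c = 0.0;
    while (c < n) {
        double lx = x, ly = y;           // remember the last point
        x -= 1.0 / p;                    // step along the x axis
        y = std::sqrt(1.0 - x * x);      // stay on the unit circle
        c += std::hypot(y - ly, x - lx); // add this chord's length
    }
    return x;
}
```

As the post says, the precision is roughly 1/P, so a much larger P is needed here than the p of post #1 to get comparable accuracy.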

 

Note that the P in this algorithm is not expected to be the same value as the p in post #1.

 

I suspect that, for the right choices of p and P, P’s algorithm could be made to agree with this one algebraically. The necessary algebra looks hard, though.

 

Pokeska, I hope we’re giving you some useful ideas about your original question.


It is designed for integer values – specifically, for pixels on a graphical display.
I had gathered that much, Dude! :gift: :surprise:

 

However, I had an on-the-fly impression about precision that was only partly dispelled when I tried a bit of calculus later. Unfortunately, once offline, I couldn't exactly remember the two polynomials that increment d, but I got the general idea, and I can see why it's good for 1/8 of the circumference: it requires dx > dy. That's a precision requirement, rather than large R as I had thought: if you let it iterate past x = y, I think the line will continue straight (or almost?) at a pi/4 angle. I do believe, however, that with small R errors are simply hidden by resolution.

 

I'll think more about the polynomials but the idea may be understood along these lines, starting from the equation that describes the circumference:

 

x^2 + y^2 = r^2

 

dy/dx = -x/y

 

As x is incremented, d accumulates until it is time to decrement y by 1, at which point d is pushed back under the threshold without being reset. This gives precision, as d is still accumulating, but a term 2 - 2y is added to the other polynomial. The term -2y seems to give the y dependence of dy/dx, but exactly how it's equivalent I don't yet know; anyway, it's clear that, as y decreases, fewer increments of x are necessary to pass the threshold. A unit of d is a small decrement of y, the ratio of units apparently being of order R. This is why I still tend to think R counts for precision.

 

Since suggesting it, I’ve come to suspect that Pokeska’s cosine algorithm may not have much in common with B’s
Certainly, but I don't agree that it’s more related to your imaginative one for cos(N). How exactly did you make it up? Anyway, it seems more along the lines of the circle one and based on some odd series expansion, but I'm reluctant to read your explanation before I've given it a thought!

Well, that was simpler to figure out than I thought at first glance! :surprise:

Input: N; P
X := 1
Y := 0
C := 0
Loop until C >= N
   LX := X
   LY := Y
   X := X - (1/P)
   Y := (1 - X^2)^.5
   C := C + ((Y-LY)^2 + (X-LX)^2)^.5
Output: X

Yeah, once I wrote the equations out normally and thought for a sec. Clever idea for computing sine or cosine.

 

I suspect that, for the right choices of p and P, P’s algorithm could be made to agree with this one algebraically.
They are different approaches. Yours isn't based on the differential equation, and your iteration step is d cos(x), not dx.
