

In numerical analysis, a branch of mathematics, there are several square root algorithms or methods of computing the principal square root of a non-negative real number. For the square roots of a negative or complex number, see below.

Finding √S is the same as solving the equation f(x) = x^2 − S = 0 for a positive x. Therefore, any general numerical root-finding algorithm can be used. Newton's method, for example, reduces in this case to the so-called Babylonian method:

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^2 - S}{2x_n} = \frac{1}{2}\left(x_n + \frac{S}{x_n}\right)

These methods generally yield approximate results, but can be made arbitrarily precise by increasing the number of calculation steps.

Rough estimation




Many square root algorithms require an initial seed value. If the initial seed value is far away from the actual square root, the algorithm will be slowed down. It is therefore useful to have a rough estimate, which may be very inaccurate but easy to calculate. With S expressed in scientific notation as a × 10^{2n}, where 1 ≤ a < 100 and n is an integer, the square root √S = √a × 10^n can be estimated as

\sqrt{S} \approx \begin{cases} 2\cdot 10^n & \text{if } a < 10, \\ 6\cdot 10^n & \text{if } a \geq 10. \end{cases}

The factors two and six are used because they approximate the geometric means of the lowest and highest possible values with the given number of digits: \sqrt{\sqrt{1}\cdot\sqrt{10}} = \sqrt[4]{10} \approx 2 and \sqrt{\sqrt{10}\cdot\sqrt{100}} = \sqrt[4]{1000} \approx 6.

For S = 125348 = 12.5348 × 10^4, the estimate is √S ≈ 6 · 10^2 = 600.

When working in the binary numeral system (as computers do internally), by expressing S as a × 2^{2n} where 0.1_2 ≤ a < 10_2, the square root √S = √a × 2^n can be estimated as √S ≈ 2^n, since the geometric mean of the lowest and highest possible values is \sqrt{\sqrt{0.1_2}\cdot\sqrt{10_2}} = \sqrt[4]{1} = 1.

For S = 125348 = 1 1110 1001 1010 0100_2 = 1.1110 1001 1010 0100_2 × 2^16, the binary approximation gives √S ≈ 2^8 = 1 0000 0000_2 = 256.

These approximations are useful to find better seeds for iterative algorithms, which results in faster convergence.
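The binary version of this rough estimate is easy to express in code. The following C sketch (the function name is illustrative, not from the original text) returns 2^n for a positive 32-bit integer S written as a·2^{2n} with 0.5 ≤ a < 2; it gives 256 for S = 125348, as in the example above.

#include <stdint.h>

/* Rough seed for iterative methods: if S = a * 2^(2n) with 0.5 <= a < 2,
   return 2^n.  Assumes S >= 1. */
uint32_t rough_sqrt_estimate(uint32_t S)
{
    int k = 0;                                 /* position of the highest set bit of S */
    for (uint32_t t = S; t > 1; t >>= 1)
        k++;
    return (uint32_t)1 << ((k + 1) / 2);       /* 2^n with n = ceil(k/2) */
}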

Babylonian method




Perhaps the first algorithm used for approximating √S is known as the Babylonian method, named after the Babylonians, or "Hero's method", named after the first-century Greek mathematician Hero of Alexandria who gave the first explicit description of the method. It can be derived from (but predates by 16 centuries) Newton's method. The basic idea is that if x is an overestimate to the square root of a non-negative real number S then S/x will be an underestimate, or vice versa, and so the average of these two numbers may reasonably be expected to provide a better approximation (though the formal proof of that assertion depends on the inequality of arithmetic and geometric means that shows this average is always an overestimate of the square root, as noted in the article on square roots, thus assuring convergence).

More precisely, if x is our initial guess of √S and e is the error in our estimate such that S = (x + e)^2, then we can expand the binomial and solve for

e = \frac{S - x^2}{2x + e} \approx \frac{S - x^2}{2x}, since e ≪ x.

Therefore, we can compensate for the error and update our old estimate as

x := x + e = \frac{S + x^2}{2x} = \frac{\frac{S}{x} + x}{2}

Since the computed error was not exact, this becomes our next best guess. The process of updating is iterated until desired accuracy is obtained. This is a quadratically convergent algorithm, which means that the number of correct digits of the approximation roughly doubles with each iteration. It proceeds as follows:

  1. Begin with an arbitrary positive starting value x0 (the closer to the actual square root of S, the better).
  2. Let xn + 1 be the average of xn and S/xn (using the arithmetic mean to approximate the geometric mean).
  3. Repeat step 2 until the desired accuracy is achieved.

It can also be represented as:

x_0 \approx \sqrt{S},
x_{n+1} = \frac{1}{2}\left(x_n + \frac{S}{x_n}\right),
\sqrt{S} = \lim_{n\to\infty} x_n.
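A minimal C sketch of this iteration (the function name and the stopping rule are illustrative, not part of the original description):

#include <math.h>

/* Babylonian (Heron's) method: repeatedly average x and S/x.
   x0 is a rough positive estimate, e.g. from the section above. */
double babylonian_sqrt(double S, double x0)
{
    double x = x0;
    for (int n = 0; n < 100; n++) {
        double next = 0.5 * (x + S / x);
        if (fabs(next - x) < 1e-12 * next)   /* desired accuracy reached */
            return next;
        x = next;
    }
    return x;
}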

This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; one can, for example, construct a sequence of rational numbers by this method that converges to +3 in the reals, but to −3 in the 2-adics.

Example

To calculate √S, where S = 125348, to six significant figures, use the rough estimation method above to get

x_0 = 6 · 10^2                                              = 600.000
x_1 = (1/2)(x_0 + S/x_0) = (1/2)(600.000 + 125348/600.000)  = 404.457
x_2 = (1/2)(x_1 + S/x_1) = (1/2)(404.457 + 125348/404.457)  = 357.187
x_3 = (1/2)(x_2 + S/x_2) = (1/2)(357.187 + 125348/357.187)  = 354.059
x_4 = (1/2)(x_3 + S/x_3) = (1/2)(354.059 + 125348/354.059)  = 354.045
x_5 = (1/2)(x_4 + S/x_4) = (1/2)(354.045 + 125348/354.045)  = 354.045

Therefore, √125348 ≈ 354.045.

Convergence

Let the relative error in xn be defined by

\varepsilon_n = \frac{x_n}{\sqrt{S}} - 1

and thus

x_n = \sqrt{S}\cdot(1 + \varepsilon_n).

Then it can be shown that

\varepsilon_{n+1} = \frac{\varepsilon_n^2}{2(1 + \varepsilon_n)}

and thus that

0 \leq \varepsilon_{n+2} \leq \min\left\{\frac{\varepsilon_{n+1}^2}{2}, \frac{\varepsilon_{n+1}}{2}\right\}

and consequently that convergence is assured provided that x0 and S are both positive.

Worst case for convergence

If using the rough estimate above with the Babylonian method, then the least accurate cases in ascending order are as follows:

S = 1;   x_0 = 2;  x_1 = 1.250;   ε_1 = 0.250.
S = 10;  x_0 = 2;  x_1 = 3.500;   ε_1 < 0.107.
S = 10;  x_0 = 6;  x_1 = 3.833;   ε_1 < 0.213.
S = 100; x_0 = 6;  x_1 = 11.333;  ε_1 < 0.134.

Thus in any case,

ε_1 ≤ 2^{-2}.
ε_2 < 2^{-5} < 10^{-1}.
ε_3 < 2^{-11} < 10^{-3}.
ε_4 < 2^{-23} < 10^{-6}.
ε_5 < 2^{-47} < 10^{-14}.
ε_6 < 2^{-95} < 10^{-28}.
ε_7 < 2^{-191} < 10^{-57}.
ε_8 < 2^{-383} < 10^{-115}.

Rounding errors will slow the convergence. It is recommended to keep at least one extra digit beyond the desired accuracy of the x_n being calculated to minimize round-off error.

Digit-by-digit calculation




This is a method to find each digit of the square root in a sequence. It is slower than the Babylonian method, but it has several advantages:

  • It can be easier for manual calculations.
  • Every digit of the root found is known to be correct, i.e., it does not have to be changed later.
  • If the square root has an expansion that terminates, the algorithm terminates after the last digit is found. Thus, it can be used to check whether a given integer is a square number.
  • The algorithm works for any base, and naturally, the way it proceeds depends on the base chosen.

Napier's bones include an aid for the execution of this algorithm. The shifting nth root algorithm is a generalization of this method.

Basic principle

First, let's consider the simplest possible case of finding the square root of a number Z that is the square of a two-digit number XY, where X is the tens digit and Y is the units digit. Specifically:

Z = (10X + Y)^2 = 100X^2 + 20XY + Y^2

Now using the digit-by-digit algorithm, we first determine the value of X. X is the largest digit such that X^2 is less than or equal to Z with its two rightmost digits removed.

In the next iteration, we pair the digits, multiply X by 2, and place the result in the tens place while we try to figure out what the value of Y is.

Since this is a simple case where the answer is a perfect square root XY, the algorithm stops here.

The same idea can be extended to an arbitrary square root computation. Suppose we are able to find the square root of N by expressing it as a sum of n positive numbers such that

N = (a_1 + a_2 + a_3 + \dotsb + a_n)^2.

By repeatedly applying the basic identity

(x + y)^2 = x^2 + 2xy + y^2,

the right-hand-side term can be expanded as

(a_1 + a_2 + a_3 + \dotsb + a_n)^2 = a_1^2 + 2a_1a_2 + a_2^2 + 2(a_1 + a_2)a_3 + a_3^2 + \dotsb + a_{n-1}^2 + 2\left(\sum_{i=1}^{n-1} a_i\right)a_n + a_n^2 = a_1^2 + [2a_1 + a_2]a_2 + [2(a_1 + a_2) + a_3]a_3 + \dotsb + \left[2\left(\sum_{i=1}^{n-1} a_i\right) + a_n\right]a_n.

This expression allows us to find the square root by sequentially guessing the values of the a_i. Suppose that the numbers a_1, …, a_{m-1} have already been guessed; then the m-th term of the right-hand side of the above summation is given by Y_m = [2P_{m-1} + a_m]a_m, where P_{m-1} = \sum_{i=1}^{m-1} a_i is the approximate square root found so far. Now each new guess a_m should satisfy the recursion

X_m = X_{m-1} - Y_m,

such that X_m ≥ 0 for all 1 ≤ m ≤ n, with initialization X_0 = N. When X_n = 0, the exact square root has been found; if not, then the sum of the a_i gives a suitable approximation of the square root, with X_n being the approximation error.

For example, in the decimal number system we have

N = (a_1\cdot 10^{n-1} + a_2\cdot 10^{n-2} + \cdots + a_{n-1}\cdot 10 + a_n)^2,

where the 10^{n-i} are place holders and the coefficients a_i ∈ {0, 1, 2, …, 9}. At any m-th stage of the square root calculation, the approximate root found so far, P_{m-1}, and the summation term Y_m are given by

P_{m-1} = \sum_{i=1}^{m-1} a_i\cdot 10^{n-i} = 10^{n-m+1}\sum_{i=1}^{m-1} a_i\cdot 10^{m-i-1},
Y_m = [2P_{m-1} + a_m\cdot 10^{n-m}]a_m\cdot 10^{n-m} = \left[20\sum_{i=1}^{m-1} a_i\cdot 10^{m-i-1} + a_m\right]a_m\cdot 10^{2(n-m)}.

Here, since the place value of Y_m is an even power of 10, we only need to work with the pair of most significant digits of the remaining term X_{m-1} at any m-th stage. The section below codifies this procedure.

It is obvious that a similar method can be used to compute the square root in number systems other than the decimal number system. For instance, finding the digit-by-digit square root in the binary number system is quite efficient, since the value of a_i is searched from a smaller set of binary digits {0, 1}. This makes the computation faster, since at each stage the value of Y_m is either Y_m = 0 for a_m = 0 or Y_m = 2P_{m-1} + 1 for a_m = 1. The fact that we have only two possible options for a_m also makes the process of deciding the value of a_m at the m-th stage of calculation easier, because we only need to check whether Y_m ≤ X_{m-1} for a_m = 1. If this condition is satisfied, then we take a_m = 1; if not, then a_m = 0. Also, the fact that multiplication by 2 is done by left bit-shifts helps in the computation.

Decimal (base 10)

Write the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into pairs, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the square. One digit of the root will appear above each pair of digits of the square.

Beginning with the left-most pair of digits, do the following procedure for each pair:

  1. Starting on the left, bring down the most significant (leftmost) pair of digits not yet used (if all the digits have been used, write "00") and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 100 and add the two digits. This will be the current value c.
  2. Find p, y and x, as follows:
    • Let p be the part of the root found so far, ignoring any decimal point. (For the first step, p = 0).
    • Determine the greatest digit x such that x(20p + x) ≤ c. We will use a new variable y = x(20p + x).
      • Note: 20p + x is simply twice p, with the digit x appended to the right.
      • Note: You can find x by guessing what c/(20·p) is and doing a trial calculation of y, then adjusting x upward or downward as necessary.
    • Place the digit x as the next digit of the root, i.e., above the two digits of the square you just brought down. Thus the next p will be the old p times 10 plus x.
  3. Subtract y from c to form a new remainder.
  4. If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration.

Examples

Find the square root of 152.2756.

        1  2. 3  4
      \/ 01 52.27 56
         01            1*1 <= 1 < 2*2                 x = 1
         01            y = x*x = 1*1 = 1
         00 52         22*2 <= 52 < 23*3              x = 2
         00 44         y = (20+x)*x = 22*2 = 44
            08 27      243*3 <= 827 < 244*4           x = 3
            07 29      y = (240+x)*x = 243*3 = 729
               98 56   2464*4 <= 9856 < 2465*5        x = 4
               98 56   y = (2460+x)*x = 2464*4 = 9856
               00 00   Algorithm terminates: Answer is 12.34

Find the square root of 2.

        1. 4  1  4  2
      \/ 02.00 00 00 00
         02               1*1 <= 2 < 2*2               x = 1
         01               y = x*x = 1*1 = 1
         01 00            24*4 <= 100 < 25*5           x = 4
         00 96            y = (20+x)*x = 24*4 = 96
            04 00         281*1 <= 400 < 282*2         x = 1
            02 81         y = (280+x)*x = 281*1 = 281
            01 19 00      2824*4 <= 11900 < 2825*5     x = 4
            01 12 96      y = (2820+x)*x = 2824*4 = 11296
               06 04 00   28282*2 <= 60400 < 28283*3   x = 2
                          The desired precision is achieved:
                          The square root of 2 is about 1.4142

Binary numeral system (base 2)

Inherent to digit-by-digit algorithms is a search and test step: find a digit e which, when added to the right of a current solution r, gives (r + e)·(r + e) ≤ x, where x is the value for which a root is desired. Expanding: r·r + 2re + e·e ≤ x. The current value of r·r (or, usually, the remainder) can be incrementally updated efficiently when working in binary, as the value of e will have a single bit set (a power of 2), and the operations needed to compute 2·r·e and e·e can be replaced with faster bit shift operations.

Example

Here we obtain the square root of 81, which when converted into binary gives 1010001. The numbers in the left column give the option between that number or zero to be used for subtraction at that stage of the computation. The final answer is 1001, which in decimal is 9.

               1 0 0 1
              ---------
            √  1010001
    1          1
               1
              ---------
    101         01
                 0
               --------
    1001         100
                   0
                --------
    10001        10001
                 10001
                 -------
                      0

This gives rise to simple computer implementations:
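The listing that belongs here is not reproduced in this copy; the following C sketch is a reconstruction that matches the variable names ("bit", "res", "num") used in the explanation in the next paragraph.

#include <stdint.h>

/* Integer square root by the binary digit-by-digit method described below. */
uint32_t isqrt(uint32_t num)
{
    uint32_t res = 0;
    uint32_t bit = 1u << 30;          /* a power of 4, the highest representable */

    /* First loop: find the highest power of 4 that does not exceed num. */
    while (bit > num)
        bit >>= 2;

    /* Second loop: decide one result bit per iteration. */
    while (bit != 0) {
        if (num >= res + bit) {       /* can we subtract 2*r*e + e^2 ?            */
            num -= res + bit;
            res = (res >> 1) + bit;   /* r := r + e, then shift for the next e    */
        } else {
            res >>= 1;
        }
        bit >>= 2;
    }
    return res;                       /* floor of the square root of the input    */
}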

Using the notation above, the variable "bit" corresponds to e_m^2, which is (2^m)^2 = 4^m; the variable "res" is equal to 2re_m; and the variable "num" is equal to the current X_m, which is the difference of the number we want the square root of and the square of our current approximation with all bits set up to 2^{m+1}. Thus in the first loop, we want to find the highest power of 4 in "bit" to find the highest power of 2 in e. In the second loop, if num is greater than res + bit, then X_m is greater than 2re_m + e_m^2 and we can subtract it. In the next line, we want to add e_m to r, which means we want to add 2e_m^2 to 2re_m, so we want res = res + bit<<1. Then update e_m to e_{m-1} inside res, which involves dividing by 2, or another shift to the right. Combining these two into one line leads to res = res>>1 + bit. If X_m isn't greater than 2re_m + e_m^2, then we just update e_m to e_{m-1} inside res and divide it by 2. Then we update e_m to e_{m-1} in bit by dividing it by 4. The final iteration of the second loop has bit equal to 1 and will cause the update of e to run one extra time, removing the factor of 2 from res, making it our integer approximation of the root.

Faster algorithms, in binary and decimal or any other base, can be realized by using lookup tables, in effect trading more storage space for reduced run time.

Exponential identity




Pocket calculators typically implement good routines to compute the exponential function and the natural logarithm, and then compute the square root of S using the identity obtained from the properties of logarithms (ln x^n = n ln x) and exponentials (e^{ln x} = x):

\sqrt{S} = e^{\frac{1}{2}\ln S}.

The denominator in the fraction corresponds to the nth root. In the case above the denominator is 2, hence the equation specifies that the square root is to be found. The same identity is used when computing square roots with logarithm tables or slide rules.

Bakhshali approximation



This method for finding an approximation to a square root was described in an ancient Indian mathematical manuscript called the Bakhshali manuscript. It is equivalent to two iterations of the Babylonian method beginning with N. The original presentation goes as follows: to calculate √S, let N^2 be the nearest perfect square to S. Then, calculate:

d = S - N^2
P = \frac{d}{2N}
A = N + P
\sqrt{S} \approx A - \frac{P^2}{2A}

This can also be written as:

\sqrt{S} \approx N + \frac{d}{2N} - \frac{d^2}{8N^3 + 4Nd} = \frac{8N^4 + 8N^2 d + d^2}{8N^3 + 4Nd} = \frac{N^4 + 6N^2 S + S^2}{4N^3 + 4NS} = \frac{N^2(N^2 + 6S) + S^2}{4N(N^2 + S)}

Example

Find \sqrt{9.2345}.

N = 3
d = 9.2345 - 3^2 = 0.2345
P = \frac{0.2345}{2\times 3} = 0.0391
A = 3 + 0.0391 = 3.0391
\sqrt{9.2345} \approx 3.0391 - \frac{0.0391^2}{2\times 3.0391} \approx 3.0388
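A small C sketch of the same computation (the function name is illustrative; N is supplied by the caller as a nearby root estimate):

/* Bakhshali approximation: N is an estimate whose square is close to S. */
double bakhshali_sqrt(double S, double N)
{
    double d = S - N * N;
    double P = d / (2.0 * N);
    double A = N + P;
    return A - (P * P) / (2.0 * A);
}

/* bakhshali_sqrt(9.2345, 3.0) returns about 3.0388, as in the example above. */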

Vedic duplex method for extracting a square root



The Vedic duplex method from the book 'Vedic Mathematics' is a variant of the digit-by-digit method for calculating the square root. The duplex is the square of the central digit plus double the cross-product of digits equidistant from the center. The duplex is computed from the quotient digits (square root digits) computed thus far, but after the initial digits. The duplex is subtracted from the dividend digit prior to the second subtraction for the product of the quotient digit times the divisor digit. For perfect squares the duplex and the dividend will get smaller and reach zero after a few steps. For non-perfect squares the decimal value of the square root can be calculated to any precision desired. However, as the decimal places proliferate, the duplex adjustment gets larger and longer to calculate. The duplex method follows the Vedic ideal for an algorithm, one-line, mental calculation. It is flexible in choosing the first digit group and the divisor. Small divisors are to be avoided by starting with a larger initial group.

Basic principle

We proceed as with the digit-by-digit calculation by assuming that we want to express a number N as a square of the sum of n positive numbers as

N = (a_0 + a_1 + \cdots + a_{n-1})^2
  = a_0^2 + 2a_0\sum_{i=1}^{n-1} a_i + a_1^2 + 2a_1\sum_{i=2}^{n-1} a_i + \cdots + a_{n-1}^2.

Define the divisor as q = 2a_0 and the duplex for a sequence of m numbers as

d_m = \begin{cases} a_{\lceil m/2\rceil}^2 + \sum_{i=1}^{\lfloor m/2\rfloor} 2a_i a_{m-i+1} & \text{for } m \text{ odd} \\ \sum_{i=1}^{m/2} 2a_i a_{m-i+1} & \text{for } m \text{ even}. \end{cases}

Thus, we can re-express the above identity in terms of the divisor and the duplexes as

N - a_0^2 = \sum_{i=1}^{n-1}(qa_i + d_i).

Now the computation can proceed by recursively guessing the values of a_m so that

X_m = X_{m-1} - qa_m - d_m,

such that X_m ≥ 0 for all 1 ≤ m ≤ n − 1, with initialization X_0 = N − a_0^2. When X_m = 0, the algorithm terminates and the sum of the a_i gives the square root. The method is similar to long division, where X_{m-1} is the dividend and X_m is the remainder.

For the case of decimal numbers, if

N = (a_0\cdot 10^{n-1} + a_1\cdot 10^{n-2} + \cdots + a_{n-2}\cdot 10 + a_{n-1})^2

where a_i ∈ {0, 1, 2, …, 9}, then the initialization is X_0 = N − a_0^2·10^{2(n-1)} and the divisor will be q = 2a_0·10^{n-1}. Also, the product at any m-th stage will be qa_m·10^{n-m-1} = 2a_0a_m·10^{2n-m-2} and the duplexes will be d_m = \tilde{d}_m·10^{2n-m-3}, where the \tilde{d}_m are the duplexes of the sequence a_1, a_2, …, a_m. At any m-th stage, we see that the place value of the duplex \tilde{d}_m is one less than that of the product 2a_0a_m. Thus, in actual calculations it is customary to subtract the duplex value of the m-th stage at the (m+1)-th stage. Also, unlike the previous digit-by-digit square root calculation, where at any given m-th stage the calculation is done by taking the most significant pair of digits of the remaining term X_{m-1}, the duplex method uses only the single most significant digit of X_{m-1}.

In other words, to calculate the duplex of a number, add twice the product of each pair of digits equidistant from the center, plus the square of the center digit (considering only the digits to the right of the colon).

  Number   =>  Calculation                            =  Duplex
  3        =>  3^2                                    =  9
  14       =>  2(1·4)                                 =  8
  574      =>  2(5·4) + 7^2                           =  89
  1,421    =>  2(1·1) + 2(4·2)          = 2 + 16      =  18
  10,523   =>  2(1·3) + 2(0·2) + 5^2    = 6 + 0 + 25  =  31
  406,739  =>  2(4·9) + 2(0·3) + 2(6·7) = 72 + 0 + 84 =  156
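The duplex is easy to compute mechanically; a small C sketch (the function name is illustrative):

/* Duplex of the digits d[0..m-1]: twice the product of each pair of digits
   equidistant from the ends, plus the square of the middle digit if m is odd. */
int duplex(const int d[], int m)
{
    int s = 0;
    for (int i = 0; i < m / 2; i++)
        s += 2 * d[i] * d[m - 1 - i];
    if (m % 2 == 1)
        s += d[m / 2] * d[m / 2];
    return s;          /* e.g. digits 5,7,4 give 2*(5*4) + 7*7 = 89 */
}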

In a square root calculation the quotient digit set increases incrementally for each step.

Example

Consider the perfect square 2809 = 53^2. Use the duplex method to find the square root of 2,809.

  • Set down the number in groups of two digits.
  • Define a divisor, a dividend and a quotient to find the root.
  • Given 2809. Consider the first group, 28.
    • Find the nearest perfect square below that group.
    • The root of that perfect square is the first digit of our root.
    • Since 28 > 25 and 25 = 5^2, take 5 as the first digit in the square root.
    • For the divisor take double this first digit (2 · 5), which is 10.
  • Next, set up a division framework with a colon.
    • 28: 0 9 is the dividend and 5: is the quotient. (Note: the quotient should always be a single digit number, and it should be such that the dividend in the next stage is non-negative.)
    • Put a colon to the right of 28 and 5 and keep the colons lined up vertically. The duplex is calculated only on quotient digits to the right of the colon.
  • Calculate the remainder. 28: minus 25: is 3:.
    • Append the remainder on the left of the next digit to get the new dividend.
    • Here, append 3 to the next dividend digit 0, which makes the new dividend 30. The divisor 10 goes into 30 just 3 times. (No reserve needed here for subsequent deductions.)
  • Repeat the operation.
    • The zero remainder appended to 9. Nine is the next dividend.
    • This provides a digit to the right of the colon, so deduct the duplex, 3^2 = 9.
    • Subtracting this duplex from the dividend 9, a zero remainder results.
    • Ten into zero is zero. The next root digit is zero. The next duplex is 2(3·0) = 0.
    • The dividend is zero. This is an exact square root, 53.
  Find the square root of 2809.
  Set down the number in groups of two digits.
  The number of groups gives the number of whole digits in the root.
  Put a colon after the first group, 28, to separate it.
  From the first group, 28, obtain the divisor, 10, since
  28 > 25 = 5^2 and by doubling this first root, 2 x 5 = 10.

         Gross dividend:  28:  0  9.   Using mental math:
                Divisor: 10)  3  0     Square: 10)  28: 30  9
      Duplex, Deduction:  25: xx 09    Square root:  5:  3. 0
               Dividend:      30 00
              Remainder:   3: 00 00
  Square Root, Quotient:   5:  3. 0

A two-variable iterative method



This method is applicable for finding the square root of 0 < S < 3 and converges best for S ≈ 1. This, however, is no real limitation for a computer-based calculation, as in base 2 floating-point and fixed-point representations it is trivial to multiply S by an integer power of 4, and therefore √S by the corresponding power of 2, by changing the exponent or by shifting, respectively. Therefore, S can be moved to the range 1/2 ≤ S < 2. Moreover, the following method does not employ general divisions, but only additions, subtractions, multiplications, and divisions by powers of two, which are again trivial to implement. A disadvantage of the method is that numerical errors accumulate, in contrast to single-variable iterative methods such as the Babylonian one.

The initialization step of this method is

a_0 = S
c_0 = S - 1

while the iterative steps read

a_{n+1} = a_n - a_n c_n / 2
c_{n+1} = c_n^2 (c_n - 3) / 4

Then, a_n → √S (while c_n → 0).

Note that the convergence of c_n, and therefore also of a_n, is quadratic.
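A minimal C sketch of the two-variable iteration (the function name and the fixed iteration count are illustrative; the method assumes 0 < S < 3):

/* Two-variable iteration: a_n -> sqrt(S) while c_n -> 0, for 0 < S < 3. */
double twovar_sqrt(double S, int iterations)
{
    double a = S;           /* a_0 = S     */
    double c = S - 1.0;     /* c_0 = S - 1 */
    for (int n = 0; n < iterations; n++) {
        a = a - a * c / 2.0;            /* a_{n+1} = a_n - a_n c_n / 2   */
        c = c * c * (c - 3.0) / 4.0;    /* c_{n+1} = c_n^2 (c_n - 3) / 4 */
    }
    return a;
}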

The proof of the method is rather easy. First, rewrite the iterative definition of c_n as

1 + c_{n+1} = (1 + c_n)(1 - c_n/2)^2.

Then it is straightforward to prove by induction that

S(1 + c_n) = a_n^2

and therefore the convergence of a_n to the desired result √S is ensured by the convergence of c_n to 0, which in turn follows from −1 < c_0 < 2.

This method was developed around 1950 by M. V. Wilkes, D. J. Wheeler and S. Gill for use on EDSAC, one of the first electronic computers. The method was later generalized, allowing the computation of non-square roots.

Iterative methods for reciprocal square roots



The following are iterative methods for finding the reciprocal square root of S, which is 1/√S. Once it has been found, find √S by simple multiplication: √S = S·(1/√S). These iterations involve only multiplication, and not division. They are therefore faster than the Babylonian method. However, they are not stable. If the initial value is not close to the reciprocal square root, the iterations will diverge away from it rather than converge to it. It can therefore be advantageous to perform an iteration of the Babylonian method on a rough estimate before starting to apply these methods.

  • Applying Newton's method to the equation (1/x^2) − S = 0 produces a method that converges quadratically using three multiplications per step (a code sketch follows this list):
x_{n+1} = \frac{x_n}{2}\cdot\left(3 - S\cdot x_n^2\right).
  • Another iteration is obtained by Halley's method, which is Householder's method of order two. This converges cubically, but involves four multiplications per iteration:
y_n = S\cdot x_n^2,
x_{n+1} = \frac{x_n}{8}\cdot\left(15 - y_n\cdot(10 - 3y_n)\right).
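A minimal C sketch of the Newton variant (function name and iteration count are illustrative; as warned above, the seed must already be close to 1/√S):

/* Newton iteration for 1/sqrt(S); x must be a reasonable initial estimate,
   otherwise the iteration can diverge. */
double rsqrt_newton(double S, double x, int iterations)
{
    for (int n = 0; n < iterations; n++)
        x = 0.5 * x * (3.0 - S * x * x);
    return x;               /* multiply by S to recover sqrt(S) */
}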

Goldschmidt's algorithm

Some computers use Goldschmidt's algorithm to simultaneously calculate √S and 1/√S. Goldschmidt's algorithm finds √S faster than Newton-Raphson iteration on a computer with a fused multiply-add instruction and either a pipelined floating-point unit or two independent floating-point units. Two ways of writing Goldschmidt's algorithm are:

b_0 = S
Y_0 ≈ 1/√S (typically using a table lookup)
y_0 = Y_0
x_0 = S·y_0

Each iteration:

b_{n+1} = b_n Y_n^2
Y_{n+1} = (3 - b_{n+1})/2
x_{n+1} = x_n Y_{n+1}
y_{n+1} = y_n Y_{n+1}

until b_i is sufficiently close to 1, or after a fixed number of iterations,

which causes

\sqrt{S} = \lim_{n\to\infty} x_n.
1/\sqrt{S} = \lim_{n\to\infty} y_n.
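A C sketch of this first form (function name illustrative; Y0 stands for the table-lookup seed mentioned above):

/* First form of Goldschmidt's algorithm.  Y0 is a seed for 1/sqrt(S). */
void goldschmidt_sqrt(double S, double Y0, int iterations,
                      double *sqrt_out, double *rsqrt_out)
{
    double Y = Y0;             /* current correction factor Y_n  */
    double b = S;              /* b_0 = S                        */
    double y = Y0;             /* y_0 = Y_0  -> 1/sqrt(S)        */
    double x = S * Y0;         /* x_0 = S*y_0 -> sqrt(S)         */
    for (int n = 0; n < iterations; n++) {
        b = b * Y * Y;         /* b_{n+1} = b_n * Y_n^2          */
        Y = (3.0 - b) / 2.0;   /* Y_{n+1} = (3 - b_{n+1}) / 2    */
        x = x * Y;             /* x_{n+1} = x_n * Y_{n+1}        */
        y = y * Y;             /* y_{n+1} = y_n * Y_{n+1}        */
    }
    *sqrt_out = x;
    *rsqrt_out = y;
}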

Goldschmidt's equation can be rewritten as:

y_0 ≈ 1/√S (typically using a table lookup)
x_0 = S·y_0
h_0 = y_0/2

Each iteration: (All three operations in this loop are in the form of a fused multiply-add.)

r_n = (1/2) - x_n h_n
x_{n+1} = x_n + x_n r_n
h_{n+1} = h_n + h_n r_n

until r_i is sufficiently close to 0, or after a fixed number of iterations,

which causes

\sqrt{S} = \lim_{n\to\infty} x_n.
1/\sqrt{S} = \lim_{n\to\infty} 2h_n.

Taylor series



If N is an approximation to √S, a better approximation can be found by using the Taylor series of the square root function:

\sqrt{N^2 + d} = \sum_{n=0}^{\infty}\frac{(-1)^n (2n)!\, d^n}{(1-2n)(n!)^2 4^n N^{2n-1}} = N + \frac{d}{2N} - \frac{d^2}{8N^3} + \frac{d^3}{16N^5} - \frac{5d^4}{128N^7} + \cdots

As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly. Therefore, this is not a particularly efficient way of calculation. To maximize the rate of convergence, choose N so that |d|/N^2 is as small as possible.

Other methods



A completely different method for computing the square root is based on the CORDIC algorithm, which uses only very simple operations (addition, subtraction, bitshift and table lookup, but no multiplication). The square root of S may be obtained as the output x_n using the hyperbolic coordinate system in vectoring mode, with the following initialization:

x_0 = S + 1
y_0 = S - 1
ω_0 = 0

Continued fraction expansion



Quadratic irrationals (numbers of the form (a + √b)/c, where a, b and c are integers), and in particular square roots of integers, have periodic continued fractions. Sometimes what is desired is finding not the numerical value of a square root, but rather its continued fraction expansion, and hence its rational approximation. Let S be the positive number for which we are required to find the square root. Then, assuming a to be a number that serves as an initial guess and r to be the remainder term, we can write S = a^2 + r. Since we have S − a^2 = (√S + a)(√S − a), we can express the square root of S as

\sqrt{S} = a + \frac{r}{a + \sqrt{S}}.

By applying this expression for √S to the denominator term of the fraction, we have

\sqrt{S} = a + \frac{r}{a + \left(a + \frac{r}{a + \sqrt{S}}\right)} = a + \frac{r}{2a + \frac{r}{a + \sqrt{S}}}.

Proceeding this way, we get a generalized continued fraction for the square root as

\sqrt{S} = a + \cfrac{r}{2a + \cfrac{r}{2a + \cfrac{r}{2a + \cdots}}}

For any S a possible choice for a and r is a = 1 and r = S - 1, yielding

\sqrt{S} = 1 + \cfrac{S-1}{2 + \cfrac{S-1}{2 + \cfrac{S-1}{2 + \cdots}}}

For example, for the square root of 2, we can take a = 1 and r = 1, giving us

\sqrt{2} = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cdots}}}

Taking the first three denominators gives the rational approximation of √2 as [1; 2, 2, 2] = 17/12 = 1.41667, correct to three significant figures. Taking the first five denominators gives the rational approximation to √2 as [1; 2, 2, 2, 2, 2] = 99/70 = 1.4142857, correct to five significant figures. Taking more denominators gives better approximations.

As another example, for the square root of 3, we can select a = 2 and r = -1, giving us

\sqrt{3} = 2 - \cfrac{1}{4 - \cfrac{1}{4 - \cfrac{1}{4 - \cdots}}}

The first three denominators give √3 as 1.73214, correct to four significant figures. Note that it is not necessary to choose an integer-valued a. For instance, we can take a = √2 and r = 1, such that

\sqrt{3} = \sqrt{2} + \cfrac{1}{2\sqrt{2} + \cfrac{1}{2\sqrt{2} + \cfrac{1}{2\sqrt{2} + \cdots}}}

We can do the same for the whole numbers as well. For instance,

2 = \sqrt{4} = 1 + \cfrac{3}{2 + \cfrac{3}{2 + \cfrac{3}{2 + \cdots}}}

Algorithm

The following iterative algorithm can be used to obtain the continued fraction expansion in canonical form (S is any natural number that is not a perfect square):

m_0 = 0
d_0 = 1
a_0 = \lfloor\sqrt{S}\rfloor
m_{n+1} = d_n a_n - m_n
d_{n+1} = \frac{S - m_{n+1}^2}{d_n}
a_{n+1} = \left\lfloor\frac{\sqrt{S} + m_{n+1}}{d_{n+1}}\right\rfloor = \left\lfloor\frac{a_0 + m_{n+1}}{d_{n+1}}\right\rfloor.

Notice that m_n, d_n, and a_n are always integers. The algorithm terminates when this triplet is the same as one encountered before. The algorithm can also terminate on a_i when a_i = 2a_0, which is easier to implement.
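A small C sketch of this recurrence (function name and the fixed term count are illustrative; termination checks are reduced to a comment):

#include <math.h>
#include <stdio.h>

/* Print a_0, a_1, ... of the continued fraction of sqrt(S),
   where S is a natural number that is not a perfect square. */
void sqrt_continued_fraction(int S, int terms)
{
    int a0 = (int)floor(sqrt((double)S));
    int m = 0, d = 1, a = a0;
    printf("[%d;", a0);
    for (int n = 0; n < terms; n++) {
        m = d * a - m;
        d = (S - m * m) / d;
        a = (a0 + m) / d;
        printf(" %d", a);       /* the period ends when a == 2*a0 */
    }
    printf(" ...]\n");
}

/* sqrt_continued_fraction(114, 6) prints [10; 1 2 10 2 1 20 ...] */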

The expansion will repeat from then on. The sequence [a_0; a_1, a_2, a_3, …] is the continued fraction expansion:

\sqrt{S} = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \ddots}}}

Example, square root of 114 as a continued fraction

Begin with m_0 = 0; d_0 = 1; and a_0 = 10 (10^2 = 100 and 11^2 = 121 > 114, so 10 is chosen).

\sqrt{114} = \frac{\sqrt{114}+0}{1} = 10 + \frac{\sqrt{114}-10}{1} = 10 + \frac{(\sqrt{114}-10)(\sqrt{114}+10)}{\sqrt{114}+10} = 10 + \frac{114-100}{\sqrt{114}+10} = 10 + \frac{1}{\frac{\sqrt{114}+10}{14}}.
m_1 = d_0\cdot a_0 - m_0 = 1\cdot 10 - 0 = 10.
d_1 = \frac{S - m_1^2}{d_0} = \frac{114 - 10^2}{1} = 14.
a_1 = \left\lfloor\frac{a_0 + m_1}{d_1}\right\rfloor = \left\lfloor\frac{10+10}{14}\right\rfloor = \left\lfloor\frac{20}{14}\right\rfloor = 1.

So, m_1 = 10; d_1 = 14; and a_1 = 1.

\frac{\sqrt{114}+10}{14} = 1 + \frac{\sqrt{114}-4}{14} = 1 + \frac{114-16}{14(\sqrt{114}+4)} = 1 + \frac{1}{\frac{\sqrt{114}+4}{7}}.

Next, m_2 = 4; d_2 = 7; and a_2 = 2.

\frac{\sqrt{114}+4}{7} = 2 + \frac{\sqrt{114}-10}{7} = 2 + \frac{14}{7(\sqrt{114}+10)} = 2 + \frac{1}{\frac{\sqrt{114}+10}{2}}.
\frac{\sqrt{114}+10}{2} = 10 + \frac{\sqrt{114}-10}{2} = 10 + \frac{14}{2(\sqrt{114}+10)} = 10 + \frac{1}{\frac{\sqrt{114}+10}{7}}.
\frac{\sqrt{114}+10}{7} = 2 + \frac{\sqrt{114}-4}{7} = 2 + \frac{98}{7(\sqrt{114}+4)} = 2 + \frac{1}{\frac{\sqrt{114}+4}{14}}.
\frac{\sqrt{114}+4}{14} = 1 + \frac{\sqrt{114}-10}{14} = 1 + \frac{14}{14(\sqrt{114}+10)} = 1 + \frac{1}{\frac{\sqrt{114}+10}{1}}.
\frac{\sqrt{114}+10}{1} = 20 + \frac{\sqrt{114}-10}{1} = 20 + \frac{14}{\sqrt{114}+10} = 20 + \frac{1}{\frac{\sqrt{114}+10}{14}}.

Now, loop back to the second equation above.

Consequently, the simple continued fraction for the square root of 114 is

\sqrt{114} = [10; 1, 2, 10, 2, 1, 20, 1, 2, 10, 2, 1, 20, 1, 2, 10, 2, 1, 20, \dots] (sequence A010179 in the OEIS)

Its decimal value is approximately 10.67707 82520 31311 21....

Generalized continued fraction

A more rapid method is to evaluate its generalized continued fraction. From the formula derived there:

\sqrt{z} = \sqrt{x^2+y} = x + \cfrac{y}{2x + \cfrac{y}{2x + \cfrac{y}{2x + \ddots}}} = x + \cfrac{2x\cdot y}{2(2z-y) - y - \cfrac{y^2}{2(2z-y) - \cfrac{y^2}{2(2z-y) - \ddots}}}

and the fact that 114 is 2/3 of the way between 10^2 = 100 and 11^2 = 121 results in

\sqrt{114} = \frac{\sqrt{1026}}{3} = \frac{\sqrt{32^2+2}}{3} = \frac{32}{3} + \cfrac{2/3}{64 + \cfrac{2}{64 + \cfrac{2}{64 + \cfrac{2}{64 + \ddots}}}} = \frac{32}{3} + \cfrac{2}{192 + \cfrac{18}{192 + \cfrac{18}{192 + \ddots}}},

which is simply the aforementioned [10;1,2, 10,2,1, 20,1,2, 10,2,1, 20,1,2, ...] evaluated at every third term. Combining pairs of fractions produces

\sqrt{114} = \frac{\sqrt{32^2+2}}{3} = \frac{32}{3} + \cfrac{64/3}{2050 - 1 - \cfrac{1}{2050 - \cfrac{1}{2050 - \ddots}}} = \frac{32}{3} + \cfrac{64}{6150 - 3 - \cfrac{9}{6150 - \cfrac{9}{6150 - \ddots}}},

which is now [10;1,2, 10,2,1,20,1,2, 10,2,1,20,1,2, ...] evaluated at the third term and every six terms thereafter.

Using Pell's equation



Pell's equation (also known as Brahmagupta equation since he was the first to give a solution to this particular equation) and its variants yield a method for efficiently finding continued fraction convergents of square roots of integers. However, it can be complicated to execute, and usually not every convergent is generated. The ideas behind the method are as follows:

  • If (p, q) is a solution (where p and q are integers) to the equation p^2 = S·q^2 ± 1, then p/q is a continued fraction convergent of √S, and as such is an excellent rational approximation to it.
  • If (p_a, q_a) and (p_b, q_b) are solutions, then so is:
p = p_a p_b + S·q_a q_b
q = p_a q_b + p_b q_a
(compare to the multiplication of quadratic integers)
  • More generally, if (p_1, q_1) is a solution, then it is possible to generate a sequence of solutions (p_n, q_n) satisfying:
p_{m+n} = p_m p_n + S·q_m q_n
q_{m+n} = p_m q_n + p_n q_m

The method is as follows:

  • Find positive integers p_1 and q_1 such that p_1^2 = S·q_1^2 ± 1. This is the hard part; it can be done either by guessing or by using fairly sophisticated techniques.
  • To generate a long list of convergents, iterate:
p_{n+1} = p_1 p_n + S·q_1 q_n
q_{n+1} = p_1 q_n + p_n q_1
  • To find the larger convergents quickly, iterate (a small numerical sketch follows this list):
p_{2n} = p_n^2 + S·q_n^2
q_{2n} = 2p_n q_n
Notice that the corresponding sequence of fractions coincides with the one given by Hero's method starting with p_1/q_1.
  • In either case, p_n/q_n is a rational approximation satisfying
\left|\frac{p_n}{q_n} - \sqrt{S}\right| < \frac{1}{q_n^2\cdot\sqrt{S}}.
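As a small numerical sketch in C, assume the fundamental solution p = 3, q = 2 of p^2 = 2q^2 + 1 is already known; the doubling rule above then generates convergents of √2 (17/12, 577/408, ...):

#include <stdio.h>

int main(void)
{
    long long S = 2, p = 3, q = 2;          /* 3^2 = 2 * 2^2 + 1 */
    for (int i = 0; i < 4; i++) {
        long long p2 = p * p + S * q * q;   /* p_{2n} */
        long long q2 = 2 * p * q;           /* q_{2n} */
        p = p2;
        q = q2;
        printf("%lld/%lld\n", p, q);        /* 17/12, 577/408, ... */
    }
    return 0;
}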

Approximations that depend on the floating point representation



A number is represented in a floating-point format as m × b^p, which is also called scientific notation. Its square root is √m × b^{p/2}, and similar formulae would apply for cube roots and logarithms. On the face of it, this is no improvement in simplicity, but suppose that only an approximation is required: then just b^{p/2} is good to an order of magnitude. Next, recognise that some powers p will be odd; thus for 3141.59 = 3.14159 × 10^3, rather than deal with fractional powers of the base, multiply the mantissa by the base and subtract one from the power to make it even. The adjusted representation will become the equivalent of 31.4159 × 10^2, so that the square root will be √31.4159 × 10.

If the integer part of the adjusted mantissa is taken, there can only be the values 1 to 99, and that could be used as an index into a table of 99 pre-computed square roots to complete the estimate. A computer using base sixteen would require a larger table, but one using base two would require only three entries: the possible bits of the integer part of the adjusted mantissa are 01 (the power being even, so there was no shift, remembering that a normalised floating-point number always has a non-zero high-order digit) or, if the power was odd, 10 or 11, these being the first two bits of the original mantissa. Thus, 6.25 = 110.01 in binary, normalised to 1.1001 × 2^2, an even power, so the paired bits of the mantissa are 01, while 0.625 = 0.101 in binary normalises to 1.01 × 2^{-1}, an odd power, so the adjustment is to 10.1 × 2^{-2} and the paired bits are 10. Notice that the low-order bit of the power is echoed in the high-order bit of the pairwise mantissa. An even power has its low-order bit zero and the adjusted mantissa will start with 0, whereas for an odd power that bit is one and the adjusted mantissa will start with 1. Thus, when the power is halved, it is as if its low-order bit is shifted out to become the first bit of the pairwise mantissa.

A table with only three entries could be enlarged by incorporating additional bits of the mantissa. However, with computers, rather than calculate an interpolation into a table, it is often better to find some simpler calculation giving equivalent results. Everything now depends on the exact details of the format of the representation, plus what operations are available to access and manipulate the parts of the number. For example, Fortran offers an EXPONENT(x) function to obtain the power. Effort expended in devising a good initial approximation is to be recouped by thereby avoiding the additional iterations of the refinement process that would have been needed for a poor approximation. Since these are few (one iteration requires a divide, an add, and a halving) the constraint is severe.

Many computers follow the IEEE (or a sufficiently similar) representation, and a very rapid approximation to the square root can be obtained for starting Newton's method. The technique that follows is based on the fact that the floating-point format (in base two) approximates the base-2 logarithm. That is, \log_2(m \times 2^p) = p + \log_2(m).

So for a 32-bit single precision floating-point number in IEEE format (where, notably, the power has a bias of 127 added for the represented form), you can get the approximate logarithm by interpreting its binary representation as a 32-bit integer, scaling it by 2^{-23}, and removing a bias of 127, i.e.

x_{\text{int}}\cdot 2^{-23} - 127 \approx \log_2(x).

For example, 1.0 is represented by the hexadecimal number 0x3F800000, which would represent 1065353216 = 127\cdot 2^{23} if taken as an integer. Using the formula above you get 1065353216\cdot 2^{-23} - 127 = 0, as expected from \log_2(1.0) = 0. In a similar fashion you get 0.5 from 1.5 (0x3FC00000).

To get the square root, divide the logarithm by 2 and convert the value back. The following program demonstrates the idea. Note that the exponent's lowest bit is intentionally allowed to propagate into the mantissa. One way to justify the steps in this program is to assume b is the exponent bias and n is the number of explicitly stored bits in the mantissa, and then show that

(((x_{\text{int}}/2^n - b)/2) + b)\cdot 2^n = (x_{\text{int}} - 2^n)/2 + ((b+1)/2)\cdot 2^n.
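The referenced program is not reproduced in this copy; the following C sketch reconstructs it under the assumptions stated above (IEEE 754 single precision, positive normal argument; the function name is illustrative).

#include <stdint.h>
#include <string.h>

/* Rough sqrt by halving the "integer logarithm" of a float. */
float sqrt_approx(float z)
{
    uint32_t i;
    memcpy(&i, &z, sizeof i);     /* read the bits of z as the integer x_int */
    i -= 1u << 23;                /* subtract 2^n                            */
    i >>= 1;                      /* divide by 2                             */
    i += 1u << 29;                /* add ((b + 1) / 2) * 2^n                 */
    memcpy(&z, &i, sizeof z);     /* reinterpret the bits as a float again   */
    return z;                     /* rough approximation of sqrt(z)          */
}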

The three mathematical operations forming the core of the above function can be expressed in a single line. An additional adjustment can be added to reduce the maximum relative error. So, the three operations, not including the cast, can be rewritten as
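/* A sketch of that single line (C syntax and the name val_int are
   illustrative): val_int holds the float's bit pattern, and a is the
   adjustment constant discussed in the next paragraph. */
val_int = (1u << 29) + (val_int >> 1) - (1u << 22) + a;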

where a is a bias for adjusting the approximation errors. For example, with a = 0 the results are accurate for even powers of 2 (e.g., 1.0), but for other numbers the results will be slightly too big (e.g., 1.5 for 2.0 instead of 1.414... with 6% error). With a = -0x4C000, the errors are between about -3.5% and 3.5%.

If the approximation is to be used for an initial guess for Newton's method applied to the equation (1/x^2) − S = 0, then the reciprocal form shown in the following section is preferred.

Reciprocal of the square root

A variant of the above routine, which computes the reciprocal of the square root, i.e., x^{-1/2}, instead, was written by Greg Walsh and is included below. The integer-shift approximation produced a relative error of less than 4%, and the error dropped further to 0.15% with one iteration of Newton's method on the following line. In computer graphics it is a very efficient way to normalize a vector.
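The routine itself is not reproduced in this copy; the following C sketch is the widely published form of it (the "magic" constant 0x5f3759df is the one usually quoted; variants such as 0x5f375a86 give slightly smaller error).

#include <stdint.h>
#include <string.h>

/* Fast reciprocal square root: integer-shift first guess, then one
   Newton iteration.  Assumes IEEE 754 single precision. */
float fast_rsqrt(float x)
{
    float xhalf = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);
    i = 0x5f3759df - (i >> 1);          /* initial guess for 1/sqrt(x)  */
    memcpy(&x, &i, sizeof x);
    x = x * (1.5f - xhalf * x * x);     /* one step of Newton's method  */
    return x;
}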

Some VLSI hardware implements inverse square root using a second degree polynomial estimation followed by a Goldschmidt iteration.

Negative or complex square



If S < 0, then its principal square root is

\sqrt{S} = \sqrt{|S|}\,i.

If S = a+bi where a and b are real and b ≠ 0, then its principal square root is

\sqrt{S} = \sqrt{\frac{|S|+a}{2}} + \operatorname{sgn}(b)\,\sqrt{\frac{|S|-a}{2}}\,i.

This can be verified by squaring the root. Here

|S| = \sqrt{a^2 + b^2}

is the modulus of S. The principal square root of a complex number is defined to be the root with the non-negative real part.
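A small C sketch of this closed form (the function name is illustrative; for b = 0 and a < 0 it reduces to the purely imaginary case above):

#include <complex.h>
#include <math.h>

/* Principal square root of S = a + b*i using the formula above. */
double complex principal_csqrt(double a, double b)
{
    double mod = hypot(a, b);                         /* |S| = sqrt(a^2 + b^2)      */
    double re  = sqrt((mod + a) / 2.0);
    double im  = copysign(sqrt((mod - a) / 2.0), b);  /* sgn(b) * sqrt((|S| - a)/2) */
    return re + im * I;
}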

See also



  • Alpha max plus beta min algorithm
  • Integer square root
  • Mental calculation
  • nth root algorithm
  • Recurrence relation
  • Shifting nth-root algorithm
  • Square root of 2

Notes



External links



  • Weisstein, Eric W. "Square root algorithms". MathWorld. 
  • Square roots by subtraction
  • Integer Square Root Algorithm by Andrija Radović
  • Personal Calculator Algorithms I: Square Roots (William E. Egbert), Hewlett-Packard Journal (May 1977): page 22
  • Calculator to learn the square root


 