# Battur's Weblog

Walk the right path, and the universe is gentle...

## Saturday, February 11, 2017

### The Two Bags

This semester I'm taking a reading-heavy course called Architectures for Software Systems, and I want to jot down an amusing bit of wisdom I heard from the professor today. A software architect, it is said, starts their career with two bags: 1) one bag full of luck, and 2) a second bag that is empty. The key point is this: you have to fill the second bag with experience before the first one runs out. Only if you manage that can you conclude you're finding your way toward becoming an architect :) It was said in jest, but I think the words carry the weight of truth.

## Sunday, January 22, 2017

### Quote of the day

Always design a thing by considering it in its next larger context—a chair in a room, a room in a house, a house in an environment, an environment in a city plan.

**Gottlieb Eliel Saarinen (1873–1950)**

## Saturday, January 14, 2017

### When Bad Requirements Happen to Good People

Super-famous and very good-looking authors Karl Wiegers and Joy Beatty did a fabulous job of describing this problem in a section titled "When Bad Requirements Happen to Good People" from their amazing book *Software Requirements*, Third Edition (Microsoft Press, 2013), reprinted here with permission:

The major consequence of requirements problems is rework—doing again something that you thought was already done—late in development or after release. Rework often consumes 30 to 50 percent of your total development cost, and requirements errors can account for 70 to 85 percent of the rework cost. Some rework does add value and improves the product, but excessive rework is wasteful and frustrating. Imagine how different your life would be if you could cut the rework effort in half! Your team members could build better products faster and perhaps even go home on time. Creating better requirements is an investment, not just a cost.

It can cost far more to correct a defect that’s found late in the project than to fix it shortly after its creation. Suppose it costs $1 (on a relative scale) to find and fix a requirement defect while you’re still working on the requirements. If you discover that error during design instead, you have to pay the $1 to fix the requirement error, plus another $2 or $3 to redo the design that was based on the incorrect requirement. Suppose, though, that no one finds the error until a user calls with a problem. Depending on the type of system, the cost to correct a requirement defect found in operation can be $100 or more on this relative scale. One of my consulting clients determined that they spent an average of $200 of labor effort to find and fix a defect in their information systems using the quality technique of software inspection, a type of peer review. In contrast, they spent an average of $4,200 to fix a single defect reported by the user, an amplification factor of 21. Preventing requirements errors and catching them early clearly has a huge leveraging effect on reducing rework.

Shortcomings in requirements practices pose many risks to project success, where success means delivering a product that satisfies the user’s functional and quality expectations at the agreed-upon cost and schedule.
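The amplification factor quoted in the excerpt is just the ratio of the two average fix costs. A trivial sketch of the arithmetic (the figures are the book's illustrative numbers; the class and method names are mine):

```java
public class ReworkCost {
    // Ratio of the cost to fix a defect late vs. fixing it early.
    public static double amplification(double earlyFixCost, double lateFixCost) {
        return lateFixCost / earlyFixCost;
    }

    public static void main(String[] args) {
        // $200 avg. via inspection during requirements vs. $4,200 avg. user-reported
        System.out.println(amplification(200, 4200));  // prints 21.0
    }
}
```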

## Tuesday, December 27, 2016

### Evolution of an Algorithm

In linear algebra, the **Coppersmith–Winograd algorithm**, named after Don Coppersmith and Shmuel Winograd, was the asymptotically fastest known matrix multiplication algorithm until 2010. It can multiply two $n\times n$ matrices in $O(n^{2.375477})$ time. This is an improvement over the naïve $O(n^{3})$ time algorithm and the $O(n^{2.807355})$ time Strassen algorithm. Algorithms with better asymptotic running time than the Strassen algorithm are rarely used in practice, because the large constant factors in their running times make them impractical. It is possible to improve the exponent further; however, the exponent must be at least 2 (because an $n\times n$ matrix has $n^{2}$ values, and all of them have to be read at least once to calculate the exact result).
In 2010, Andrew Stothers gave an improvement to the algorithm, $O(n^{2.374}).$ In 2011, Virginia Williams combined a mathematical short-cut from Stothers' paper with her own insights and automated optimization on computers, improving the bound to $O(n^{2.3728642}).$ In 2014, François Le Gall simplified the methods of Williams and obtained an improved bound of $O(n^{2.3728639}).$

The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware.

Source: Wikipedia, Coppersmith–Winograd algorithm
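For contrast, the naïve cubic-time baseline that all of these results improve on is just the three-loop schoolbook method. A minimal Java sketch (class name is mine):

```java
import java.util.Arrays;

// Naive O(n^3) matrix multiplication -- the baseline that Strassen and
// Coppersmith–Winograd improve on asymptotically.
public class NaiveMatMul {
    public static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)       // i-k-j loop order is cache-friendlier
                for (int j = 0; j < n; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        System.out.println(Arrays.deepToString(multiply(a, b)));
        // prints [[19.0, 22.0], [43.0, 50.0]]
    }
}
```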

### Karatsuba Fast Multiplication Algorithm

The **Karatsuba algorithm** is a fast multiplication algorithm. It was discovered by Anatoly Karatsuba in 1960 and published in 1962. It reduces the multiplication of two *n*-digit numbers to at most $n^{\log_2 3} \approx n^{1.585}$ single-digit multiplications in general (and exactly $n^{\log_2 3}$ when *n* is a power of 2). It is therefore asymptotically faster than the classical algorithm, which requires $n^{2}$ single-digit products. For example, the Karatsuba algorithm requires $3^{10} = 59{,}049$ single-digit multiplications to multiply two 1024-digit numbers ($n = 1024 = 2^{10}$), whereas the classical algorithm requires $(2^{10})^{2} = 1{,}048{,}576$.

The Karatsuba algorithm was the first multiplication algorithm asymptotically faster than the quadratic "grade school" algorithm. The Toom–Cook algorithm is a faster generalization of Karatsuba's method, and the Schönhage–Strassen algorithm is even faster, for sufficiently large *n*.

Source: https://en.wikipedia.org/wiki/Karatsuba_algorithm

Pseudocode:

```
procedure karatsuba(num1, num2)
    if (num1 < 10) or (num2 < 10)
        return num1 * num2
    // calculates the size of the numbers
    M = max(size_base10(num1), size_base10(num2))
    N = M / 2
    // split the digit sequences about the middle
    high1, low1 = split_at(num1, N)
    high2, low2 = split_at(num2, N)
    // 3 calls made to numbers approximately half the size
    z0 = karatsuba(low1, low2)
    z1 = karatsuba(low1 + high1, low2 + high2)
    z2 = karatsuba(high1, high2)
    return z2 * 10^(2*N) + (z1 - z2 - z0) * 10^N + z0
```

Implementation:

```java
public static BigInteger karatsuba(BigInteger x, BigInteger y) {
    // cutoff to brute force
    int M = Math.max(x.bitLength(), y.bitLength());
    if (M <= 2000) return x.multiply(y);   // optimize this parameter

    // number of bits divided by 2, rounded up
    int N = (M / 2) + (M % 2);

    // x = low1 + 2^N high1,   y = low2 + 2^N high2
    BigInteger high1 = x.shiftRight(N);
    BigInteger low1  = x.subtract(high1.shiftLeft(N));
    BigInteger high2 = y.shiftRight(N);
    BigInteger low2  = y.subtract(high2.shiftLeft(N));

    // compute sub-expressions
    BigInteger z0 = karatsuba(low1, low2);
    BigInteger z1 = karatsuba(low1.add(high1), low2.add(high2));
    BigInteger z2 = karatsuba(high1, high2);

    return z0.add(z1.subtract(z0).subtract(z2).shiftLeft(N)).add(z2.shiftLeft(2 * N));
}
```

Source: http://introcs.cs.princeton.edu/java/99crypto/Karatsuba.java.html
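As a quick sanity check, the method above can be wrapped in a class (here hypothetically named `KaratsubaCheck`) and its result compared against `BigInteger.multiply` on a random input large enough to exercise the recursive path:

```java
import java.math.BigInteger;
import java.util.Random;

public class KaratsubaCheck {
    // same implementation as printed above
    public static BigInteger karatsuba(BigInteger x, BigInteger y) {
        int M = Math.max(x.bitLength(), y.bitLength());
        if (M <= 2000) return x.multiply(y);
        int N = (M / 2) + (M % 2);
        BigInteger high1 = x.shiftRight(N);
        BigInteger low1  = x.subtract(high1.shiftLeft(N));
        BigInteger high2 = y.shiftRight(N);
        BigInteger low2  = y.subtract(high2.shiftLeft(N));
        BigInteger z0 = karatsuba(low1, low2);
        BigInteger z1 = karatsuba(low1.add(high1), low2.add(high2));
        BigInteger z2 = karatsuba(high1, high2);
        return z0.add(z1.subtract(z0).subtract(z2).shiftLeft(N)).add(z2.shiftLeft(2 * N));
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // 8192-bit (~2466-digit) operands, well past the 2000-bit cutoff
        BigInteger x = new BigInteger(8192, rnd);
        BigInteger y = new BigInteger(8192, rnd);
        System.out.println(karatsuba(x, y).equals(x.multiply(y)));  // prints true
    }
}
```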
