## Dec 22, 2013

### On the Mutual Languages of States

Languages reach beyond national borders. Normally, if two states speak the same language, a cultural connection is implied; the notable example is the US and the UK, both built by English-speaking people. In the real world, one state may speak various languages, each spoken by some portion of the population. To model the language similarity between two states, we introduce a function $\phi(state1, state2)$ such that: 1) $\phi = 1$ if $state1 = state2$; 2) $\phi(state1, state2) = \phi(state2, state1)$; 3) if no mutual language is spoken between $state1$ and $state2$, then $\phi(state1, state2) = 0$.

A very intuitive definition of $\phi$ is the fraction of people speaking any mutual language of the two states. Let $p_1, p_2$ be the populations of the two states, and $f_i^{(1)}, f_i^{(2)}$ the population fractions speaking the $i$-th mutual language. Then
$\phi(state1, state2) := \frac{p_1 \sum_i f_i^{(1)} + p_2 \sum_i f_i^{(2)}}{p_1+p_2}.$ For example, the mutual native languages spoken in the US and the UK are English and Angloromani. In the US, the fractions of these two languages are 89% and 0.043%; in the UK, the numbers are 98% and 0.16%. Taking into account the populations of the two states, 308.476 million and 61.899 million, the similarity function $\phi$ yields 0.9085.

Another interesting definition of $\phi$ (though property 1) is not satisfied) is the probability that two people, one from each state, speak the same tongue. It can be shown that this probability is $\phi(state1, state2) := \sum_i f_i^{(1)} f_i^{(2)}.$ In the case of the US and the UK, this $\phi$ yields 0.8722.
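Both definitions are easy to check numerically. A small Python sketch with the rounded fractions quoted above (the 0.9085 in the text presumably comes from finer-grained source data, so the first value differs slightly):

```python
# Two similarity measures between states, using the US/UK numbers
# quoted above (rounded; the post's 0.9085 uses finer-grained data).
p1, p2 = 308.476, 61.899            # populations in millions
f1 = [0.89, 0.00043]                # US fractions: English, Angloromani
f2 = [0.98, 0.0016]                 # UK fractions: English, Angloromani

# Definition 1: fraction of people speaking any mutual language
phi1 = (p1 * sum(f1) + p2 * sum(f2)) / (p1 + p2)

# Definition 2: probability that two random people share a tongue
phi2 = sum(a * b for a, b in zip(f1, f2))

print(round(phi1, 4))  # ~0.906 with these rounded inputs
print(round(phi2, 4))  # 0.8722, matching the value in the text
```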

#### Asia

With this definition, let's explore the international language connections. Fig. 1 shows the adjacency matrix of the Asian states. Fig. 2 shows the graph representation of it, with the edges colored according to the language similarity.

 Fig. 1, the language connections among Asian states: the adjacency matrix.

 Fig. 2, the language connections among Asian states: the graph (We have removed the self-loops). The edges are colored according to their values.
Fig. 3 again shows the language connections among Asian states, but the edges are colored by the type of the primary mutual language shared by the two vertex states. The red edges in Fig. 3 represent English. Compared with Fig. 2, English is not very popular in Asia but is still widely used.
 Fig. 3, the language connections among Asian states: the graph edges are colored according to the type of primary mutual languages shared by each pair of states.
Notice that some states enjoy a closer language relation with their neighbours than others. We can define the language capacity of a state within some group $G$: $LC(s) := \frac{1}{|G|-1}\sum_{s' \in G, s'\ne s} \phi(s, s').$ Fig. 4 shows the language capacities of the Asian states.

 Fig. 4, the language capacities of the states in Asia. The two countries with the largest language capacities are China and Singapore.
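As a toy illustration (made-up similarity values, not real language data), the language capacity is just a row average of the $\phi$ matrix with the diagonal excluded:

```python
# Language capacity within a toy group G of three states, given a
# symmetric matrix of pairwise similarities phi -- made-up numbers.
phi = [
    [1.0, 0.5, 0.2],
    [0.5, 1.0, 0.1],
    [0.2, 0.1, 1.0],
]
n = len(phi)

def lc(s):
    """LC(s) = average similarity of s to the other members of G."""
    return sum(phi[s][t] for t in range(n) if t != s) / (n - 1)

print([round(lc(s), 4) for s in range(n)])  # [0.35, 0.3, 0.15]
```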
We can gather further information from the language relation graph by splitting the vertices into graph communities. Using the FindGraphCommunities function in Mathematica, we can identify 7 communities. The first community is Southeast Asia, including China, Singapore, Brunei, etc.; the most popular language is Chinese and the most widely used is English. The second and third communities are the crossroad countries, including Russia, the Central Asian states, Iran, Iraq, Syria, Israel, etc.; the most popular and most widely used languages include Russian and Kurdish. The fourth zone is South Asia, including India, Pakistan, Bangladesh, Nepal, Bhutan, and Myanmar; the most popular language is Hindi and the most widely used is Tibetan. The fifth region is the Arabic states, where the most popular and most widely used language is Arabic. The next region is Japan and the Koreas; the most popular language is Japanese and the most widely used is Korean. The last region is Turkey and Uzbekistan; both countries speak Turkish.

 Fig. 5, the language communities of the Asia states. The edges are colored by the communities.

#### Europe

Europe is another continent with flourishing civilizations. The average language capacity in Europe is higher than in Asia.

While the languages spoken in Asia are very diverse, this is not true of every continent. Europe, for example, is dominated by French, German, Italian, English, Romanian, etc.
Finally, the world is dominated by four languages: English, French, Spanish, and Portuguese. They may not be the most popular ones, but they are widely used in communicating with other states. Apparently, a language reaching beyond the boundary of a country is a proof of cultural influence. As we know, the four dominant "foreign languages" are the result of colonialism.

## Dec 7, 2013

### Frenet Tube

Given a 3D curve $\vec{r}(t) = (x(t), y(t), z(t)), t \in I=[0,1]$, Tubify[{x, y, z}, a] turns it into a tube around the curve. (The built-in Mathematica Tube[] has a similar function; it works on lines.) The essential step is to find the unit circle in the normal plane at each point (shown in Fig. 1). The tangent vector is simply $\vec{\alpha} = \vec{r}'/|\vec{r}'|$, where $\vec{r}' = \mathrm{d} \vec{r}/\mathrm{d}t$. The normal vector is $\vec \beta = \vec{\alpha}' / |\vec{\alpha}'| = \frac{\vec{r}'' |\vec{r}'|^2 - \vec{r}' (\vec{r}'\cdot\vec{r}'')}{|\vec{r}'| \sqrt{|\vec{r}''|^2 |\vec{r}'|^2 - (\vec{r}'\cdot\vec{r}'')^2} }$. The binormal vector is $\vec{\gamma} = \vec{\alpha} \times \vec{\beta} = \frac{\vec{r}' \times \vec{r}'' }{\sqrt{|\vec{r}''|^2 |\vec{r}'|^2 - (\vec{r}'\cdot\vec{r}'')^2} }$.

 Fig. 1: black, $\vec{\alpha}$, green $\vec{\beta}$, blue $\vec{\gamma}$.
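The frame formulas can be sanity-checked numerically. The following sketch (Python rather than Mathematica, purely for a standalone check) evaluates $\vec\alpha, \vec\beta, \vec\gamma$ on a helix and verifies that the frame is orthonormal and right-handed:

```python
import numpy as np

# Frenet frame from r' and r'' using the formulas above,
# checked on the helix r(t) = (cos t, sin t, t).
t = 0.7
rp  = np.array([-np.sin(t),  np.cos(t), 1.0])   # r'(t)
rpp = np.array([-np.cos(t), -np.sin(t), 0.0])   # r''(t)

alpha = rp / np.linalg.norm(rp)                  # tangent
v = rpp * np.dot(rp, rp) - rp * np.dot(rp, rpp)  # unnormalized normal
beta = v / np.linalg.norm(v)                     # normal
gamma = np.cross(rp, rpp)
gamma = gamma / np.linalg.norm(gamma)            # binormal

# The frame is orthonormal and right-handed: gamma = alpha x beta.
print(np.allclose([alpha @ beta, beta @ gamma, gamma @ alpha], 0))  # True
print(np.allclose(np.cross(alpha, beta), gamma))                    # True
```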

Tubify[{x_, y_, z_}, r_] :=
 ({x[#1], y[#1], z[#1]}
   + r[#1] Cos[#2] Normalize[
      Norm[{x'[#1], y'[#1], z'[#1]}]^2 {x''[#1], y''[#1], z''[#1]}
       - ({x'[#1], y'[#1], z'[#1]}.{x''[#1], y''[#1],
           z''[#1]}) {x'[#1], y'[#1], z'[#1]}]
   + r[#1] Sin[#2] Normalize[
      Cross[{x'[#1], y'[#1], z'[#1]}, {x''[#1], y''[#1],
        z''[#1]}]]) &


All the arguments are functions.

Example 1:
Show[ParametricPlot3D[
Tubify[{(2 + Cos[3 #]) Cos[ #] &, (2 + Cos[3 #]) Sin[#] &,
Sin[3 #] &}, 0.25 &][u, v],
{u, 0, 2 Pi}, {v, 0, 2 Pi}, PlotStyle -> Opacity[0.2],
Mesh -> None, Boxed -> True, Axes -> False, BoxRatios -> Automatic,
PerformanceGoal -> "Quality", PlotPoints -> 80, MaxRecursion -> 0,
PlotLabel ->
Style[TraditionalForm /@ {(2 + Cos[3 t]) Cos[
t], (2 + Cos[3 t]) Sin[t], Sin[3 t]}, 20]
],
ParametricPlot3D[{(2 + Cos[3 u]) Cos[ u], (2 + Cos[3 u]) Sin[u],
Sin[3 u]}, {u, 0, 2 Pi},
PlotStyle -> Directive[{Red, Opacity[1], Thick}]], ImageSize -> 600
]


A similar code using Tube[]:

ParametricPlot3D[
{(2 + Cos[3 u]) Cos[ u], (2 + Cos[3 u]) Sin[u],
Sin[3 u]}, {u, 0, 2 Pi}, PlotStyle -> Opacity[0.2],
Mesh -> None, Boxed -> True, Axes -> False, BoxRatios -> Automatic,
PerformanceGoal -> "Quality", PlotPoints -> 80, MaxRecursion -> 0,
PlotLabel ->
Style[TraditionalForm /@ {(2 + Cos[3 t]) Cos[
t], (2 + Cos[3 t]) Sin[t], Sin[3 t]}, 20]
]/.Line[pts_, rest___] :> Tube[pts, 0.1, rest]


 Fig. 2

Example 2, trefoil knot:

Show[ParametricPlot3D[
  Tubify[{Sin[#] + 2 Sin[2 #] &, Cos[#] - 2 Cos[2 #] &, -Sin[3 #] &},
   0.25 &][u, v],
  {u, 0, 2 Pi}, {v, 0, 2 Pi}, PlotStyle -> Opacity[0.3],
  Mesh -> None, Boxed -> True, Axes -> False, BoxRatios -> Automatic,
  PerformanceGoal -> "Quality", PlotPoints -> 100, MaxRecursion -> 0,
  PlotLabel ->
   Style[TraditionalForm /@ {Sin[t] + 2 Sin[2 t],
      Cos[t] - 2 Cos[2 t], -Sin[3 t]}, 20]
  ], ImageSize -> 600
 ]


 Fig. 3 Trefoil Knot

Similarly, we can also define the Frenet's Ribbon:
Ribbonize[{x_, y_, z_}, r_] :=
({x[#1], y[#1], z[#1]}
+ r [#1] #2 /
Norm[{-z'[#1]*y''[#1] + z''[#1]*y'[#1],
z'[#1]*x''[#1] - z''[#1]*x'[#1], -y'[#1]*x''[#1] +
x'[#1]*y''[#1]}] {-z'[#1]*y''[#1] + z''[#1]*y'[#1],
z'[#1]*x''[#1] - z''[#1]*x'[#1], -y'[#1]*x''[#1] +
x'[#1]*y''[#1]}) &


 Fig. 4, Trefoil Ribbon

 Fig. 5, Helix Ribbon

#### More examples

 Fig. 6, A Seashell Surface
 Fig. 7, Double Helix (cf. DNA)
 Fig. 8, A Figure-eight Knot

ParametricPlot3D[
Tubify[{#/Pi Cos[#] &, #/Pi Sin[#] &, #/Pi &}, Sin[#/4] &][u, u*80],
{u, 0, 4 Pi}, PlotStyle -> Directive[{Red, Thick, Opacity[0.8]}],
Mesh -> None, Boxed -> True, Axes -> False, BoxRatios -> Automatic,
PlotRange -> All, PerformanceGoal -> "Quality", MaxRecursion -> 0,
PlotPoints -> 2560, PlotLabel -> Style["", 20],
ImageSize -> 500
]
 Fig. 9, a coilfied 3D curve

## Dec 4, 2013

### Chang'e 3's Unmanned Moon Landing Mission

Chang'e 3 is a Chinese soft-landing moon mission. It features a moon lander (Chang'e 3) and a moon rover (Yutu).

 Fig. 1 (left) an artist's impression of the moon goddess Chang'e; (right) a yutu, or jade rabbit
Chang'e was the wife of Yi the Archer (not to be confused with Lord Yi of the Xia dynasty). Yi the Archer and his wife lived at the dawn of the Hua Xia civilization. As a hero of his time, Yi killed 9 monsters and managed to see the goddess Wangmu on Kunlun Mountain. Wangmu gave him the elixir of immortality. Chang'e stole the elixir and became the goddess of the moon.

The rabbit actually comes from a different myth, regarding the lunar shadow. Later on, the two myths merged and the rabbit became Chang'e's pet.

 Fig. 2, the ancients' impression of the lunar shadow as a rabbit.
So much for the names and myths. The Chang'e 3 mission was planned to land in Sinus Iridum. The coordinates of the center of Sinus Iridum are 44.1°N, 31.5°W. The nearest historical lunar site is the former Soviet Union's Luna 17. The final landing site, however, is 44.12°N, 19.51°W, in Mare Imbrium, ~100 km from Sinus Iridum and ~400 km from the Luna 17 site. It has been reported to be within the planned 91 km x 356 km Sinus Iridum landing area.

 Fig. 3, the designated landing area.

 Fig. 4, the Chang'e 3 mission shown together with the historical lunar missions.
The closest crater to the site is Laplace F, a 3-mile crater.

## Dec 2, 2013

### The Hierarchy of Knowledge

In a conversation, my advisor categorized human knowledge with the following hierarchy:

### wisdom > insight > knowledge > information > data

Each set of data comprises signal and noise. Information can be abstracted from data; in a broader sense, information has been quantified as entropy by Shannon.

Knowledge sits above information. For example, Schroedinger's equation (plus the information of the boundary conditions) itself contains little information. Yet by solving it, we can obtain the information of the electron inside a hydrogen atom.

Beyond knowledge, there is insight and wisdom, which are particularly important for researchers. But they are also very difficult to quantify.

With the hierarchy of knowledge, one may also find out the corresponding intellectuals:

### thinker > scholar > teacher > analyst > labourer

 Fig. 1, After WWII, the number of scientific publications has doubled about every 25 years. If knowledge is measured by the number of scientific publications, then human knowledge grows 16-fold (increases 15 times) in one century. (Data from World Bank, World Development Indicators. Scientific and technical journal articles refer to publications in the following fields: physics, biology, chemistry, mathematics, clinical medicine, biomedical research, engineering and technology, and earth and space sciences.)

## Nov 19, 2013

### The indexed ColorData palette


## Nov 11, 2013

### the graph of the international trade relations

Mathematica provides an excellent database of countries in the function CountryData. I am particularly interested in the trade relations. CountryData[country, "ImportPartners"] and CountryData[country, "ExportPartners"] produce the leading import and export partners of a country. We can construct a graph based on the global trade relations.

trade =
Tally[Sort /@
Flatten[Tuples[{{#},
Select[Flatten[{CountryData[#,
"ImportPartners"] /. _Missing -> Null,
CountryData[#, "ExportPartners"] /. _Missing -> Null}],
MemberQ[CountryData[], #] &]}] & /@ CountryData[], 1]];

Graph[Thread[
  trade[[All, 1, 1]] \[UndirectedEdge] trade[[All, 1, 2]]],
 EdgeWeight -> trade[[All, 2]]]

 Fig. 1, the graph of the international trade relations.

We can make it more accessible by coloring the countries.

#### by continent:

Row[{Graph[
   Thread[trade[[All, 1, 1]] \[UndirectedEdge] trade[[All, 1, 2]]],
   EdgeWeight -> trade[[All, 2]],
   VertexShapeFunction -> ({ColorData[3,
        First@First@
         Position[CountryData["Continents"],
          CountryData[#2, "Continent"]]], Opacity[.8],
       EdgeForm[{Thin, LightGray}], Disk[#, .08], Darker@Red,
       Opacity[1.0],
       Text[Style[CountryData[#2, "CountryCode"], Bold], #1]} &),
   ImageSize -> 800],
  Graphics[
   MapIndexed[{ColorData[3, First@#2],
      Text[Style[
        StringJoin[
         Riffle[StringCases[#1, RegularExpression["[A-Z][a-z]+"]],
          " "]], Bold, 22], {40, First@#2*60}]} &,
    CountryData["Continents"]],
   ImageSize -> {Automatic, 300}]}]

 Fig. 2, (Colored by Continent) the graph of the international trade relations.

#### by trade values:

 Fig. 3, (Colored by Trade Value) the graph of the international trade relations.

 Fig. 4, colored by the import and export amounts separately.

We can explore some further graph structures inside the international trade relations. What we focus on here is the graph communities. The communities characterize the clustering property of a graph. Applying the Mathematica function FindGraphCommunities[], we obtain several trade clusters. The leading 3 communities correspond to three known trade zones: the Transatlantic trade zone, including the US and the European Union; the Asia-Pacific trade zone, including China, Japan, Korea, Australia, and the ASEAN states; and the South American trade zone, including Brazil, Argentina, etc. Our conclusion is further justified if we color the countries inside each trade zone by their continent (Fig. 6).

 Fig. 5, the graph communities of the international trade relations.

 Fig. 6, trade communities and the geographic distribution.
## Nov 1, 2013

### Nonlinear Least Square Regression

### The nonlinear least square fit

The nonlinear least square fit (regression) minimizes the target action: $S(\theta) = \sum_i w_i (y_i - f(x_i; \theta))^2$ with nonnegative weights $w_i \ge 0$, for a given set of data $\{ (x_1, y_1), (x_2,y_2),\cdots \}$ and a form function $f(x; \theta) \equiv f(x;\theta_1,\theta_2,\cdots \theta_p)$.

The minimum lies among the extrema: $0 = \frac{\partial S}{\partial \theta_r} = -2 \sum_i w_i (y_i - f(x_i;\theta))\frac{\partial f(x_i;\theta)}{\partial \theta_r}, \quad r = 1,2,\cdots p.$

### Gauss-Newton algorithm

Newton's method for finding a local root of a function $\varphi(x)$ is $x_{n+1} = x_n - \frac{\varphi(x_n)}{\varphi'(x_n)}.$ In this problem, $\varphi(\theta) = S'(\theta)$. Therefore, Newton's method can be realized as $\theta^{(k+1)} = \theta^{(k)} - (H S(\theta^{(k)}))^{-1} \nabla S(\theta^{(k)})$ where $[H S(\theta)]_{rs} = \frac{\partial^2 S}{\partial \theta_r \partial \theta_s}$ is the Hessian matrix of $S$. It is easy to show $\left[ H S \right]_{rs} = 2 \sum_i w_i \frac{\partial f_i}{\partial\theta_r}\frac{\partial f_i}{\partial \theta_s} - 2 \sum_i w_i (y_i - f(x_i)) \frac{\partial^2 f_i}{\partial\theta_r\partial\theta_s}.$ We can ignore the second term, as the residuals and the second derivatives should be small. The Gauss-Newton algorithm then gives the iteration procedure: $\theta^{(k+1)} = \theta^{(k)} + (\mathbf{J}^T\cdot\mathbf{W}\cdot\mathbf{J})^{-1}\,\mathbf{J}^T\cdot \mathbf{W}\cdot (\mathbf{y}-f(\mathbf{x};\theta^{(k)}))$

### Levenberg-Marquardt algorithm

The Levenberg-Marquardt algorithm is one of the most important algorithms based on the Gauss-Newton algorithm. Let $\theta_r, r = 1,2,\cdots p$ be the fitting parameters. Define the Jacobian $J_{i,r} = \frac{\partial f(x_i;\theta)}{\partial \theta_r}$ and the weight matrix $W_{ij} = w_i \delta_{ij}$. The iterative procedure is: $\mathbf{\theta}^{(k+1)} = \mathbf{\theta}^{(k)} + \alpha\, (\lambda \mathbf{\Lambda} + \mathbf{J}^T \cdot \mathbf{W} \cdot \mathbf{J})^{-1}\, \mathbf{J}^T \cdot \mathbf{W} \cdot (\mathbf{y} - f(\mathbf{x};\mathbf{\theta}^{(k)}))$ where $\mathbf{\Lambda}$ is a diagonal matrix, chosen either as the identity matrix or as the diagonal part of $\mathbf{J}^T \cdot \mathbf{W} \cdot \mathbf{J}$, and $0<\alpha<1$ is a small positive number.

Therefore, the solution $\bar\theta$ satisfies the equation $\mathbf{J}^T_{\bar\theta} \cdot \mathbf{W} \cdot (\mathbf{y} - f(\mathbf{x};\bar{\mathbf{\theta}})) = 0,$ i.e. $\sum_i w_i \frac{\partial f(x_i; \bar\theta)}{\partial \theta_r} (y_i - f(x_i; \bar\theta)) = 0.$ The choice of $\lambda$ is very important. A nonzero $\lambda$ can help avoid local minima, but a large $\lambda$ is likely to slow down the convergence. A useful strategy is to choose a nonzero $\lambda$ to start with and decrease $\lambda$ as the iteration converges.

### Parameter Error Estimation

It is important to attach estimated error bars/bands to the fitted function (or the fitted parameters). Let's write the data as $y_i = f(x_i) + \xi_i$, where $\xi_i$ are the residuals. If we assume $\xi_i, i=1,2,3,\cdots n$ are $n$ independent, normally distributed random variables, the likelihood of the fitted parameters can be written as $L(\theta_r, \sigma) = N_\sigma \exp\left[ -\frac{1}{2\sigma^2}\sum_i w_i (y_i-f(x_i;\theta_r))^2\right] \\ \simeq N'_\sigma \exp\left[-\frac{1}{2\sigma^2} \sum_{rs} (\theta_r-\bar\theta_r) A_{rs} (\theta_s-\bar\theta_s) \right]$ where $A_{rs} = \sum_i w_i \frac{\partial f(x_i;\bar\theta)}{\partial\theta_r} \frac{\partial f(x_i;\bar\theta)}{\partial\theta_s} = \left[ \mathbf{J}^T \cdot \mathbf{W} \cdot \mathbf{J} \right]_{rs}$  and $N_\sigma, N'_\sigma$ are normalization factors.

The variance $\sigma^2$ can be estimated by the estimator $\hat{\sigma}^2 = \frac{1}{n-p}\sum_i w_i (y_i - f(x_i; \bar\theta))^2 = \frac{S({\bar\theta})}{n-p}.$ The covariance matrix of the parameters is $\text{Cov}(\theta) = \hat\sigma^2 A^{-1}.$ It is also useful to find the prediction band. The width of the band is given by the variance of the function $f(x; \theta)$. The "delta method" gives an approximate way to calculate $\text{Var}\left[ f(x; \theta) \right]$ from $\text{Cov}[ \theta ]$: $\sigma_f^2(x) = \text{Var}\left[ f(x; \bar\theta) \right] = \frac{\partial f^T(x;\bar\theta)}{\partial\mathbf{\theta}}\cdot\text{Cov}( \mathbf\theta) \cdot \frac{\partial f(x;\bar\theta)}{\partial\mathbf{\theta}} \\ = \hat\sigma^2 \sum_{r,s} \frac{\partial f(x;\bar\theta)}{\partial\theta_r}\left[ A^{-1} \right]_{rs}\frac{\partial f(x;\bar\theta)}{\partial\theta_s}$
Finally, the distribution of the residuals can be tested by the standard $\chi^2$-test.

### Example: a power-law fitting

In our example, we will explore fitting some data to the power-law function $f(x) = a + b/x^c$.
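The iteration above is short enough to sketch directly. The following Python/NumPy version fits this power-law model; the synthetic noise-free data, uniform weights $w_i = 1$, the choice $\mathbf{\Lambda} = \mathrm{diag}(\mathbf{J}^T\mathbf{J})$, and the simple accept/reject schedule for $\lambda$ are all illustrative assumptions, not the post's exact setup:

```python
import numpy as np

# Levenberg-Marquardt fit of f(x) = a + b/x^c on synthetic,
# noise-free data (uniform weights, Lambda = diag(J^T J)).
def f(x, th):
    a, b, c = th
    return a + b / x**c

def jacobian(x, th):
    a, b, c = th
    J = np.empty((x.size, 3))
    J[:, 0] = 1.0                        # df/da
    J[:, 1] = x**(-c)                    # df/db
    J[:, 2] = -b * x**(-c) * np.log(x)   # df/dc
    return J

x = np.linspace(1.0, 5.0, 50)
theta_true = np.array([1.0, 2.0, 1.5])
y = f(x, theta_true)

theta = np.array([0.5, 1.0, 1.0])        # starting guess
lam = 1e-3
S = np.sum((y - f(x, theta))**2)
for _ in range(200):
    J = jacobian(x, theta)
    A = J.T @ J
    step = np.linalg.solve(A + lam * np.diag(np.diag(A)),
                           J.T @ (y - f(x, theta)))
    trial = theta + step
    S_trial = np.sum((y - f(x, trial))**2)
    if S_trial < S:                      # accept: shrink lambda
        theta, S, lam = trial, S_trial, 0.5 * lam
    else:                                # reject: grow lambda
        lam *= 2.0

# Parameter covariance sigma^2 (J^T W J)^{-1}; here sigma^2 ~ 0
# since the data are noise-free.
J = jacobian(x, theta)
sigma2 = S / (x.size - theta.size)
cov = sigma2 * np.linalg.inv(J.T @ J)
print(np.round(theta, 6))
```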

 Fig. 1: (Color Lines) Black, the theoretical curve; Red, fit with uniform weights; Green, fit with weights proportional to $x$

The corresponding covariance matrices for the parameters $a,b,c$.

The diagonal terms give the variances of the parameters.

## Oct 16, 2013

### Zeros of the Legendre Polynomials

Legendre functions (Legendre polynomials) are the solutions of the following linear ordinary differential equation (ODE): $\frac{\mathrm{d}}{\mathrm{d} x}\left[ (1-x^2) \frac{\mathrm{d}}{\mathrm{d}x} P_n(x) \right] + n(n+1)P_n(x) = 0$

The Legendre function $P_n(x)$ is a degree-$n$ polynomial. It has $n$ distinct roots within $(-1,1)$, $x_{n,k}, k=1,2,\cdots n$, with $x_{n,1} < x_{n,2} < \cdots < x_{n,n}$. There is no analytical solution for high-order Legendre polynomials, so it is useful to know analytical bounds on the roots.

According to Bruns, Markoff, Stieltjes and Szego [1], the roots satisfy the following inequality, $\cos\frac{(n-k+1)\pi}{n+1} < x_{n,k} < \cos\frac{(n-k+\frac{3}{4})\pi}{n+\frac{1}{2}}$ for $k = 1, \cdots \left[\frac{n}{2}\right]$. For the other half, recall $x_{n,n-k+1} = -x_{n,k}$.

It may also be useful to give a pair of global bounds, $\cos\frac{(n-k+\frac{1}{2})\pi}{n+\frac{1}{2}} < x_{n,k} < \cos\frac{(n-k+1)\pi}{n+\frac{1}{2}}$ for $k=1,2,\cdots n$, although this is a coarse one.

The roots of the Legendre polynomials also admit an asymptotic expansion due to Francesco Tricomi [2]. Let $\theta_{n,k} = \frac{n-k+3/4}{n+1/2}\pi$. Then the $k$-th root (in ascending order) is $x_{n,k} = \left\{1 - \frac{1}{8n^2} + \frac{1}{8n^3} - \frac{1}{384n^4}\left( 39 - \frac{28}{\sin^2\theta_{n,k}} \right) + \mathcal{O}(n^{-5})\right\} \cos\theta_{n,k},$ which can be improved by Newton's algorithm $x_{\nu+1} = x_{\nu} - \frac{1-x_\nu^2}{n} \frac{P_n(x_\nu)}{P_{n-1}(x_\nu) -x_\nu P_n(x_\nu)}$.
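The asymptotic formula and the Newton refinement can be checked against the Gauss-Legendre nodes from NumPy's leggauss (a Python sketch; a few Newton sweeps already reach machine precision for $n = 16$):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

n = 16

def P(m, xs):
    """Legendre polynomial P_m evaluated at xs."""
    c = np.zeros(m + 1)
    c[m] = 1.0
    return legval(xs, c)

# Tricomi's asymptotic approximation (leading terms) for all n roots,
# in ascending order k = 1..n
k = np.arange(1, n + 1)
theta = (n - k + 0.75) / (n + 0.5) * np.pi
x = (1 - 1/(8*n**2) + 1/(8*n**3)) * np.cos(theta)

# Refine with the Newton iteration quoted above
for _ in range(4):
    x = x - (1 - x**2) / n * P(n, x) / (P(n - 1, x) - x * P(n, x))

nodes, _ = leggauss(n)   # reference Gauss-Legendre nodes
print(np.max(np.abs(x - nodes)))
```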

Another approximation due to Gatteschi and Pittaluga [3] is $x_{n,k} = \cos\left\{ \theta_{n,k} + \frac{1}{16(n+1/2)^2}(\cot\frac{1}{2}\theta_{n,k} - \tan\frac{1}{2}\theta_{n,k}) \right\} + \mathcal{O}(n^{-4}).$

 Let $x_{n,k}$ be the $k$-th root of $P_n(x)$, and $\xi_{n,k}$ its approximation.

Conversely, the relation $x_{n,k} \simeq \cos\theta_{n,k}$ gives a pretty good estimate of the index $k$ of a Gaussian node from its location $x_{n,k}$.

Last but not least, the zeros are the nodes of the Gaussian quadrature, $\int_{-1}^{+1} \mathrm{d}x f(x) \simeq \sum_{k=1}^n w_{n,k} f(x_{n,k})$ where $w_{n,k} = \frac{2}{(1-x_{n,k}^2) \left[ P'_n(x_{n,k}) \right]^2}$ are called the Gaussian weights. Gaussian quadrature can be illustrated by the figure below ($n = 16$).
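The weight formula can likewise be verified numerically (Python sketch; leggauss weights serve as the reference, and $P_n'(x) = n(P_{n-1}(x) - xP_n(x))/(1-x^2)$):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Gaussian weights from the closed form w = 2 / ((1 - x^2) P_n'(x)^2)
n = 16
x, w_ref = leggauss(n)               # reference nodes and weights

def P(m, xs):
    c = np.zeros(m + 1)
    c[m] = 1.0
    return legval(xs, c)

dPn = n * (P(n - 1, x) - x * P(n, x)) / (1 - x**2)
w = 2 / ((1 - x**2) * dPn**2)

print(np.allclose(w, w_ref))         # the formula reproduces leggauss
print(np.isclose(w @ x**4, 2/5))     # exact for polynomials of degree < 2n
```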

#### References:

[1] Gabriel Szego, Inequalities for the Zeros of Legendre Polynomials and Related Functions, Transactions of the American Mathematical Society, Vol. 39, No. 1 (1936)

[2] F. G. Tricomi, Sugli zeri dei polinomi sferici ed ultrasferici, Ann. Mat. Pura Appl., 31 (1950), pp. 93–97; F.G. Lether and P.R. Wenston, Minimax approximations to the zeros of $P_n(x)$ and Gauss-Legendre quadrature, Journal of Computational and Applied Mathematics Volume 59, Issue 2, 19 May 1995, Pages 245–252

[3] L. Gatteschi and G. Pittaluga, An asymptotic expansion for the zeros of Jacobi polynomials, in Mathematical Analysis. Teubner-Texte Math., 79 (1985), pp. 70–86

Source codes:

pomax.github.io: http://pomax.github.io/bezierinfo/legendre-gauss.html

Mathematica:
Needs["NumericalDifferentialEquationAnalysis"]

## Oct 9, 2013

### Smoothing a polygon

Ref. [1] has an interesting observation. For any closed polygon, if one takes the midpoints repeatedly, the polygon will eventually be smoothed out. The authors observed that the curve becomes an ellipse.

The mapping is in fact independent in the x and y directions. Therefore, after enough iterations, the x and y coordinates will be "in phase", i.e. belong to the same eigensubspace.
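The smoothing claim can be made concrete. With the two-point average, the map is a circulant matrix, so its eigenvalues are $\lambda_j = (1+e^{2\pi i j/n})/2$ with $|\lambda_j| = |\cos(\pi j/n)|$: the constant mode survives, and the slowest-decaying conjugate pair ($j = 1, n-1$) dominates after many iterations, which is why an ellipse emerges. A quick numerical check (Python, not the post's Mathematica):

```python
import numpy as np

# The 2-point midpoint map p_i -> (p_i + p_{i+1})/2 as a circulant matrix
n = 12
M = np.zeros((n, n))
for i in range(n):
    M[i, i] = 0.5
    M[i, (i + 1) % n] = 0.5

# Circulant-matrix prediction: lambda_j = (1 + e^{2 pi i j / n}) / 2
j = np.arange(n)
lam = 0.5 * (1 + np.exp(2 * np.pi * 1j * j / n))

eig = np.linalg.eigvals(M)
mags = np.sort(np.abs(eig))
print(np.allclose(mags, np.sort(np.abs(lam))))   # True
# The constant mode has |lambda| = 1; the slowest-decaying pair
# (j = 1 and n-1) has |lambda| = cos(pi/n) and shapes the limit.
print(np.isclose(mags[-1], 1.0), np.isclose(mags[-2], np.cos(np.pi / n)))
```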

closeline[pts_] := Append[pts, First[pts]] (* close the polygon; this helper is used below but was missing from the original listing *)

randline[n_Integer, a_: -1, b_: 1] := RandomReal[{a, b}, {n, 2}]

mpt[pts_, n_: 3] :=
Table[Sum[pts[[Mod[i + j - 1, Length@pts, 1]]], {j, 1, n}]/n, {i, 1,
Length[pts]}]

nl[npts_, nitr_, n_: 2, a_: - 1, b_: 1] :=
NestList[mpt[#, n] &, randline[npts, a, b], nitr];

randplot[npt_, nit_, nst_: 10, na_: 2, pt_: False, init_: False, a_: - 1,
b_: 1] :=
Module[{n, i0 = Min[Max[1, nst], nit + 1]},
n = nl[npt, nit, na, a, b];
Show[
If[pt,
Graphics[
Flatten[Table[{Hue[(i - i0)/(nit + 1 - i0)],
Line[closeline[n[[i]]]], PointSize[Medium],
Point[n[[i]]]}, {i, nst, nit + 1}], 1]],
Graphics[
Flatten[Table[{Hue[(i - nst)/(nit + 1 - i0)],
Line[closeline[n[[i]]]]}, {i, nst, nit + 1}], 1]]
], If[init,
Graphics[{Opacity[0.6], Line[closeline[n[[1]]]]}],
Graphics[]
]
]
]


Here I also present the spectrum.

reference:
[1]:http://www.cs.cornell.edu/cv/ResearchPDF/PolygonSmoothingPaper.pdf

## Oct 6, 2013

### The Pythagorean theorem in "Zhou Bi Suan Jing" (周髀算经)

The celebrated Pythagorean theorem gives the relation between the sides of a right triangle: $a^2 + b^2 = c^2$ where $a,b,c$ are the lengths of the sides, with $c$ the one opposite to the right angle.

In China, the theorem is known as the "Gou Gu Theorem" (勾股定理) or "Gou Gu Xian Theorem" (勾股弦定理). It first appears in the oldest Chinese mathematical book that survives today, the "Zhou Bi Suan Jing" (周髀算经). "Zhou" refers to the ancient Zhou dynasty (周); "Bi" means arm and, according to the book, refers to the gnomon of a sundial. The book is dedicated to astronomical observation and calculation. "Suan Jing", or "classic of arithmetic", was appended later to honor the book's achievements in mathematics.

In Chapter I.1, in a dialog between the Duke of Zhou (周公) and Gao of Shang (商高), Gao said,
"故折矩, 以為句廣三, 股修四，徑隅五."
"So for the half of the rectangle, if Gou is three and Gu is four, then Xian is five."

Gou (勾/句), Gu (股), and Xian (弦) are the Chinese names of the three sides of a right triangle. This text essentially gives the Pythagorean triple (3, 4, 5), a triple of numbers that satisfies the Pythagorean equation $a^2 + b^2 = c^2$. Further explanation appears in Chapter I.2, in a dialog between Rong Fang (荣方) and Chen Zi (陈子). Chen Zi, explaining a problem concerning the measurement of the position of the sun, said,
"若求邪至日者, 以日下為句, 日高為股, 句股各自乘, 并而開方除之, 得邪至日."
"Suppose the distance to the sun is wanted. Let the distance to the projection of the sun on the ground be Gou, the distance of the sun to the ground be Gu. Square them respectively, add together and take the square root. Then the distance to the sun is obtained."

Now the Pythagorean theorem is given.

The author of Zhou Bi is unknown, and so is the year it was written. But according to the texts, it is a collection of the astronomical and mathematical methods of the Shang and Zhou dynasties (1100 - 1000 BC). The Duke of Zhou was the brother of the founder of the Zhou dynasty, which had been a state within the Shang dynasty. Gao of Shang was apparently a figure of the Shang dynasty, as suggested by his name. If this is true, the Pythagorean theorem was proposed as early as 1000 BC, in contrast to Pythagoras, who lived between 570 BC and 495 BC. Others argue that Zhou Bi was compiled by more than one author and settled into its final form around 200 BC.

#### references:

http://www-groups.dcs.st-and.ac.uk/history/HistTopics/Chinese_overview.html


## Sep 12, 2013

### Tangram

 Fig. 1 A standard tangram with rainbow colors: (red, orange, yellow, green, cyan, blue, purple)
The tangram (Chinese: 七巧板 - seven boards) is a set of boards (shown above) that can be put together to form specific shapes.  For example, different shapes of cat:
 Fig. 2 several shapes of cat

Mathematica codes to draw the tangram shown in Fig. 1:
Graphics[{EdgeForm[{Darker@Gray, Thick, Opacity[0.4]}],
Orange, Polygon[{{0, 0}, {0, 1}, {1/2, 1/2}}],
Green, Polygon[{{0, 1}, {1, 1}, {1/2, 1/2}}],
Red, Polygon[{{1, 1}, {1, 1/2}, {3/4, 3/4}}],
Yellow, Polygon[{{3/4, 3/4}, {1, 1/2}, {3/4, 1/4}, {1/2, 1/2}}],
Blue, Polygon[{{1/2, 1/2}, {3/4, 1/4}, {1/4, 1/4}}],
Cyan, Polygon[{{1/4, 1/4}, {3/4, 1/4}, {1/2, 0}, {0, 0}}],
Purple, Polygon[{{1/2, 0}, {1, 1/2}, {1, 0}}]
}]

A Mathematica app to play with tangram:

Framed@DynamicModule[{O1 = {1/6, 1/2}, O2 = {1/2, 5/6},
O3 = {11/12, 3/4}, O4 = {3/4, 1/2}, O5 = {1/2, 1/3},
O6 = {3/8, 1/8}, O7 = {5/6, 1/6}, p1 = {-.25, -.25},
p2 = {0, -.25}, p3 = {0.25, -.25}, p4 = {.5, -.25},
p5 = {.75, -.25}, p6 = {1, -.25}, p7 = {1.25, -.25}, v1, v2, v3,
v4, v5, v6, v7, r = 0.08, v0 = {1, 0},
g = Graphics[{Gray, PointSize[Medium], Point[{0, 0}]}],
gray = False},

{v1, v2, v3, v4, v5, v6,
v7} = ({r, 0} + #) & /@ {p1, p2, p3, p4, p5, p6, p7};
Column[{Row[{Text["check here to paint gray:"],
Checkbox[Dynamic[gray]]}],
LocatorPane[
Dynamic[{v1, v2, v3, v4, v5, v6, v7, O1, O2, O3, O4, O5, O6,
O7}], Dynamic[
Graphics[{EdgeForm[], If[gray, Gray, Orange],
Polygon[RotationTransform[{v0, Normalize[v1 - p1]},
O1] /@ {O1 - {1/6, 1/2} + {0, 0}, O1 - {1/6, 1/2} + {0, 1},
O1 - {1/6, 1/2} + {1/2, 1/2}}], If[gray, Gray, Green],
Polygon[RotationTransform[{v0, Normalize[v2 - p2]},
O2] /@ {O2 - {1/2, 5/6} + {0, 1}, O2 - {1/2, 5/6} + {1, 1},
O2 - {1/2, 5/6} + {1/2, 1/2}}], If[gray, Gray, Red],
Polygon[RotationTransform[{v0, Normalize[v3 - p3]},
O3] /@ {O3 - {11/12, 3/4} + {1, 1},
O3 - {11/12, 3/4} + {1, 1/2},
O3 - {11/12, 3/4} + {3/4, 3/4}}], If[gray, Gray, Yellow],
Polygon[RotationTransform[{v0, Normalize[v4 - p4]},
O4] /@ {O4 - {3/4, 1/2} + {3/4, 3/4},
O4 - {3/4, 1/2} + {1, 1/2}, O4 - {3/4, 1/2} + {3/4, 1/4},
O4 - {3/4, 1/2} + {1/2, 1/2}}], If[gray, Gray, Cyan],
Polygon[RotationTransform[{v0, Normalize[v5 - p5]},
O5] /@ {O5 - {1/2, 1/3} + {1/2, 1/2},
O5 - {1/2, 1/3} + {3/4, 1/4},
O5 - {1/2, 1/3} + {1/4, 1/4}}], If[gray, Gray, Blue],
Polygon[RotationTransform[{v0, Normalize[v6 - p6]},
O6] /@ {O6 - {3/8, 1/8} + {1/4, 1/4},
O6 - {3/8, 1/8} + {3/4, 1/4}, O6 - {3/8, 1/8} + {1/2, 0},
O6 - {3/8, 1/8} + {0, 0}}], If[gray, Gray, Purple],
Polygon[RotationTransform[{v0, Normalize[v7 - p7]},
O7] /@ {O7 - {5/6, 1/6} + {1/2, 0},
O7 - {5/6, 1/6} + {1, 1/2}, O7 - {5/6, 1/6} + {1, 0}}],
Opacity[0.6], Gray, PointSize[0.02],
Point[{O1, O2, O3, O4, O5, O6, O7}], Orange, Disk[p1, r],
Green, Disk[p2, r], Red, Disk[p3, r], Yellow, Disk[p4, r],
Cyan, Disk[p5, r], Blue, Disk[p6, r], Purple, Disk[p7, r],
`