Lab final exam: Due Sunday April 22, 18:00 EST

In the following exercises, we will explore various approximation methods applied to the function $$ f_a(x) = \begin{cases}x^a&\textrm{for}\,\,x>0\\0&\textrm{for}\,\,x\leq0\end{cases} $$ where $a>0$ is a parameter.
  1. Determine the membership of $f_a$ in the scale $C^{k,\alpha}([-1,1])$, that is, find the largest $k+\alpha$ such that $f_a\in C^{k,\alpha}([-1,1])$, as a function of $a>0$. Recall that $f\in C^{0,\alpha}([-1,1])$ for $0<\alpha\leq1$ if and only if $$ \sup_{-1\leq x < y\leq1}\frac{|f(x)-f(y)|}{|x-y|^\alpha}<\infty , $$ and that $f\in C^{k,\alpha}([-1,1])$ for $k\in\mathbb{N}$ and $0<\alpha\leq1$ if and only if $f\in C^k([-1,1])$ and $f^{(k)}\in C^{0,\alpha}([-1,1])$.
  2. Implement an algorithm to approximate $f_a$ by a piecewise constant function on the uniform partition of $[-1,1]$ consisting of $n$ subintervals. For example, one could use the value of $f_a$ at the midpoint of each subinterval (as happens in the midpoint rule). Plot a couple of illustrative examples with different $a$ and $n$. In the asymptotic regime where $n$ is large, we expect the error of the approximation to behave like $n^{-r}$ for some $r$, which we call the rate of convergence. Implement a procedure to approximately compute the error of the approximation in the maximum norm (you don't need to do anything too complicated), and estimate the rate of convergence for a number of different values of $a$, focusing around the point $a=1$ (e.g., for $a=0.5, 0.6, \ldots, 1.5$). Note that what happens in the preasymptotic regime (i.e., when $n$ is moderate) might pollute your estimate of the convergence rate (especially if you compute it automatically), so you need to exclude small values of $n$ from consideration. To identify roughly where the asymptotic regime starts, some manual fiddling, combined with theoretical insight, might be necessary. How does the rate of convergence depend on the regularity of the function? Have a guess at the dependence $r=r(a)$. Explain the behaviour in the context of approximation theory. (See the sketches after this list.)

    Note: The following exercises are the same as this one, except that they concern different approximation procedures. The general remarks, suggestions, and questions from this exercise will also apply to them.

  3. Implement an algorithm to interpolate $f_a$ by a continuous piecewise linear function on the uniform partition of $[-1,1]$ consisting of $n$ subintervals (as happens in the trapezoidal rule). Plot a couple of illustrative examples with different $a$ and $n$. Implement a procedure to approximately compute the error of the approximation in the maximum norm, and estimate the rate of convergence for a number of different values of $a$, focusing around the point $a=2$. Is the behaviour different from that in the preceding exercise? Please explain. (See the corresponding sketch after this list.)
  4. Implement an algorithm to compute the Bernstein polynomial of degree $n$ for $f_a$. Do not forget to rescale the standard construction on $[0,1]$ to $[-1,1]$. Plot a couple of illustrative examples with different $a$ and $n$. Implement a procedure to approximately compute the error of the approximation in the maximum norm, and estimate the rate of convergence for a number of different values of $a$. What is new compared to the previous exercises? Explain the behaviour. (See the corresponding sketch after this list.)
  5. Implement an algorithm to compute the Chebyshev interpolation polynomial of degree $n$ for $f_a$. Plot a couple of illustrative examples with different $a$ and $n$. Implement a procedure to approximately compute the error of the approximation in the maximum norm, and estimate the rate of convergence for a number of different values of $a$. How does the rate of convergence depend on the regularity of the function? Please explain the behaviour. (See the corresponding sketch after this list.)
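
The following is a minimal sketch for Exercise 2, assuming NumPy is available in the notebook; the helper names (`f_a`, `pw_constant_error`, `estimate_rate`) and the sampling parameters are illustrative choices, not part of the assignment.

```python
import numpy as np

def f_a(x, a):
    """f_a(x) = x**a for x > 0 and 0 for x <= 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.abs(x) ** a, 0.0)

def pw_constant_error(a, n, samples_per_cell=200):
    """Max-norm error of the midpoint piecewise-constant approximation of f_a
    on the uniform partition of [-1, 1] into n subintervals, estimated by
    sampling densely inside every subinterval."""
    edges = np.linspace(-1.0, 1.0, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    err = 0.0
    for i in range(n):
        xs = np.linspace(edges[i], edges[i + 1], samples_per_cell)
        err = max(err, np.max(np.abs(f_a(xs, a) - f_a(mids[i], a))))
    return err

def estimate_rate(err_fn, a, ns):
    """Estimate r in error ~ n**(-r) as minus the slope of log(error) vs log(n)."""
    errs = np.array([err_fn(a, n) for n in ns])
    slope, _ = np.polyfit(np.log(ns), np.log(errs), 1)
    return -slope

# Keep n large enough to stay (roughly) in the asymptotic regime.
ns = 2 ** np.arange(6, 12)
for a in np.arange(0.5, 1.51, 0.1):
    print(f"a = {a:.1f}: estimated rate r = {estimate_rate(pw_constant_error, a, ns):.2f}")
```

A log–log plot of the errors against `ns` is a useful sanity check that the least-squares fit is indeed taken inside the asymptotic regime.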
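
For Exercise 3, one possibility is to let `np.interp` perform the piecewise-linear interpolation between the uniform nodes; this sketch reuses `f_a`, `estimate_rate`, and `ns` from the Exercise 2 sketch.

```python
import numpy as np

def pw_linear_error(a, n, samples=20000):
    """Max-norm error of the continuous piecewise-linear interpolant of f_a
    on the uniform partition of [-1, 1] into n subintervals."""
    nodes = np.linspace(-1.0, 1.0, n + 1)
    xs = np.linspace(-1.0, 1.0, samples)
    return np.max(np.abs(f_a(xs, a) - np.interp(xs, nodes, f_a(nodes, a))))

for a in np.arange(1.5, 2.51, 0.1):
    print(f"a = {a:.1f}: estimated rate r = {estimate_rate(pw_linear_error, a, ns):.2f}")
```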
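
For Exercise 4, the Bernstein basis can be evaluated stably through `scipy.stats.binom.pmf`, since $\binom{n}{k} t^k (1-t)^{n-k}$ is exactly the binomial probability mass function. SciPy is assumed to be available here, and `f_a`, `estimate_rate`, and `ns` again come from the Exercise 2 sketch.

```python
import numpy as np
from scipy.stats import binom

def bernstein_approx(a, n, xs):
    """Degree-n Bernstein polynomial of f_a on [-1, 1], evaluated at xs.
    The classical construction lives on [0, 1], so map x -> t = (x + 1) / 2."""
    t = (np.asarray(xs, dtype=float) + 1.0) / 2.0
    k = np.arange(n + 1)
    values = f_a(2.0 * k / n - 1.0, a)             # f_a at the mapped nodes k/n
    basis = binom.pmf(k[:, None], n, t[None, :])   # C(n,k) t^k (1-t)^(n-k)
    return values @ basis

def bernstein_error(a, n, samples=2000):
    xs = np.linspace(-1.0, 1.0, samples)
    return np.max(np.abs(f_a(xs, a) - bernstein_approx(a, n, xs)))

for a in (0.5, 1.0, 1.5, 2.0):
    print(f"a = {a:.1f}: estimated rate r = {estimate_rate(bernstein_error, a, ns):.2f}")
```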
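
For Exercise 5, recent NumPy versions provide `numpy.polynomial.chebyshev.Chebyshev.interpolate`, which interpolates at Chebyshev points of the first kind on a given domain; a barycentric interpolator at Chebyshev nodes would work equally well. As before, `f_a`, `estimate_rate`, and `ns` come from the Exercise 2 sketch.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def chebyshev_error(a, n, samples=20000):
    """Max-norm error of the degree-n Chebyshev interpolant of f_a on [-1, 1]."""
    p = Chebyshev.interpolate(lambda x: f_a(x, a), n, domain=[-1, 1])
    xs = np.linspace(-1.0, 1.0, samples)
    return np.max(np.abs(f_a(xs, a) - p(xs)))

for a in (0.5, 1.0, 1.5, 2.0):
    print(f"a = {a:.1f}: estimated rate r = {estimate_rate(chebyshev_error, a, ns):.2f}")
```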

How and what to submit

Please submit a link to your Jupyter notebook on GitHub by email before Sunday, April 22, 18:00 EST. The subject line of the email should be: