Equivalence of Rectangular and Trapezoidal Approximations

    Apr 12, 2024

    Discusses the equivalence and error estimation in integrating piecewise continuous functions using rectangular and trapezoidal approximations.

    For the integration of a piecewise continuous function, the rectangular approximation and the trapezoidal approximation converge to the same value; they differ only in how fast they converge. The essence of this problem is error estimation.

    We only need to analyze how the error shrinks as the approximation is refined.

    To better understand the error, I will explicitly compute and compare the rectangular and trapezoidal approximations of the integral of a continuous function on a closed interval.

    Given $f \in \mathcal{C}[a, b]$, let's start with the expression for the rectangular approximation of the integral:

    $$\lim_{n\to\infty}\sum_{k=1}^n f\left(\frac{(n-k)a+kb}{n}\right)\cdot\frac{b-a}{n} = \lim_{n\to\infty}\left[\sum_{k=1}^{n-1} f\left(\frac{(n-k)a+kb}{n}\right)\cdot\frac{b-a}{n}+f(b)\cdot\frac{b-a}{n}\right] \tag{1}$$

    Now, let's look at the expression for trapezoidal approximation:

    $$\lim_{n\to\infty}\sum_{k=1}^n\frac{f\left(\frac{(n-k+1)a+(k-1)b}{n}\right)+f\left(\frac{(n-k)a+kb}{n}\right)}{2}\cdot\frac{b-a}{n} \tag{2}$$
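
    To make the two expressions concrete, here is a small numeric sketch in Python. The function names and the test integrand $f(x)=x^2$ on $[0,1]$ (whose exact integral is $\frac{1}{3}$) are my own choices; the nodes $x_k=\frac{(n-k)a+kb}{n}$ follow the formulas above:

```python
# Right-endpoint rectangular and trapezoidal Riemann sums for f on [a, b],
# using the node x_k = ((n - k) a + k b) / n from the formulas above.

def rect_sum(f, a, b, n):
    h = (b - a) / n
    return sum(f(((n - k) * a + k * b) / n) for k in range(1, n + 1)) * h

def trap_sum(f, a, b, n):
    h = (b - a) / n
    return sum(
        (f(((n - k + 1) * a + (k - 1) * b) / n) + f(((n - k) * a + k * b) / n)) / 2
        for k in range(1, n + 1)
    ) * h

f = lambda x: x * x                     # sample integrand; exact integral on [0, 1] is 1/3
print(rect_sum(f, 0.0, 1.0, 1000))      # ≈ 0.3338335
print(trap_sum(f, 0.0, 1.0, 1000))      # ≈ 0.3333335
```

    Both sums approach $\frac{1}{3}$, but the trapezoidal sum is already much closer at the same $n$.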

    Let's rewrite this expression; pay attention to the change in the limits of summation:

    $$\begin{align*}
    &\lim_{n\to\infty}\sum_{k=1}^n\frac{f\left(\frac{(n-k+1)a+(k-1)b}{n}\right)+f\left(\frac{(n-k)a+kb}{n}\right)}{2}\cdot\frac{b-a}{n} \\
    &=\lim_{n\to\infty}\sum_{k=1}^n\frac{f\left(\frac{(n-k+1)a+(k-1)b}{n}\right)}{2}\cdot\frac{b-a}{n}+\lim_{n\to\infty}\sum_{k=1}^n\frac{f\left(\frac{(n-k)a+kb}{n}\right)}{2}\cdot\frac{b-a}{n} \\
    &=\lim_{n\to\infty}\sum_{k=0}^{n-1}\frac{f\left(\frac{(n-k)a+kb}{n}\right)}{2}\cdot\frac{b-a}{n}+\lim_{n\to\infty}\sum_{k=1}^n\frac{f\left(\frac{(n-k)a+kb}{n}\right)}{2}\cdot\frac{b-a}{n} \\
    &=\lim_{n\to\infty}\left[\left(\frac{f(a)}{2}\cdot\frac{b-a}{n}+\sum_{k=1}^{n-1}\frac{f\left(\frac{(n-k)a+kb}{n}\right)}{2}\cdot\frac{b-a}{n}\right)+\left(\sum_{k=1}^{n-1}\frac{f\left(\frac{(n-k)a+kb}{n}\right)}{2}\cdot\frac{b-a}{n}+\frac{f(b)}{2}\cdot\frac{b-a}{n}\right)\right] \\
    &=\lim_{n\to\infty}\left(\frac{f(a)}{2}\cdot\frac{b-a}{n}+\frac{f(b)}{2}\cdot\frac{b-a}{n}\right)+\lim_{n\to\infty}\left(2\cdot\sum_{k=1}^{n-1}\frac{f\left(\frac{(n-k)a+kb}{n}\right)}{2}\cdot\frac{b-a}{n}\right) \\
    &=\lim_{n\to\infty}\left(\frac{f(a)+f(b)}{2}\cdot\frac{b-a}{n}+\sum_{k=1}^{n-1}f\left(\frac{(n-k)a+kb}{n}\right)\cdot\frac{b-a}{n}\right) \\
    &=\lim_{n\to\infty}\left[\sum_{k=1}^{n-1}f\left(\frac{(n-k)a+kb}{n}\right)\cdot\frac{b-a}{n}+\frac{f(a)+f(b)}{2}\cdot\frac{b-a}{n}\right] \tag{3}
    \end{align*}$$

    Comparing equation $(3)$ with equation $(1)$, did you notice? When integrating with the two different approximation methods, the difference is exactly:

    $$\frac{f(a)+f(b)}{2}\cdot\frac{b-a}{n}-f(b)\cdot\frac{b-a}{n}=\frac{f(a)-f(b)}{2}\cdot\frac{b-a}{n}$$

    We denote this expression by $\Delta(n)$. Notice that it involves only the single variable $n$, and it is easy to see:

    • I. If $f(a)=f(b)$, then $\Delta(n)\equiv 0$.

    • II. If $f(a)\neq f(b)$, then $\Delta(n)=\Theta\left(\frac{1}{n}\right)$, since it is exactly a nonzero constant times $\frac{1}{n}$.
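
    In fact, no limit is needed: for every finite $n$, the trapezoidal sum minus the rectangular sum is exactly $\Delta(n)$. A quick numeric check (a sketch in Python; the helper names and the test integrand $e^x$ on $[0,1]$ are my own choices):

```python
import math

def rect_sum(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + k * h) for k in range(1, n + 1)) * h  # right endpoints

def trap_sum(f, a, b, n):
    h = (b - a) / n
    return sum((f(a + (k - 1) * h) + f(a + k * h)) / 2 for k in range(1, n + 1)) * h

def delta(f, a, b, n):
    return (f(a) - f(b)) / 2 * (b - a) / n  # the Delta(n) derived above

f, a, b = math.exp, 0.0, 1.0
for n in (10, 100, 1000):
    lhs = trap_sum(f, a, b, n) - rect_sum(f, a, b, n)
    assert abs(lhs - delta(f, a, b, n)) < 1e-9  # exact identity, up to rounding
```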

    The first case might give you the feeling of reading an O. Henry novel. Surprisingly, whenever $f(a)=f(b)$, no matter how the graph of $f$ fluctuates inside the interval, the rectangular and trapezoidal approximations agree exactly, so their errors are equal. Indeed, if you write out the two sums and compare them term by term, you'll find that they contain essentially the same terms.
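
    Case I can be observed numerically. With $f=\sin$ on $[0,\pi]$ (my own choice of example, picked because $\sin 0=\sin\pi=0$), the two sums coincide for every $n$:

```python
import math

def rect_sum(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + k * h) for k in range(1, n + 1)) * h  # right endpoints

def trap_sum(f, a, b, n):
    h = (b - a) / n
    return sum((f(a + (k - 1) * h) + f(a + k * h)) / 2 for k in range(1, n + 1)) * h

for n in (7, 50, 400):
    r = rect_sum(math.sin, 0.0, math.pi, n)
    t = trap_sum(math.sin, 0.0, math.pi, n)
    assert abs(r - t) < 1e-9  # identical sums whenever f(a) == f(b)
```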

    The second case leads to a deeper conclusion. Denote the error of equation $(1)$ by $\delta_1(n)$ and the error of equation $(3)$ by $\delta_2(n)$. Then:

    $$\delta_1(n)-\delta_2(n)=\Delta(n)=\Theta\left(\frac{1}{n}\right)$$

    What does this mean? It means that at most one of $\delta_1(n)$ and $\delta_2(n)$ can be $o\left(\frac{1}{n}\right)$: if both were, their difference would also be $o\left(\frac{1}{n}\right)$, contradicting $\Delta(n)=\Theta\left(\frac{1}{n}\right)$. Intuition suggests that the trapezoidal approximation has the smaller error, and it does: for smooth $f$ the trapezoidal error is $O\left(\frac{1}{n^2}\right)$, so the rectangular error must absorb the entire $\Theta\left(\frac{1}{n}\right)$ gap.
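
    As a sanity check on the rates (again a sketch with my own test case, $f(x)=e^x$ on $[0,1]$, whose exact integral is $e-1$): doubling $n$ roughly halves the rectangular error but quarters the trapezoidal error:

```python
import math

def rect_sum(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + k * h) for k in range(1, n + 1)) * h  # right endpoints

def trap_sum(f, a, b, n):
    h = (b - a) / n
    return sum((f(a + (k - 1) * h) + f(a + k * h)) / 2 for k in range(1, n + 1)) * h

exact = math.e - 1.0                      # integral of e^x over [0, 1]
for n in (100, 200, 400):
    err_rect = abs(rect_sum(math.exp, 0.0, 1.0, n) - exact)
    err_trap = abs(trap_sum(math.exp, 0.0, 1.0, n) - exact)
    print(n, err_rect, err_trap)          # err_rect ~ C/n, err_trap ~ C'/n^2
```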