The Hilbert transform of a function f(x) is defined as

    H(f)(y) = (1/pi) PV integral from -inf to +inf of f(x)/(x-y) dx,

where PV stands for "principal value." An efficient method for computing
this transform is the one introduced in Reference 2.
It is based on an expansion of f(x) in terms of rational eigenfunctions
of the Hilbert transform operator.
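As a quick sanity check on this definition (and its sign convention), the principal value may be approximated by brute force: pairing the points x = y+t and x = y-t cancels the singularity. The short snippet below is only such a check, not part of the codes; the comparison value is the one reported in Example 1 below.

% Brute-force check of the definition: pairing x = y+t and x = y-t gives
% H(f)(y) = (1/pi) * integral over t > 0 of (f(y+t) - f(y-t))/t dt.
y  = 1;
t  = 1e-6:1e-4:50;                  % start slightly above t = 0 to avoid 0/0
Hf = (1/pi)*trapz(t, (exp(-(y+t).^2) - exp(-(y-t).^2))./t)
% Hf is approximately -6.0716e-01, the value reported in Example 1 for y = 1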
There are two versions of this algorithm. The first is used when H(f)(y) is to be evaluated as a function of y, for arbitrary y. The second algorithm defines an operator that maps function values to Hilbert transform values, i.e., f(x) --> H(f)(x), on the same set of collocation points (defined below). The second code is therefore useful for solving integral or integro-differential equations. (It may be used, for example, to solve the Benjamin-Ono equation as described in Reference 1.)
The first code, hilb1.m, takes an arbitrary function f(x) (defined as a string function) as input. It then computes H(f)(y) at prescribed value(s) of y (which may be a scalar, vector, or matrix). See Example 1 below.
The second code, hilb2.m, takes as input the function values f(x) at special collocation points, defined below. It then returns values of H(f)(x) at these collocation points. See Example 2 below.
Both of these codes take as additional input parameters the number of terms in the expansion, as well as a scaling parameter b (called L in Reference 2). Choosing b optimally requires a certain amount of trial and error; one way to do this is sketched below.
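For instance, when a test case with a known transform is available (such as the Gaussian of Example 1), one can simply scan a range of b values and compare. The following sketch assumes the hilb1(f,N,b,y) call and the mfun('dawson',...) reference values used in Example 1 below:

% Tuning b by trial and error against the Gaussian test case of Example 1
% (any function with a known transform will do):
f   = 'exp(-x.^2)';
y   = [1 10 100];
hex = -2/sqrt(pi)*mfun('dawson',y);          % exact values (Symbolic Toolbox)
for b = 1:10
    h = real(hilb1(f,32,b,y));
    fprintf('b = %2d   max rel. error = %.2e\n', b, max(abs((h-hex)./hex)))
end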
The codes hilb1.m and hilb2.m are essentially based on the algorithm described in Reference 2. There is one important difference, however: we have used the following set of collocation points
x(n) = b*tan(pi*(n+1/2)/(2*N)), n = -N,..., N-1
instead of the points used in Reference 2, which were
x(n) = b*tan(pi*n/(2*N)), n = -N, ..., N-1.
This amounts to using the midpoint rule, rather than the trapezoidal rule, to discretize the integral formula for the Fourier coefficients. This was done for two reasons: first, it avoids the nuisance of dealing with a collocation point at infinity, and second, we have found that it actually yields more accurate results in many cases. A schematic version of the resulting computation is sketched below.
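For readers who want to see the structure of the computation, here is a schematic (and unoptimized) version of the expansion-based transform on the collocation points. It is not a copy of hilb2.m: the rational functions rho_n(x) = (b+i*x)^n/(b-i*x)^(n+1) are taken to be the eigenfunctions referred to above (with b playing the role of L in Reference 2), and the eigenvalue signs +i/-i reflect the sign convention of the definition given above.

% Schematic version of the expansion-based transform (not hilb2.m itself).
% With the sign convention above, the functions rho_n(x) = (b+i*x)^n/(b-i*x)^(n+1)
% satisfy H(rho_n) = i*rho_n for n >= 0 and H(rho_n) = -i*rho_n for n < 0.
% Writing e^(i*theta) = (b+i*x)/(b-i*x), the expansion coefficients are the
% Fourier coefficients of (b-i*x)*f(x) viewed as a function of theta.
b = 1; N = 8;
n = (-N:N-1)';
theta = pi*(n+1/2)/N;                  % midpoint nodes in the angular variable
x = b*tan(theta/2);                    % the collocation points x(n) above
F = 1./(1+x.^4);                       % sample function values (as in Example 2)
E = exp(1i*theta*n');                  % collocation matrix, E(k,m) = e^(i*theta_k*n_m)
a = E\((b - 1i*x).*F);                 % expansion coefficients a_n
lam = 1i*(n >= 0) - 1i*(n < 0);        % eigenvalues of H on rho_n
h = real((E*(lam.*a))./(b - 1i*x));    % H(f) at the collocation points
% compare with the exact values -x.*(1+x.^2)./(1+x.^4)/sqrt(2) used in Example 2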
Example 1: The Hilbert transform of the Gaussian f(x) = exp(-x^2) is H(f)(y) = -2/sqrt(pi) D(y), where D(y) is Dawson's function. This may be used to check the accuracy of hilb1.m. (Notes: (a) This example, as presented here, requires Matlab's Symbolic Toolbox to compute the Dawson function. (b) The value b = 5 used here is near optimal; see Reference 2.)
>> f = 'exp(-x.^2)';
>> y = [1 10 100];
>> h = real(hilb1(f,32,5,y))
h =
  -6.0716e-01  -5.6705e-02  -5.6422e-03
>> hex = -2/sqrt(pi)*mfun('dawson',y)
hex =
  -6.0716e-01  -5.6705e-02  -5.6422e-03
>> relerror = (h-hex)./hex
relerror =
  -1.4080e-14  -6.2187e-13  -4.6118e-16

The accuracy is of course not always as good as this; please see Reference 2 for a discussion of the convergence rate of this algorithm.
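To get a feel for the convergence rate in this particular example, one can simply repeat the computation for increasing values of the expansion parameter. This sketch reuses the same hilb1(f,N,b,y) call and the same Symbolic Toolbox reference value as above:

% Convergence check for the Gaussian example at y = 1:
hex = -2/sqrt(pi)*mfun('dawson',1);
for N = [4 8 16 32]
    h = real(hilb1('exp(-x.^2)', N, 5, 1));
    fprintf('N = %2d   rel. error at y = 1:  %.2e\n', N, abs((h - hex)/hex))
end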
Example 2: Here hilb2.m is applied to f(x) = 1/(1+x^4) at the collocation points (with b = 1 and N = 8), and the result is compared graphically with the exact transform:

>> b = 1;
>> N = 8;
>> n = [-N:N-1]';
>> x = b*tan(pi*(n+1/2)/(2*N));
>> F = 1./(1+x.^4);
>> h = real(hilb2(F,x,N,b));
>> xx = [-12:0.02:12];
>> hex = -xx.*(1+xx.^2)./(1+xx.^4)./sqrt(2);
>> plot(xx,hex,x,h,'o')
The plot shows the exact transform (solid line) along with the approximate transform values at the collocation points (circles).
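If a single number is preferred over a plot, the maximum error at the collocation points can also be printed directly, using the same exact expression as above:

>> max(abs(h - (-x.*(1+x.^2)./(1+x.^4)./sqrt(2))))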
1. R.L. James and J.A.C. Weideman, "Pseudospectral methods for the Benjamin-Ono equation," Proceedings of the 7th IMACS International Conference on Computer Methods for PDEs, R. Vichnevetsky, D. Knight, and G. Richter (eds.), IMACS, New Brunswick, pp. 371-377 (1992).
2. J.A.C. Weideman, "Computing the Hilbert transform on the real line," Math. Comp., Vol. 64, pp. 745-762 (1995).
Last updated: February 28, 1998.