Research Activities
My research has mainly focused on wave propagation, surface phenomena and theoretical studies of surfaces, and complex systems. Large parts of this research have required extensive use of computer simulation methods. My expertise is in the fields of optics (statistical optics, nanoplasmonics), computational physics including computational electromagnetics, the physics of complex systems, and statistical mechanics. In what follows, I briefly summarize my research and discuss my contributions to these fields.
My experience includes the following topics:
- Optics of disordered systems
- Optics of granular thin films (nanoplasmonics)
- Self-affine scale invariant structures
- Complex random networks
- Economic and social systems
Optics of disordered systems
As students we learn that when light impinges on the flat interface of a semi-infinite homogeneous medium, the reflected and transmitted fields are fully determined by the Fresnel formulae. However, if the interface has a nontrivial structure, and/or the medium shows some degree of bulk randomness (i.e., it is not homogeneous), the distribution of the scattered light is much harder to predict. Lord Rayleigh, well over one hundred years ago, was probably the first to study this problem theoretically. Since then, partly because of the wide range of practical (and military) applications, large research resources have been allocated to the topic. After many years of research, the field is still vibrant and keeps fascinating mathematicians, physicists and engineers alike.
My research in this field has concentrated on rough-surface scattering, and in particular on multiple-scattering phenomena and the coherent effects they give rise to. Given the scattering geometry and the surface topography -- or, more precisely, its statistical properties -- one wants to predict the angular distribution of the scattered light. This is called the forward scattering problem. To achieve this, one in principle has to solve Maxwell's equations subject to the appropriate boundary conditions at the surface. For a randomly rough surface, this is in general still too hard a problem; it is almost like doing chemistry by always having to solve the Schrödinger equation -- practically impossible. As a consequence, much of my research related to this problem has been devoted to finding approximate solutions, judging their quality, and trying to identify simple geometries where certain optical phenomena can be observed under favorable experimental
conditions. One simplification that we have often applied is to study one-dimensional scattering geometries, i.e. scattering systems where the surface roughness is a function of a single variable, $x_1$, with the incident light polarized either in, or out of, the plane of incidence (i.e. p- or s-polarization). Under these conditions, the scattering problem still shows many of the characteristic features of the general problem. Moreover, it can be formulated as a scalar wave equation -- resulting in dramatic simplifications, particularly for numerical simulations; in this latter case the Maxwell equations can be solved rigorously.
Figure 2 depicts one of our simulation results for the angular distribution of the incoherently (diffusely) scattered light from a one-dimensional film geometry where the lower surface was rough and the incoming s-polarized light was incident at an angle $\theta_0$ from the (mean) surface normal. The intensity distribution shows a rich structure; the peak in the retro-reflection direction ($\theta_s = -\theta_0$) is attributed to the enhanced backscattering phenomenon, while the peaks located symmetrically around it constitute the satellite peak phenomenon. By various types of perturbation theory and numerical simulations, both these phenomena (and others) have been studied with the aim of uncovering their origin and identifying under which experimental conditions they can occur. The peak structure observed in Fig. 2 is a result of constructive interference between multiply scattered light paths and their reciprocal partners (a coherent effect). Such coherent interference effects systematically take place only in certain well-defined directions determined by the scattering geometry and the optical properties of the media involved.
Furthermore, we have extensively studied the properties of the so-called intensity-intensity correlation functions. These correlation functions tell us statistically how the intensities from two experiments conducted on one and the same scattering system are interrelated. As for bulk random systems, these correlation functions can be classified as short-, long- and infinite-range correlation functions. The first is independent of the length of the illuminated section of the rough surface and has been observed experimentally. The two latter, however, scale as one over the surface length and are therefore more difficult to access experimentally. Surface random systems, in contrast to bulk random systems, in addition give rise to intermediate-range correlation functions. The various types of correlation functions show a rich (peak) structure, including the memory effect and the reciprocal memory effect. Our main finding on intensity-intensity correlation functions relates the statistical properties of the scattered fields to whether certain correlation functions vanish or not. In particular, we can distinguish whether the random process satisfied by the scattered (or transmitted) field is (i) Gaussian, (ii) circular complex Gaussian, or (iii) non-Gaussian.
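To make the notion concrete, the following minimal Python sketch (illustrative only, not our production code) estimates a normalized intensity-intensity correlation function from an ensemble of speckle intensities; the circular-complex-Gaussian surrogate data merely stand in for the scattered field of successive surface realizations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate ensemble: N realizations of a speckle intensity sampled at M
# scattering angles. A circular complex Gaussian field is a purely
# illustrative stand-in for the scattered field of one surface realization.
N, M = 20000, 64
field = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)
I = field.real**2 + field.imag**2          # intensities, <I> = 1

# Normalized correlation C(q, q') = <dI(q) dI(q')> / (<I(q)><I(q')>),
# with dI = I - <I> and <.> the ensemble average.
mean_I = I.mean(axis=0)
dI = I - mean_I
C = (dI.T @ dI) / N / np.outer(mean_I, mean_I)

# For fully developed (circular Gaussian) speckle, C(q, q) -> 1.
print(np.diag(C)[:4])
```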
Lately, together with industrial partners, we have been involved in an initiative to derive approximate expressions for parameters used in the optical industry to characterize optical materials. These expressions have been utilized to determine optimal production parameters in order to achieve certain optical properties of the resulting products. In particular, we have worked on the quantities haze and gloss. Crudely speaking, these are relative measures of how diffuse or specular, respectively, a material is (as ``seen'' by incident light of a given wavelength). Our analytic approximations were compared to rigorous computer simulation results, and good agreement was found over large regions of parameter space.
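As an illustration of the kind of quantity involved, here is a small Python sketch of a haze-like estimate, assuming the common convention that haze is the fraction of the scattered power falling outside a narrow (here 2.5 degree) cone around the specular direction; the angular distribution used is a toy example, not one of our simulation results.

```python
import numpy as np

def haze(theta_deg, intensity, theta_spec_deg, cone_deg=2.5):
    """Fraction of the total scattered power falling outside a narrow cone
    around the specular direction (an ASTM-like haze convention).
    Assumes a uniformly spaced angular grid."""
    dtheta = theta_deg[1] - theta_deg[0]
    total = intensity.sum() * dtheta
    near_specular = np.abs(theta_deg - theta_spec_deg) <= cone_deg
    specular = intensity[near_specular].sum() * dtheta
    return (total - specular) / total

# Toy angular distribution: a sharp specular peak on a weak diffuse background.
theta = np.linspace(-90.0, 90.0, 2001)
I = np.exp(-0.5 * ((theta - 20.0) / 0.5) ** 2) + 0.01
print(f"haze ~ {haze(theta, I, theta_spec_deg=20.0):.2f}")
```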
I have also been involved in optical inverse scattering problems dealing with the question of how to construct randomly rough surfaces with well-defined scattering properties. This we refer to as a designer surface problem. The starting point of this research is the assumption that the rough surface can be written as an infinite sum of a characteristic profile, or groove, weighted by random amplitudes. The angular distribution of the light scattered or transmitted by such surfaces depends on the distribution from which these amplitudes are drawn. After specifying a desired angular distribution for the scattered or transmitted light, we try to ``design'' a surface by adjusting the distribution of the random amplitudes such that the resulting angular distribution coincides with the desired one. Technically, the geometrical-optics limit of the Kirchhoff approximation enabled us to derive analytic expressions for the (often complicated) distributions satisfied by the amplitudes. These expressions were then used to generate ensembles of rough surfaces that entered rigorous (Monte Carlo) simulations in order to judge the ``quality of the design''. The results were often found to be quite satisfactory. Hence, we were able to design surfaces with well-defined scattering properties, an often desired capability in practical applications. One example is given in Fig. 3, where the surface was designed to act as a two-dimensional uniform diffuser.
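A minimal Python sketch of the surface-construction step, assuming a finite, truncated sum of translated grooves and a placeholder uniform amplitude distribution; in the actual design procedure the amplitudes would be drawn from the analytically derived distribution, and the groove shape shown here is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def groove(u, b):
    """Characteristic profile: a single triangular groove of base width b
    (an illustrative choice of groove shape)."""
    return np.where(np.abs(u) < b / 2, 1.0 - 2.0 * np.abs(u) / b, 0.0)

# Surface built as a sum of translated grooves weighted by random
# amplitudes a_n; the amplitude distribution controls the angular
# distribution of the scattered light.
b, n_grooves = 1.0, 200
x = np.linspace(0.0, n_grooves * b, 20001)
a = rng.uniform(-1.0, 1.0, size=n_grooves)   # placeholder distribution
zeta = np.zeros_like(x)
for n in range(n_grooves):
    zeta += a[n] * groove(x - (n + 0.5) * b, b)
```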
While with Electromagnetic Geoservices AS, I worked on forward modeling and inversion of extremely low-frequency electromagnetic data. The main application of this technology is in the petroleum industry, where one tries to distinguish hydrocarbon-filled reservoirs from those that are (salt) water filled. This can be achieved due to the pronounced difference in reservoir resistivity between the two situations.
Optics of granular thin films (nanoplasmonics)
When a metal is evaporated, say, under ultra-high vacuum conditions onto a dielectric substrate, granular thin metal films may result, depending on the wetting properties. The result of Ag evaporated onto MgO is depicted in Fig. 4(a). Such thin discontinuous films are characterized by small nanometer-sized islands distributed over the entire surface of the substrate.
Optical techniques have the potential to be used as in situ, non-invasive monitoring tools that are inexpensive and easily adaptable to changing environments. In our research, we have focused on the optical properties of such nano-sized thin films. The goal was to be able to monitor the parameters characterizing the island film (particle sizes, aspect ratio, mean island-island separation, etc.) during the growth process.
To this end, we focused on the positions of the plasmon resonances of the system. The degeneracy of the transverse and longitudinal modes of the free-standing particles is lifted by the presence of the substrate and the interaction between the islands. It turned out that the positions of these resonances were rather sensitive to the geometrical parameters. In order to calculate the optical properties of such island films, one needs to know their polarizabilities. A pure dipole interaction was insufficient to reproduce the experimental spectra (Fig. 4(b)); mainly due to the interaction with the substrate, higher-order multipoles also had to be taken into account. Retardation effects over the size of the particles could, however, be neglected to a good approximation. Hence the scattering problem reduces to solving the Laplace equation, for which a complete set of functions exists when the particles have a spherical or spheroidal shape. Thus the problem amounted to finding the unknown coefficients of the multipole expansion. They are determined by a system of linear equations that results from imposing the boundary conditions at the interfaces of the particles and the substrate.
Generating the coefficient matrix of this system in an accurate way turned out to be rather challenging. The cause of the problem was severe numerical cancellation taking place when integrating what is essentially the product of two highly oscillatory associated Legendre functions (of high order). To handle these cancellations, extended precision had to be used in parts of the calculations (often using 30-40 significant digits!). The reward, however, was that we were in the end able to extract accurate quantitative information about the layer structure by fitting the model to the experimental spectra (Fig. 4(b)). The layer parameters obtained in this way were later compared to the results of more costly ex situ synchrotron radiation measurements, and more than satisfactory agreement was found.
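The following Python/mpmath sketch illustrates the extended-precision strategy on a representative overlap integral of two high-order associated Legendre functions; the specific orders and the 40-digit setting are illustrative, not the exact integrals of our model.

```python
from mpmath import mp, legenp, quad

mp.dps = 40  # ~40 significant digits to tame the cancellations

def overlap(l1, l2, m):
    """Overlap integral int_{-1}^{1} P_{l1}^{m}(x) P_{l2}^{m}(x) dx in
    extended precision; for large l1, l2 the integrand oscillates rapidly
    and double precision loses most of its significant digits."""
    f = lambda x: legenp(l1, m, x) * legenp(l2, m, x)
    return quad(f, [-1, 1])

# Orthogonality check: the integral vanishes for l1 != l2 even at high order,
# which double-precision quadrature typically fails to resolve cleanly.
print(overlap(40, 38, 2))
```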
We have also studied metal deposition on absorbing oxide substrates, like TiO2 and ZnO. For the former system, we started with a numerical study and found, as expected, two plasmon resonances. To our initial surprise, we in addition observed a third peak (Fig. 5(a)). After having ruled out a numerical artifact, we realized, by generating potential maps, that this peak could in fact be attributed to a quadrupole resonance (Fig. 5(b)). The existence of this resonance was later confirmed experimentally.
Self-affine scale invariant structures
Nature is full of complex structures on all scales, from the scale of the universe down to its elementary building blocks. Self-affinity is a term used to characterize a certain class of surfaces. In particular, if the surface height at position x is denoted h(x), then the surface is said to be self-affine if the statistical properties of h(x) and its rescaled version, $\lambda^{-H} h(\lambda x)$, are identical (scale invariance). Here H is a parameter characterizing this scale invariance, known as the Hurst or roughness exponent. Over the years, it has become apparent that self-affine surfaces are abundant in nature. What makes such an invariance particularly interesting is the fact that the properties of the system can be studied at a given length scale, say, the laboratory scale, and the results obtained there rescaled (within the self-affine regime) to the scale of interest, typically beyond laboratory scales.
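In formulae, with $\overset{d}{=}$ denoting equality in distribution, self-affinity states that the height profile and its anisotropically rescaled version cannot be told apart statistically; an equivalent, and often more practical, statement is the power-law growth of the height-difference fluctuations:

\[
  h(x) \;\overset{d}{=}\; \lambda^{-H}\, h(\lambda x),
  \qquad
  \left\langle \left[\, h(x+\ell) - h(x) \,\right]^{2} \right\rangle^{1/2}
  \;\propto\; \ell^{\,H}.
\]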
In order to take advantage of the powerful concept of rescaling, one needs to know the appropriate Hurst exponent of the system at hand and the range of scales over which it applies. To obtain this kind of information, we have developed a wavelet-based technique for the determination of the Hurst exponent from numerical (or experimental) data. This method, which we termed the average wavelet coefficient (AWC) method, was successfully tested for consistency against well-established self-affine structures. The methodology is particularly powerful when the self-affine scaling regime does not extend over all available scales, and/or the scaling shows some cross-over phenomenon. Technically, this cross-over detection ability is related to the fact that wavelets at different scales are orthogonal. With most other Hurst-exponent measuring techniques, including detrended fluctuation analysis (DFA) and the Fourier spectrum method, this cross-over scale cannot easily be extracted since the behavior around it is smeared out. Later, the AWC method was generalized to handle scaling in higher dimensions, as well as multi-affine structures. Furthermore, based on wavelets, we developed a new fast algorithm for generating self-affine profiles. Relative to other methods, it is most powerful for long profiles, owing to its speed and its reliability in generating (extremely) long profiles with the desired scaling property extending over the entire range of scales, without any need for additional modification of the algorithm.
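The following Python sketch (using numpy and PyWavelets) illustrates both steps in their simplest form: a profile with prescribed Hurst exponent is synthesized by Fourier filtering, and H is then recovered from the scaling of the average wavelet coefficient, which for a self-affine signal grows with scale a as a^(H + 1/2). This is a bare-bones illustration of the AWC idea, not our published algorithm.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)

def self_affine_profile(n, H):
    """Fourier-filtering synthesis: a profile whose power spectrum decays
    as k^{-(1+2H)}, i.e. self-affine with Hurst exponent H."""
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(H + 0.5))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    return np.fft.irfft(amp * np.exp(1j * phase), n)

def awc_hurst(h, wavelet="db4", levels=10):
    """AWC estimate: the mean |detail coefficient| at scale a = 2^j
    scales as a^{H + 1/2}; H follows from the log-log slope."""
    coeffs = pywt.wavedec(h, wavelet, level=levels)
    details = coeffs[1:][::-1]                 # finest scale (j = 1) first
    scales = 2.0 ** np.arange(1, len(details) + 1)
    w = np.array([np.mean(np.abs(d)) for d in details])
    slope = np.polyfit(np.log(scales), np.log(w), 1)[0]
    return slope - 0.5

h = self_affine_profile(2**18, H=0.7)
print(f"estimated H ~ {awc_hurst(h):.2f}")     # should come out close to 0.7
```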
The identification of physical systems showing self-affine scale invariance is, however, not the end goal in itself; the ultimate interest is to study the consequences of such an invariance for the physical parameters used to characterize the system. For instance, together with Drs. Stephane Roux and Damien Vandembroucq, we studied wave scattering from self-affine surfaces. Within the single-scattering approximation, we were able to solve for the angular distribution of the scattered intensity in closed form. This expression was given in terms of the Lévy (or stable) distribution known from statistics and the parameters defining the self-affinity and the scattering geometry. The predictions of this model were compared to rigorous Monte Carlo simulations, and later to experimental scattering data obtained for self-affine rolled aluminum surfaces (Figs. 6 and 7). The conclusion was that the model outperformed previous models, and that it could be used to extract with confidence the Hurst exponent and the so-called topothesy of the underlying surface. In the future, we will try to also include multiple scattering in this model by introducing a truncation of the Lévy distribution. Empirically, we hope to be able to relate the degree of multiple scattering to the truncation parameter. Furthermore, we have studied the implications of scale invariance in economic systems and how such structures can be used to optimize portfolios and investment strategies (more on this below).
Complex random networks
The word network is familiar from daily life in contexts like computer networks and social networks. In technical terms, it is used for a set of countable objects, called nodes or vertices, among which relations or dependencies (links or edges) are defined (an example of a power grid network is given in Fig. 9(a)). Sociologists, for instance, have studied such network structures for years. They are interested in, for example, friendship networks, where individuals are the nodes and the existence of a friendship between two of them corresponds to a link. With the advent of the computer, however, the amount of data contained in typical networks of interest became just too large for the human eye to serve as the analyzing tool. Until then, the visual inspection of networks drawn on a piece of paper had been common, but such a strategy is insufficient for large networks. It was at this point in the history of network analysis that (statistical) physicists entered the arena, equipped with their statistical physics toolkit.
In many real-world networks, like the WWW, social networks, protein interaction networks, etc., it is of interest to be able to find groups of nodes (home pages, friends, proteins) that are highly linked among themselves and less so to the rest of the network. Such a set of nodes is said to form a community or a cluster. For instance, within a cell such communities might represent proteins allocated specialized tasks or related functions. We have developed an algorithm, based on diffusion, or the paradigm of the random walk, that gives an overview of the network structure at the global scale. Notice that in order to obtain such information one needs a global measure; a local one is insufficient. Our algorithm prescribes, in a systematic way, a walker current (a real number) to every node of the network. The main idea was that nodes belonging to the same cluster would have similar current values. It is interesting to note that the search engine Google seems, at least initially, to have used a similar random-walk picture for its ``PageRank'', a number used to define the order in which search results are presented. This feature has undoubtedly contributed substantially to the great success of this search engine.
Technically, the (outgoing) walker currents (per unit link weight) prescribed to the nodes come from the master equation obtained by considering an artificial random walk (diffusion) process on the network. This equation is the result of the conservation of walkers. Starting from any initial state, walkers will with time redistribute themselves throughout the network so that eventually the stationary state is reached. The time evolution of the system can be described in terms of decaying modes, which formally come about as eigenvalues and eigenvectors of the transfer or diffusion matrices. By considering less and less slowly decaying modes (corresponding to smaller and smaller eigenvalues), more and more detailed community structures can be mapped out.
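The following Python sketch (using numpy and networkx) conveys the diffusion idea in its simplest form: build the random-walk transfer matrix of a small two-community test graph and split the nodes by the sign of the slowest decaying eigenmode. It is a caricature of our algorithm, which works with walker currents per unit link weight and a systematic mode-by-mode subdivision; the test graph here is synthetic.

```python
import numpy as np
import networkx as nx

# Illustrative network: two dense blocks joined by a few inter-block links.
G = nx.planted_partition_graph(2, 50, p_in=0.3, p_out=0.01, seed=3)
A = nx.to_numpy_array(G)

# Random-walk transfer matrix T = A D^{-1}: column j holds the probabilities
# of stepping from node j to its neighbours. Conservation of walkers gives
# the master equation p(t+1) = T p(t).
T = A / A.sum(axis=0, keepdims=True)

# Decaying modes = eigenpairs of T; the stationary state has eigenvalue 1,
# and the slowest decaying mode comes next in magnitude.
vals, vecs = np.linalg.eig(T)
order = np.argsort(-vals.real)
slowest = vecs[:, order[1]].real

# Nodes with equal sign in the slowest mode relax together and are grouped
# into the same community.
community = (slowest > 0).astype(int)
print(community)
```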
We applied this algorithm, for instance, to a coarse-grained version of the Internet (the autonomous system network) consisting of several thousand nodes. The two slowest decaying modes gave the star-like structure of Fig. 8. This structure could be identified with the geographical and political subdivision of nodes. The main structure of the network, signaled by different signs of the components of the slowest decaying mode, was found to correspond to American and Russian nodes.
Furthermore, we have extended the algorithm to handle a higher number of decaying modes in a natural way. To this end, one needed a measure for deciding whether to accept a potential subdivision or not, and we ended up using the clustering coefficient. The number of modes was increased until no more clusters were identified (inset to Fig. 9). For the autonomous system network, this number was found to be 13. Recently, the procedure was extended to weighted networks as well. We have found that the additional link-weight information makes the identification of community structures more robust. In some cases, including weighted links in the analysis is essential for the identification of the underlying community structure.
Following the increasing interest in the study of dynamical processes taking place on networks, we have adopted the above-mentioned particle flow model as a simple and generic prototype model for flow on graphs. In particular, it was used to study the influence of dynamics on cascading failures (Fig. 9). It was found that the role of dynamics can be significant and should be included when evaluating network robustness.
Economic and social systems
Social and economic systems are examples of complex systems known to everyone. It is, however, only recently that it has become mainstream for physicists to analyze and try to model such complicated and fascinating systems. The application of methodology from physics, particularly from statistical physics, has created buzzwords like ``econophysics'' and ``sociophysics''.
In this field, I have experience both with the analysis of empirical data and with the construction and study of (toy) models used to identify and analyze specific mechanisms. Quite a bit of our research in this field has been devoted to what we have called inverse statistics. It is a concept that was first fruitfully developed for turbulence; we later adapted it to economic time series. In contrast to using the return distribution as a (fixed-time-horizon) measure of performance, the inverse statistics approach provides a time-varying measure that is useful, for instance, in portfolio optimization. In particular, if the desired performance goal is a return of, say, 5%, then the inverse statistics method describes, using historic data, the distribution of waiting times needed to achieve this goal. Such a distribution for the DJIA is presented in Fig. 10, and one observes a pronounced asymmetry between positive and negative levels of return. This means that one tends to lose money faster than one earns it in this marketplace. The asymmetric behavior is found to be typical for stock indices. Moreover, this and other features of the inverse statistics point towards the non-geometrical-Brownian-motion character of stock fluctuations. Paradoxically, we have established that, within the statistical uncertainty, no similar well-expressed asymmetry is identifiable for the (individual) stocks that are part of the DJIA index. How is it possible that such an asymmetry appears in the index, but not in the stocks the index is based upon? For the moment, we believe that this difference can be attributed to a kind of collective synchronization phenomenon taking place among the individual stocks of the index. However, we do not yet have a fully detailed understanding of the matter, but work is in progress to resolve it.
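A minimal Python sketch of the inverse-statistics construction: for each starting day, record the first-passage time to a prescribed log-return level, separately for gains and losses. The geometric-Brownian-motion surrogate series is purely illustrative; real index data would be used in practice.

```python
import numpy as np

def waiting_times(log_price, rho):
    """First-passage ('investment horizon') times: for each starting day t,
    the smallest tau with log p(t+tau) - log p(t) >= rho. Pass -log_price
    to measure the waiting times for losses instead of gains."""
    n = log_price.size
    taus = []
    for t in range(n - 1):
        gain = log_price[t + 1:] - log_price[t]
        hit = np.nonzero(gain >= rho)[0]
        if hit.size:                      # level never reached -> discarded
            taus.append(hit[0] + 1)
    return np.array(taus)

# Surrogate log-price path (geometric Brownian motion stand-in for an index).
rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(2e-4, 1e-2, size=5_000))

up = waiting_times(x, rho=0.05)           # horizons for a +5% return
down = waiting_times(-x, rho=0.05)        # horizons for a -5% return
print(up.mean(), down.mean())
```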
Our work on deregulated power markets has established that the price processes of these commodities are highly anti-persistent (mean reverting). Unlike in a stock market, this does not represent an arbitrage opportunity, since produced electric power cannot presently be stored effectively, and money can therefore not be gained directly from this additional information. Moreover, we have performed stylized-facts studies of such markets, trying to uncover their typical characteristics. Furthermore, we have work in progress focusing on the development of pricing models for (Asian and exotic) power options. This is more challenging than for stock markets due to the mean-reverting character (anti-correlation) and seasonality of the underlying price process. Recently, we have started to study blackout dynamics in power systems. This phenomenon is important to fully understand (and prevent) due to the dependence of contemporary societies on a continuous and stable supply of electric power. It has been suggested that blackouts in power systems can be modeled by self-organized criticality; we have, however, partly questioned this claim.
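As a sketch of a common starting point for such pricing, the following Python snippet Monte Carlo prices an arithmetic-average (Asian) call under a simple mean-reverting Ornstein-Uhlenbeck model for the log spot price. All parameters are illustrative, and the subtleties of risk-neutral pricing for non-storable commodities, as well as seasonality, are deliberately ignored here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Mean-reverting (Ornstein-Uhlenbeck) model for the log spot price,
#   dX = kappa (mu - X) dt + sigma dW,   S = exp(X).
# Parameters below are illustrative placeholders.
kappa, mu, sigma = 5.0, np.log(30.0), 0.8
T, n_steps, n_paths, r = 1.0, 365, 100_000, 0.03
K = 30.0                                      # strike of the Asian call

dt = T / n_steps
decay = np.exp(-kappa * dt)
step_std = sigma * np.sqrt((1.0 - decay**2) / (2.0 * kappa))  # exact OU step

X = np.full(n_paths, mu)
S_sum = np.zeros(n_paths)
for _ in range(n_steps):
    X = mu + (X - mu) * decay + step_std * rng.normal(size=n_paths)
    S_sum += np.exp(X)

payoff = np.maximum(S_sum / n_steps - K, 0.0)  # arithmetic-average call
price = np.exp(-r * T) * payoff.mean()
print(f"Asian call price ~ {price:.2f}")
```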