Tuesday, September 15, 2009

Interpolation - Wikipedia, the free encyclopedia

Interpolation - Wikipedia, the free encyclopedia: "Interpolation
From Wikipedia, the free encyclopedia

In the mathematical subfield of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points.

In engineering and science one often has a number of data points, as obtained by sampling or experimentation, and tries to construct a function which closely fits those data points. This is called curve fitting or regression analysis. Interpolation is a specific case of curve fitting, in which the function must go exactly through the data points.

A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function. Suppose we know the function but it is too complex to evaluate efficiently. Then we could pick a few known data points from the complicated function, creating a lookup table, and try to interpolate those data points to construct a simpler function. Of course, when using the simple function to calculate new data points we usually do not receive the same result as when using the original function, but depending on the problem domain and the interpolation method used the gain in simplicity might offset the error.

It should be mentioned that there is another very different kind of interpolation in mathematics, namely the 'interpolation of operators'. The classical results about interpolation of operators are the Riesz-Thorin theorem and the Marcinkiewicz theorem. There also are many other subsequent results.
An interpolation of a finite set of points on an epitrochoid. The points through which the curve is splined are shown in red; the blue curve connecting them is the interpolation.
Contents

* 1 Example
* 2 Piecewise constant interpolation
* 3 Linear interpolation
* 4 Polynomial interpolation
* 5 Spline interpolation
* 6 Interpolation via Gaussian processes
* 7 Other forms of interpolation
* 8 Related concepts
* 9 References
* 10 External links

Example

For example, suppose we have a table like this, which gives some values of an unknown function f.
Plot of the data points as given in the table.
x    f(x)
0    0
1    0.8415
2    0.9093
3    0.1411
4   −0.7568
5   −0.9589
6   −0.2794

Interpolation provides a means of estimating the function at intermediate points, such as x = 2.5.

There are many different interpolation methods, some of which are described below. Some of the concerns to take into account when choosing an appropriate algorithm are: How accurate is the method? How expensive is it? How smooth is the interpolant? How many data points are needed?

Piecewise constant interpolation
Piecewise constant interpolation, or nearest-neighbor interpolation.
For more details on this topic, see Nearest-neighbor interpolation.

The simplest interpolation method is to locate the nearest data value, and assign the same value. In one dimension, there are seldom good reasons to choose this one over linear interpolation, which is almost as cheap, but in higher dimensions, in multivariate interpolation, this can be a favourable choice for its speed and simplicity.
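
A rough illustration using the data from the table above (my own sketch in Python; the helper name 'nearest' is hypothetical, not from the article):

xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794]

def nearest(x):
    """Piecewise constant interpolation: return the value at the closest data point."""
    i = min(range(len(xs)), key=lambda k: abs(xs[k] - x))
    return ys[i]

print(nearest(2.5))   # the tie at the midpoint resolves to the lower index here: 0.9093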

Linear interpolation
Plot of the data with linear interpolation superimposed
Main article: Linear interpolation

One of the simplest methods is linear interpolation (sometimes known as lerp). Consider the above example of determining f(2.5). Since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252.

Generally, linear interpolation takes two data points, say (x_a, y_a) and (x_b, y_b), and the interpolant is given by:

y = y_a + (x-x_a)\frac{(y_b-y_a)}{(x_b-x_a)} at the point (x,y)
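
A direct transcription of this formula as a small Python sketch (the function name 'lerp' is my own, echoing the name mentioned above):

def lerp(xa, ya, xb, yb, x):
    """Linear interpolation between (xa, ya) and (xb, yb), evaluated at x."""
    return ya + (x - xa) * (yb - ya) / (xb - xa)

print(lerp(2, 0.9093, 3, 0.1411, 2.5))   # 0.5252, matching the worked example above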

Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the interpolant is not differentiable at the data points x_k.

The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by g, and suppose that x lies between xa and xb and that g is twice continuously differentiable. Then the linear interpolation error is

|f(x)-g(x)| \le C(x_b-x_a)^2 \quad\mbox{where}\quad C = \frac18 \max_{y\in[x_a,x_b]} |g''(y)|.

In words, the error is proportional to the square of the distance between the data points. The error of some other methods, including polynomial interpolation and spline interpolation (described below), is proportional to higher powers of the distance between the data points. These methods also produce smoother interpolants.
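
To make the bound concrete: the table values match g(x) = sin(x), for which |g''| <= 1, so C <= 1/8 and the error on [2, 3] is at most 0.125. A quick numerical check (my own sketch, assuming g = sin):

import math

# Linear interpolation of sin between x = 2 and x = 3, compared against the bound.
interp = math.sin(2) + (2.5 - 2) * (math.sin(3) - math.sin(2)) / (3 - 2)
actual = math.sin(2.5)
bound = (1 / 8) * (3 - 2) ** 2      # C <= 1/8 because |sin''| <= 1

print(abs(actual - interp), "<=", bound)   # roughly 0.073 <= 0.125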

Polynomial interpolation
Plot of the data with polynomial interpolation applied
Main article: Polynomial interpolation

Polynomial interpolation is a generalization of linear interpolation. Note that the linear interpolant is a linear function. We now replace this interpolant by a polynomial of higher degree.

Consider again the problem given above. The following sixth-degree polynomial goes through all seven points:

f(x) = −0.0001521x^6 − 0.003130x^5 + 0.07321x^4 − 0.3577x^3 + 0.2255x^2 + 0.9038x.

Substituting x = 2.5, we find that f(2.5) = 0.5965.
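
One way to reproduce this value is with NumPy (my own sketch; a degree-6 fit through 7 points is exact, so polyfit returns the interpolating polynomial):

import numpy as np

xs = np.arange(7)
ys = np.array([0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

# With 7 points and degree 6 the least-squares fit coincides with the interpolant.
coeffs = np.polyfit(xs, ys, 6)
print(np.polyval(coeffs, 2.5))   # approximately 0.5965, as quoted above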

Generally, if we have n data points, there is exactly one polynomial of degree at most n−1 going through all the data points. The interpolation error is proportional to the distance between the data points to the power n. Furthermore, the interpolant is a polynomial and thus infinitely differentiable. So, we see that polynomial interpolation solves all the problems of linear interpolation.

However, polynomial interpolation also has some disadvantages. Calculating the interpolating polynomial is computationally expensive (see computational complexity) compared to linear interpolation. Furthermore, polynomial interpolation may not be so exact after all, especially at the end points (see Runge's phenomenon). These disadvantages can be avoided by using spline interpolation.

Spline interpolation
Plot of the data with Spline interpolation applied
Main article: Spline interpolation

Remember that linear interpolation uses a linear function on each of the intervals [x_k, x_{k+1}]. Spline interpolation uses low-degree polynomials on each of the intervals, and chooses the polynomial pieces such that they fit smoothly together. The resulting function is called a spline.

For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable. Furthermore, its second derivative is zero at the end points. The natural cubic spline interpolating the points in the table above is given by

f(x) = \begin{cases}
-0.1522 x^3 + 0.9937 x, & \mbox{if } x \in [0,1], \\
-0.01258 x^3 - 0.4189 x^2 + 1.4126 x - 0.1396, & \mbox{if } x \in [1,2], \\
0.1403 x^3 - 1.3359 x^2 + 3.2467 x - 1.3623, & \mbox{if } x \in [2,3], \\
0.1579 x^3 - 1.4945 x^2 + 3.7225 x - 1.8381, & \mbox{if } x \in [3,4], \\
0.05375 x^3 - 0.2450 x^2 - 1.2756 x + 4.8259, & \mbox{if } x \in [4,5], \\
-0.1871 x^3 + 3.3673 x^2 - 19.3370 x + 34.9282, & \mbox{if } x \in [5,6].
\end{cases}

In this case we get f(2.5)=0.5972.
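
The same value can be obtained from a library routine; a sketch using SciPy (my choice of tool, not prescribed by the article), where bc_type='natural' imposes the zero second derivative at the end points mentioned above:

import numpy as np
from scipy.interpolate import CubicSpline

xs = np.arange(7)
ys = np.array([0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

spline = CubicSpline(xs, ys, bc_type='natural')   # natural cubic spline
print(spline(2.5))   # approximately 0.5972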

Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation and the interpolant is smoother. However, the interpolant is easier to evaluate than the high-degree polynomials used in polynomial interpolation. It also does not suffer from Runge's phenomenon.

Interpolation via Gaussian processes

A Gaussian process is a powerful non-linear interpolation tool. Many popular interpolation tools are actually equivalent to particular Gaussian processes. Gaussian processes can be used not only for fitting an interpolant that passes exactly through the given data points but also for regression, i.e. for fitting a curve through noisy data. In the geostatistics community, Gaussian process regression is also known as Kriging.
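
A small sketch of Gaussian-process interpolation on the same data, here using scikit-learn with an RBF kernel (one possible toolkit and kernel choice, not something the article prescribes):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.arange(7).reshape(-1, 1)
y = np.array([0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

# With a negligible noise term the posterior mean passes essentially through the
# data points (interpolation); a larger alpha turns this into regression on noisy data.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-10)
gp.fit(X, y)
print(gp.predict(np.array([[2.5]])))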

Other forms of interpolation

Other forms of interpolation can be constructed by picking a different class of interpolants. For instance, rational interpolation is interpolation by rational functions, and trigonometric interpolation is interpolation by trigonometric polynomials. The discrete Fourier transform is a special case of trigonometric interpolation. Another possibility is to use wavelets.

The Whittaker–Shannon interpolation formula can be used if the number of data points is infinite.

Multivariate interpolation is the interpolation of functions of more than one variable. Methods include bilinear interpolation and bicubic interpolation in two dimensions, and trilinear interpolation in three dimensions.
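
For example, bilinear interpolation on a single grid cell is just linear interpolation applied twice; a minimal sketch (the helper and its argument names are my own):

def bilinear(f00, f10, f01, f11, tx, ty):
    """Bilinear interpolation on the unit square from the four corner values.

    f00, f10, f01, f11 are the values at (0,0), (1,0), (0,1), (1,1);
    tx and ty are fractional positions in [0, 1].
    """
    fx_y0 = f00 + tx * (f10 - f00)        # interpolate along x at y = 0
    fx_y1 = f01 + tx * (f11 - f01)        # interpolate along x at y = 1
    return fx_y0 + ty * (fx_y1 - fx_y0)   # then interpolate along y

print(bilinear(1.0, 2.0, 3.0, 4.0, 0.5, 0.5))   # centre of the cell: 2.5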

Sometimes we know not only the value of the function that we want to interpolate at some points, but also its derivative. This leads to Hermite interpolation problems.

Related concepts

The term extrapolation is used if we want to find data points outside the range of known data points.

In curve fitting problems, the constraint that the interpolant has to go exactly through the data points is relaxed. It is only required to approach the data points as closely as possible (within some other constraints). This requires parameterizing the potential interpolants and having some way of measuring the error. In the simplest case this leads to least squares approximation.
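
As a concrete contrast with interpolation, a least-squares straight line through the seven data points above does not pass through them; a sketch using NumPy (the degree-1 fit is my choice of "simplest case"):

import numpy as np

xs = np.arange(7)
ys = np.array([0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

# Degree-1 least-squares fit: minimizes the sum of squared residuals,
# so in general it passes near, but not through, the data points.
slope, intercept = np.polyfit(xs, ys, 1)
print(np.polyval([slope, intercept], 2.0), "vs the data value", ys[2])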

Approximation theory studies how to find the best approximation to a given function by another function from some predetermined class, and how good this approximation is. This clearly yields a bound on how well the interpolant can approximate the unknown function."

Thursday, September 10, 2009

MRPT : Rao-Blackwellized Particle Filter (RBPF) approach to map building (SLAM).

The MRPT project: mrpt::slam::CMetricMapBuilderRBPF Class Reference:

Visit link for more details


"mrpt::slam::CMetricMapBuilderRBPF Class Reference
#include <mrpt/slam/CMetricMapBuilderRBPF.h>

Inheritance diagram for mrpt::slam::CMetricMapBuilderRBPF:

Detailed Description
This class implements a Rao-Blackwellized Particle Filter (RBPF) approach to map building (SLAM).

Internally, the list of particles, each containing a hypothesis for the robot path plus its associated metric map, is stored in an object of class CMultiMetricMapPDF.

This class processes robot actions and observations sequentially (through the method CMetricMapBuilderRBPF::processActionObservation) and exploits the generic design of metric map classes in MRPT to deal with any number and combination of maps simultaneously: the likelihood of observations is the product of the likelihood in the different maps, etc.

A number of particle filter methods are implemented as well, by selecting the appropriate values in TConstructionOptions::PF_options. Not all the PF algorithms are implemented for all kinds of maps.

For an example of usage, check the application 'rbpf-slam', in 'apps/RBPF-SLAM'. See also the wiki page.

Note:
Since MRPT 0.7.1 the semantics of the parameters 'insertionLinDistance' and 'insertionAngDistance' have changed: the entire RBPF is now NOT updated unless odometry increments surpass the threshold (previously, only the map was NOT updated). This is done to gain efficiency.

Since MRPT 0.6.2 this class implements full 6D SLAM. Previous versions worked in 2D + heading only.

See also:
CMetricMap

Definition at line 62 of file CMetricMapBuilderRBPF.h."

Monday, September 7, 2009

SOM ( System on Module )

Accelerated System Design using FPGA Based System-On-Modules (SOM) - Wikipedia, the free encyclopedia: "System-On-Module (SOM)

For companies designing or redesigning an embedded systems product there is a tendency to utilize as many third-party components as possible for functions that are well understood and readily available in the marketplace. This is a direct result of the increasing commoditization of advanced functionality in the electronic component market. This paradigm is what makes companies use prepackaged power modules, for example, thereby avoiding the complexity and cost of designing 'in house'. A plethora of vendors supply ready-built units with all the functionality required (except for specific or niche applications). Thus, it makes no sense for a company to design its own hardware when it has the choice of buying fully tested, reliable products at lower cost."

Loading something other than a shell

TS-7400 System on Module w/ Ultra-Fast Linux Bootup: "Since the TS-7400 actually boots to an initrd with a read-only mounted filesystem, it is possible to have something other than a shell prompt running after bootup by editing the /linuxrc shell script on the initrd. Additional TS-7400 software features include:"

(initrd)

Linux initial RAM disk (initrd) overview: "What's an initial RAM disk?

The initial RAM disk (initrd) is an initial root file system that is mounted prior to when the real root file system is available. The initrd is bound to the kernel and loaded as part of the kernel boot procedure. The kernel then mounts this initrd as part of the two-stage boot process to load the modules to make the real file systems available and get at the real root file system.

The initrd contains a minimal set of directories and executables to achieve this, such as the insmod tool to install kernel modules into the kernel.

In the case of desktop or server Linux systems, the initrd is a transient file system. Its lifetime is short, only serving as a bridge to the real root file system. In embedded systems with no mutable storage, the initrd is the permanent root file system. This article explores both of these contexts."

Sunday, September 6, 2009

MIPS Pipeline Stages

File:MIPS Architecture (Pipelined).svg - Wikipedia, the free encyclopedia

MIPS V ( 3D Graphics )


MIPS V is the fifth version of the architecture, announced on 21 October 1996 at the Microprocessor Forum 1996.[12] MIPS V was designed to improve the performance of 3D graphics applications. In the mid-1990s, a major use of non-embedded MIPS microprocessors was in graphics workstations from SGI. MIPS V was complemented by the integer-only MIPS Digital Media Extensions (MDMX) multimedia extensions, which were announced on the same date as MIPS V."

Non-RISC speed boost

"However, modern non-RISC designs achieves this speed by other means (such as queues in the CPU)."

Initial Barriers to MIPS

MIPS architecture - Wikipedia, the free encyclopedia: "One major barrier to pipelining was that some instructions, like division, take longer to complete, and the CPU therefore has to wait before passing the next instruction into the pipeline. One solution to this problem is to use a series of interlocks that allow stages to indicate that they are busy, pausing the other stages upstream. Hennessy's team viewed these interlocks as a major performance barrier, since they had to communicate with all the modules in the CPU, which takes time and appeared to limit the clock speed. A major aspect of the MIPS design was to fit every sub-phase, including cache access, of all instructions into one cycle, thereby removing any need for interlocking and permitting single-cycle throughput.

Although this design eliminated a number of useful instructions such as multiply and divide, it was felt that the overall performance of the system would be dramatically improved because the chips could run at much higher clock rates. This ramping of the speed would be difficult with interlocking involved, as the time needed to set up locks is as much a function of die size as clock rate. The elimination of these instructions became a contentious point."

MIPS architecture

MIPS architecture - Wikipedia, the free encyclopedia: "MIPS architecture

MIPS
Designer: MIPS Computer Systems
Bits: 64-bit (32→64)
Introduced: 1981
Design: RISC
Type: Register-Register
Encoding: Fixed
Branching: Condition register
Endianness: Bi
Extensions: MDMX, MIPS-3D
Registers: 32

* 31 32-bit GPRs (R0 = 0)
* 32 32-bit FP regs (paired DP)
* MIPS III has 32 64-bit GPRs and FPRs

MIPS (originally an acronym for Microprocessor without Interlocked Pipeline Stages) is a reduced instruction set computing (RISC) instruction set architecture (ISA) developed by MIPS Computer Systems (now MIPS Technologies). The early MIPS architectures were 32-bit, while later versions were 64-bit. Multiple revisions of the MIPS instruction set exist, including MIPS I, MIPS II, MIPS III, MIPS IV, MIPS V, MIPS32, and MIPS64. The current revisions are MIPS32 (for 32-bit implementations) and MIPS64 (for 64-bit implementations).[1][2] MIPS32 and MIPS64 define a control register set as well as the instruction set."

The Performance Equation


The following equation is commonly used for expressing a computer's performance ability:

time/program = (instructions/program) × (cycles/instruction) × (time/cycle)

The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction. RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program."
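
A toy comparison with made-up numbers (purely illustrative, not measurements of any real machine):

def run_time(instructions, cpi, clock_hz):
    """time/program = (instructions/program) * (cycles/instruction) * (time/cycle)."""
    return instructions * cpi / clock_hz

# Hypothetical figures: a CISC-style machine executing fewer but slower instructions
# versus a RISC-style machine executing more, mostly single-cycle instructions.
cisc = run_time(instructions=1_000_000, cpi=4.0, clock_hz=100e6)
risc = run_time(instructions=1_500_000, cpi=1.2, clock_hz=100e6)
print(cisc, risc)   # 0.04 s versus 0.018 s for these made-up numbers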

Diminishing benefits of RISC

Over time, improvements in chip fabrication techniques have improved performance exponentially, according to Moore's law, whereas architectural improvements have been comparatively small. Modern CISC implementations have adopted many of the performance improvements introduced by RISC, such as single-clock instructions. Compilers have also become more sophisticated, and are better able to exploit complex instructions on CISC architectures. The RISC-CISC distinction has blurred significantly in practice.

Automobile CPU ( POWER PC )

"Today the PowerPC is one of the most commonly used CPUs for automotive applications (some cars have more than 10 of them inside). It was also the CPU used in most Apple Macintosh machines from 1994 to 2006. (Starting in February 2006, Apple switched their main production line to Intel x86 processors.)"

Basics of MIPS

Reduced instruction set computer - Wikipedia, the free encyclopedia: "At about the same time, John L. Hennessy started a similar project called MIPS at Stanford University in 1981. MIPS focused almost entirely on the pipeline, making sure it could be run as 'full' as possible. Although pipelining was already in use in other designs, several features of the MIPS chip made its pipeline far faster. The most important, and perhaps annoying, of these features was the demand that all instructions be able to complete in one cycle. This demand allowed the pipeline to be run at much higher data rates (there was no need for induced delays) and is responsible for much of the processor's performance. However, it also had the negative side effect of eliminating many potentially useful instructions, like a multiply or a divide."

Reduced instruction set computer - Wikipedia, the free encyclopedia

Reduced instruction set computer - Wikipedia, the free encyclopedia: "In a CPU with register windows, there are a huge number of registers, e.g. 128, but programs can only use a small number of them, e.g. 8, at any one time. A program that limits itself to 8 registers per procedure can make very fast procedure calls: The call simply moves the window 'down' by 8, to the set of 8 registers used by that procedure, and the return moves the window back. (On a normal CPU, most calls must save at least a few registers' values to the stack in order to use those registers as working space, and restore their values on return.)"
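
A toy model of the idea (my own simplification; it ignores window overlap for argument passing and the overflow/underflow handling a real CPU needs):

physical = [0] * 128     # large physical register file
window_base = 0          # first register visible to the current procedure

def call():
    """On a call, slide the window down by 8 instead of saving registers to the stack."""
    global window_base
    window_base += 8

def ret():
    """On return, slide the window back; the caller's registers were never disturbed."""
    global window_base
    window_base -= 8

def reg(i):
    """Read visible register i (0..7) of the current procedure."""
    return physical[window_base + i]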

Reduced instruction set computer - Wikipedia, the free encyclopedia

Reduced instruction set computer - Wikipedia, the free encyclopedia: "Many early RISC designs also shared the characteristic of having a branch delay slot. A branch delay slot is an instruction space immediately following a jump or branch. The instruction in this space is executed, whether or not the branch is taken (in other words the effect of the branch is delayed). This instruction keeps the ALU of the CPU busy for the extra time normally needed to perform a branch. Nowadays the branch delay slot is considered an unfortunate side effect of a particular strategy for implementing some RISC designs, and modern RISC designs generally do away with it (such as PowerPC, more recent versions of SPARC, and MIPS)."

Reduced instruction set computer - Wikipedia, the free encyclopedia

Reduced instruction set computer - Wikipedia, the free encyclopedia: "Alternatives

RISC was developed as an alternative to what is now known as CISC. Over the years, other strategies have been implemented as alternatives to RISC and CISC. Some examples are VLIW, MISC, OISC, massive parallel processing, systolic array, reconfigurable computing, and dataflow architecture."

Reduced instruction set computer - Wikipedia, the free encyclopedia

Reduced instruction set computer - Wikipedia, the free encyclopedia: "The goal was to make instructions so simple that they could easily be pipelined, in order to achieve a single clock throughput at high frequencies."

Saturday, September 5, 2009

Reduced instruction set computer - Wikipedia, the free encyclopedia

Reduced instruction set computer - Wikipedia, the free encyclopedia: "Non-RISC design philosophy
For more details on this topic, see CPU design.

In the early days of the computer industry, programming was done in assembly language or machine code, which encouraged powerful and easy to use instructions. CPU designers therefore tried to make instructions that would do as much work as possible. With the advent of higher level languages, computer architects also started to create dedicated instructions to directly implement certain central mechanisms of such languages. Another general goal was to provide every possible addressing mode for every instruction, known as orthogonality, to ease compiler implementation. Arithmetic operations could therefore often have results as well as operands directly in memory (in addition to register or immediate)."

Thursday, September 3, 2009

Linear function - Wikipedia, the free encyclopedia

Linear function - Wikipedia, the free encyclopedia: "Linear function
From Wikipedia, the free encyclopedia

In mathematics, the term linear function can refer to either of two different but related concepts: a first-degree polynomial function of one variable; or a map between two vector spaces that preserves vector addition and scalar multiplication."
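
As a concrete contrast between the two senses (standard definitions, not part of the quoted article): a first-degree polynomial function has the form f(x) = ax + b, for example f(x) = 2x + 1, while a linear map satisfies L(\alpha u + \beta v) = \alpha L(u) + \beta L(v) for all vectors u, v and scalars \alpha, \beta. The function f(x) = 2x + 1 is linear in the first sense but not the second, since f(0) \ne 0.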