Introduction to grid computing

The World Wide Web changed the way we share and distribute information, but it
has done very little to make computing power or data storage more accessible.
The aim of the Grid is therefore to build on existing Internet protocols and to
develop `middleware` that allows simple, transparent use of resources wherever
in the world they may be. We will also need to change our applications so that
they can take advantage of the Grid infrastructure and run efficiently in this
complex environment.

There are many challenges to developing a Grid that will deliver the kind of
robust, high-performance system required. Computing in the LHC era, for
instance, will require computing clusters with tens of thousands of nodes, and
each experiment will accumulate data at a rate of about one million gigabytes
(a petabyte) per year. To cope with this scale of computing and data, experiments will have
to put globally distributed resources at the physicists’ fingertips. Particle
physicists are therefore heavily involved in providing requirements for the
Grid, in developing higher levels of the middleware, and in providing a
real-world use case for the early deployment of software. They are working with
researchers in computer science and many other fields, often pursuing novel
solutions to these formidable challenges.

The Oxford e-Research Centre website has a short but more detailed
introduction.

For the big picture and an overview, please see
The Anatomy of the Grid: Enabling Scalable Virtual Organizations.

Categories: Grid