Geometry.Net - the online learning center
Page 2     21-40 of 167

         Parallel Computing:     more books (100)
  1. Distributed Computing: Fundamentals, Simulations, and Advanced Topics (Wiley Series on Parallel and Distributed Computing) by Hagit Attiya, Jennifer Welch, 2004-03-25
  2. An Introduction to Parallel and Vector Scientific Computing (Cambridge Texts in Applied Mathematics) by Ronald W. Shonkwiler, Lew Lefton, 2006-08-14
  3. Dependable Computing Systems: Paradigms, Performance Issues, and Applications (Wiley Series on Parallel and Distributed Computing) by Hassan B. Diab, Albert Y. Zomaya, 2005-10-05
  4. The Art of Parallel Programming, Second Edition
  5. Parallel Computing: Theory and Practice by Michael J. Quinn, 1993-09-01
  6. Parallel Metaheuristics: A New Class of Algorithms (Wiley Series on Parallel and Distributed Computing) by Enrique Alba, 2005-09-23
  7. Parallel Computing by G. R. Joubert et al. (ParCo 2001, Naples, Italy), 2002-06-15
  8. Parallel Computing Technologies: 9th International Conference, PaCT 2007, Pereslavl-Zalessky, Russia, September 3-7, 2007, Proceedings (Lecture Notes in Computer Science)
  9. Distributed and Parallel Systems: Cluster and Grid Computing (The Springer International Series in Engineering and Computer Science)
  10. Computing with Parallel Architecture: T.Node (Eurocourses: Computer and Information Science)
  11. Network-Based Parallel Computing Communication Architecture, and Applications
  12. Scalable Parallel Computing: Technology, Architecture, Programming by Kai Hwang, Zhiwei Xu, 1998-02-01
  13. Advanced Parallel And Distributed Computing: Evaluation, Improvement And Practice (Distributed, Cluster and Grid Computing)
  14. Network and Parallel Computing: IFIP International Conference, NPC 2007, Dalian, China, September 18-21, 2007, Proceedings (Lecture Notes in Computer Science)

Computing and service center of the University of Paderborn, Germany. Hosting and participating in several

22. The History Of The Development Of Parallel Computing
The History of the Development of Parallel Computing
====================================================
Gregory V. Wilson (gvw@cs)
"From the crooked timber of humanity no straight thing was ever made."
====================================================
[1] IBM introduces the 704. Principal architect is Gene Amdahl; it is the first commercial machine with floating-point hardware, and is capable of approximately 5 kFLOPS.
[2] IBM starts the 7030 project (known as STRETCH) to produce a supercomputer for Los Alamos National Laboratory (LANL). Its goal is to produce a machine with 100 times the performance of any available at the time.
[3] The LARC (Livermore Automatic Research Computer) project begins, to design a supercomputer for Lawrence Livermore National Laboratory (LLNL).
[4] The Atlas project begins in the U.K. as a joint venture between the University of Manchester and Ferranti Ltd. Principal architect is Tom Kilburn.
[5] Digital Equipment Corporation (DEC) founded.
[6] Control Data Corporation (CDC) founded.

23. IPCA - Parallel:distribution
Sisal compilers and interpreter, user guide, programs, publications, tools and Sisal miniFAQ.
Internet Parallel Computing Archive
parallel distribution
News ...
  • Optimizing Sisal Compiler (OSC) V13.0.3 Native Compiler and Debugger (1995-Nov-29 10:40:00, 1.6M)
    Contains compiler and run-time library software for running SISAL programs on various machines: the SISAL compiler (osc); a run-time support library written in C; a utility program for multiprocessing; and manual pages and utilities. Ported to: SGI IRIS with IRIX 4.04; Cray C90 with UNICOS 7.c; Meiko CS-2 with Solaris 2.1; IBM RS6000 with AIX; Sun 3 with UNIX 4.2; Sun Sparc 10 with Solaris 2.3; DEC DECstation with ULTRIX V4.3 R44; Mac with MachTen 2.1.1; PC x486 with Linux; and Cray T3D with UNICOS. Bugs to
    Authors: Pat Miller ( ); Scott Denton ( ); Rea Simpson; David Cann; S. Harikrishnan and Rod Oldehoeft. CRG/OSC Development Crew, Lawrence Livermore National Laboratory, L-306, Livermore, CA 94550, USA. Tel: +1 (510) 423-0309
  • OSC small installation script (1993-Jun-10 00:00:00, 3.8K)
    Patches for update rather than entire release.

24. Particle Applications - Pipeline Computing
An overview of how multiple particle systems can be simulated using parallel computing.
Module 5. Particle Applications - Pipeline Computing
Many questions in science can be answered by viewing a physical system as a collection of particles that obey certain laws. A familiar example is the universe viewed as a collection of astronomical bodies that obey Newton's laws of gravitation. Such laws can be written as equations of the class known as Ordinary Differential Equations (ODEs). These equations have well-known solution techniques which can easily be expressed in a data-parallel way. In the case of particle systems, solving the equations also involves calculating a function over the interactions between all pairs of particles. We will discuss a variety of ways to approach such "all pairs" calculations in a data-parallel way, as well as such issues as deciding which parts of a calculation to parallelize and how to achieve load balancing in a program.
5.1 Particle applications
The application that we will discuss is a universe of astronomical particles under Newton's laws of motion, commonly known as the N-body problem. We suppose that our system consists of N particles, each with a mass, moving with some velocity through 3-dimensional space. Part of Newton's system is the recognition that the mass and velocity of the particles are what affect the system; in particular, we can disregard the diameters and shapes of the particles and treat them as point masses. Velocity is, of course, defined to be the change in position over time, and acceleration to be the change in velocity over time.
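As an illustration of the all-pairs structure described above (a sketch, not this module's own code: the use of NumPy, the function name, and the softening parameter are assumptions), the gravitational accelerations of all N point masses can be computed in a data-parallel way over the whole N-by-N interaction matrix at once:

```python
import numpy as np

G = 6.674e-11  # gravitational constant (SI units)

def accelerations(pos, mass, eps=1e-9):
    """All-pairs gravitational accelerations, computed data-parallel.

    pos:  (N, 3) array of particle positions
    mass: (N,) array of particle masses
    eps:  small softening term so the i == j diagonal is harmless
    """
    # Pairwise displacement vectors: diff[i, j] = pos[j] - pos[i]
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    # Pairwise distances, softened to avoid division by zero on the diagonal
    dist = np.sqrt((diff ** 2).sum(axis=-1)) + eps
    # a_i = sum_j G * m_j * (r_j - r_i) / |r_j - r_i|^3
    inv_d3 = 1.0 / dist ** 3
    np.fill_diagonal(inv_d3, 0.0)  # a particle exerts no force on itself
    weights = (mass[np.newaxis, :] * inv_d3)[:, :, np.newaxis]  # (N, N, 1)
    return G * (diff * weights).sum(axis=1)  # (N, 3)
```

Every pairwise interaction is expressed as one array operation, so the same code runs unchanged whether the array library executes it serially or spreads it across processors.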

25. European Centre For Parallel Computing At Vienna
VCPC is an HPC centre in Austria which provides parallel computing resources to academia and industry, including national and international projects.
European Centre for Parallel Computing at Vienna
Tel: +43 1 4277 38819 Fax: +43 1 4277 38818 E-mail:
Updated: Thu 08 Apr 2004 13:58:02

26. GridServer - Grid Computing For Business Critical Applications
Commercial enterprise that purchases idle PC capacity and resells it to users with complex parallel computing tasks.
GridServer, GridClient, LiveCluster and Guaranteed Distributed Computing are trademarks of DataSynapse, Inc.

DataSynapse Launches GRIDesign:
Rapid Assessment Methodology
DataSynapse, the fastest-growing provider of grid computing software for business-critical applications, now offers GRIDesign, a rapid application assessment and design methodology designed to help companies determine how to successfully deploy a production grid and realize the inherent advantages of a service-oriented, on-demand application infrastructure.


27. GI-FG 0.1.3: CFPs, Programs, Etc.
Home Pages of Journals That Cover parallel computing Information and Computation; IEEE Transactions on Computers; TCS (Theoretical Computer Science);
Gesellschaft für Informatik e. V.
GI-Fachgruppe 0.1.3
»Parallele und verteilte Algorithmen«

(Special Interest Group on Parallel and Distributed Algorithms)
Current Information (Archive of »Expired« Entries)
Last update: January 22, 2002. Files are compressed with the gzip command. (Download information for gzip on various platforms can be found here.) If you have contributions, comments, suggestions, questions, etc., please send an e-mail to Rolf Wanka. Sections (click to jump there): Dates and Programs
  • STACS 2002 (Symp. on Theoretical Aspects of Computer Science) (HTTP) ¤ March 14-16, 2002 Programme
  • LATIN 2002 (Latin American Theoretical INformatics) (HTTP) ¤ April 3-6, 2002 Programme
Dates of Conferences Not Yet Having a Program (as far as we know)
  • 45th Workshop on Complexity Theory, Data Structures and Efficient Algorithms (Tübingen) (HTTP) February 19, 2002

28. Distributed Computing Over P2P Networks
Open framework that will allow P2P file-sharing networks to perform distributed parallel computing.

29. Internet Parallel Computing Archive (perch.cs.yale.edu8001/)
The Colgate Parallel Computing Laboratory. The Undergraduate Parallel Computing Consortium (UParCC).
Hosted by WoTUG at
Computer Science Department
University of Kent at Canterbury , UK
Edited 1993-2000 by Dave Beckett
  • Environments and Systems
    Hardware Vendors Languages ... OSes
  • Topical Information
    Jobs IPCA Information Usage ...
  • occam language
    Occam For All project; Compilers: KRoC SPOC SGS-Thomson ... TDS and Toolset; Docs: libraries
  • Reference
    Biblios Books Consultants ...
  • Transputer processor
    Bibliographies DS-Links Networks, Routers and Transputers ... Article Archive and Crisis in HPC Workshop Southampton Belfast Cardiff ...
  • WoTUG and NATUG
    WoTUG 22 conference
    Biblios NATUG ... Other lists Last Modified: 1st March 2000 Dave Beckett and WoTUG
  • 30. Computer Science Department - Bordeaux 1 University
    Computer Science Department. Research areas include combinatorics, algorithmics, logic, automata, parallel computing, symbolic programming, and graphics.
    Université Bordeaux I
    UFR Mathématiques et Informatique
    Computer Science Department
    The Department

    31. Parallel Computing Toolkit: Product Information
    parallel computing Toolkit brings parallel computation to anyone having access to more than one computer on a network or anyone working on multiprocessor
    Unleash the Power of Parallel Computing
    Parallel Computing Toolkit brings parallel computation to anyone having access to more than one computer on a network or anyone working on multiprocessor machines.
    It implements many parallel programming primitives and includes high-level commands for parallel execution of operations such as animation, plotting, and matrix manipulation. Also supported are many popular new programming approaches such as parallel Monte Carlo simulation, visualization, searching, and optimization. The implementations for all high-level commands in Parallel Computing Toolkit are provided in Mathematica source form, so they can serve as templates for building additional parallel programs.
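Parallel Computing Toolkit itself is a Mathematica product, so the following is purely an illustration of the parallel Monte Carlo pattern mentioned above, sketched in Python; the function names, worker count, and sample count are assumptions of the sketch, not the Toolkit's API:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def count_hits(args):
    """One worker's share of samples for a Monte Carlo estimate of pi."""
    samples, seed = args
    rng = random.Random(seed)  # independent, seeded stream per worker
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))

def parallel_pi(total_samples=400_000, workers=4):
    """Split the samples across worker processes and combine the hit counts."""
    per = total_samples // workers
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = sum(pool.map(count_hits, [(per, seed) for seed in range(workers)]))
    return 4.0 * hits / (per * workers)
```

Because each worker only returns a single count, the combination step is trivial, which is what makes Monte Carlo simulation such a popular target for parallel execution.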

    32. Multiprocessors, Clusters, Grids, And Parallel Computing: What's The Difference?
    Multiprocessors, Clusters, Grids, and Parallel Computing: What's the Difference?

    33. Prof. Frank Dehne -
    Carleton University, Ottawa. Parallel computing.

    34. Parallel Computing With Linux
    Parallel Computing With Linux ... at NASA's Goddard Space Flight Center extends the utility of Linux to the realm of high performance parallel computing.
    Parallel Computing With Linux
    By Forrest Hoffman and William Hargrove

    Linux is just now making a significant impact on the computing industry, but it has been a powerful tool for computer scientists and computational scientists for a number of years. Aside from the obvious benefits of working with a freely-available, reliable, and efficient open source operating system [ ], there is the advent of Beowulf-style cluster computing, pioneered by Donald Becker, Thomas Sterling, et al. [ ]. If a computational problem can be solved in a loosely-coupled distributed memory environment, a Beowulf cluster, or Pile of PCs (POP), may be the answer; and it "weighs in" at a price point traditional parallel computer manufacturers cannot touch.

    Figure 1: The Stone SouperComputer at Oak Ridge National Laboratory.

    We became involved in cluster computing more than two years ago, after developing a proposal for the construction of a Beowulf cluster to support a handful of research projects. The proposal was rejected, but because we had already begun development of a new high-resolution landscape ecology application, we decided to build a cluster out of surplus PCs (primarily Intel 486s) destined for salvage. We began intercepting excess machines at federal facilities in Oak Ridge, Tennessee, and processing them into usable nodes. By September 1997, we had a functional parallel computer system built out of no-cost hardware. Today we have a constantly-evolving 126-node, highly heterogeneous Beowulf-style cluster, called the Stone SouperComputer (see

    35. Introduction To Parallel Computing
    Introduction to parallel computing. Table of Contents. What is parallel computing? Traditionally, software has been written for serial
    Introduction to Parallel Computing
    Table of Contents
  • Overview
  • What is Parallel Computing?
  • Why Use Parallel Computing?
  • Concepts and Terminology ...
  • References and More Information
    What is Parallel Computing?
    • Traditionally, software has been written for serial computation:
      • To be executed by a single computer having a single Central Processing Unit (CPU);
      • Problems are solved by a series of instructions, executed one after the other by the CPU. Only one instruction may be executed at any moment in time.
    • In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem.
    • The compute resources can include:
      • A single computer with multiple processors;
      • An arbitrary number of computers connected by a network;
      • A combination of both.
    • The computational problem usually demonstrates characteristics such as the ability to be:
      • Broken apart into discrete pieces of work that can be solved simultaneously;
      • Executed as multiple program instructions at any moment in time;
      • Solved in less time with multiple compute resources than with a single compute resource.
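As a minimal sketch of the ideas above (the task, the chunking scheme, and the worker count are invented for illustration), the following Python program breaks a problem into discrete pieces of work and solves them simultaneously in separate processes:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """One discrete piece of work: sum the squares over a sub-range."""
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Break [0, n) into `workers` chunks, solve them simultaneously, combine."""
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))  # combine the partial results

if __name__ == "__main__":
    # The parallel result matches the serial one; only the time-to-solution differs.
    print(parallel_sum_of_squares(1_000_000))
```

Each chunk is independent, so the pieces really can execute at the same moment in time; the final `sum` is the only point where the workers' results must be combined.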
  • 36. Ada95
    In context of parallel computing, brief mention of Fortran90 and High Performance Fortran (HPF). Short document.
    High Performance Scientific Computing with Ada95
    object-oriented and parallel
    Why Ada95?
    Ada95 is the first standardised object oriented programming language (since 15-Feb-1995, ISO/IEC 8652:1995). It provides powerful data abstraction mechanisms, hierarchical libraries, inheritance and polymorphism. Generic packages and the possibility of extensions to types and packages facilitate software reuse.
    What about
    Just a few months from the next millennium, the latest Fortran standard does not even offer the features of Ada83 (ANSI/MIL-STD 1815A). There is no real exception handling, there are no generics, no OOP features, no tasking, no child libraries, and typing is as soft as ever...
    But there is High Performance Fortran?!
    Indeed, there is HPF with all the drawbacks of the Fortran language. HPF is data parallel and provides predefined subprograms which allow the programmer to distribute large datasets over several processors and to manipulate parts of these datasets in parallel. Much of all this is done by the compiler but you still have to carefully identify those parts of the program which you can parallelise in this way; usually special parallel statements are scattered throughout the codes. You have virtually no control over the details.
    Now there is Ada95!

    37. Parallel Computing - Computer Science Division, UC Berkeley
    For an introduction to parallel computing at UC Berkeley, see "A Common Focus for Diverse, Interdisciplinary Goals" from Computer Science and Engineering at

    38. ParCo2003 Home Page
    Parallel Computing 2003, 2-5 September 2003.

    39. High Performance Computing Tools Group - University Of Houston, Dept. Of Compute
    At the University of Houston. Research in parallel computing, compilers, performance benchmarking, and parallel languages.

    40. ParCo2003 Home Page
    Parallel Computing 2003, 2-5 September 2003. At that conference the publication of the international Parallel Computing journal was announced.
    Parallel Computing 2003
    2 - 5 September 2003
    ParCo is the longest-running series of international conferences in Europe on the development and application of parallel computers. The first conference was held in Berlin in 1983. At that conference the publication of the international Parallel Computing journal was announced. The conference thus marks two decades of progress in the dynamic field of high-speed computing.

    From the outset the prime goal was to maintain a high scientific standard, both in the conference itself and in the published, refereed proceedings. This was also the reason for running the conference on a biennial basis, in order to allow researchers to produce significant new results. The high standard of the conference presentations and of the refereed proceedings has become the hallmark of ParCo. The scientific program, in combination with the industrial exhibition and industrial session, regularly gives an overview of the state of the art of the development, application and future trends in parallel computing.

    The conference organisers plan the conferences such that a maximum opportunity is created for delegates to meet and interact with fellow researchers. As organisers we hope that delegates will participate in the scientific activities, visit the industrial exhibition and sessions, and use the social activities to make new contacts and renew old ones. The informal nature of the conference allows for easy interaction with fellow delegates.

