Resource list (search results)
13-1
- A parallel sorting algorithm implemented with MPI message passing; a good example for beginners.
AnySrcMsg
- Message-passing code for distributed parallel computing; anyone interested can extend this template with the functionality they need.
gauss
- A solid MPI source program for a parallel Gaussian elimination algorithm with full (complete) pivoting.
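As a refresher on what this package parallelizes, here is a minimal serial sketch of Gaussian elimination with full (complete) pivoting, in Python for illustration only; the listed MPI program distributes this elimination across processes, which the sketch does not attempt, and the name `solve_full_pivot` is ours, not the package's:

```python
def solve_full_pivot(A, b):
    """Solve A x = b by Gaussian elimination with full pivoting: at each
    step the largest remaining |entry| is moved into the pivot position by
    swapping a row AND a column; column swaps permute the unknowns."""
    n = len(A)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    perm = list(range(n))              # tracks the column (variable) permutation
    for k in range(n):
        # locate the largest |A[i][j]| in the trailing submatrix
        pi, pj = max(((i, j) for i in range(k, n) for j in range(k, n)),
                     key=lambda t: abs(A[t[0]][t[1]]))
        A[k], A[pi] = A[pi], A[k]      # row swap
        b[k], b[pi] = b[pi], b[k]
        for row in A:                  # column swap
            row[k], row[pj] = row[pj], row[k]
        perm[k], perm[pj] = perm[pj], perm[k]
        for i in range(k + 1, n):      # eliminate below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution, then undo the column permutation
    y = [0.0] * n
    for i in range(n - 1, -1, -1):
        y[i] = (b[i] - sum(A[i][j] * y[j] for j in range(i + 1, n))) / A[i][i]
    x = [0.0] * n
    for i in range(n):
        x[perm[i]] = y[i]
    return x
```

Full pivoting swaps both a row and a column at every step, so the column permutation must be undone when reading off the solution; that bookkeeping is what distinguishes it from the more common partial pivoting.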
Q1
- An MPI message-passing project.
MPItutorial4
- A tutorial on MPI, the Message Passing Interface.
scalapack-1.8.0
- ScaLAPACK is a parallel computing library suited to distributed-memory MIMD parallel machines. It provides a number of linear algebra solver routines that are efficient, portable, scalable, and highly reliable; its solver library can be used to develop parallel applications built on linear algebra operations. The accompanying article discusses ScaLAPACK's structure, functionality, and data layout.
MPI_HyperQuicksort
- An MPI (Message Passing Interface) based quicksort: the hyperquicksort variant (original proposed by C.J Quinn). Tested and developed at H.U.T by MonteCristo.
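For orientation, hyperquicksort works on a hypercube of 2^d processes: in each dimension, partner processes exchange the halves of their data on either side of a shared pivot, so after d exchange rounds the blocks are globally ordered. A serial Python simulation of that exchange pattern (function name and pivot heuristic are our own illustration, not taken from this package) might look like:

```python
def hyperquicksort(blocks):
    """Serial simulation of hyperquicksort on p = 2^d 'processes'.
    blocks: list of p lists, one per process. Returns blocks globally
    ordered: every element on process i <= every element on process i+1,
    and each block sorted."""
    p = len(blocks)
    d = p.bit_length() - 1             # hypercube dimension (p must be 2^d)
    blocks = [sorted(b) for b in blocks]
    for step in range(d, 0, -1):
        size = 1 << step               # current subcube size
        half = size >> 1
        for base in range(0, p, size):
            # pivot: median of the subcube's first process (a common heuristic)
            ref = blocks[base]
            pivot = ref[len(ref) // 2] if ref else 0
            for i in range(base, base + half):
                j = i + half           # partner across the highest remaining dim
                low = [x for x in blocks[i] + blocks[j] if x <= pivot]
                high = [x for x in blocks[i] + blocks[j] if x > pivot]
                blocks[i], blocks[j] = sorted(low), sorted(high)
    return blocks
```

Note the load imbalance a poor pivot can cause (some blocks may end up empty); real implementations spend considerable effort on pivot selection for exactly this reason.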
MPI1Exercises.tar
- Some MPI (Message Passing Interface) examples intended to illustrate its use.
Automatic-parallel-compiled
- A valuable doctoral dissertation offering an in-depth study of automatic parallel-program generation and performance-optimization techniques in parallelizing compilers. Since the ultimate goal of parallelization is to generate efficient parallel programs matched to the target machine's architecture, producing efficient parallel code is a central topic of parallelizing-compiler research. Taking the parallelizing compiler KAP as its research setting, the dissertation studies, for distributed-memory targets, communication optimization and the automatic generation of message-passing parallel programs; and, for shared-memory targets, the compilation and optimization of the OpenMP programs that parallelization produces. Through testing, the main factors affecting OpenMP program performance were determined, and starting from the parallelization-generated OpenMP…
18-4
- A parallel Cannon matrix algorithm written with MPI; during the computation, communication between the different nodes is carried out by message passing.
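Cannon's algorithm arranges both matrices on a process grid, skews them, and then alternates local multiply-accumulate steps with cyclic shifts (A one step left, B one step up). A serial Python simulation with one matrix element per simulated process (our own illustrative code, not this package's) is:

```python
def cannon_matmul(A, B):
    """Serial simulation of Cannon's algorithm on an n x n grid,
    one matrix element per 'process' (i.e. block size 1)."""
    n = len(A)
    # initial skew: row i of A rotates left i steps, column j of B rotates up j steps
    a = [[A[i][(j + i) % n] for j in range(n)] for i in range(n)]
    b = [[B[(i + j) % n][j] for j in range(n)] for i in range(n)]
    C = [[0] * n for _ in range(n)]
    for _ in range(n):
        # local multiply-accumulate at every grid point
        for i in range(n):
            for j in range(n):
                C[i][j] += a[i][j] * b[i][j]
        # cyclic shift: A one step left, B one step up (with wraparound)
        a = [[a[i][(j + 1) % n] for j in range(n)] for i in range(n)]
        b = [[b[(i + 1) % n][j] for j in range(n)] for i in range(n)]
    return C
```

In an actual MPI version each grid point holds a sub-block rather than a scalar, and the shifts become point-to-point exchanges (e.g. `MPI_Sendrecv`) between neighboring ranks.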
source-code-of-parallel-computing
- Parallel computing source code: multiple examples, all based on MPI (Message Passing Interface).
mpich2_1.4.1-1.debian.tar
- MPI is a message-passing programming model and has become the representative of that model. Although the MPI standard is large, its ultimate purpose is to serve one goal: interprocess communication.
LU_MPI
- LU decomposition using MPI (Message Passing Interface).
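As background for this entry, the serial Doolittle form of LU decomposition (unit-diagonal L) can be written compactly; the following is an illustrative Python sketch under the assumption that no pivoting is needed, not the listed MPI code:

```python
def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L * U with
    unit-diagonal L. Assumes all leading principal minors are nonzero."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # row i of U from previously computed rows/columns
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        # column i of L below the diagonal
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

A parallel version would distribute rows or blocks of the matrix across ranks and exchange the freshly computed pivot rows via message passing; production codes also add pivoting for numerical stability.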
13898366Players
- The MPI standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran and C. The MPI effort was conducted in a spirit similar to that of the High-Performance Fortran effort.
wodk
- MPI is a language-independent communications protocol used to program parallel computers. Both point-to-point and collective communication are supported. MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation."
bit_field
- An example of a send/receive structure using the Message Passing Interface, built with CMake.
chapter4
- 1. Fundamentals of message-passing-based programming; 2. Parallelizing programs on a cluster; 3. Evaluating message-passing parallel programs.
chapter5
- 1. Fundamentals of message-passing-based programming; 2. Parallelizing programs on a cluster; 3. Evaluating message-passing parallel programs.
fft
- A Fast Fourier Transform (FFT) in C with MPI message-passing programming; it can be compiled and run on parallel computers.
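The kernel such an MPI program parallelizes is the radix-2 Cooley-Tukey recursion; a serial sketch in Python (illustrative only, not the listed C code) is:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                # FFT of even-indexed samples
    odd = fft(x[1::2])                 # FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # butterfly: combine the two half-size transforms
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

Each level of the recursion halves the problem; a message-passing version would distribute the butterflies across ranks and exchange intermediate results between stages.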
Message-Passing-Interface-(MPI).pdf
- A C++ MPI message-passing tutorial from Lawrence Livermore National Laboratory.