Partitioned global address space

In computer science, partitioned global address space (PGAS) is a parallel programming model paradigm.[1][2] PGAS is typified by a global memory address space abstraction that is logically partitioned, with each portion being local to a particular process, thread, or processing element. The novelty of PGAS is that the portions of the shared memory space may have an affinity for a particular process, thereby exploiting locality of reference in order to improve performance.

A PGAS memory model is featured in various parallel programming languages and libraries, including: Coarray Fortran, Unified Parallel C, Split-C, Fortress, Chapel, X10, UPC++, Coarray C++, Global Arrays, DASH and SHMEM. In contrast to message passing, PGAS programming models frequently offer one-sided communication operations such as Remote Memory Access (RMA), whereby one processing element may directly access memory with affinity to a different (potentially remote) process, without explicit semantic involvement by the passive target process.

PGAS offers more efficiency and scalability than traditional shared-memory approaches with a flat address space, because hardware-specific data locality can be explicitly exposed in the semantic partitioning of the address space.
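The one-sided communication style can be illustrated with a minimal sketch in OpenSHMEM, one of the PGAS libraries listed above. In this example (illustrative only; the variable name dest and the ring-shaped neighbour pattern are arbitrary choices), each processing element (PE) writes directly into a symmetric variable with affinity to its neighbouring PE, without the target PE posting a matching receive:

#include <stdio.h>
#include <shmem.h>

/* Symmetric data object: every PE holds a copy of 'dest' in the
   partitioned global address space, with affinity to that PE. */
static int dest = -1;

int main(void) {
    shmem_init();
    int me     = shmem_my_pe();
    int npes   = shmem_n_pes();
    int src    = me;
    int target = (me + 1) % npes;   /* neighbour in a ring */

    /* One-sided put (RMA): write into the copy of 'dest' that has
       affinity to the target PE; the target takes no explicit part. */
    shmem_int_put(&dest, &src, 1, target);

    /* Ensure all puts have completed and are visible everywhere. */
    shmem_barrier_all();

    printf("PE %d received %d\n", me, dest);

    shmem_finalize();
    return 0;
}

The symmetric (statically allocated) variable dest exists on every PE, so a remote PE can be named as the target of the put directly; this is the affinity-aware partitioning that distinguishes PGAS from a flat shared address space.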