Computer multitasking

In a time-sharing system, multiple human operators use the same processor as if it were dedicated to their use, while behind the scenes the computer serves many users by multitasking their individual programs. Real-time systems, such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and the user interface.[2] Multitasking operating systems often include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Multitasking also optimizes CPU utilization by keeping the processor busy executing tasks, which is particularly useful when one program is waiting for I/O operations to complete.

An early example of such hardware was the Bull Gamma 60, whose architecture featured a central memory and a Program Distributor feeding up to twenty-five autonomous processing units with code and data, allowing concurrent operation of multiple clusters.

Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events, such as incoming data, which might require the immediate attention of one or another process. Because the arrival of the requested data generates an interrupt, blocked processes can be guaranteed a timely return to execution.[citation needed]

Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Inadequate memory protection mechanisms, whether due to flaws in their design or poor implementations, allow for security vulnerabilities that may be exploited by malicious software.[16] Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.
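The way threads share a single address space, and the kind of concurrency technique used to keep that sharing safe, can be illustrated with a short sketch. The following C program (not part of the original article; POSIX threads are only one of several threading APIs) has two threads increment a counter that lives in the memory they share, using a mutex so that the simultaneous accesses cannot corrupt the shared value.

/* Minimal sketch, assuming a POSIX system; compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                        /* shared by both threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* serialize access to the shared counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);     /* both threads run in the same address space */
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);         /* 200000 when the increments are serialized */
    return 0;
}

Without the mutex, the two unsynchronized increments could interleave and the final count might come up short; with it, the result is always 200000. This is one simple instance of the concurrent computing techniques mentioned above.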
Modern desktop operating systems are capable of handling large numbers of different processes at the same time. This screenshot shows Linux Mint simultaneously running the Xfce desktop environment, Firefox, a calculator program, the built-in calendar, Vim, GIMP, and VLC media player.
Multitasking in Microsoft Windows 1.01, released in 1985, here shown running the MS-DOS Executive and Calculator programs
Kubuntu (KDE Plasma 5) with four virtual desktops running multiple programs at the same time