Error detection and correction

As the prose books of the Bible were hardly ever written in stichs, the copyists, in order to estimate the amount of work, had to count the letters.[2][3] Between the 7th and 10th centuries CE a group of Jewish scribes formalized and expanded this to create the Numerical Masorah to ensure accurate reproduction of the sacred text.[4] The effectiveness of their error correction method was verified by the accuracy of copying through the centuries demonstrated by the discovery of the Dead Sea Scrolls in 1947–1956, dating from c. 150 BCE–75 CE.[6]

A description of Hamming's code appeared in Claude Shannon's A Mathematical Theory of Communication[7] and was quickly generalized by Marcel J. E. Golay. Shannon's proof of the channel coding theorem was only of an existential nature, and did not show how to construct codes that are both optimal and have efficient encoding and decoding algorithms.

If the channel characteristics cannot be determined, or are highly variable, an error-detection scheme may be combined with a system for retransmission of erroneous data; this combination is known as automatic repeat request (ARQ), and a minimal stop-and-wait variant is sketched below. An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding.

Error detection is most commonly realized using a suitable hash function (or, more specifically, a checksum, cyclic redundancy check, or other algorithm). A repetition code, in which each block of data is transmitted several times, is very inefficient and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., 1010 1010 1010 would be detected as correct even though every copy of the block carries the same flipped bit); a short sketch of such a scheme is given below. A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to digital data in computer networks. CRCs are particularly easy to implement in hardware and are therefore commonly used in computer networks and storage devices such as hard disk drives; a brief CRC-32 sketch also appears below.

Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) also cannot use ARQ; they must use FEC because, when an error occurs, the original data is no longer available.[16] In a typical TCP/IP stack, error control is performed at multiple levels.

The development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and the limited power availability aboard space probes. The concatenated Reed–Solomon–Viterbi (RSV) code allowed for very powerful error correction, and enabled Voyager 2's extended journey to Uranus and Neptune. The different kinds of deep-space and orbital missions that are conducted suggest that trying to find a one-size-fits-all error correction system will be an ongoing problem.[20]

RAID systems use a variety of error correction techniques to recover data when a hard drive completely fails. Filesystems such as ZFS or Btrfs, as well as some RAID implementations, support data scrubbing and resilvering, which allows bad blocks to be detected and (hopefully) recovered before they are used. Dynamic random-access memory (DRAM) may provide stronger protection against soft errors by relying on error-correcting codes.
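The retransmission idea can be illustrated with a minimal Python sketch of a stop-and-wait loop, assuming a toy in-process "channel", a simple modular-sum checksum, and illustrative function names and parameters; this is not a real protocol implementation.

```python
import random

def checksum(payload: bytes) -> int:
    # Simple modular-sum checksum (illustrative; real links often use CRCs).
    return sum(payload) % 256

def make_frame(payload: bytes) -> bytes:
    # Append the one-byte checksum as a trailer.
    return payload + bytes([checksum(payload)])

def noisy_channel(frame: bytes, error_rate: float = 0.3) -> bytes:
    # Toy channel model: occasionally corrupt one byte of the frame.
    data = bytearray(frame)
    if random.random() < error_rate:
        data[random.randrange(len(data))] ^= 0x55
    return bytes(data)

def accept(frame: bytes):
    # Receiver recomputes the checksum; a mismatch means "please resend".
    payload, trailer = frame[:-1], frame[-1]
    return payload if checksum(payload) == trailer else None

def stop_and_wait(payload: bytes, max_tries: int = 16) -> int:
    # Sender retransmits the same frame until the receiver accepts it.
    for attempt in range(1, max_tries + 1):
        if accept(noisy_channel(make_frame(payload))) is not None:
            return attempt
    raise RuntimeError("retry limit exceeded")

print("delivered after", stop_and_wait(b"example data"), "transmission(s)")
```

Real ARQ protocols add acknowledgement frames, sequence numbers, and timeouts; the loop above only models the "detect, then resend" core.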
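As a concrete illustration of the repetition scheme mentioned above, the following Python sketch transmits each data block three times and decodes by per-position majority vote; the block size, number of copies, and function names are assumptions chosen for the example.

```python
def encode(block: str, copies: int = 3) -> list[str]:
    """Transmit the whole block several times."""
    return [block] * copies

def decode(received: list[str]) -> str:
    """Take a per-position majority vote across the received copies."""
    return "".join(
        max("01", key=position.count) for position in zip(*received)
    )

# A single corrupted copy is outvoted and corrected:
assert decode(["1011", "1010", "1011"]) == "1011"

# But the same bit flipped in every copy (the 1010 1010 1010 case from the
# text) is indistinguishable from a valid transmission:
assert decode(["1010", "1010", "1010"]) == "1010"
```

This is why the scheme is both inefficient (it triples the transmitted data) and fragile when errors recur in the same position of each group.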
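A brief sketch of CRC-based error detection, using the CRC-32 function from Python's standard zlib module; the 4-byte trailer framing shown here is an illustrative convention, not that of any specific network protocol.

```python
import zlib

def attach_crc(payload: bytes) -> bytes:
    # Append the CRC-32 of the payload as a 4-byte big-endian trailer.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    # Recompute the CRC over the payload and compare it with the trailer.
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == trailer

frame = attach_crc(b"hello, world")
assert check_crc(frame)

# A single flipped bit in transit produces a mismatch: the error is
# detected, although a CRC alone cannot locate or correct it.
corrupted = bytearray(frame)
corrupted[0] ^= 0x01
assert not check_crc(bytes(corrupted))
```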
Image caption: To clean up transmission errors introduced by Earth's atmosphere (left), Goddard scientists applied Reed–Solomon error correction (right), which is commonly used in CDs and DVDs. Typical errors include missing pixels (white) and false signals (black). The white stripe indicates a brief period when transmission was interrupted.