Integer (computer science)

Computer hardware nearly always provides a way to represent a processor register or memory address as an integer.[1] An integer value is typically specified in the source code of a program as a sequence of digits optionally prefixed with + or −.

Unlike mathematical integers, a typical datum in a computer has some minimum and maximum possible value.

Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths.

Languages that do not support arbitrary-precision integers as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl's "bigint" package.

A Boolean type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access. A four-bit quantity is known as a nibble (when eating, being smaller than a bite) or nybble (being a pun on the form of the word byte). The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS.

If a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 2¹⁵ − 1, the program will fail on computers with 16-bit integers.

For example, 64-bit versions of Microsoft Windows support existing 32-bit binaries, and programs compiled for Linux's x32 ABI run in 64-bit mode yet use 32-bit memory addresses.

Integer literals can be written as regular Arabic numerals, consisting of a sequence of digits and with negation indicated by a minus sign before the value.
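The two's complement correspondence between bit patterns and values described above can be illustrated with a short sketch. This uses Python's standard `struct` module to reinterpret the same 16-bit pattern as unsigned and signed; the variable names are illustrative only.

```python
import struct

# The same 16-bit pattern, read as unsigned ("H") and as signed
# two's complement ("h"); every pattern maps to exactly one value
# in each interpretation, with no separate +0 and -0.
bits = 0xFFFF
unsigned_val = struct.unpack("<H", struct.pack("<H", bits))[0]  # 65535
signed_val = struct.unpack("<h", struct.pack("<H", bits))[0]    # -1

# A fixed width also implies a maximum value: adding 1 to the largest
# signed 16-bit integer (2**15 - 1) wraps around to the minimum.
max_i16 = 2**15 - 1
wrapped = struct.unpack("<h", struct.pack("<H", (max_i16 + 1) & 0xFFFF))[0]
print(unsigned_val, signed_val, wrapped)  # 65535 -1 -32768
```

The wraparound at the end is exactly the failure mode of the C `int` example: on a machine with 16-bit `int`, storing a value greater than 2¹⁵ − 1 silently produces a negative result.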
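As a counterpoint to fixed-width types, Python is one of the languages with arbitrary-precision integers built in, so no library like Java's BigInteger is needed; arithmetic on values far wider than any machine register stays exact:

```python
# Python integers grow as needed, so this 129-bit value is represented
# and manipulated exactly, with no overflow or wraparound.
big = 2**128 + 1
assert big > 2**64          # wider than any common register
print(big % 10)             # exact arithmetic on the full value: 7
```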
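The literal forms described above can be shown concretely. The sketch below uses Python syntax, where underscores serve as digit group separators (Python 3.6+) and prefixes select other bases; the variable names are illustrative only.

```python
# Decimal literal with a leading minus sign for negation.
negative = -42

# Underscores as digit group separators; the value is unchanged.
population = 1_000_000      # same value as 1000000

# Prefixes select hexadecimal, octal, and binary bases.
hex_lit = 0xFF              # 255
bin_lit = 0b1010            # 10
print(negative, population, hex_lit, bin_lit)
```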