Integer (computer science)
Computer hardware nearly always provides a way to represent a processor register or memory address as an integer.[1] An integer value is typically specified in the source code of a program as a sequence of digits optionally prefixed with + or −.

Unlike mathematical integers, a typical datum in a computer has some minimum and maximum possible value. Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths.

Languages that do not support arbitrary-precision integers as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl's "bigint" package.

A Boolean or flag value can be stored in memory using a single bit, but it is often given a full byte for convenience of addressing and speed of access. A four-bit quantity is known as a nibble (when eating, being smaller than a bite) or nybble (being a pun on the form of the word byte). The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS.

Because the width of a given type differs between platforms, code that assumes a particular width is not portable. For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 2^15 − 1 (32,767), the program will fail on computers with 16-bit integers. A program's integer and pointer widths also need not match the mode of the processor it runs on: for example, 64-bit versions of Microsoft Windows support existing 32-bit binaries, and programs compiled for Linux's x32 ABI run in 64-bit mode yet use 32-bit memory addresses.

Integer literals can be written as regular Arabic numerals, consisting of a sequence of digits and with negation indicated by a minus sign before the value.
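
The fixed widths and bounds described above can be inspected directly from C. The following is a minimal sketch, assuming a hosted C compiler; it simply prints the ranges that <limits.h> and <inttypes.h> report for a few common types.

    #include <stdio.h>
    #include <limits.h>
    #include <inttypes.h>

    int main(void) {
        /* Each built-in integer type has a fixed width, and therefore a
           fixed minimum and maximum representable value. */
        printf("int:      %d .. %d\n", INT_MIN, INT_MAX);
        printf("unsigned: 0 .. %u\n", UINT_MAX);

        /* The <stdint.h>/<inttypes.h> types name their widths explicitly. */
        printf("int16_t:  %" PRId16 " .. %" PRId16 "\n", INT16_MIN, INT16_MAX);
        printf("int32_t:  %" PRId32 " .. %" PRId32 "\n", INT32_MIN, INT32_MAX);
        return 0;
    }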
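
The one-to-one correspondence of two's complement can be made concrete with a short sketch. It assumes the exact-width 8-bit types from <stdint.h>, which are two's complement by definition; the same bit pattern is simply read as signed or as unsigned, and addition produces the same pattern either way.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* The 8-bit two's-complement pattern of -1 is 0xFF, which read as
           unsigned is 255: every pattern maps to exactly one value, and
           there is no separate +0 and -0. */
        int8_t  s = -1;
        uint8_t u = (uint8_t)s;                 /* same bits, unsigned view */
        printf("-1 viewed as unsigned: %u (0x%02X)\n", (unsigned)u, (unsigned)u);

        /* Addition is the same bit-level operation for both views:
           (-3) + 5 and (253 + 5) mod 256 both give the pattern 0x02. */
        int8_t  a = -3, b = 5;
        uint8_t pattern = (uint8_t)((uint8_t)a + (uint8_t)b);
        printf("signed sum:       %d\n", a + b);
        printf("same bit pattern: 0x%02X (= %d)\n", (unsigned)pattern, (int)(int8_t)pattern);
        return 0;
    }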
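
The "arrays of smaller variables" technique used by arbitrary-precision libraries can be sketched in C as well. The limb count and the helper name bignum_add below are hypothetical, chosen only to illustrate how a carry is chained from one machine word to the next.

    #include <stdio.h>
    #include <inttypes.h>

    #define LIMBS 4   /* 4 x 32 bits: an illustrative 128-bit unsigned value */

    /* Add two little-endian arrays of 32-bit limbs, propagating the carry,
       the way arbitrary-precision libraries chain machine-word additions. */
    static void bignum_add(uint32_t out[LIMBS],
                           const uint32_t a[LIMBS],
                           const uint32_t b[LIMBS]) {
        uint64_t carry = 0;
        for (int i = 0; i < LIMBS; i++) {
            uint64_t t = (uint64_t)a[i] + b[i] + carry;
            out[i] = (uint32_t)t;    /* low 32 bits of the partial sum */
            carry  = t >> 32;        /* overflow carries into the next limb */
        }
    }

    int main(void) {
        /* 0xFFFFFFFF + 1 overflows a single 32-bit limb and carries into
           the second limb, giving the 128-bit value 0x1_00000000. */
        uint32_t a[LIMBS] = { 0xFFFFFFFFu, 0, 0, 0 };
        uint32_t b[LIMBS] = { 1, 0, 0, 0 };
        uint32_t r[LIMBS];
        bignum_add(r, a, b);
        printf("%08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 "\n",
               r[3], r[2], r[1], r[0]);
        return 0;
    }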
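
The 16-bit pitfall mentioned above can be illustrated briefly; the variable names here are arbitrary. The first declaration is only safe where int is wider than 16 bits, whereas long and the exact-width types from <stdint.h> make the required range explicit.

    #include <stdio.h>
    #include <inttypes.h>

    int main(void) {
        /* The C standard only guarantees that int can hold -32767..32767,
           so on a machine with 16-bit int the value below does not fit. */
        int count = 50000;              /* not portable: may overflow */

        /* Portable alternatives: long (at least 32 bits wide) or an
           exact-width type such as int32_t. */
        long    count_long  = 50000L;
        int32_t count_fixed = 50000;

        printf("%d %ld %" PRId32 "\n", count, count_long, count_fixed);
        return 0;
    }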