The decimal numeral system (also called base ten or occasionally denary) has ten as its base. It is the numerical base most widely used by modern civilizations.
Decimal notation often refers to a base-10 positional notation.
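Positional notation means each digit's value depends on its place: the digit in position k (counting from the right, starting at 0) contributes its face value times 10^k. A minimal sketch of this decomposition (the function name `positional_terms` is illustrative, not from the source):

```python
def positional_terms(n: int) -> list[int]:
    """Return the value contributed by each digit of a nonnegative integer,
    e.g. 4327 -> [4000, 300, 20, 7], since 4327 = 4*10**3 + 3*10**2 + 2*10 + 7."""
    digits = [int(d) for d in str(n)]
    k = len(digits)
    return [d * 10 ** (k - 1 - i) for i, d in enumerate(digits)]

print(positional_terms(4327))        # [4000, 300, 20, 7]
print(sum(positional_terms(4327)))   # 4327
```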
Decimals also refer to decimal fractions, either separately or in contrast to vulgar fractions. In this context, a decimal is a tenth part, and decimals form a series of nested tenths. A notation such as 'tenth-metre' was once in use, meaning the tenth decimal subdivision of the metre, a unit now called the ångström. The contrast here is between decimals and vulgar fractions, and between decimal divisions and other divisions of measures, such as the inch. It is possible to follow a decimal expansion with a vulgar fraction; this is done in recent subdivisions of the troy ounce, which carry three decimal places followed by a trinary place.
A decimal representation of a non-negative real number r is an expression of the form

$$r = \sum_{i=0}^{\infty} \frac{a_i}{10^i},$$
where a0 is a nonnegative integer, and a1, a2, … are integers satisfying 0 ≤ ai ≤ 9, called the digits of the decimal representation. The sequence of digits specified may be finite, in which case any further digits ai are assumed to be 0. Some authors forbid decimal representations with an infinite sequence of digits 9.[1] This restriction still allows a decimal representation for each non-negative real number, but additionally makes such a representation unique. The number defined by a decimal representation is often written more briefly as

$$r = a_0.a_1 a_2 a_3 \ldots$$
That is to say, a0 is the integer part of r, not necessarily between 0 and 9, and a1, a2, a3, … are the digits forming the fractional part of r.
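The digits a1, a2, a3, … can be extracted by repeatedly multiplying the fractional part by ten and taking the integer part. A sketch using exact rational arithmetic to avoid floating-point error (the helper name `decimal_digits` is hypothetical):

```python
from fractions import Fraction

def decimal_digits(r: Fraction, n: int) -> tuple[int, list[int]]:
    """Return a0 (the integer part, not necessarily a single digit) and the
    first n fractional digits a1..an of a non-negative rational r."""
    a0 = int(r)          # integer part of r
    frac = r - a0        # fractional part, 0 <= frac < 1
    digits = []
    for _ in range(n):
        frac *= 10       # shift the next digit into the integer position
        d = int(frac)
        digits.append(d)
        frac -= d        # keep only the remaining fractional part
    return a0, digits

print(decimal_digits(Fraction(22, 7), 6))  # (3, [1, 4, 2, 8, 5, 7])
```

Since 22/7 = 3.142857142857…, the call recovers a0 = 3 and the repeating block 142857.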
Both notations above are, by definition, the following limit of a sequence:

$$r = \lim_{n\to\infty} \sum_{i=0}^{n} \frac{a_i}{10^i}.$$
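This limit can be illustrated numerically: each partial sum truncates the expansion after n fractional digits and differs from r by less than 10^-n. A sketch with exact rationals (the function `partial_sum` is an illustrative name, not from the source):

```python
from fractions import Fraction

def partial_sum(a0: int, digits: list[int], n: int) -> Fraction:
    """The n-th partial sum a0 + a1/10 + ... + an/10**n,
    where digits lists a1, a2, ... in order."""
    s = Fraction(a0)
    for i in range(1, n + 1):
        s += Fraction(digits[i - 1], 10 ** i)
    return s

# Digits of 1/3 = 0.333...: the n-th partial sum falls short of the limit
# by exactly 1/(3 * 10**n), which is less than 10**-n.
third_digits = [3] * 10
for n in (1, 3, 5):
    s = partial_sum(0, third_digits, n)
    print(n, s, Fraction(1, 3) - s)
```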