format - pointer to a null-terminated character string specifying how to interpret the data. The format string consists of ordinary multibyte characters (except % ), which are copied unchanged into the output stream, and conversion specifications. Each conversion specification has the following format:
 introductory % character
 (optional) one or more flags that modify the behavior of the conversion:

- : the result of the conversion is left-justified within the field (by default it is right-justified)

+ : the sign of signed conversions is always prepended to the result of the conversion (by default the result is preceded by minus only when it is negative)
space : if the result of a signed conversion does not start with a sign character, or is empty, a space is prepended to the result. It is ignored if the + flag is present.

# : the alternative form of the conversion is performed. See the table below for exact effects, otherwise the behavior is undefined.

0 : for integer and floating-point number conversions, leading zeros are used to pad the field instead of space characters. For integer numbers it is ignored if the precision is explicitly specified. For other conversions using this flag results in undefined behavior. It is ignored if the - flag is present.
 (optional) integer value or * that specifies the minimum field width. The result is padded with space characters (by default), if required, on the left when right-justified, or on the right if left-justified. When * is used, the width is specified by an additional argument of type int . If the value of the argument is negative, it results in the - flag being specified and a positive field width. (Note: this is the minimum width: the value is never truncated.)
 (optional) . followed by an integer number or * , or neither, that specifies the precision of the conversion. When * is used, the precision is specified by an additional argument of type int . If the value of this argument is negative, it is ignored. If neither a number nor * is used, the precision is taken as zero. See the table below for exact effects of precision.
 (optional) length modifier that specifies the size of the argument
 conversion format specifier
The following format specifiers are available:
Each entry below gives the conversion specifier, its explanation, and the expected argument type for each length modifier: hh (C++11), h , none, l , ll (C++11), j (C++11), z (C++11), t (C++11), and L . Combinations not listed for a specifier are N/A.
%  writes literal % . The full conversion specification must be %% .
 Argument type: N/A (no length modifier applies).
c  writes a single character. The argument is first converted to unsigned char . If the l modifier is used, the argument is first converted to a character string as if by %ls with a wchar_t[2] argument.
 Argument type: int (no modifier); wint_t ( l ).
s  writes a character string. The argument must be a pointer to the initial element of an array of characters. Precision specifies the maximum number of bytes to be written. If precision is not specified, writes every byte up to and not including the first null terminator. If the l specifier is used, the argument must be a pointer to the initial element of an array of wchar_t , which is converted to a char array as if by a call to wcrtomb with a zero-initialized conversion state.
 Argument type: char* (no modifier); wchar_t* ( l ).
d i  converts a signed integer into decimal representation [-]dddd. Precision specifies the minimum number of digits to appear. The default precision is 1 . If both the converted value and the precision are 0 the conversion results in no characters.
 Argument type: signed char ( hh ); short ( h ); int (no modifier); long ( l ); long long ( ll ); intmax_t ( j ); signed size_t ( z ); ptrdiff_t ( t ).
o  converts an unsigned integer into octal representation oooo. Precision specifies the minimum number of digits to appear. The default precision is 1 . If both the converted value and the precision are 0 the conversion results in no characters. In the alternative implementation precision is increased if necessary, to write one leading zero. In that case if both the converted value and the precision are 0 , a single 0 is written.
 Argument type: unsigned char ( hh ); unsigned short ( h ); unsigned int (no modifier); unsigned long ( l ); unsigned long long ( ll ); uintmax_t ( j ); size_t ( z ); unsigned version of ptrdiff_t ( t ).
x X  converts an unsigned integer into hexadecimal representation hhhh. For the x conversion letters abcdef are used. For the X conversion letters ABCDEF are used. Precision specifies the minimum number of digits to appear. The default precision is 1 . If both the converted value and the precision are 0 the conversion results in no characters. In the alternative implementation 0x or 0X is prefixed to the result if the converted value is nonzero.
 Argument type: same as for o .
u  converts an unsigned integer into decimal representation dddd. Precision specifies the minimum number of digits to appear. The default precision is 1 . If both the converted value and the precision are 0 the conversion results in no characters.
 Argument type: same as for o .
f F  converts a floating-point number to the decimal notation in the style [-]ddd.ddd. Precision specifies the exact number of digits to appear after the decimal point character. The default precision is 6 . In the alternative implementation the decimal point character is written even if no digits follow it. For infinity and not-a-number conversion style see notes.
 Argument type: double (no modifier); double ( l , C++11); long double ( L ).
e E  converts a floating-point number to the decimal exponent notation. For the e conversion style [-]d.ddde±dd is used. For the E conversion style [-]d.dddE±dd is used. The exponent contains at least two digits, more digits are used only if necessary. If the value is 0 , the exponent is also 0 . Precision specifies the exact number of digits to appear after the decimal point character. The default precision is 6 . In the alternative implementation the decimal point character is written even if no digits follow it. For infinity and not-a-number conversion style see notes.
 Argument type: same as for f .
a A (C++11)  converts a floating-point number to the hexadecimal exponent notation. For the a conversion style [-]0xh.hhhp±d is used. For the A conversion style [-]0Xh.hhhP±d is used. The first hexadecimal digit is 0 if the argument is not a normalized floating-point value. If the value is 0 , the exponent is also 0 . Precision specifies the exact number of digits to appear after the decimal point character. The default precision is sufficient for exact representation of the value. In the alternative implementation the decimal point character is written even if no digits follow it. For infinity and not-a-number conversion style see notes.
 Argument type: same as for f .
g G  converts a floating-point number to decimal or decimal exponent notation depending on the value and the precision. For the g conversion style conversion with style e or f will be performed. For the G conversion style conversion with style E or F will be performed. Let P equal the precision if nonzero, 6 if the precision is not specified, or 1 if the precision is 0 . Then, if a conversion with style E would have an exponent of X :
 if P > X ≥ −4, the conversion is with style f or F and precision P − 1 − X;
 otherwise, the conversion is with style e or E and precision P − 1.
Unless alternative representation is requested the trailing zeros are removed, also the decimal point character is removed if no fractional part is left. For infinity and not-a-number conversion style see notes.
 Argument type: same as for f .
n  returns the number of characters written so far by this call to the function. The result is written to the value pointed to by the argument. The specification may not contain any flag, field width, or precision.
 Argument type: signed char* ( hh ); short* ( h ); int* (no modifier); long* ( l ); long long* ( ll ); intmax_t* ( j ); signed size_t* ( z ); ptrdiff_t* ( t ).
p  writes an implementation-defined character sequence defining a pointer.
 Argument type: void* (no modifier).
The floating-point conversion functions convert infinity to inf or infinity . Which one is used is implementation-defined.
Not-a-number is converted to nan or nan(char_sequence) . Which one is used is implementation-defined.
The conversions F , E , G , A output INF , INFINITY , NAN instead.
Even though %c expects an int argument, it is safe to pass a char because of the integer promotion that takes place when a variadic function is called.
The correct conversion specifications for the fixed-width character types ( int8_t , etc.) are defined in the header <cinttypes> (although PRIdMAX , PRIuMAX , etc. are synonymous with %jd , %ju , etc.).
The memory-writing conversion specifier %n is a common target of security exploits where format strings depend on user input, and is not supported by the bounds-checked printf_s family of functions.
There is a sequence point after the action of each conversion specifier; this permits storing multiple %n results in the same variable or, as an edge case, printing a string modified by an earlier %n within the same call.
If a conversion specification is invalid, the behavior is undefined.
