You can always use the sizeof operator (shown in the linked article above) to find out exactly how much memory a given type takes.
Note that sizeof does not reflect alignment requirements on its own: if you pack a 4 byte type and a 1 byte type together in a struct, you may find the result takes 8 bytes in memory, or potentially more, because of padding.
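Here is a minimal sketch of both points (the struct name is mine, and the numbers assume a typical platform where int is 4 bytes):

```cpp
#include <cstdio>

// A 4 byte member followed by a 1 byte member.
struct Packed {
    int  value;  // typically 4 bytes
    char flag;   // 1 byte
};

int main() {
    std::printf("sizeof(int)    = %zu\n", sizeof(int));
    std::printf("sizeof(char)   = %zu\n", sizeof(char));
    // Usually prints 8, not 5: the compiler pads the struct so that
    // in an array of Packed every int member stays properly aligned.
    std::printf("sizeof(Packed) = %zu\n", sizeof(Packed));
    return 0;
}
```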
Beyond the memory footprint, there are also performance considerations. Traditionally, some types are not directly supported by a platform's underlying instruction set, so multiple instructions must be executed to emulate the desired behaviour.
This is still noticeable when building an application targeting x86 rather than x86-64. The 8 byte types (either long or long long, depending on the platform) require such software-level emulation on x86, since the maximum standard register and word size is only 32 bits. This emulation may involve multiple 4 byte memory reads and writes, chained addition and subtraction instructions, or even non-trivial subroutines for multiplication and division. The same 8 byte types on x86-64 are manipulated with single instructions and, bandwidth differences aside, take approximately the same time to execute as 32 bit types do on x86.
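To illustrate what that emulation looks like, here is a rough C++ sketch of 64 bit addition performed with only 32 bit operations (the struct and function names are made up; a real compiler instead emits an add followed by an add-with-carry instruction, e.g. add/adc on x86):

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>

// A 64 bit value held as two 32 bit halves, the way a 32 bit target sees it.
struct U64Pair {
    uint32_t lo;
    uint32_t hi;
};

// Emulated 64 bit addition using only 32 bit arithmetic.
U64Pair add64(U64Pair a, U64Pair b) {
    U64Pair r;
    r.lo = a.lo + b.lo;                        // low halves, may wrap around
    uint32_t carry = (r.lo < a.lo) ? 1u : 0u;  // wrap-around means a carry out
    r.hi = a.hi + b.hi + carry;                // high halves plus the carry
    return r;
}

int main() {
    U64Pair a{0xFFFFFFFFu, 0x00000000u};  // 4,294,967,295
    U64Pair b{0x00000001u, 0x00000000u};  // 1
    U64Pair s = add64(a, b);
    std::printf("0x%08" PRIX32 "%08" PRIX32 "\n", s.hi, s.lo);  // 0x0000000100000000
    return 0;
}
```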
But in C, the range of int values is from -32,768 to 32,767 or from -2,147,483,648 to 2,147,483,647.
The range is determined by the number of bytes the compiler uses for the type. Nowadays it is usually 4 bytes, giving -2,147,483,648 to 2,147,483,647. In the old days, or on some platforms, it is only 2 bytes, giving -32,768 to 32,767. If you build an x86/x86-64 application, the range will likely be -2,147,483,648 to 2,147,483,647. If you build for an architecture where int is 2 bytes, then writing a constant larger than 32,767 into an int will likely produce a compiler warning (loss of precision due to implicit conversion), and the stored value will overflow.
In modern C/C++ language revisions there is a set of standard header files containing macros, constants and typedefs that explicitly define the sizes and numeric ranges of the various types. These should be used rather than hard coding constant values or making assumptions.
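For example, <climits> (or std::numeric_limits from <limits> in C++) lets you query the ranges the compiler is actually using, so a quick check might look like this:

```cpp
#include <climits>  // INT_MIN, INT_MAX, LONG_MIN, LONG_MAX, CHAR_BIT
#include <cstdio>

int main() {
    // Ask the compiler instead of hard coding assumptions.
    std::printf("bits per byte : %d\n", CHAR_BIT);
    std::printf("sizeof(int)   : %zu bytes\n", sizeof(int));
    std::printf("int range     : %d to %d\n", INT_MIN, INT_MAX);
    std::printf("long range    : %ld to %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}
```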
If all you care about is numeric range, then I strongly recommend using the fixed width integer types, as those guarantee you specific numeric ranges:
Fixed width integer types (since C++11) - cppreference.com
For example, in a performance critical application working with numbers in the range of a 32 bit unsigned integer, you might use uint_fast32_t, which is defined as the fastest type capable of holding at least that range. If you want to pack lots of 64 bit signed values densely in memory, you would use int_least64_t, the smallest type capable of holding them. Something like int32_t is reserved for operations tightly coupled to the platform: where a natively supported 32 bit integer is known to exist and is needed for interfacing with platform specific libraries, data formats or intrinsic functions, or where one understands the architecture well enough for explicit optimizations.
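A small sketch of the three flavours side by side; the exact byte counts vary by platform (on x86-64 Linux, for instance, uint_fast32_t commonly comes out as 8 bytes):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // int32_t: exactly 32 bits; only defined where the platform supports
    // such a type natively (files, protocols, hardware registers).
    std::printf("int32_t       : %zu bytes\n", sizeof(int32_t));

    // uint_fast32_t: the fastest unsigned type with at least 32 bits;
    // may well be wider than 32 bits.
    std::printf("uint_fast32_t : %zu bytes\n", sizeof(uint_fast32_t));

    // int_least64_t: the smallest signed type with at least 64 bits;
    // the right choice for packing many values densely in memory.
    std::printf("int_least64_t : %zu bytes\n", sizeof(int_least64_t));
    return 0;
}
```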