“Swift provides its own versions of all fundamental C and Objective-C types, including Int for integers, Float for floating-point values, Bool for Boolean values, and String for textual data. Swift also provides powerful versions of the three primary collection types, Array, Set, and Dictionary, as described in Collection Types.
Like C, Swift uses variables to store and refer to values by an identifying name. Swift also makes extensive use of variables whose values can’t be changed. These are known as constants, and are much more powerful than constants in C. Constants are used throughout Swift to make code safer and clearer in intent when you work with values that don’t need to change.
In addition to familiar types, Swift introduces advanced types not found in Objective-C, such as tuples. Tuples enable you to create and pass around groupings of values. You can use a tuple to return multiple values from a function as a single compound value.
Swift also introduces optional types, which handle the absence of a value. Optionals say either “there is a value, and it equals x” or “there isn’t a value at all”. Using optionals is similar to using
nil with pointers in Objective-C, but they work for any type, not just classes. Not only are optionals safer and more expressive than
nil pointers in Objective-C, they’re at the heart of many of Swift’s most powerful features.
Swift is a type-safe language, which means the language helps you to be clear about the types of values your code can work with. If part of your code requires a
String, type safety prevents you from passing it an
Int by mistake. Likewise, type safety prevents you from accidentally passing an optional
String to a piece of code that requires a non-optional
String. Type safety helps you catch and fix errors as early as possible in the development process.”
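To make the quoted passage concrete, here is a minimal sketch of tuples and optionals in action; the function and constant names are illustrative, not from the passage:

```swift
// A tuple groups multiple values into a single compound value,
// so a function can return more than one result at once.
func divide(_ dividend: Int, by divisor: Int) -> (quotient: Int, remainder: Int) {
    return (dividend / divisor, dividend % divisor)
}

let result = divide(17, by: 5)
print(result.quotient, result.remainder) // 3 2

// An optional either contains a value or contains nil.
// Int's failable String initialiser returns Int? because parsing can fail.
let parsed = Int("42")       // Optional(42)
let failed = Int("fortytwo") // nil

// Optional binding safely unwraps the value when one is present.
if let number = parsed {
    print("Parsed the number \(number)")
}
```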
| Type | Description | Example |
| --- | --- | --- |
| Integer | Integers represent whole numbers, numbers that have no fractional component. Int and UInt are both data types used to represent integers, for numbers like 1, 2, 3, 4, 5. Unsigned integers can hold larger positive values, but they are unable to hold negative values. An Int can hold anything from -2,147,483,648 to 2,147,483,647 (on a 32-bit platform) or -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 (on a 64-bit platform), whereas a UInt can hold 0 to 4,294,967,295 (on a 32-bit platform) or 0 to 18,446,744,073,709,551,615 (2^64 − 1, on a 64-bit platform). An unsigned type uses the leading bit as part of the value, while the signed version uses the left-most bit to indicate whether the number is positive or negative. Two's-complement representation explains why we don't get the full 2^64 positive numbers, for example, but one less than that. In Swift both signed and unsigned integers come in four sizes: 8, 16, 32 and 64 bits. For example, we can use Int8, Int16, Int32 or Int64 to define 8-, 16-, 32- or 64-bit signed integers, or similarly UInt32 or UInt64 to define 32- or 64-bit unsigned integer variables. | 100, 200, 326 |
| Float | Float in Swift represents a floating-point number, a type of number that has a fractional component such as 1.61803398875 or 6.62607004, and as such is usually written using a decimal point. | 3.14, -455.3344 |
| Double | In contrast to Float, Double represents a 64-bit floating-point number with greater precision; it is Swift's default type for floating-point literals. | |
| Bool | Bool represents a Boolean value, which naturally can be either true or false (yes or no). We can use a Bool to check whether a certain condition is met or not. | |
| String | A String is essentially a sequence of Characters (where a Character is a single character). We define a string literal by wrapping text in double quotes, like "Hello World". | "Hello Vegas" |
| Character | A Character represents a single character, like "X" or "2". | |
| Optional | A special type that wraps another type and can either be empty or have a value inside it. | |
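To make the table concrete, here is one constant of each of the types above (the names and values are illustrative):

```swift
// One constant of each of the basic Swift types described above.
let age: Int = 35
let temperature: Float = 21.5
let pi: Double = 3.14159
let isSwiftFun: Bool = true
let greeting: String = "Hello Vegas"
let initial: Character = "X"
let middleName: String? = nil // an Optional String with no value inside it

print(type(of: age), type(of: pi)) // Int Double
```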
Size of a Data Type
A type's size is measured in bits (each bit holding a 0 or a 1), and a type with n available bits can store up to 2^n distinct values, that is, 2 to the power of the number of available bits.
“There are four well-known ways to represent signed numbers in a binary computing system. The most common is two’s complement, which allows a signed integral type with n bits to represent numbers from −2(n−1) through 2(n−1)−1. Two’s complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. Other possibilities include offset binary, sign-magnitude, and ones’ complement” (Wikipedia).
But in the real world, an Int32, for example, will have a range of −2,147,483,648 to 2,147,483,647, or from −2^31 to 2^31 − 1.
See Powers of Two: http://www.thealmightyguru.com/Pointless/PowersOf2.html
Numeric Literal Values
In the case of the integer values, the first form we can use to write integer values in is good old decimal. Decimal is the form you’ll be most familiar with where values are represented in a base-10 format without any sort of prefix or extra notation. For example:
let decimalInteger = 42 // 42 = (4 * 10) + (2 * 1)
As well as decimal notation (using the good-old base-10 number system), there are three other notations that we can use to write integer values in Swift: binary, octal and hexadecimal.
Binary Integer Literals
When we write an integer value in binary notation in Swift, we’re representing the value in a base-2 notation and write it with a leading zero, followed by a lowercase
b (for binary) followed by the value written in base-2. For example, if I wanted to initialize a constant with the decimal value 42 written in binary notation I would write:
let binaryInteger = 0b101010 // The equivalent of decimal 42.
// 42 = (1 * 32) + (1 * 8) + (1 * 2)
Octal Integer Literals
The next option we have is octal notation. Octal notation represents values in base-8, and to indicate that a value is in octal notation we write a leading zero followed by a lowercase o (for octal) followed by the value in base-8. For example, if we wanted to initialise a constant with the octal equivalent of decimal 42 we would write:
let octalInteger = 0o52 // The equivalent of decimal 42.
// 42 = (5 * 8) + (2 * 1)
Hexadecimal Integer Literals
The final choice we have is hexadecimal notation. When we write a literal value in hexadecimal form we prefix the value with a zero followed by a lowercase x (for hexadecimal) followed by the number in hexadecimal. So if we wanted to declare a final constant, again with the equivalent of the decimal value 42, but this time written in hexadecimal, we would write:
let hexadecimalInteger = 0x2A // The equivalent of decimal 42.
// 42 = (2 * 16) + (10 * 1)
As with integer values, it is relatively common to write floating-point literals within our code, and in the case of floating-point numbers we actually have slightly fewer choices when it comes to formats, as we are limited to either decimal notation or hexadecimal notation.
Decimal Floating-Point Literals
The first notation, decimal, is the one you'll be most familiar with. As you know, this format uses a decimal point to separate the integer part of the number from the fractional part. For example:
let decimalDouble = 3.14159 // Pi
One thing to note is that in Swift you must always have a number on both sides of the decimal point, so writing something like .5 is invalid.
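In code:

```swift
let validHalf = 0.5 // OK: there is a digit on both sides of the decimal point
// let invalidHalf = .5 // error: '.5' is not a valid floating point literal
```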
Hexadecimal Floating-Point Literals
In addition to decimal notation, we can also write floating-point numbers in Swift using hexadecimal notation. To write a floating-point number in hexadecimal notation you prefix the number with a zero followed by a lowercase x (the 0x prefix). Both the whole-number and fractional parts are written in hexadecimal and separated by a decimal point, and Swift also requires a hexadecimal floating-point literal to end with an exponent, written with a p (a notation covered in the next section). For example:
let hexadecimalDouble = 0x3.374Fp0 // Approximately 3.216 in decimal.
- - - - - -
Now, if you can remember back to school, when you were writing either very large or very small numbers you may have used something called scientific notation. Scientific notation is a kind of shorthand, a more convenient way of writing floating-point numbers using an optional exponent, and given that I've mentioned it here, it won't surprise you that we also have the option of using this notation in Swift. To write decimal floating-point literals in scientific notation we use an upper- or lowercase letter e to separate the base value from the exponent. The total value of the floating-point number is then the equivalent of multiplying the base value (the part of the number before the e) by 10 (in the case of decimal values) raised to the power of the exponent value (the part of the number after the e). So if I wrote 2.56e2, it would be the equivalent of writing 2.56 * 10^2, or 256. Similarly, if I wrote 2.56e-2 (notice here that I used a negative exponent), it would be the equivalent of writing 2.56 * 10^-2, or 0.0256.
In Swift, we can also write floating-point numbers in hexadecimal format and still make use of scientific notation. When writing in hexadecimal format, instead of using an e to separate the base value from the exponent, we use an upper- or lowercase letter p. The literal value is then equivalent to multiplying the base value (which is written in hexadecimal) by 2 raised to the power of the exponent (which is written in decimal). So, for example, if we wrote the floating-point literal 0xAp2, it would be the equivalent of writing 10 * 2^2, or 40.0. Notice how the number is still prefixed with 0x to indicate that it is a hexadecimal number. Similarly, if I wrote 0xAp-2, it would be the equivalent of writing 10 * 2^-2, or 2.5.
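The worked examples above can be written out directly as Swift constants:

```swift
let largeDecimal = 2.56e2   // 2.56 * 10^2  = 256.0
let smallDecimal = 2.56e-2  // 2.56 * 10^-2 = 0.0256
let hexExponent = 0xAp2     // 10 * 2^2  = 40.0
let hexNegExponent = 0xAp-2 // 10 * 2^-2 = 2.5
```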
Formatting for Numeric Literal Values
In addition to the different syntaxes that we can use to write numeric literal values, Swift also provides some syntactic sugar that makes those values easier to read. Firstly, both integers and floating-point numbers can be written with additional leading zeros, e.g.:
let pi = 00003.14159
They can also be written with underscores between groups of digits to help with readability:
let largeInteger = 3_000_000_000
let largeDouble = 2_345_678.910_111_213
In both cases, the additional syntactic sugar has no effect on the underlying value that is represented; it is simply ignored by the compiler.
– – – – –
Integer Value Limits
Now, as I mentioned earlier, integer types in Swift use a fixed number of bits in which to store their values, and the number of bits they use places a direct limit on the range of values that variables or constants of that type can store. For example, the
Int8 type, a signed integer using 8 bits of storage, can store values ranging from -128 through 127, whereas its unsigned equivalent (
UInt8) can store values that range from
0 through to
255. In both cases, if you attempt to store a literal value that does not fit within the range of values supported by the type, the compiler will report it as an error when you compile your code. So given that the integer types in Swift each have a particular range of values they can store, how do we find out what this range is without doing some fancy binary maths? Well, in Swift, built into each of the integer types are a couple of type properties that allow us to discover this information.
Value Ranges of Types
To find the smallest value a particular integer type can store we use the
min property and access it using dot notation. Note: dot notation is simply the use of a dot, or full stop, to separate the item we want to know something about (in this case a type) from the information we want to know about it (in this case the minimum value it can store, which is represented by the
min property). For example, if I wanted to access the minimum value that can be stored in, say, an
Int8, I would write:
Int8.min // Returns -128
We can also access the maximum value that can be stored in a particular type. You can do this using the
max property. For example:
Int8.max // Returns 127
In both these cases, the values returned from these properties are of the same type as the type whose properties we've accessed, so in the examples above the values returned from the min and max properties would both be of type Int8. This allows us to easily use these returned values in calculations of that type without the need to convert them. That said, the need to convert values between different types is not unusual in Swift, so we'll take a look at that next.
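For example, querying these properties across a few of the sized integer types:

```swift
// Each sized integer type reports its own range via the min and max type properties.
print(Int8.min, Int8.max)     // -128 127
print(UInt8.min, UInt8.max)   // 0 255
print(Int16.min, Int16.max)   // -32768 32767
print(Int32.min, Int32.max)   // -2147483648 2147483647
```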
Numeric Type Conversion
Converting Between Integer Types
As we saw just now, each of the integer types in Swift has a different range of values that it can store, and to convert a value from one type to another Swift forces us to explicitly opt in to the conversion on a case-by-case basis. This helps avoid hidden conversion errors by making us indicate these conversions explicitly. The mechanism for converting one type to another in Swift is simple: we create a new value of the desired type and initialise it with the existing value:
let daysInAYear : UInt16 = 365
let daysInJanuary : UInt8 = 31
let totalDays = daysInAYear + UInt16(daysInJanuary)
In this example, we initially create two constants: the first, daysInAYear, is of type UInt16 and the second, daysInJanuary, is of type UInt8. In the last line, we then create a new constant (totalDays) by combining the values held in the two previous constants. To do this, though, the values that we are combining have to be of the same type, so we first have to convert the daysInJanuary constant into a UInt16 to match the type of the daysInAYear constant. We do this by creating a new value of type UInt16 using initialisation syntax, passing in the value from the daysInJanuary constant. We then combine the new value with the daysInAYear constant to create the totalDays constant, which Swift infers to be of type UInt16.
The syntax typeName(initialValue) that we used in the example above is an example of using the default UInt16 initialiser, and as you can see, we provided it with an initialisation value as part of that call (in this case the value held in the daysInJanuary constant). Behind the scenes, the UInt16 type has a number of different initialisers, each of which accepts an initialisation parameter of a different type. In this case we made use of the initialiser that accepts a UInt8 parameter, but there are others. It's a similar story with the other numeric types in Swift: each of them has a specific set of initialisers, each tailored to accept initialisation parameters of specific types. This means that you can't simply initialise numeric values with any old type. There are also some subtleties to these initialisations. For example, the UInt8 type has an initialiser that accepts a UInt16 parameter, but if the value you provide doesn't fit within the range of values supported by a UInt8 (0 to 255), the conversion fails with an error:
let littleUInt16 : UInt16 = 120
let littleUInt8 = UInt8(littleUInt16)
let bigUInt16 : UInt16 = 1440
let bigUInt8 = UInt8(bigUInt16) // Error: 1440 does not fit in a UInt8.
Converting Between Integer and Floating-Point Types
In addition to being able to convert between different integer types, Swift also allows us to convert between integer and floating-point types as well. As with the integer types, any conversion must be explicitly stated though:
let startingRatio = 1 // Inferred as an Int
let fractionalRatio = 0.61803398875 // Inferred as a Double
let goldenRatio = Double(startingRatio) + fractionalRatio
// goldenRatio equals 1.61803398875 and is inferred to be of type Double.
As we saw with the integer example earlier, in order to be able to combine the two values, we must first ensure that they are all of the same type. To achieve this, we create a new
Double value using the value stored in the
startingRatio constant (in this case, we’re calling an initialiser on the
Double type that accepts an
Int as a parameter) and then add that to the existing
fractionalRatio constant. The result is the
goldenRatio constant, which is inferred to be of type
Double by Swift.
Converting Between Floating Point and Integer Types
In addition to being able to convert from integers to floating-point numbers, we can also convert the other way, from floating-point numbers to integers, and as we've seen with all the conversions so far, we must explicitly opt in to this. In the case of floating-point-to-integer conversions this is even more important than usual because, when a floating-point value is converted into an integer value in Swift, the fractional part of the floating-point number is truncated:
let integerGoldenRatio = Int(goldenRatio)
// integerGoldenRatio is equal to 1 and is inferred to be of type Int
This applies to both Float and Double values.
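Truncation discards the fractional part toward zero, which can be seen with both positive and negative values:

```swift
// Conversion from floating-point to integer always truncates toward zero.
let truncatedPositive = Int(4.75) // 4: the fractional part is discarded
let truncatedNegative = Int(-3.9) // -3: truncated toward zero, not rounded down
```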
Converting Numeric Literals
One thing to point out here are the rules for converting numeric literals. When we include numeric literals in our code they aren't actually typed until they are evaluated by the compiler. This means that any literal values you write in your code don't have to be converted before they can be combined. For example, if we revisit our golden ratio example, notice how we don't need to convert the integer literal
1 before we combine it:
let otherGoldenRatio = 1 + fractionalRatio
Value Types vs Reference Types
Variables of reference types store references to their data (objects), while variables of value types directly contain their data. With reference types, two variables can reference the same object; therefore, operations on one variable can affect the object referenced by the other variable. With value types, each variable has its own copy of the data, and it is not possible for operations on one variable to affect the other.
A Value Type holds the data within its own memory allocation and a Reference Type contains a pointer to another memory location that holds the real data. Reference Type variables are stored in the heap while Value Type variables are stored in the stack.
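The difference can be sketched with a minimal class and struct pair (the type names here are illustrative, not from the text):

```swift
class ReferencePoint { var x = 0 } // reference type
struct ValuePoint { var x = 0 }    // value type

let refA = ReferencePoint()
let refB = refA        // refB references the *same* object as refA
refB.x = 10
print(refA.x)          // 10: the change is visible through both references

var valA = ValuePoint()
var valB = valA        // valB is an independent *copy* of valA
valB.x = 10
print(valA.x)          // 0: the original copy is unaffected
```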
Stack and Heap
Stack is used for static memory allocation and Heap for dynamic memory allocation, both stored in the computer’s RAM.
Classes and Structs
Whereas a class is passed by reference, a struct is passed by copy. This means that a class is a reference type and its instances are created in heap memory, whereas a struct is a value type and its instances are created on the stack.
Structures and classes in Swift have many things in common. Both can:
Define properties to store values
Define methods to provide functionality
Define subscripts to provide access to their values using subscript syntax
Define initializers to set up their initial state
Be extended to expand their functionality beyond a default implementation
Conform to protocols to provide standard functionality of a certain kind
For more information, see Properties, Methods, Subscripts, Initialization, Extensions, and Protocols.
Classes have additional capabilities that structures don’t have:
Inheritance enables one class to inherit the characteristics of another.
Type casting enables you to check and interpret the type of a class instance at runtime.
Deinitializers enable an instance of a class to free up any resources it has assigned.
Reference counting allows more than one reference to a class instance.
Structs are preferable if they are relatively small and copiable because copying is way safer than having multiple references to the same instance as happens with classes. This is especially important when passing around a variable to many classes and/or in a multithreaded environment. If you can always send a copy of your variable to other places, you never have to worry about that other place changing the value of your variable underneath you.
With Structs, there is much less need to worry about memory leaks or multiple threads racing to access/modify a single instance of a variable. (For the more technically minded, the exception to that is when capturing a struct inside a closure because then it is actually capturing a reference to the instance unless you explicitly mark it to be copied).
Classes can also become bloated because a class can only inherit from a single superclass. That encourages us to create huge superclasses that encompass many different abilities that are only loosely related. Using protocols, especially with protocol extensions where you can provide implementations to protocols, allows you to eliminate the need for classes to achieve this sort of behavior.
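A sketch of that protocol-extension approach, using hypothetical type names:

```swift
// A protocol declares a requirement; a protocol extension supplies a
// default implementation that every conforming type gets for free.
protocol Describable {
    var name: String { get }
}

extension Describable {
    func describe() -> String {
        return "This is \(name)"
    }
}

// A struct (no superclass needed) picks up the behaviour by conforming.
struct Planet: Describable {
    let name: String
}

let mars = Planet(name: "Mars")
print(mars.describe()) // This is Mars
```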
Integers are value types
var x = 1
var y = x
x += 1
// x is now 2, whereas y is still 1
Structs are Value Types
Unlike classes, which are passed by reference, structures are passed by copying:
var first = "Hello"
var second = first
first += " World!"
// first == "Hello World!"
// second == "Hello"
The Swift type String is a structure; therefore it is copied on assignment.
Structures also cannot be compared using the identity operator:
window0 === window1 // works because a window is a class instance
"hello" === "hello" // error: binary operator '===' cannot be applied to two 'String' operands
Any two structure instances are deemed identical if they compare equal.
Collectively, these traits that differentiate structures from classes are what make structures value types.
Because structs are value types, and are therefore copied when passed around, they are allocated on the stack. This makes structs more efficient than classes. However, if you do need a notion of identity and/or reference semantics, a struct cannot provide you with those things.
(Source: Swift Notes for Professionals)